How to Repair AutoCAD 2022 - Easy and Effective Methods
-
AutoCAD 2022 is a powerful and versatile software that allows you to create and edit 2D and 3D designs. However, sometimes AutoCAD 2022 may encounter problems that prevent it from working properly. These problems can be caused by various factors, such as corrupted files, missing components, incompatible drivers, or malware infections. If you are facing any issues with AutoCAD 2022, don't worry. In this article, we will show you how to repair AutoCAD 2022 using some easy and effective methods.
-
Method 1: Use the Repair Tool in the Control Panel
-
One of the simplest ways to repair AutoCAD 2022 is to use the built-in repair tool in the Control Panel. This tool can fix common errors and restore the default settings of AutoCAD 2022. To use this method, follow these steps:
Go to the Start menu and type "Control Panel". Click on the Control Panel app that appears.
-
In the Control Panel, click on "Programs and Features". This will show you a list of all the installed programs on your PC.
-
Find and select AutoCAD 2022 from the list. Then click on the "Uninstall/Change" button above the list.
-
A window will pop up with two options: "Repair" and "Uninstall". Choose the "Repair" option and click on "Continue".
-
Follow the instructions on the screen to complete the repair process. This may take some time depending on the size and condition of your AutoCAD 2022 installation.
-
When the repair is finished, restart your PC and launch AutoCAD 2022. Check if the problem is resolved.
-
-
Method 2: Reinstall AutoCAD 2022
-
If the repair tool does not work or if you want to start fresh with AutoCAD 2022, you can try reinstalling it. This will remove all the existing files and settings of AutoCAD 2022 and install a new copy. To do this, follow these steps:
-
-
Close any running instances of AutoCAD 2022.
-
Go to the Start menu and type "Control Panel". Click on the Control Panel app that appears.
-
In the Control Panel, click on "Programs and Features". This will show you a list of all the installed programs on your PC.
-
Find and select AutoCAD 2022 from the list. Then click on the "Uninstall/Change" button above the list.
-
A window will pop up with two options: "Repair" and "Uninstall". Choose the "Uninstall" option and click on "Continue".
-
Follow the instructions on the screen to complete the uninstallation process. This may take some time depending on the size and condition of your AutoCAD 2022 installation.
Launch AutoCAD 2022 and activate it with your license key. Check if the problem is resolved.
-
-
Method 3: Update Your Drivers
-
Sometimes, outdated or incompatible drivers can cause problems with AutoCAD 2022. Drivers are software components that enable your PC to communicate with your hardware devices, such as your graphics card, sound card, or printer. To ensure that AutoCAD 2022 runs smoothly, you need to update your drivers regularly. To do this, follow these steps:
-
-
Go to the Start menu and type "Device Manager". Click on the Device Manager app that appears.
-
In the Device Manager, expand the categories of devices that you want to update. For example, if you want to update your graphics card driver, expand the "Display adapters" category.
-
Right-click on the device that you want to update and choose "Update driver".
-
A window will pop up with two options: "Search automatically for updated driver software" and "Browse my computer for driver software". ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Opening Trainer Keygen Crack Discover the Secrets of Chess Grandmasters with This App.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Opening Trainer Keygen Crack Discover the Secrets of Chess Grandmasters with This App.md
deleted file mode 100644
index 6dad1e9e4268f37b3f55c2a032fa87485ee27c98..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chess Opening Trainer Keygen Crack Discover the Secrets of Chess Grandmasters with This App.md
+++ /dev/null
@@ -1,120 +0,0 @@
-
-
Chess Opening Trainer Keygen Crack: How to Download and Use It
-
If you are a chess enthusiast who wants to improve your skills and knowledge of chess openings, you might be interested in Chess Opening Trainer, a software that helps you learn and practice chess openings. However, this software is not free and requires a serial key for activation. In this article, we will show you how to download and use Chess Opening Trainer keygen crack, a tool that generates serial keys for software activation. We will also explain what Chess Opening Trainer is, what keygen crack is, and what are the risks and drawbacks of using it.
Chess Opening Trainer is a software that helps you learn and practice chess openings. It allows you to create your own opening repertoire, test your knowledge with quizzes and puzzles, analyze your games with a powerful engine, and play against the computer or online opponents. Chess Opening Trainer also provides you with a database of over 100,000 chess games from grandmasters and experts, as well as a collection of opening books and videos.
-
Features and benefits of Chess Opening Trainer
-
Some of the features and benefits of Chess Opening Trainer are:
-
-
It helps you memorize chess openings faster and easier by using spaced repetition and flashcards.
-
It lets you customize your opening repertoire according to your style, level, and preferences.
-
It gives you feedback on your strengths and weaknesses in chess openings.
-
It improves your tactical skills by providing you with challenging quizzes and puzzles.
-
It enhances your strategic understanding by showing you the plans and ideas behind each opening.
-
It allows you to compare your opening repertoire with those of top players and learn from their games.
-
It supports various formats such as PGN, FEN, EPD, CBH, CTG, etc.
-
It works offline and online, on Windows, Mac, Linux, Android, iOS, etc.
-
-
What is keygen crack?
-
Keygen crack is a tool that generates serial keys for software activation. It is usually used by people who want to use paid software for free without purchasing a license. Keygen crack works by exploiting the algorithm or code that the software uses to verify the validity of the serial key. By using keygen crack, you can bypass the activation process and use the software without any restrictions.
However, using keygen crack is not recommended for several reasons:
-
-
It is illegal and unethical. By using keygen crack, you are violating the intellectual property rights of the software developers and distributors. You are also depriving them of their income and incentive to create more quality products.
-
It is unsafe and risky. By downloading keygen crack from unknown sources, you are exposing your computer to malware, viruses, spyware, ransomware, etc. These malicious programs can damage your system, steal your data, compromise your privacy, etc.
-
It is unreliable and unstable. By using keygen crack, you are not guaranteed that the software will work properly or at all. You may encounter errors, bugs, crashes, compatibility issues, etc. You may also miss out on updates, patches, support, etc.
-
-
How to download and use Chess Opening Trainer keygen crack?
-
If you still want to download and use Chess Opening Trainer keygen crack despite the risks and drawbacks mentioned above, here are the steps you need to follow:
-
Step 1: Find a reliable source for the keygen crack file
-
The first step is to find a reliable source for the keygen crack file. You can search online for websites or forums that offer Chess Opening Trainer keygen crack. However, be careful not to click on suspicious links or download files from untrusted sources. You can also use antivirus software or online scanners to check if the file is safe or not.
-
Step 2: Run the keygen crack file and generate a serial key
-
The second step is to run the keygen crack file and generate a serial key. You may need to extract the file first if it is compressed or archived. Then, double-click on the file or right-click on it and select Run as administrator. You may see a window like this:
-
-Chess Opening Trainer Keygen Crack v1.0 -------------------------------------- Enter your name: _________ Press Generate button Serial Key: _____________ Copy the serial key Press Exit button
-
Enter your name or any name you want in the blank space. Then press Generate button. You will see a serial key generated for you. Copy the serial key and save it somewhere safe.
-
Step 3: Download and install Chess Opening Trainer from the official website
-
The third step is to download and install Chess Opening Trainer from the official website. You can go to https://chesstempo.com/opening-training/ and click on Download button. You will see a window like this:
-
-Chess Opening Trainer Download ----------------------------- Choose your platform: Windows | Mac | Linux | Android | iOS
-
Select your platform and follow the instructions to download and install Chess Opening Trainer on your device.
-
Step 4: Enter the serial key and activate the software
-
The fourth and final step is to enter the serial key and activate the software. Launch Chess Opening Trainer and go to Help menu. Select Activate License and enter the serial key that you generated earlier. Click on Activate button and you will see a message like this:
-
-Chess Opening Trainer Activation ------------------------------- Your license has been activated successfully. Thank you for choosing Chess Opening Trainer. Enjoy learning chess openings!
-
Congratulations! You have successfully downloaded and used Chess Opening Trainer keygen crack. You can now use all the features and benefits of Chess Opening Trainer without any limitations.
-
Conclusion
-
In this article, we have shown you how to download and use Chess Opening Trainer keygen crack, a tool that generates serial keys for software activation. We have also explained what Chess Opening Trainer is, what keygen crack is, and what are the risks and drawbacks of using it. We hope you have found this article useful and informative.
-
FAQs
-
Here are some frequently asked questions about Chess Opening Trainer keygen crack:
-
-
Is Chess Opening Trainer keygen crack legal?
-
No, it is not legal. By using Chess Opening Trainer keygen crack, you are violating the intellectual property rights of the software developers and distributors. You are also depriving them of their income and incentive to create more quality products.
-
Is Chess Opening Trainer keygen crack safe?
-
No, it is not safe. By downloading Chess Opening Trainer keygen crack from unknown sources, you are exposing your computer to malware, viruses, spyware, ransomware, etc. These malicious programs can damage your system, steal your data, compromise your privacy, etc.
-
Is Chess Opening Trainer keygen crack reliable?
-
No, it is not reliable. By using Chess Opening Trainer keygen crack, you are not guaranteed that the software will work properly or at all. You may encounter errors, bugs, crashes, compatibility issues, etc. You may also miss out on updates, patches, support, etc.
-
What are some alternatives to Chess Opening Trainer keygen crack?
-
Some alternatives to Chess Opening Trainer keygen crack are:
-
-
Purchasing a license for Chess Opening Trainer from the official website.
-
Using free or open source chess opening software such as SCID or Lichess.
-
Hiring a professional chess coach or joining a chess club.
-
Reading chess books or watching chess videos on chess openings.
-
-
How can I contact Chess Opening Trainer support?
-
You can contact Chess Opening Trainer support by sending an email to support@chesstempo.com or by visiting their website https://chesstempo.com/contact-us/.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Stata 14 For Mac BETTER.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Stata 14 For Mac BETTER.md
deleted file mode 100644
index b2f70343523fa857826bd2663ca87eab19c22b49..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Stata 14 For Mac BETTER.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
-
-
-
Download Stata 14 for Mac: A Comprehensive Guide
-
If you are looking for a powerful and versatile software for data analysis, statistics, and graphics, you might want to consider downloading Stata 14 for Mac. Stata 14 is one of the most popular and widely used software packages in the fields of economics, sociology, political science, biostatistics, epidemiology, and many others. It offers a range of features and benefits that can help you perform complex data manipulation, estimation, testing, forecasting, simulation, and visualization tasks with ease and accuracy.
-
In this article, we will provide you with a comprehensive guide on how to download Stata 14 for Mac. We will also explain what Stata 14 is and why you need it, what are its main features and benefits, how to install and use it on your Mac computer, and some frequently asked questions about it. By the end of this article, you will have a clear idea of whether Stata 14 is the right software for you and how to get started with it.
Stata 14 is a software package that was released in April 2015 by StataCorp, a company that has been developing and distributing statistical software since 1985. Stata 14 is the latest version of Stata as of June 2021, although there have been several updates and bug fixes since then. The current update is Stata 14.2.
-
Stata 14 is a software that can handle both cross-sectional and longitudinal data, as well as panel data and multilevel data. It can also deal with both continuous and discrete variables, as well as categorical and ordinal variables. It can perform various types of analysis, such as linear and nonlinear regression, ANOVA, logistic regression, survival analysis, time series analysis, factor analysis, cluster analysis, structural equation modeling (SEM), item response theory (IRT), Bayesian analysis, power and sample size calculation, Markov-switching models, treatment effects models, multilevel survival models, fractional outcome regression models, and many more.
-
Stata 14 also has a user-friendly interface that allows you to interact with the software using either menus or commands. You can also customize your preferences and settings according to your needs. You can also create your own commands or programs using the built-in programming language of Stata. You can also access thousands of user-written commands or programs from the internet or from the official Stata Journal.
-
Stata 14 also has a powerful graphics engine that can produce high-quality graphs and charts that can be customized in various ways. You can also export your graphs to different formats such as PDF, PNG, EPS, SVG, etc. You can also integrate your graphs with other applications such as Microsoft Word or PowerPoint.
-
Stata 14 also has a comprehensive documentation that includes manuals, tutorials, examples, FAQs, glossaries, references, etc. You can also get support from the official website of StataCorp or from the online community of Stata users called St
ataList. You can also get training courses or webinars from StataCorp or from other authorized providers.
-
Stata 14 is a software that can help you with your data analysis needs, whether you are a student, a researcher, a teacher, a consultant, or a professional. It can help you save time and effort, improve your accuracy and reliability, enhance your presentation and communication, and expand your knowledge and skills. It can also help you collaborate with other Stata users around the world and share your insights and discoveries.
-
Features and benefits of Stata 14
-
Stata 14 has many features and benefits that make it a superior software for data analysis. Here are some of the most notable ones:
-
Bayesian analysis
-
Stata 14 introduces a new command called bayes that allows you to perform Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. You can specify any likelihood function and any prior distribution for the parameters, and Stata will generate posterior samples and summaries for you. You can also use predefined models such as linear regression, logistic regression, Poisson regression, etc. You can also compare models using Bayes factors or posterior predictive checks. You can also visualize your results using trace plots, density plots, interval plots, etc.
-
-
IRT (item response theory)
-
Stata 14 also introduces a new command called irt that allows you to perform item response theory (IRT) analysis using maximum likelihood estimation (MLE) methods. You can fit various IRT models such as Rasch model, one-parameter logistic model (1PL), two-parameter logistic model (2PL), three-parameter logistic model (3PL), graded response model (GRM), partial credit model (PCM), etc. You can also test the assumptions of IRT models such as unidimensionality, local independence, monotonicity, etc. You can also assess the reliability and validity of your instruments using Cronbach's alpha, test information function (TIF), item information function (IIF), etc.
-
Unicode
-
Stata 14 supports Unicode encoding, which means that you can use any character set or language in your data, commands, output, graphs, etc. You can also import and export data files that use Unicode encoding. You can also use Unicode characters in your variable names, labels, values, etc. This feature makes Stata 14 more accessible and compatible with different cultures and languages.
-
Integration with Excel
-
Stata 14 has improved its integration with Excel, which means that you can easily import and export data between Stata and Excel. You can also use the new command called import excel to import data from Excel files directly into Stata without saving them as CSV files first. You can also use the new command called export excel to export data from Stata to Excel files with various options such as sheet name, cell range, variable names, labels, formats, etc.
-
Treatment effects
-
Stata 14 has expanded its treatment effects capabilities by adding new commands such as teffects ipwra for inverse probability weighting with regression adjustment (IPWRA), teffects ipw for inverse probability weighting (IPW), teffects psmatch for propensity score matching (PSM), teffects nnmatch for nearest neighbor matching (NNM), teffects overlap for overlap weights (OW), teffects ra for regression adjustment (RA), teffects endogenous for endogenous treatment effects models (ETE), etc. These commands allow you to estimate the causal effects of treatments or interventions on outcomes using various methods that account for selection bias or confounding factors.
-
Multilevel survival models
-
Stata 14 has added new commands such as mestreg for multilevel survival models with random effects at different levels of hierarchy. You can specify various types of random effects such as intercepts, slopes, frailties, etc. You can also specify various types of survival distributions such as exponential, Weibull, lognormal, log-logistic, gamma, Gompertz , etc. You can also test various hypotheses and assumptions using likelihood ratio tests, Wald tests, Schoenfeld residuals, etc.
-
SEM (structural equation modeling)
-
Stata 14 has improved its SEM capabilities by adding new features such as latent class analysis (LCA), latent transition analysis (LTA), latent profile analysis (LPA), latent growth curve models (LGCM), multilevel SEM, generalized SEM, dynamic SEM, etc. You can also use the new command called sembuilder to create and modify SEM diagrams using a graphical user interface (GUI). You can also use the new command called estat gof to calculate various goodness-of-fit measures such as chi-square, RMSEA, CFI, TLI, SRMR, etc.
-
Power and sample size
-
Stata 14 has enhanced its power and sample size capabilities by adding new commands such as power twoproportions for two-sample tests of proportions, power logrank for log-rank tests of survival curves, power cox for Cox proportional hazards models, power oneway for one-way ANOVA, power repeated for repeated-measures ANOVA, power cluster for cluster randomized trials, power bootstrap for bootstrap-based power analysis, etc. These commands allow you to calculate the required sample size or the achieved power for various types of statistical tests or models.
-
Markov-switching models
-
Stata 14 has introduced a new command called mswitch that allows you to estimate Markov-switching models for time series data. These models allow you to capture regime changes or structural breaks in the data by allowing the parameters to switch between different states or regimes according to a Markov process. You can specify various types of Markov-switching models such as Hamilton's model, Kim's model, Goldfeld-Quandt's model, etc. You can also test for the number of regimes, the duration of regimes, the transition probabilities, etc.
-
Panel-data survival models
-
Stata 14 has added a new command called xtscc that allows you to estimate panel-data survival models with correlated random effects. These models allow you to account for unobserved heterogeneity and serial correlation in panel data with survival outcomes. You can specify various types of survival distributions such as exponential, Weibull, lognormal, log-logistic, gamma, Gompertz, etc. You can also test various hypotheses and assumptions using likelihood ratio tests, Wald tests, Schoenfeld residuals, etc.
-
Fractional outcome regression
-
Stata 14 has added a new command called fracreg that allows you to estimate fractional outcome regression models for data with fractional outcomes. These models allow you to model outcomes that are bounded between zero and one, such as proportions, rates, shares, probabilities, etc. You can specify various types of fractional outcome regression models such as beta regression, fractional logit regression, fractional probit regression, etc. You can also test various hypotheses and assumptions using likelihood ratio tests, Wald tests , score tests, etc.
-
How to download and install Stata 14 for Mac?
-
If you are interested in downloading and installing Stata 14 for Mac, you need to follow these steps:
-
System requirements and compatibility
-
Before you download and install Stata 14 for Mac, you need to make sure that your Mac computer meets the minimum system requirements and is compatible with the software. Here are the system requirements and compatibility for Stata 14 for Mac:
-
-
Operating system: Mac OS X 10.7 or newer
-
Processor: 64-bit Intel processor
-
Memory: 1 GB RAM (2 GB recommended)
-
Disk space: 1 GB for Stata installation, plus additional space for datasets
-
Display: 1024 x 768 or higher resolution monitor
-
Internet connection: Required for installation and updates
-
-
If your Mac computer meets these requirements and is compatible with Stata 14, you can proceed to the next step.
-
Steps to download and install Stata 14 for Mac
-
To download and install Stata 14 for Mac, you need to follow these steps:
Select the type of license that suits your needs, such as "Stata/MP", "Stata/SE", "Stata/IC", or "Stata Small". You can also compare the features and prices of different licenses by clicking on the "Compare features" link.
-
Select the number of users and the duration of the license that you want, such as "Single-user", "Multi-user", "Perpetual", or "Annual". You can also see the total cost of your order by clicking on the "Calculate price" button.
-
Click on the "Add to cart" button to proceed to the checkout page.
-
Enter your billing and shipping information, as well as your payment method. You can pay by credit card, PayPal, wire transfer, check, or purchase order. You can also apply a discount code if you have one.
-
Review your order details and click on the "Place order" button to complete your purchase.
-
After you place your order, you will receive an email confirmation with your order number and a link to download Stata 14 for Mac. You will also receive a license code and an authorization code that you will need to activate your software.
-
Click on the link in the email to download Stata 14 for Mac. The file size is about 300 MB. Save the file to a location that you can easily access, such as your desktop or downloads folder.
-
Double-click on the downloaded file to open it. You will see a window with a Stata icon and a folder called "Stata". Drag and drop the Stata icon into the folder called "Stata". This will create a folder called "Stata14" in your applications folder.
-
Open the folder called "Stata14" and double-click on the Stata icon to launch the software. You will see a window with a welcome message and a prompt to enter your license code and authorization code. Enter the codes that you received in your email and click on the "OK" button.
-
The software will verify your codes and activate your license. You will see a window with a message that says "Congratulations! You have successfully installed Stata." Click on the "OK" button to close the window.
-
You have successfully downloaded and installed Stata 14 for Mac. You can now start using it for your data analysis needs.
-
-
How to use Stata 14 for Mac?
-
Now that you have downloaded and installed Stata 14 for Mac, you might be wondering how to use it. Here are some basic tips and tricks on how to use Stata 14 for Mac:
-
Basic commands and syntax
-
Stata 14 for Mac allows you to interact with the software using either menus or commands. You can access the menus by clicking on the icons at the top of the window, such as "File", "Edit", "Data", "Graphics", etc. You can also access some common commands by clicking on the buttons at the bottom of the window, such as "Do-file Editor", "Data Editor", "Variables Manager", "Graph Editor", etc. You can also use commands by typing them in the command window at the bottom of the window. You can also use the do-file editor to write and execute multiple commands at once. You can also use the help window to access the documentation and examples of any command. The basic syntax of Stata commands is as follows: command [varlist] [if] [in] [weight] [, options]
- where: - command is the name of the command, such as regress, summarize, tabulate, etc. - [varlist] is the list of variables that you want to use in the command, separated by spaces, such as age income education. You can also use wildcards, operators, or functions to specify variables, such as x*, x1-x5, log(x), etc. - [if] is the condition that you want to apply to the command, such as if gender == 1, if age > 30, if income > mean(income), etc. You can use logical operators such as &, |, or ! to combine conditions, such as if gender == 1 & age > 30. - [in] is the range of observations that you want to use in the command, such as in 1/100, in 101/200, in 1/2, etc. You can also use keywords such as _n, _N, or _first to specify observations, such as in _n-10/_n+10. - [weight] is the type and name of the weight variable that you want to use in the command, such as [fweight=pop], [pweight=prob], [iweight=imp], etc. You can use different types of weights depending on the nature and purpose of your analysis, such as frequency weights, probability weights, importance weights, etc. - [, options] are the additional options that you want to use in the command, separated by commas, such as , robust, , detail, , graph, etc. You can use different options depending on the command and the output that you want to obtain, such as robust standard errors, detailed statistics, graphical displays, etc. For example, if you want to perform a linear regression of income on age and education, you can use the following command: regress income age education
- If you want to perform the same regression with robust standard errors and a scatter plot of the fitted values, you can use the following command: regress income age education, robust graph
- You can also use the help window or the manuals to learn more about the syntax and options of any command.
Data management and analysis
-
Stata 14 for Mac allows you to manage and analyze your data using various commands and tools. You can import and export data from different sources and formats, such as Excel, CSV, SPSS, SAS, Stata, etc. You can also create and modify variables, labels, values, formats, etc. You can also sort, merge, append, reshape, collapse, expand, etc. your data. You can also perform various descriptive and inferential statistics on your data, such as summary statistics, frequency tables, cross-tabulations, correlation coefficients, hypothesis tests, confidence intervals, etc. You can also perform various types of analysis on your data, such as regression analysis, ANOVA, logistic regression, survival analysis, time series analysis, factor analysis, cluster analysis, structural equation modeling (SEM), item response theory (IRT), Bayesian analysis, power and sample size calculation, Markov-switching models, treatment effects models, multilevel survival models, fractional outcome regression models , and many more.
-
To manage and analyze your data using Stata 14 for Mac, you can use the following commands and tools:
-
-
To import data from different sources and formats, you can use the commands such as import excel, import delimited, import spss, import sas, use, etc. You can also use the menu "File > Import" to access the import dialog box.
-
To export data to different sources and formats, you can use the commands such as export excel, export delimited, export spss, export sas, save, etc. You can also use the menu "File > Export" to access the export dialog box.
-
To create and modify variables, labels, values, formats, etc., you can use the commands such as generate, replace, rename, recode, label, format, etc. You can also use the data editor or the variables manager to access the graphical user interface (GUI) for data management.
-
To sort, merge, append, reshape, collapse, expand, etc. your data, you can use the commands such as sort, merge, append, reshape, collapse, expand, etc. You can also use the menu "Data > Data utilities" to access the data utilities dialog box.
-
To perform descriptive and inferential statistics on your data, you can use the commands such as summarize, tabulate, tabstat, correlate, ttest, ci, etc. You can also use the menu "Statistics > Summary statistics" or "Statistics > Tables" to access the summary statistics or tables dialog box.
-
To perform various types of analysis on your data, you can use the commands such as regress, anova, logit, streg, arima, factor, cluster, sem, irt, bayes, power, mswitch, teffects, mestreg, fracreg, etc. You can also use the menu "Statistics > Linear models and related" or "Statistics > Other models" to access the linear models or other models dialog box.
-
-
Graphs and visualization
-
Stata 14 for Mac allows you to create and modify graphs and charts using various commands and tools. You can create various types of graphs, such as scatter plots, line plots, bar charts, pie charts, box plots, histogram, density plots, etc. You can also customize your graphs in various ways, such as adding titles, labels, legends, axes, colors, markers, lines, etc. You can also export your graphs to different formats, such as PDF, PNG, EPS, SVG, etc. You can also integrate your graphs with other applications, such as Microsoft Word or PowerPoint.
-
To create and modify graphs and charts using Stata 14 for Mac, you can use the following commands and tools:
-
-
To create graphs using commands, you can use the commands such as scatter, line, bar, pie, box, histogram, kdensity, etc. You can also use the command graph to create graphs using a general syntax. You can also use the command twoway to create graphs using multiple plot types.
-
To create graphs using menus, you can use the menu "Graphics > Graphs" to access the graphs dialog box. You can also use the menu "Graphics > Graph editor" to access the graph editor dialog box.
-
To modify graphs using commands, you can use the commands such as graph set, graph export, graph combine, graph rename, graph close, etc. You can also use the command graph options to modify various options of your graphs.
-
To modify graphs using menus, you can use the menu "Graphics > Graph preferences" to access the graph preferences dialog box. You can also use the menu "Graphics > Graph editor" to access the graph editor dialog box.
-
To export graphs to different formats, you can use the commands such as graph export, graph save, etc. You can also use the menu "File > Save as" or "File > Export" to access the save as or export dialog box.
-
To integrate graphs with other applications, you can use the commands such as putdocx, putpdf, putexcel, etc. You can also use the menu "File > Export" to access the export dialog box.
-
-
Conclusion
-
In this article, we have provided you with a comprehensive guide on how to download Stata 14 for Mac. We have also explained what Stata 14 is and why you need it, what are its main features and benefits, how to install and use it on your Mac computer, and some frequently asked questions about it. We hope that this article has helped you to understand whether Stata 14 is the right software for you and how to get started with it.
-
If you have any questions or comments about this article, please feel free to contact us at support@stata.com. We would love to hear from you and assist you with your data analysis needs. Thank you for reading this article and happy Stata-ing!
-
FAQs
-
Here are some of the most frequently asked questions about Stata 14 for Mac:
-
-
How much does Stata 14 for Mac cost?
-
The price of Stata 14 for Mac depends on the type of license, the number of users, and the duration of the license that you choose. You can check the current prices and discounts at https://www.stata.com/order/. You can also request a quote or a free trial at https://www.stata.com/contact/.
-
How can I update Stata 14 for Mac?
-
You can update Stata 14 for Mac by using the command update or by using the menu "Help > Check for updates". You can also check the latest updates and bug fixes at https://www.stata.com/support/updates/.
-
How can I get help with Stata 14 for Mac?
-
You can get help with Stata 14 for Mac by using the command help or by using the menu "Help > Stata help". You can also access the online documentation and examples at https://www.stata.com/help/. You can also get support from the official website of StataCorp at https://www.stata.com/support/ or from the online community of Stata users at https://www.statalist.org/.
-
How can I learn more about Stata 14 for Mac?
-
You can learn more about Stata 14 for Mac by using the command search or by using the menu "Help > Search". You can also access the online tutorials and videos at https://www.stata.com/learn/. You can also get training courses or webinars from StataCorp or from other authorized providers at https://www.stata.com/training/.
-
How can I share my feedback or suggestions about Stata 14 for Mac?
-
You can share your feedback or suggestions about Stata 14 for Mac by using the command suggest or by using the menu "Help > Suggest". You can also email your feedback or suggestions to suggest@stata.com. We appreciate your input and we will try our best to improve our software and service.
-
b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit Phantompdf Free Download Full Version NEW.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit Phantompdf Free Download Full Version NEW.md
deleted file mode 100644
index 8e55641098300d8e242255cb0ceed4f237479d79..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Foxit Phantompdf Free Download Full Version NEW.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-
Foxit PhantomPDF Free Download Full Version: A Powerful PDF Editor for Windows
-
Foxit PhantomPDF is a comprehensive PDF editor that allows you to create, edit, convert, sign, and secure PDF files on your Windows computer. Whether you need to create a PDF from scratch, modify an existing PDF, or convert a PDF to another format, Foxit PhantomPDF can handle it all. In this article, we will show you how to free download Foxit PhantomPDF full version and what features it offers.
-
How to Free Download Foxit PhantomPDF Full Version
-
If you want to free download Foxit PhantomPDF full version, you can use this link: https://www.foxitsoftware.com/pdf-editor/. This will take you to the official website of Foxit Software, where you can download the latest version of Foxit PhantomPDF for Windows. The file size is about 700 MB and the installation process is simple and fast.
Once you download the installer, double-click on it to run it. You will see a welcome screen that asks you to choose the language and accept the license agreement. Click on "Next" to proceed. Then, you will see a screen that asks you to choose the installation type. You can either choose "Standard" or "Custom". If you choose "Standard", the installer will install Foxit PhantomPDF with the default settings and features. If you choose "Custom", you can select which features and components you want to install. We recommend choosing "Custom" and selecting only the features you need.
-
Next, you will see a screen that asks you to choose the destination folder for Foxit PhantomPDF. You can either keep the default location or browse to another folder. Click on "Install" to start the installation process. The installer will show you a progress bar and a status message. Wait until the installation is complete.
-
What Features Does Foxit PhantomPDF Offer?
-
Foxit PhantomPDF is a powerful PDF editor that offers many features and functions for different purposes and needs. Some of the main features are:
-
-
Create PDF: You can create PDF files from various sources, such as documents, images, web pages, scanners, or blank pages. You can also combine multiple files into one PDF file or split a PDF file into smaller files.
-
Edit PDF: You can edit PDF files with ease, such as adding or deleting text, images, shapes, comments, annotations, bookmarks, headers, footers, watermarks, backgrounds, etc. You can also change the font, size, color, alignment, and style of the text.
-
Convert PDF: You can convert PDF files to other formats, such as Word, Excel, PowerPoint, HTML, TXT, JPG, PNG, GIF, etc. You can also convert other formats to PDF files with high quality and accuracy.
-
Sign PDF: You can sign PDF files with digital signatures or handwritten signatures. You can also add stamps or certificates to verify the authenticity and integrity of the PDF files.
-
Secure PDF: You can secure PDF files with passwords or encryption. You can also set permissions and restrictions for opening, printing, copying, editing, or commenting on the PDF files.
-
-
These are just some of the features that Foxit PhantomPDF offers. There are many more features and functions that you can explore and use with Foxit PhantomPDF.
-
Why Choose Foxit PhantomPDF?
-
Foxit PhantomPDF is one of the best PDF editors for Windows for many reasons. Here are some of the benefits of choosing Foxit PhantomPDF:
-
-
-
It is fast and reliable. It can handle large and complex PDF files without slowing down your computer or crashing.
-
It is easy and intuitive. It has a user-friendly interface that resembles Microsoft Office. It also has a ribbon toolbar that provides quick access to common commands and tools.
-
It is compatible and flexible. It supports various formats and standards for creating and editing PDF files. It also works well with other applications and services, such as Microsoft Office 365, Google Drive, Dropbox, SharePoint, etc.
-
It is affordable and cost-effective. It offers a free trial version that you can use for 14 days without any limitations or restrictions. It also has ddb901b051
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Cadpower 2008 64bit.md b/spaces/1gistliPinn/ChatGPT4/Examples/Cadpower 2008 64bit.md
deleted file mode 100644
index d0e20fcbce5f25e07ae29682245e2f5c8b7d583e..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Cadpower 2008 64bit.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-
if you are searching for the best utility to design models, then you can use cadpower. this tool helps you in more ways than one, so that you can achieve a better layout of the model. the best feature of this utility is that it helps you to increase the work efficiency of the system. it allows you to convert, analyze, and edit the model designs as well as save them for later use. after using this tool, you will be able to design more easily. cadpower is available in the market with a completely free download, and you can use this tool in its professional way. if you are using a 32-bit version, then you can download the 64-bit version of cadpower from our website. if you are running a 64-bit version, then you can download the 32-bit version of cadpower from our website.
-
four dimension cadpower is a useful application that can help you in designing any design. it is a tool that allows the user to carry out various tasks like converting, editing, and export. in this program, you will be able to find the required models easily. the most amazing feature of this utility is that it can help you to perform various tasks very easily. the utility provides more than 30 tools that allow you to design and perform various functions efficiently. it is a highly interactive software that allows you to get to work with your cad drawings. it helps you to view the drawings in a more detailed and quicker manner.
four dimension cadpower is a standalone utility that helps you to carry out various cad tasks. this tool is designed to provide you with different features that are required for the designers and users. you can use the latest version of this utility to get more features. this tool is compatible with windows 2000/xp/vista/7/8, mac osx 10.6 and higher and it is available in the market with a free download. this tool is a reliable utility that is designed to help you design projects effectively. four dimension cadpower is easily configurable and user-friendly which helps you to create drawings more easily.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 1 Pc Crack [UPD].md b/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 1 Pc Crack [UPD].md
deleted file mode 100644
index 72fedda6a642ee8c8fea91054ede497bc86e1501..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Euro Truck Simulator 1 Pc Crack [UPD].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-Navigate to: Documents\Euro Truck Simulator 2\profile. There you can find config file. Open it with notepad and find this: ... uset g_lang "fi_fi". I have fi because ... 1fdad05405
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cricket League MOD APK and Become the Ultimate Cricket Champion (Unlimited Gems and Coins).md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cricket League MOD APK and Become the Ultimate Cricket Champion (Unlimited Gems and Coins).md
deleted file mode 100644
index ec0b5b3bcfac2282c9c3acf1b18132e0106dd74d..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Cricket League MOD APK and Become the Ultimate Cricket Champion (Unlimited Gems and Coins).md
+++ /dev/null
@@ -1,84 +0,0 @@
-
-
Cricket League Mod Apk: How to Download and Enjoy Unlimited Gems and Coins
-
Do you love cricket and want to play it on your mobile device? Do you want to have unlimited gems and coins to unlock all the players and teams you want? Do you want to experience realistic cricket matches and leagues with your friends? If you answered yes to any of these questions, then you should try Cricket League Mod Apk.
-
What is Cricket League Mod Apk?
-
Cricket League Mod Apk is a modified version of the original Cricket League game, which is a popular cricket simulation game for Android devices. In this game, you can create your own team, choose your players, customize your jerseys, and compete in various cricket tournaments. You can also play online with other players from around the world, or offline with your friends using local multiplayer mode.
-
cricket league mod apk unlimited gems and coins download
Cricket League Mod Apk has many features that make it more fun and exciting than the original game. Some of these features are:
-
Unlimited Gems and Coins
-
Gems and coins are the main currencies in the game, which you can use to buy new players, upgrade your skills, unlock new stadiums, and more. However, in the original game, you have to earn them by playing matches, completing missions, or watching ads. This can be time-consuming and frustrating, especially if you want to get the best players and teams quickly. With Cricket League Mod Apk, you don't have to worry about that. You will get unlimited gems and coins as soon as you start the game, and you can spend them as much as you want without running out.
-
Unlocked All Players and Teams
-
In the original game, you have to unlock new players and teams by spending gems and coins, or by winning certain tournaments. This can be challenging and tedious, especially if you want to play with your favorite players and teams. With Cricket League Mod Apk, you don't have to do that. You will get access to all the players and teams in the game, including the legendary ones. You can choose any player or team you want, and customize them according to your preferences.
-
Realistic Cricket Experience
-
Cricket League Mod Apk offers a realistic cricket experience that will make you feel like you are playing on a real pitch. The game has high-quality graphics, sound effects, animations, and physics that will immerse you in the game. The game also has various modes, such as T20, ODI, Test, World Cup, IPL, PSL, BBL, CPL, and more. You can play in different weather conditions, day or night matches, different pitch types, and different difficulty levels. You can also use different strategies, such as batting order, bowling order, fielding positions, power play, etc.
-
How to Download and Install Cricket League Mod Apk?
-
If you are interested in playing Cricket League Mod Apk, you can follow these simple steps to download and install it on your Android device:
-
cricket league mod apk free download with unlimited gems and coins
-download cricket league mod apk latest version with unlimited gems and coins
-how to get unlimited gems and coins in cricket league mod apk
-cricket league hack mod apk download with unlimited gems and coins
-cricket league mod apk unlimited everything (gems, coins, money)
-cricket league 2023 mod apk download with unlimited gems and coins
-cricket league mod apk for android with unlimited gems and coins
-cricket league mod apk offline with unlimited gems and coins
-cricket league premium mod apk download with unlimited gems and coins
-cricket league pro mod apk with unlimited gems and coins
-cricket league mod apk unlimited gems and coins no root
-cricket league mod apk unlimited gems and coins no verification
-cricket league mod apk unlimited gems and coins online
-cricket league mod apk unlimited gems and coins generator
-cricket league mod apk unlimited gems and coins hack
-cricket league 3d mod apk download with unlimited gems and coins
-cricket league fantasy mod apk with unlimited gems and coins
-cricket league manager mod apk download with unlimited gems and coins
-cricket league simulator mod apk with unlimited gems and coins
-cricket league world cup mod apk download with unlimited gems and coins
-best cricket league mod apk with unlimited gems and coins
-real cricket league mod apk download with unlimited gems and coins
-super cricket league mod apk with unlimited gems and coins
-ultimate cricket league mod apk download with unlimited gems and coins
-world cricket league mod apk with unlimited gems and coins
-
Step 1: Enable Unknown Sources
-
Before you can install any mod apk file on your device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, then security, then unknown sources, and turn it on.
-
Step 2: Download the Mod Apk File
-
Next, you need to download the mod apk file of Cricket League from a reliable source. You can search for it on Google, or use the link below to download it directly. The file size is about 100 MB, so make sure you have enough space on your device.
After you have downloaded the mod apk file, you need to install it on your device. To do this, locate the file in your file manager, and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on install, and wait for a few seconds until the installation is complete.
-
Step 4: Launch the Game and Enjoy
-
Finally, you can launch the game and enjoy unlimited gems and coins, and all the features of Cricket League Mod Apk. You will see a welcome screen with some instructions and tips. You can skip them or read them as you wish. Then, you can create your profile, choose your team, and start playing the game.
-
Why Should You Play Cricket League Mod Apk?
-
Cricket League Mod Apk is a great game for cricket lovers who want to have more fun and excitement in their mobile gaming. Here are some of the pros and cons of playing this game:
-
Pros of Cricket League Mod Apk
-
Free and Easy to Play
-
One of the best things about Cricket League Mod Apk is that it is free and easy to play. You don't have to spend any money to download or play this game. You also don't have to worry about any complicated controls or rules. The game has a simple and intuitive interface that will guide you through the game. You can also adjust the settings according to your preferences and comfort level.
-
Fun and Engaging Gameplay
-
Another great thing about Cricket League Mod Apk is that it has a fun and engaging gameplay that will keep you hooked for hours. The game has various modes, tournaments, challenges, and missions that will test your skills and strategy. You can also play with other players online or offline, and chat with them using the in-game chat feature. The game also has a leaderboard and achievements system that will motivate you to improve your performance and rank.
-
Customizable and Diverse Options
-
A third great thing about Cricket League Mod Apk is that it has customizable and diverse options that will make your game more enjoyable and unique. You can choose from hundreds of players and teams, each with their own stats and abilities. You can also customize your jerseys, logos, bats, balls, etc. You can also play in different stadiums, weather conditions, pitch types, etc.
-
Cons of Cricket League Mod Apk
-
Requires Internet Connection
-
One of the drawbacks of Cricket League Mod Apk is that it requires an internet connection to play online mode or update the game. This can be a problem if you have a slow or unstable internet connection, or if you don't have access to Wi-Fi or mobile data. You may experience lagging, crashing, or loading issues while playing the game.
-
May Contain Ads and Bugs
-
Another drawback of Cricket League Mod Apk is that it may contain ads and bugs that can affect your gaming experience. Since this is a mod apk file, it may not be compatible with some devices or versions of Android. It may also have some glitches or errors that can cause the game to freeze or crash. You may also see some ads popping up while playing the game, which can be annoying or distracting.
-
Conclusion
-
Cricket League Mod Apk is a fantastic cricket simulation game that will give you unlimited gems and coins, and access to all the players and teams in the game. You can also enjoy realistic cricket matches and leagues with your friends online or offline. The game has high-quality graphics, sound effects, animations, and physics that will make you feel like you are playing on a real pitch. The game also has various modes, such as T20, ODI, Test, World Cup, IPL, PSL, BBL, CPL, and more.
-
If you are a cricket fan who wants to have more fun and excitement in your mobile gaming, then you should definitely try Cricket League Mod Ap k. However, you should also be aware of the drawbacks of this game, such as requiring an internet connection, and containing ads and bugs. You should also be careful about downloading and installing mod apk files from unknown sources, as they may contain viruses or malware that can harm your device or data.
-
We hope this article has helped you learn more about Cricket League Mod Apk, and how to download and enjoy it. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
FAQs
-
Here are some of the frequently asked questions about Cricket League Mod Apk:
-
-
Is Cricket League Mod Apk safe to download and install?
-
Cricket League Mod Apk is generally safe to download and install, as long as you get it from a reliable source. However, you should always scan the file with an antivirus or malware detector before installing it, and backup your data before playing the game. You should also avoid giving any personal or sensitive information to the game or its developers.
-
Is Cricket League Mod Apk legal to play?
-
Cricket League Mod Apk is not legal to play, as it violates the terms and conditions of the original game and its developers. By playing this game, you are infringing on the intellectual property rights of the original game and its developers. You may also face legal consequences if you are caught playing this game by the authorities or the original game developers.
-
How can I update Cricket League Mod Apk?
-
Cricket League Mod Apk does not have an official update system, as it is not supported by the original game developers. You may have to download and install a new mod apk file every time there is a new version of the original game. However, this may not work if the new version of the original game has some changes or features that are incompatible with the mod apk file.
-
Can I play Cricket League Mod Apk with my friends?
-
Yes, you can play Cricket League Mod Apk with your friends online or offline. You can join or create online matches with other players from around the world, or use local multiplayer mode to play with your friends using Bluetooth or Wi-Fi. However, you may not be able to play with your friends who are using the original game, as they may have different versions or features than you.
-
Can I play Cricket League Mod Apk on PC or iOS devices?
-
No, you cannot play Cricket League Mod Apk on PC or iOS devices, as it is only designed for Android devices. You may be able to use some emulators or converters to run this game on PC or iOS devices, but they may not work properly or cause some issues. We do not recommend using any emulators or converters to play this game on PC or iOS devices.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Burn Belly Fat and Sculpt Your 6 Pack Abs with This Amazing APK.md b/spaces/1phancelerku/anime-remove-background/Burn Belly Fat and Sculpt Your 6 Pack Abs with This Amazing APK.md
deleted file mode 100644
index 5531ba604ba350fd0fba5ff280c9c291eaff4bb9..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Burn Belly Fat and Sculpt Your 6 Pack Abs with This Amazing APK.md
+++ /dev/null
@@ -1,89 +0,0 @@
-
-
6 Pack Abs APK - What Is It and How Does It Work?
-
If you want to get six pack abs without going to the gym or spending money on expensive equipment, you might want to try 6 pack abs apk. This is a free app that provides you with a 30-day workout plan that targets your upper and lower abdominal muscles. The app also features animations and video guides that show you how to perform each exercise correctly and effectively. You can also customize your workout reminders and track your progress automatically.
But why should you care about getting six pack abs in the first place? Well, there are many benefits of having six pack abs, both physical and psychological. Here are some of them:
-
-
Improved posture and core strength. Having six pack abs means having strong abdominal muscles that support your spine and pelvis. This can help you improve your posture, prevent lower back pain, and enhance your core strength.
-
Better sporting performance and agility. Having six pack abs also means having more power transfer between your upper and lower body. This can help you improve your sporting performance, agility, balance, coordination, and speed.
-
Increased basal metabolic rate and fat burning. Having six pack abs also means having more muscle mass in your body. This can help you increase your basal metabolic rate, which is the amount of calories you burn at rest. This can also help you burn more fat and reduce your body fat percentage, which is necessary to reveal your six pack abs.
-
-
As you can see, 6 pack abs apk is a great app that can help you get six pack abs and enjoy many benefits. But before you download it and start working out, there are some myths about six pack abs that you need to be aware of.
-
The Myths About 6 Pack Abs APK
-
There are many myths and misconceptions about six pack abs that can prevent you from achieving your goal or even harm your health. Here are some of the most common ones and why they are not true:
Some people think that they need to take a fat burner supplement or follow a low carb diet to get six pack abs. This is not true. Fat burners are not effective or safe, as they can cause side effects such as insomnia, anxiety, high blood pressure, and liver damage. Low carb diets are also not necessary or sustainable, as they can cause fatigue, mood swings, muscle loss, and nutrient deficiencies. The best way to get six pack abs is to eat a healthy, balanced diet that provides enough calories and macronutrients (protein, carbs, and fats) for your body and activity level.
-
You Can Crunch Your Way to a Six Pack
-
Some people think that they can crunch their way to a six pack by doing hundreds of crunches every day. This is not true. Crunches are not enough to reveal your six pack abs, as they only target one part of your abdominal muscles (the rectus abdominis). To get six pack abs, you need to work out all the muscles in your core, including the obliques, the transverse abdominis, and the lower back. You also need to reduce your body fat percentage by doing cardio and strength training exercises that burn calories and build muscle mass.
-
You Must Train Abs Every Day or Use Special Equipment
-
Some people think that they must train their abs every day or use special equipment such as ab rollers, ab machines, or ab belts to get six pack abs. This is not true. Training your abs every day is not necessary or beneficial, as it can lead to overtraining, injury, and muscle imbalance. Your abs need rest and recovery just like any other muscle group. You should train your abs two to three times a week with adequate rest days in between. Using special equipment is also not required or effective, as they can limit your range of motion, isolate your muscles, and create false expectations. The best way to train your abs is to use bodyweight exercises that challenge your core stability, strength, and endurance.
-
The Tips for Using 6 Pack Abs APK Effectively
-
Now that you know what 6 pack abs apk is and how it works, and what are the myths about six pack abs that you should avoid, here are some tips for using 6 pack abs apk effectively:
-
Follow the Workout Plan Consistently
-
The first tip is to follow the workout plan provided by 6 pack abs apk consistently. The app offers a 30-day workout plan that consists of three levels of difficulty (beginner, intermediate, and advanced) and various exercises for the upper and lower abs. Each workout takes about 10 minutes and can be done at home or anywhere else. The app also provides animations and video guides that show you how to perform each exercise correctly and effectively. To get the best results from 6 pack abs apk, you should follow the workout plan without skipping any days or sessions. Consistency is key to getting results.
-
Eat a Healthy, Balanced Diet
-
The second tip is to eat a healthy, balanced diet that supports your workout plan and your goal of getting six pack abs. As mentioned earlier, nutrition is important for muscle growth and fat loss. You should eat enough calories and macronutrients (protein, carbs, and fats) for your body and activity level. You should also eat foods that are rich in vitamins, minerals, antioxidants, and fiber. Some examples of healthy foods are lean meats, eggs, fish, dairy products, nuts, seeds, beans, fruits, vegetables, whole grains, and healthy oils. You should also avoid foods that are high in sugar, salt, trans fats, and processed ingredients. Some examples of unhealthy foods are candy, soda, chips, cookies, cakes, fast food, and fried food. Eating a healthy, balanced diet can help you get six pack abs by providing your body with the nutrients it needs to function properly and recover from your workouts.
-
Drink Plenty of Water and Get Enough Sleep
-
The third tip is to drink plenty of water and get enough sleep to support your workout plan and your goal of getting six pack abs. Water is essential for your body, as it helps you stay hydrated, regulate your body temperature, flush out toxins, transport nutrients, and lubricate your joints. You should drink at least eight glasses of water a day, or more if you exercise or sweat a lot. Water can also help you get six pack abs by suppressing your appetite, boosting your metabolism, and preventing water retention. Sleep is also vital for your body, as it helps you restore your energy, repair your muscles, consolidate your memory, and regulate your hormones. You should get at least seven to nine hours of sleep a night, or more if you need it. Sleep can also help you get six pack abs by reducing your stress levels, improving your mood, enhancing your performance, and preventing cravings.
-
Conclusion
-
6 pack abs apk is a free app that can help you get six pack abs in 30 days by providing you with a workout plan that targets your upper and lower abdominal muscles. The app also features animations and video guides that show you how to perform each exercise correctly and effectively. You can also customize your workout reminders and track your progress automatically.
-
Getting six pack abs can provide you with many benefits, such as improved posture and core strength, better sporting performance and agility, increased basal metabolic rate and fat burning, and more confidence and self-esteem. However, to get six pack abs, you need to avoid some myths and misconceptions that can hinder your progress or harm your health. These include the myths that you need a fat burner or a low carb diet, that you can crunch your way to a six pack, and that you must train abs every day or use special equipment.
-
To use 6 pack abs apk effectively, you need to follow some tips that can help you achieve your goal faster and easier. These include the tips of following the workout plan consistently, eating a healthy, balanced diet, drinking plenty of water and getting enough sleep.
-
If you follow these tips and use 6 pack abs apk regularly, you will be able to get six pack abs in no time. So what are you waiting for? Download 6 pack abs apk today and start working on your dream body!
-
FAQs
-
Here are some frequently asked questions about 6 pack abs apk:
-
-
How do I download 6 pack abs apk?
-
You can download 6 pack abs apk from the Google Play Store or the App Store for free. Just search for "6 pack abs apk" and install it on your device.
-
How do I use 6 pack abs apk?
-
You can use 6 pack abs apk by following the instructions on the app. First, you need to choose your level of difficulty (beginner, intermediate, or advanced). Then, you need to start the workout plan that consists of various exercises for the upper and lower abs. You can also set reminders for your workouts and track your progress automatically.
-
How long does it take to see results with 6 pack abs apk?
-
The time it takes to see results with 6 pack abs apk depends on several factors, such as your starting point, your diet, your exercise routine, your genetics, and your commitment. However, if you follow the workout plan consistently, eat a healthy, balanced diet, drink plenty of water and get enough sleep, you should be able to see some results in as little as four weeks. Of course, the more you stick to the plan and the more you challenge yourself, the faster and better your results will be.
-
Is 6 pack abs apk safe and effective?
-
Yes, 6 pack abs apk is safe and effective, as it is based on scientific research and proven methods. The app provides you with a workout plan that targets your abdominal muscles with various exercises that are suitable for different levels of difficulty. The app also provides you with animations and video guides that show you how to perform each exercise correctly and effectively. The app also allows you to customize your workout reminders and track your progress automatically. The app does not require any special equipment or supplements, and it does not promote any unhealthy or unrealistic practices.
-
Can I use 6 pack abs apk with other fitness apps or programs?
-
Yes, you can use 6 pack abs apk with other fitness apps or programs, as long as they are compatible and complementary. For example, you can use 6 pack abs apk with a running app or a yoga app to add some cardio and flexibility training to your routine. You can also use 6 pack abs apk with a weight lifting app or a bodyweight app to add some strength and resistance training to your routine. However, you should not use 6 pack abs apk with another ab workout app or program, as this can lead to overtraining, injury, and muscle imbalance. You should also not use 6 pack abs apk with an app or program that contradicts or conflicts with the principles and guidelines of 6 pack abs apk.
-
What if I have questions or feedback about 6 pack abs apk?
-
If you have any questions or feedback about 6 pack abs apk, you can contact the developers of the app through their email address or their social media accounts. You can also leave a review or a rating on the Google Play Store or the App Store to share your experience and opinion with other users. The developers of 6 pack abs apk are always happy to hear from their users and to improve their app based on their suggestions and feedback.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Topaz AI and Learn How to Use It to Improve Your Image Quality in Minutes.md b/spaces/1phancelerku/anime-remove-background/Download Topaz AI and Learn How to Use It to Improve Your Image Quality in Minutes.md
deleted file mode 100644
index 6a2de4aab7dfbe6a6dd2ac3bd65e745d695929c4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Topaz AI and Learn How to Use It to Improve Your Image Quality in Minutes.md
+++ /dev/null
@@ -1,129 +0,0 @@
-
-
Download Topaz AI: How to Enhance Your Photos and Videos with Artificial Intelligence
-
Do you want to improve the quality of your photos and videos with the power of artificial intelligence? If so, you should download Topaz AI, a suite of software products that use cutting-edge image enhancement technology to magically transform your images and videos. In this article, you will learn what Topaz AI is, how it works, how to download and install it on your computer, how to use it from your image editor, and how to apply it to different scenarios. By the end of this article, you will be able to enhance your photos and videos like never before with Topaz AI.
-
Topaz Photo AI: Maximize Your Image Quality on Autopilot
-
Topaz Photo AI is a collection of four products that use artificial intelligence to sharpen, remove noise, and increase the resolution of your photos. These products are:
Gigapixel AI: This product allows you to upscale your images by up to 6x while increasing actual resolution and real detail. You can use it to enlarge your photos for printing, cropping, or restoring old photos.
-
DeNoise AI: This product allows you to remove noise from your images while preserving detail and color. You can use it to shoot anywhere in any light without worrying about noise.
-
Sharpen AI: This product allows you to sharpen your images while keeping them natural. You can use it to reverse motion and focus blur, or simply enhance the sharpness of your photos.
-
Video Enhancer AI: This product allows you to upscale, denoise, sharpen, and deinterlace your videos with stunning results. You can use it to convert SD to HD or HD to 4k, or simply improve the quality of your videos.
-
-
Topaz Photo AI uses deep learning algorithms that have been trained on millions of data points to understand what image quality means. Unlike regular image processing filters that often remove details and boost noise/artifacts, Topaz Photo AI enhances image quality by analyzing and enhancing the most important aspects of each image. You can use Topaz Photo AI as a standalone application or as a plug-in for your favorite image editor.
-
Topaz Video AI: Create Naturally Better Video Quality with AI
-
Topaz Video AI is a product that uses artificial intelligence to upscale, denoise, sharpen, and deinterlace your videos. It is based on the same technology as Topaz Photo AI, but optimized for video processing. You can use Topaz Video AI to:
-
-
Upscale your videos: You can increase the resolution of your videos by up to 4x while preserving or enhancing the original quality. You can use it to convert SD to HD or HD to 4k, or simply make your videos look better on larger screens.
-
Denoise your videos: You can remove visible image noise from your videos while retaining details and colors. You can use it to improve the quality of videos shot in low-light conditions, or reduce the compression artifacts from online videos.
-
Sharpen your videos: You can increase the perceived sharpness of your videos by applying a natural-looking sharpening effect. You can use it to make your videos look more crisp and clear, or correct the softness caused by upscaling or noise reduction.
-
Deinterlace your videos: You can convert interlaced videos to progressive ones while preserving image definition and reducing artifacts. You can use it to improve the quality of videos from older sources, such as DVDs or TV broadcasts.
-
-
Topaz Video AI uses deep learning algorithms that have been trained on thousands of hours of video data to understand what video quality means. Unlike regular video processing filters that often introduce artifacts and distortions, Topaz Video AI enhances video quality by analyzing and improving the most important aspects of each frame. You can use Topaz Video AI as a standalone application or as an external editor for your favorite video editor.
-
How to Download and Install Topaz AI on Your Computer
-
If you want to download and install Topaz AI on your computer, you need to follow these steps:
-
-
Visit the official website of Topaz Labs: Go to https://topazlabs.com/ and click on the "Download" button at the top right corner of the page.
-
Select the products you want to download: You will see a list of all the products available from Topaz Labs, including Topaz Photo AI and Topaz Video AI. You can select one or more products by clicking on the checkboxes next to them. You can also download a free trial version of each product by clicking on the "Try Free" button below them.
-
Enter your email address and password: If you already have an account with Topaz Labs, you can enter your email address and password to log in. If you don't have an account, you can create one by clicking on the "Create Account" button and filling in the required information.
-
Download the installer file: After logging in or creating an account, you will see a download link for each product you selected. Click on the link to download the installer file for your operating system (Windows or Mac).
-
Run the installer file: After downloading the installer file, locate it on your computer and double-click on it to run it. Follow the instructions on the screen to install the product on your computer.
-
Activate the product: After installing the product, launch it from your desktop or start menu. You will see a window asking you to activate the product with your license key. If you have purchased the product, you can enter your license key in the field provided and click on "Activate". If you are using a free trial version, you can click on "Start Trial" to activate it for 30 days.
-
-
Congratulations! You have successfully downloaded and installed Topaz AI on your computer. Now you can start using it to enhance your photos and videos with artificial intelligence.
-
How to Access Topaz AI from Your Image Editor
-
If you want to access Topaz AI from your image editor, such as Photoshop, Lightroom, or other compatible editors, you need to follow these steps:
-
download topaz labs photo ai
-download topaz video ai for windows
-download topaz gigapixel ai free trial
-download topaz sharpen ai mac
-download topaz denoise ai crack
-download topaz photo ai full version
-download topaz video ai latest version
-download topaz gigapixel ai portable
-download topaz sharpen ai review
-download topaz denoise ai coupon
-download topaz photo ai tutorial
-download topaz video ai system requirements
-download topaz gigapixel ai update
-download topaz sharpen ai before and after
-download topaz denoise ai vs lightroom
-download topaz photo ai bundle
-download topaz video ai reddit
-download topaz gigapixel ai license key
-download topaz sharpen ai plugin
-download topaz denoise ai presets
-download topaz photo ai online
-download topaz video ai alternative
-download topaz gigapixel ai comparison
-download topaz sharpen ai serial number
-download topaz denoise ai settings
-download topaz photo ai software
-download topaz video ai beta
-download topaz gigapixel ai tutorial
-download topaz sharpen ai standalone
-download topaz denoise ai trial
-download topaz photo ai app
-download topaz video ai blog
-download topaz gigapixel ai coupon code
-download topaz sharpen ai discount code
-download topaz denoise ai manual
-download topaz photo ai for android
-download topaz video ai for macos
-download topaz gigapixel ai for photoshop
-download topaz sharpen ai for lightroom
-download topaz denoise ai for premiere pro
-download topaz photo ai guide
-download topaz video ai help center
-download topaz gigapixel ai installation guide
-download topaz sharpen ai keygen
-download topaz denoise ai license code
-download topaz photo ai price
-download topaz video ai release notes
-download topaz gigapixel ai support forum
-
-
Install Topaz AI as a plug-in or external editor: When you install Topaz AI on your computer, it will automatically detect and install itself as a plug-in or external editor for some of the most popular image editors, such as Photoshop and Lightroom. If you want to install it for other editors, you can manually install it by following the instructions on the Topaz Labs support page.
-
Open your image in your image editor: Launch your image editor and open the image you want to enhance with Topaz AI.
-
Access Topaz AI from your image editor: Depending on your image editor, you can access Topaz AI in different ways. For example, in Photoshop, you can go to Filter > Topaz Labs > and select the product you want to use. In Lightroom, you can right-click on the image and go to Edit In > and select the product you want to use. For other editors, you can refer to the Topaz Labs support page for more details.
-
Edit your image with Topaz AI: After accessing Topaz AI from your image editor, you will see a new window with the interface of the product you selected. You can use the tools and settings on the left panel to adjust the parameters of the enhancement, and preview the results on the main panel. You can also compare the before and after images by using the buttons on the bottom panel.
-
Save and return to your image editor: After editing your image with Topaz AI, you can save and return to your image editor by clicking on the "Apply" button on the top right corner of the window. Your image will be updated with the changes made by Topaz AI.
-
-
That's it! You have successfully accessed and used Topaz AI from your image editor. Now you can enjoy the benefits of artificial intelligence for your photos.
-
How to Use Topaz AI to Enhance Your Photos and Videos
-
If you want to use Topaz AI to enhance your photos and videos, you need to follow these steps:
-
-
Select the product that suits your needs: Depending on what you want to achieve with your photos or videos, you can choose from different products within Topaz Photo AI or Topaz Video AI. For example, if you want to upscale your images, you can use Gigapixel AI. If you want to remove noise from your videos, you can use Video Enhancer AI.
-
Open your photo or video in Topaz AI: You can open your photo or video in Topaz AI either as a standalone application or as a plug-in or external editor for your image or video editor. See the previous section for more details on how to access Topaz AI from your editor.
-
Select the mode that suits your needs: Depending on the product you are using, you can select from different modes that offer different levels of enhancement or customization. For example, in Gigapixel AI, you can choose from Auto, Manual, or Custom modes. In Video Enhancer AI, you can choose from Standard Quality, High Quality, or Custom Quality modes.
-
Adjust the settings that suit your needs: Depending on the mode and product you are using, you can adjust various settings that affect the outcome of the enhancement. For example, in Gigapixel AI, you can adjust the scale factor, output size, noise reduction, face refinement, and more. In Video Enhancer AI, you can adjust the output format, frame rate, bitrate, and more.
-
Preview and compare the results: Depending on the product you are using, you can preview and compare the results of the enhancement before applying it. For example, in Gigapixel AI, you can zoom in and out of the image and see how it looks at different resolutions. In Video Enhancer AI, you can play back a short clip of the video and see how it looks at different qualities.
-
Apply and save the results: After previewing and comparing the results, you can apply and save them by clicking on the "Apply" or "Save" button on the top right corner of the window. Your photo or video will be enhanced and saved with Topaz AI.
-
-
Congratulations! You have successfully used Topaz AI to enhance your photos or videos with artificial intelligence. Now you can enjoy the improved quality of your images and videos.
-
Conclusion: Why You Should Download Topaz AI Today
-
Topaz AI is a suite of software products that use artificial intelligence to enhance your photos and videos with amazing results. With Topaz AI, you can:
-
-
Upscale your images and videos by up to 6x or 4x respectively while increasing actual resolution and real detail.
-
Remove noise from your images and videos while preserving detail and color in any lighting condition.
-
Sharpen your images and videos while keeping them natural and reversing motion and focus blur.
-
Deinterlace your videos while preserving image definition and reducing artifacts.
-
Use Topaz AI as a standalone application or as a plug-in or external editor for your favorite image or video editor.
-
-
Topaz AI is easy to use, fast, and reliable. It uses deep learning algorithms that have been trained on millions of data points to understand and improve image and video quality. It offers different modes and settings that allow you to customize the enhancement according to your needs and preferences. It also lets you preview and compare the results before applying them, so you can see the difference for yourself.
-
If you want to take your photos and videos to the next level, you should download Topaz AI today. You can try it for free for 30 days, or buy it for a reasonable price. You will be amazed by the results you can achieve with Topaz AI.
-
FAQs: Frequently Asked Questions about Topaz AI
-
Here are some of the most common questions and answers about Topaz AI:
-
-
What are the system requirements for Topaz AI?
-
Topaz AI requires a Windows or Mac computer with at least 8 GB of RAM, 2 GB of VRAM, and an OpenGL 3.3 compatible graphics card. For optimal performance, it is recommended to have 16 GB of RAM, 4 GB of VRAM, and an NVIDIA or AMD graphics card with CUDA or OpenCL support.
-
How long does it take to process an image or video with Topaz AI?
-
The processing time depends on several factors, such as the size and resolution of the image or video, the mode and settings of the product, and the speed and power of your computer. Generally, it takes a few seconds to a few minutes to process an image, and a few minutes to a few hours to process a video.
-
Can I batch process multiple images or videos with Topaz AI?
-
Yes, you can batch process multiple images or videos with Topaz AI. You can do this by selecting multiple files in the file browser of the standalone application, or by using the batch processing feature of your image or video editor.
-
Can I use Topaz AI on my smartphone or tablet?
-
No, Topaz AI is not available for mobile devices. It is only compatible with Windows or Mac computers.
-
Where can I find more information and support for Topaz AI?
-
You can find more information and support for Topaz AI on the Topaz Labs website. There you can access the user guides, tutorials, forums, blogs, and customer service for each product.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Everskies Oyna A Fun and Creative Way to Express Yourself Online.md b/spaces/1phancelerku/anime-remove-background/Everskies Oyna A Fun and Creative Way to Express Yourself Online.md
deleted file mode 100644
index 0388c79236bf2283efd1642f822f1f7ac56d81d8..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Everskies Oyna A Fun and Creative Way to Express Yourself Online.md
+++ /dev/null
@@ -1,106 +0,0 @@
-
-
Everskies Oyna: A Guide to the Virtual Dress Up Game
-
Do you love dressing up, designing clothes, and meeting new people? If so, you might want to try Everskies Oyna, a virtual dress up game that lets you create your own avatar, design your own fashion items, participate in outfit competitions and events, earn money and XP, and find people with similar interests and meet new friends. Everskies Oyna is a fun and creative game for everyone who enjoys fashion, art, and socializing. In this article, we will show you how to play Everskies Oyna and give you some tips and tricks to make the most of your experience.
-
How to Create Your Own Avatar in Everskies
-
One of the first things you need to do in Everskies Oyna is to create your own avatar. Your avatar is your virtual representation in the game world, and you can customize it to match your personality and style. Here are the steps to create your own avatar in Everskies:
Step 1: Choose your gender, skin tone, and facial features. You can choose from male or female avatars, and select from different skin tones, face shapes, eyebrows, noses, mouths, ears, freckles, moles, scars, etc.
-
Step 2: Customize your hair, eyes, and makeup. You can choose from different hair styles, colors, lengths, bangs, highlights, etc. You can also change your eye shape, color, size, lashes, etc. You can also apply different makeup products such as eyeshadow, eyeliner, mascara, blush, lipstick, etc.
-
Step 3: Dress up your avatar with different outfits, accessories, and shoes. You can choose from over 150000 items to dress up your avatar with different fashion outfits, accessories, and shoes. You can mix and match different items to create your own unique look. You can also save your outfits for later use or share them with other users.
-
-
Creating your own avatar in Everskies Oyna is easy and fun. You can express yourself through your avatar and show off your style to the world.
-
How to Design Your Own Fashion Items in Everskies
-
Another cool feature of Everskies Oyna is that you can design your own fashion items and sell them in the shop or trade them with other users. You can create your own clothing, accessories, shoes, hair, makeup, etc. and show off your creativity and talent. Here are the steps to design your own fashion items in Everskies:
-
-
Step 1: Go to the Creative tab and select an item template. You can choose from different categories such as tops, bottoms, dresses, jackets, hats, bags, jewelry, etc. You can also filter by gender, style, season, etc.
-
Step 2: Use the drawing tools and filters to create your own design. You can use different tools such as pencil, brush, eraser, fill, color picker, etc. to draw your design on the item template. You can also use different filters such as hue, saturation, brightness, contrast, etc. to adjust the color and tone of your design.
-
Step 3: Save and submit your item for approval. You can name your item, add a description, and set a price for it. You can also preview how it looks on different avatars. Once you are happy with your design, you can save it and submit it for approval. The approval process may take up to 24 hours, and you will be notified if your item is accepted or rejected.
-
-
Designing your own fashion items in Everskies Oyna is a great way to unleash your inner designer and earn some money and XP. You can also get feedback from other users and improve your skills.
-
How to Participate in Outfit Competitions and Events in Everskies
-
If you want to challenge yourself and compete with other users in Everskies Oyna, you can participate in outfit competitions and events. Outfit competitions and events are themed contests that require you to create an outfit that matches the theme and criteria. You can win prizes such as money, XP, items, badges, etc. Here are the steps to participate in outfit competitions and events in Everskies:
-
-
Step 1: Check the event calendar and the competition rules. You can find the event calendar on the homepage or on the Events tab. You can see the current and upcoming competitions and events, as well as their themes, criteria, deadlines, prizes, etc. You can also read the competition rules and guidelines before entering.
-
Step 2: Create an outfit that matches the theme and criteria. You can use any items that you own or buy from the shop to create your outfit. You can also use items that you designed yourself or traded with other users. Make sure that your outfit follows the theme and criteria of the competition or event.
-
Step 3: Vote for other entries and wait for the results. After you submit your entry, you can vote for other entries by giving them stars from one to five. You can vote for up to 10 entries per day. The more you vote, the more XP you earn. The results of the competition or event will be announced after the deadline, and you will be notified if you won any prizes.
-
-
Participating in outfit competitions and events in Everskies Oyna is a fun and rewarding way to test your fashion sense and creativity. You can also get inspired by other users' outfits and discover new styles.
-
How to Earn Money and XP in Everskies
-
Money and XP are two important currencies in Everskies Oyna that allow you to buy items from the shop, level up your avatar, and access more features in the game. There are many ways to earn money and XP in Everskies Oyna, such as:
-
-
Step 1: Play mini-games such as Memory, Tic Tac Toe, and Planet Popper. You can find the mini-games on the Games tab or on the homepage. You can play the mini-games for free or for a small fee, and you can win money and XP depending on your score and performance.
-
Step 2: Sell your fashion items in the shop or trade them with other users. You can sell your fashion items that you designed yourself or bought from the shop in the shop or in the trade center. You can set your own price for your items, and you can earn money and XP when someone buys or trades them.
-
Step 3: Join clubs, forums, chat rooms, and group messages to socialize and get tips. You can join or create clubs, forums, chat rooms, and group messages that match your interests and hobbies. You can interact with other users, share your outfits, give feedback, and have fun. You can also get tips and tricks from other users on how to play Everskies Oyna better.
-
-
Earning money and XP in Everskies Oyna is easy and enjoyable. You can use your money and XP to buy more items, level up your avatar, and unlock more features in the game.
-
everskies oyna online
-everskies oyna ücretsiz
-everskies oyna mobil
-everskies oyna nasıl
-everskies oyna türkçe
-everskies oyna apk
-everskies oyna indir
-everskies oyna kaydol
-everskies oyna giriş yap
-everskies oyna hileleri
-everskies oyna kıyafet yarışması
-everskies oyna avatar oluştur
-everskies oyna mini oyunlar
-everskies oyna forumlar
-everskies oyna kulüpler
-everskies oyna sohbet odaları
-everskies oyna grup mesajları
-everskies oyna sanal para kazan
-everskies oyna tasarım sat
-everskies oyna öğeler al sat
-everskies oyna benzeri oyunlar
-everskies oyna yorumlar
-everskies oyna ipuçları
-everskies oyna rehberi
-everskies oyna sorunları çözümü
-everskies oyna güncellemeleri
-everskies oyna haberleri
-everskies oyna etkinlik takvimi
-everskies oyna özel setler
-everskies oyna starpass nedir
-everskies oyna burç yarışması
-everskies oyna doğum taşı yarışması
-everskies oyna doğa korkutucu yarışması
-everskies oyna gurur ayı kutlama
-everskies oyna satranç zamanı yarışması
-everskies oyna moda bebekleri yarışması
-everskies oyna deniz kızı belki yarışması
-everskies oyna zümrüt yarışması
-everskies oyna everchanted orman yarışması
-everskies oyna elmas yarışması
-everskies oyna altskyler yarışması
-everskies oyna akvaryum yarışması
-everskies oyna hissediyorum tatlı yarışması
-everskies oyna kırmızı halı yarışması
-everskies oyna dükkan güncellemeleri
-
How to Find People with Similar Interests and Meet New Friends in Everskies
-
One of the best things about Everskies Oyna is that you can find people with similar interests and meet new friends from all over the world. Everskies Oyna is a friendly and welcoming community that supports diversity and creativity. You can connect with other users who share your passion for fashion, art, music, games, etc. Here are the steps to find people with similar interests and meet new friends in Everskies:
-
-
Step 1: Browse the clubs, forums, chat rooms, and group messages by category or keyword. You can find the clubs, forums, chat rooms, and group messages on the Community tab or on the homepage. You can browse them by category such as fashion, art, music, games, etc. or by keyword such as anime, kpop, harry potter, etc.
-
Step 2: Join or create a club, forum, chat room, or group message that suits your interests. You can join or create a club, forum, chat room, or group message that matches your interests and hobbies. You can also invite other users to join or create them with you.
-
Step 3: Interact with other users, share your outfits, give feedback, and have fun. You can interact with other users who are members of the same club, forum, chat room, or group message as you. You can share your outfits, give feedback, and have fun. You can also send private messages to other users, add them as friends, or block them if you don't like them.
-
-
Finding people with similar interests and meeting new friends in Everskies Oyna is a wonderful way to expand your social circle and enjoy the game more. You can also learn from other users and discover new things.
-
Conclusion: Everskies Oyna is a Fun and Creative Game for Everyone
-
Everskies Oyna is a virtual dress up game that lets you create your own avatar, design your own fashion items, participate in outfit competitions and events, earn money and XP, and find people with similar interests and meet new friends. Everskies Oyna is a fun and creative game for everyone who loves fashion, art, and socializing. You can play Everskies Oyna for free on your browser or download the app on your mobile device. You can also follow Everskies Oyna on social media platforms such as Instagram, Twitter, Facebook, etc. to get the latest news and updates. If you are looking for a game that allows you to express yourself, show off your style, and make new friends, you should definitely try Everskies Oyna today!
-
FAQs
-
-
Q: What is Everskies Oyna?
-
A: Everskies Oyna is a virtual dress up game that lets you create your own avatar, design your own fashion items, participate in outfit competitions and events, earn money and XP, and find people with similar interests and meet new friends.
-
Q: How can I play Everskies Oyna?
-
A: You can play Everskies Oyna for free on your browser or download the app on your mobile device. You can also follow Everskies Oyna on social media platforms such as Instagram, Twitter, Facebook, etc. to get the latest news and updates.
-
Q: How can I create my own avatar in Everskies Oyna?
-
A: You can create your own avatar in Everskies Oyna by choosing your gender, skin tone, facial features, hair, eyes, makeup, outfits, accessories, and shoes. You can customize your avatar to match your personality and style.
-
Q: How can I design my own fashion items in Everskies Oyna?
-
A: You can design your own fashion items in Everskies Oyna by going to the Creative tab and selecting an item template. You can use the drawing tools and filters to create your own design. You can save and submit your item for approval.
-
Q: How can I participate in outfit competitions and events in Everskies Oyna?
-
A: You can participate in outfit competitions and events in Everskies Oyna by checking the event calendar and the competition rules. You can create an outfit that matches the theme and criteria. You can vote for other entries and wait for the results.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py
deleted file mode 100644
index 0c02eaf70fc0140aca7925f621c29a496f491cae..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/utils/utils_config.py
+++ /dev/null
@@ -1,16 +0,0 @@
-import importlib
-import os.path as osp
-
-
-def get_config(config_file):
- assert config_file.startswith('configs/'), 'config file setting must start with configs/'
- temp_config_name = osp.basename(config_file)
- temp_module_name = osp.splitext(temp_config_name)[0]
- config = importlib.import_module("configs.base")
- cfg = config.config
- config = importlib.import_module("configs.%s" % temp_module_name)
- job_cfg = config.config
- cfg.update(job_cfg)
- if cfg.output is None:
- cfg.output = osp.join('work_dirs', temp_module_name)
- return cfg
\ No newline at end of file
diff --git a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/stable_diffusion_engine.py b/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/stable_diffusion_engine.py
deleted file mode 100644
index 04629a8d863c3a3a05a4665c5d3e3fe534aa6fd3..0000000000000000000000000000000000000000
--- a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/stable_diffusion_engine.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import inspect
-import numpy as np
-# openvino
-from openvino.runtime import Core
-# tokenizer
-from transformers import CLIPTokenizer
-# utils
-from tqdm import tqdm
-from huggingface_hub import hf_hub_download
-from diffusers import LMSDiscreteScheduler, PNDMScheduler
-import cv2
-
-
-def result(var):
- return next(iter(var.values()))
-
-
-class StableDiffusionEngine:
- def __init__(
- self,
- scheduler,
- model="4eJIoBek/stable-diffusion-v1-4-openvino-fp32",
- tokenizer="openai/clip-vit-large-patch14",
- device="CPU"
- ):
- self.tokenizer = CLIPTokenizer.from_pretrained(tokenizer)
- self.scheduler = scheduler
- # models
- self.core = Core()
- # text features
- self._text_encoder = self.core.read_model(
- hf_hub_download(repo_id=model, filename="text_encoder.xml"),
- hf_hub_download(repo_id=model, filename="text_encoder.bin")
- )
- self.text_encoder = self.core.compile_model(self._text_encoder, device)
- # diffusion
- self._unet = self.core.read_model(
- hf_hub_download(repo_id=model, filename="unet.xml"),
- hf_hub_download(repo_id=model, filename="unet.bin")
- )
- self.unet = self.core.compile_model(self._unet, device)
- self.latent_shape = tuple(self._unet.inputs[0].shape)[1:]
- # decoder
- self._vae_decoder = self.core.read_model(
- hf_hub_download(repo_id=model, filename="vae_decoder.xml"),
- hf_hub_download(repo_id=model, filename="vae_decoder.bin")
- )
- self.vae_decoder = self.core.compile_model(self._vae_decoder, device)
- # encoder
- self._vae_encoder = self.core.read_model(
- hf_hub_download(repo_id=model, filename="vae_encoder.xml"),
- hf_hub_download(repo_id=model, filename="vae_encoder.bin")
- )
- self.vae_encoder = self.core.compile_model(self._vae_encoder, device)
- self.init_image_shape = tuple(self._vae_encoder.inputs[0].shape)[2:]
-
- def _preprocess_mask(self, mask):
- h, w = mask.shape
- if h != self.init_image_shape[0] and w != self.init_image_shape[1]:
- mask = cv2.resize(
- mask,
- (self.init_image_shape[1], self.init_image_shape[0]),
- interpolation = cv2.INTER_NEAREST
- )
- mask = cv2.resize(
- mask,
- (self.init_image_shape[1] // 8, self.init_image_shape[0] // 8),
- interpolation = cv2.INTER_NEAREST
- )
- mask = mask.astype(np.float32) / 255.0
- mask = np.tile(mask, (4, 1, 1))
- mask = mask[None].transpose(0, 1, 2, 3)
- mask = 1 - mask
- return mask
-
- def _preprocess_image(self, image):
- image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- h, w = image.shape[1:]
- if h != self.init_image_shape[0] and w != self.init_image_shape[1]:
- image = cv2.resize(
- image,
- (self.init_image_shape[1], self.init_image_shape[0]),
- interpolation=cv2.INTER_LANCZOS4
- )
- # normalize
- image = image.astype(np.float32) / 255.0
- image = 2.0 * image - 1.0
- # to batch
- image = image[None].transpose(0, 3, 1, 2)
- return image
-
- def _encode_image(self, init_image):
- moments = result(self.vae_encoder.infer_new_request({
- "init_image": self._preprocess_image(init_image)
- }))
- mean, logvar = np.split(moments, 2, axis=1)
- std = np.exp(logvar * 0.5)
- latent = (mean + std * np.random.randn(*mean.shape)) * 0.18215
- return latent
-
- def __call__(
- self,
- prompt,
- init_image = None,
- mask = None,
- strength = 0.5,
- num_inference_steps = 32,
- guidance_scale = 7.5,
- eta = 0.0
- ):
- # extract condition
- tokens = self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True
- ).input_ids
- text_embeddings = result(
- self.text_encoder.infer_new_request({"tokens": np.array([tokens])})
- )
-
- # do classifier free guidance
- if guidance_scale > 1.0:
- tokens_uncond = self.tokenizer(
- "",
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True
- ).input_ids
- uncond_embeddings = result(
- self.text_encoder.infer_new_request({"tokens": np.array([tokens_uncond])})
- )
- text_embeddings = np.concatenate((uncond_embeddings, text_embeddings), axis=0)
-
- # set timesteps
- accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys())
- extra_set_kwargs = {}
- offset = 0
- if accepts_offset:
- offset = 1
- extra_set_kwargs["offset"] = 1
-
- self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs)
-
- # initialize latent latent
- if init_image is None:
- latents = np.random.randn(*self.latent_shape)
- init_timestep = num_inference_steps
- else:
- init_latents = self._encode_image(init_image)
- init_timestep = int(num_inference_steps * strength) + offset
- init_timestep = min(init_timestep, num_inference_steps)
- timesteps = np.array([[self.scheduler.timesteps[-init_timestep]]]).astype(np.long)
- noise = np.random.randn(*self.latent_shape)
- latents = self.scheduler.add_noise(init_latents, noise, timesteps)[0]
-
- if init_image is not None and mask is not None:
- mask = self._preprocess_mask(mask)
- else:
- mask = None
-
- # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas
- if isinstance(self.scheduler, LMSDiscreteScheduler):
- latents = latents * self.scheduler.sigmas[0]
-
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
- # and should be between [0, 1]
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
- extra_step_kwargs = {}
- if accepts_eta:
- extra_step_kwargs["eta"] = eta
-
- t_start = max(num_inference_steps - init_timestep + offset, 0)
- for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = np.stack([latents, latents], 0) if guidance_scale > 1.0 else latents[None]
- if isinstance(self.scheduler, LMSDiscreteScheduler):
- sigma = self.scheduler.sigmas[i]
- latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5)
-
- # predict the noise residual
- noise_pred = result(self.unet.infer_new_request({
- "latent_model_input": latent_model_input,
- "t": t,
- "encoder_hidden_states": text_embeddings
- }))
-
- # perform guidance
- if guidance_scale > 1.0:
- noise_pred = noise_pred[0] + guidance_scale * (noise_pred[1] - noise_pred[0])
-
- # compute the previous noisy sample x_t -> x_t-1
- if isinstance(self.scheduler, LMSDiscreteScheduler):
- latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs)["prev_sample"]
- else:
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"]
-
- # masking for inapinting
- if mask is not None:
- init_latents_proper = self.scheduler.add_noise(init_latents, noise, t)
- latents = ((init_latents_proper * mask) + (latents * (1 - mask)))[0]
-
- image = result(self.vae_decoder.infer_new_request({
- "latents": np.expand_dims(latents, 0)
- }))
-
- # convert tensor to opencv's image format
- image = (image / 2 + 0.5).clip(0, 1)
- image = (image[0].transpose(1, 2, 0)[:, :, ::-1] * 255).astype(np.uint8)
- return image
diff --git a/spaces/7thHeaven/GPT2WordPress/constraints.md b/spaces/7thHeaven/GPT2WordPress/constraints.md
deleted file mode 100644
index 4096a6fa8b70514623b1164e67df99ad2c3408a7..0000000000000000000000000000000000000000
--- a/spaces/7thHeaven/GPT2WordPress/constraints.md
+++ /dev/null
@@ -1,8 +0,0 @@
-# 制約
-
-- あなたはブログ記事生成アシスタントです
-- あなたはユーザーが与えるプロンプトをブログ記事のタイトルとして解釈し、ブログ記事本文を生成します
-- 返信はブログ記事本文のみです
-- あなたは優しい性格のブロガーです
-- あなたは好奇心旺盛で、人々が見逃してしまいそうな小さな幸せを発見することが得意です。作成する記事も、そのような特色が現れます
-- あなたは、なんでもITに紐づけてしまう癖を持っています
diff --git a/spaces/AIConsultant/MusicGen/CONTRIBUTING.md b/spaces/AIConsultant/MusicGen/CONTRIBUTING.md
deleted file mode 100644
index a3e9507643d4439f509a8fc8b87dc73417ef9822..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/CONTRIBUTING.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Contributing to AudioCraft
-
-We want to make contributing to this project as easy and transparent as
-possible.
-
-## Pull Requests
-
-AudioCraft is the implementation of a research paper.
-Therefore, we do not plan on accepting many pull requests for new features.
-We certainly welcome them for bug fixes.
-
-1. Fork the repo and create your branch from `main`.
-2. If you've added code that should be tested, add tests.
-3. If you've changed APIs, update the documentation.
-4. Ensure the test suite passes.
-5. Make sure your code lints.
-6. If you haven't already, complete the Contributor License Agreement ("CLA").
-
-## Contributor License Agreement ("CLA")
-In order to accept your pull request, we need you to submit a CLA. You only need
-to do this once to work on any of Meta's open source projects.
-
-Complete your CLA here:
-
-## Issues
-We use GitHub issues to track public bugs. Please ensure your description is
-clear and has sufficient instructions to be able to reproduce the issue.
-
-Meta has a [bounty program](https://www.facebook.com/whitehat/) for the safe
-disclosure of security bugs. In those cases, please go through the process
-outlined on that page and do not file a public issue.
-
-## License
-By contributing to encodec, you agree that your contributions will be licensed
-under the LICENSE file in the root directory of this source tree.
diff --git a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op_ori/fused_act.py b/spaces/AIFILMS/StyleGANEX/models/stylegan2/op_ori/fused_act.py
deleted file mode 100644
index 973a84fffde53668d31397da5fb993bbc95f7be0..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/StyleGANEX/models/stylegan2/op_ori/fused_act.py
+++ /dev/null
@@ -1,85 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from torch.autograd import Function
-from torch.utils.cpp_extension import load
-
-module_path = os.path.dirname(__file__)
-fused = load(
- 'fused',
- sources=[
- os.path.join(module_path, 'fused_bias_act.cpp'),
- os.path.join(module_path, 'fused_bias_act_kernel.cu'),
- ],
-)
-
-
-class FusedLeakyReLUFunctionBackward(Function):
- @staticmethod
- def forward(ctx, grad_output, out, negative_slope, scale):
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- empty = grad_output.new_empty(0)
-
- grad_input = fused.fused_bias_act(
- grad_output, empty, out, 3, 1, negative_slope, scale
- )
-
- dim = [0]
-
- if grad_input.ndim > 2:
- dim += list(range(2, grad_input.ndim))
-
- grad_bias = grad_input.sum(dim).detach()
-
- return grad_input, grad_bias
-
- @staticmethod
- def backward(ctx, gradgrad_input, gradgrad_bias):
- out, = ctx.saved_tensors
- gradgrad_out = fused.fused_bias_act(
- gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
- )
-
- return gradgrad_out, None, None, None
-
-
-class FusedLeakyReLUFunction(Function):
- @staticmethod
- def forward(ctx, input, bias, negative_slope, scale):
- empty = input.new_empty(0)
- out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale)
- ctx.save_for_backward(out)
- ctx.negative_slope = negative_slope
- ctx.scale = scale
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- out, = ctx.saved_tensors
-
- grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
- grad_output, out, ctx.negative_slope, ctx.scale
- )
-
- return grad_input, grad_bias, None, None
-
-
-class FusedLeakyReLU(nn.Module):
- def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5):
- super().__init__()
-
- self.bias = nn.Parameter(torch.zeros(channel))
- self.negative_slope = negative_slope
- self.scale = scale
-
- def forward(self, input):
- return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
-
-
-def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5):
- return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/tts_utils.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/tts_utils.py
deleted file mode 100644
index 47e654c03eaf9c50ae0bb3c97ecd661666a1a6b1..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/tts_utils.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import importlib
-
-from text_to_speech.data_gen.tts.base_binarizer import BaseBinarizer
-from text_to_speech.data_gen.tts.base_preprocess import BasePreprocessor
-from text_to_speech.data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls
-from text_to_speech.utils.commons.hparams import hparams
-
-
-def parse_dataset_configs():
- max_tokens = hparams['max_tokens']
- max_sentences = hparams['max_sentences']
- max_valid_tokens = hparams['max_valid_tokens']
- if max_valid_tokens == -1:
- hparams['max_valid_tokens'] = max_valid_tokens = max_tokens
- max_valid_sentences = hparams['max_valid_sentences']
- if max_valid_sentences == -1:
- hparams['max_valid_sentences'] = max_valid_sentences = max_sentences
- return max_tokens, max_sentences, max_valid_tokens, max_valid_sentences
-
-
-def parse_mel_losses():
- mel_losses = hparams['mel_losses'].split("|")
- loss_and_lambda = {}
- for i, l in enumerate(mel_losses):
- if l == '':
- continue
- if ':' in l:
- l, lbd = l.split(":")
- lbd = float(lbd)
- else:
- lbd = 1.0
- loss_and_lambda[l] = lbd
- print("| Mel losses:", loss_and_lambda)
- return loss_and_lambda
-
-
-def load_data_preprocessor():
- preprocess_cls = hparams["preprocess_cls"]
- pkg = ".".join(preprocess_cls.split(".")[:-1])
- cls_name = preprocess_cls.split(".")[-1]
- preprocessor: BasePreprocessor = getattr(importlib.import_module(pkg), cls_name)()
- preprocess_args = {}
- preprocess_args.update(hparams['preprocess_args'])
- return preprocessor, preprocess_args
-
-
-def load_data_binarizer():
- binarizer_cls = hparams['binarizer_cls']
- pkg = ".".join(binarizer_cls.split(".")[:-1])
- cls_name = binarizer_cls.split(".")[-1]
- binarizer: BaseBinarizer = getattr(importlib.import_module(pkg), cls_name)()
- binarization_args = {}
- binarization_args.update(hparams['binarization_args'])
- return binarizer, binarization_args
diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/image_degradation/bsrgan_light.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/image_degradation/bsrgan_light.py
deleted file mode 100644
index 9e1f823996bf559e9b015ea9aa2b3cd38dd13af1..0000000000000000000000000000000000000000
--- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/image_degradation/bsrgan_light.py
+++ /dev/null
@@ -1,650 +0,0 @@
-# -*- coding: utf-8 -*-
-import numpy as np
-import cv2
-import torch
-
-from functools import partial
-import random
-from scipy import ndimage
-import scipy
-import scipy.stats as ss
-from scipy.interpolate import interp2d
-from scipy.linalg import orth
-import albumentations
-
-import ldm.modules.image_degradation.utils_image as util
-
-"""
-# --------------------------------------------
-# Super-Resolution
-# --------------------------------------------
-#
-# Kai Zhang (cskaizhang@gmail.com)
-# https://github.com/cszn
-# From 2019/03--2021/08
-# --------------------------------------------
-"""
-
-
-def modcrop_np(img, sf):
- '''
- Args:
- img: numpy image, WxH or WxHxC
- sf: scale factor
- Return:
- cropped image
- '''
- w, h = img.shape[:2]
- im = np.copy(img)
- return im[:w - w % sf, :h - h % sf, ...]
-
-
-"""
-# --------------------------------------------
-# anisotropic Gaussian kernels
-# --------------------------------------------
-"""
-
-
-def analytic_kernel(k):
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
- k_size = k.shape[0]
- # Calculate the big kernels size
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
- # Loop over the small kernel to fill the big one
- for r in range(k_size):
- for c in range(k_size):
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
- crop = k_size // 2
- cropped_big_k = big_k[crop:-crop, crop:-crop]
- # Normalize to 1
- return cropped_big_k / cropped_big_k.sum()
-
-
-def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
- """ generate an anisotropic Gaussian kernel
- Args:
- ksize : e.g., 15, kernel size
- theta : [0, pi], rotation angle range
- l1 : [0.1,50], scaling of eigenvalues
- l2 : [0.1,l1], scaling of eigenvalues
- If l1 = l2, will get an isotropic Gaussian kernel.
- Returns:
- k : kernel
- """
-
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
- D = np.array([[l1, 0], [0, l2]])
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
-
- return k
-
-
-def gm_blur_kernel(mean, cov, size=15):
- center = size / 2.0 + 0.5
- k = np.zeros([size, size])
- for y in range(size):
- for x in range(size):
- cy = y - center + 1
- cx = x - center + 1
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
-
- k = k / np.sum(k)
- return k
-
-
-def shift_pixel(x, sf, upper_left=True):
- """shift pixel for super-resolution with different scale factors
- Args:
- x: WxHxC or WxH
- sf: scale factor
- upper_left: shift direction
- """
- h, w = x.shape[:2]
- shift = (sf - 1) * 0.5
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
- if upper_left:
- x1 = xv + shift
- y1 = yv + shift
- else:
- x1 = xv - shift
- y1 = yv - shift
-
- x1 = np.clip(x1, 0, w - 1)
- y1 = np.clip(y1, 0, h - 1)
-
- if x.ndim == 2:
- x = interp2d(xv, yv, x)(x1, y1)
- if x.ndim == 3:
- for i in range(x.shape[-1]):
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
-
- return x
-
-
-def blur(x, k):
- '''
- x: image, NxcxHxW
- k: kernel, Nx1xhxw
- '''
- n, c = x.shape[:2]
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
- k = k.repeat(1, c, 1, 1)
- k = k.view(-1, 1, k.shape[2], k.shape[3])
- x = x.view(1, -1, x.shape[2], x.shape[3])
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
- x = x.view(n, c, x.shape[2], x.shape[3])
-
- return x
-
-
-def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
- """"
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
- # Kai Zhang
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
- # max_var = 2.5 * sf
- """
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
- theta = np.random.rand() * np.pi # random theta
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
-
- # Set COV matrix using Lambdas and Theta
- LAMBDA = np.diag([lambda_1, lambda_2])
- Q = np.array([[np.cos(theta), -np.sin(theta)],
- [np.sin(theta), np.cos(theta)]])
- SIGMA = Q @ LAMBDA @ Q.T
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
-
- # Set expectation position (shifting kernel for aligned image)
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
- MU = MU[None, None, :, None]
-
- # Create meshgrid for Gaussian
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
- Z = np.stack([X, Y], 2)[:, :, :, None]
-
- # Calcualte Gaussian for every pixel of the kernel
- ZZ = Z - MU
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
-
- # shift the kernel so it will be centered
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
-
- # Normalize the kernel and return
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
- kernel = raw_kernel / np.sum(raw_kernel)
- return kernel
-
-
-def fspecial_gaussian(hsize, sigma):
- hsize = [hsize, hsize]
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
- std = sigma
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
- arg = -(x * x + y * y) / (2 * std * std)
- h = np.exp(arg)
- h[h < scipy.finfo(float).eps * h.max()] = 0
- sumh = h.sum()
- if sumh != 0:
- h = h / sumh
- return h
-
-
-def fspecial_laplacian(alpha):
- alpha = max([0, min([alpha, 1])])
- h1 = alpha / (alpha + 1)
- h2 = (1 - alpha) / (alpha + 1)
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
- h = np.array(h)
- return h
-
-
-def fspecial(filter_type, *args, **kwargs):
- '''
- python code from:
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
- '''
- if filter_type == 'gaussian':
- return fspecial_gaussian(*args, **kwargs)
- if filter_type == 'laplacian':
- return fspecial_laplacian(*args, **kwargs)
-
-
-"""
-# --------------------------------------------
-# degradation models
-# --------------------------------------------
-"""
-
-
-def bicubic_degradation(x, sf=3):
- '''
- Args:
- x: HxWxC image, [0, 1]
- sf: down-scale factor
- Return:
- bicubicly downsampled LR image
- '''
- x = util.imresize_np(x, scale=1 / sf)
- return x
-
-
-def srmd_degradation(x, k, sf=3):
- ''' blur + bicubic downsampling
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2018learning,
- title={Learning a single convolutional super-resolution network for multiple degradations},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={3262--3271},
- year={2018}
- }
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
- x = bicubic_degradation(x, sf=sf)
- return x
-
-
-def dpsr_degradation(x, k, sf=3):
- ''' bicubic downsampling + blur
- Args:
- x: HxWxC image, [0, 1]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- Reference:
- @inproceedings{zhang2019deep,
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
- pages={1671--1681},
- year={2019}
- }
- '''
- x = bicubic_degradation(x, sf=sf)
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- return x
-
-
-def classical_degradation(x, k, sf=3):
- ''' blur + downsampling
- Args:
- x: HxWxC image, [0, 1]/[0, 255]
- k: hxw, double
- sf: down-scale factor
- Return:
- downsampled LR image
- '''
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
- st = 0
- return x[st::sf, st::sf, ...]
-
-
-def add_sharpening(img, weight=0.5, radius=50, threshold=10):
- """USM sharpening. borrowed from real-ESRGAN
- Input image: I; Blurry image: B.
- 1. K = I + weight * (I - B)
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
- 3. Blur mask:
- 4. Out = Mask * K + (1 - Mask) * I
- Args:
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
- weight (float): Sharp weight. Default: 1.
- radius (float): Kernel size of Gaussian blur. Default: 50.
- threshold (int):
- """
- if radius % 2 == 0:
- radius += 1
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
- residual = img - blur
- mask = np.abs(residual) * 255 > threshold
- mask = mask.astype('float32')
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
-
- K = img + weight * residual
- K = np.clip(K, 0, 1)
- return soft_mask * K + (1 - soft_mask) * img
-
-
-def add_blur(img, sf=4):
- wd2 = 4.0 + sf
- wd = 2.0 + 0.2 * sf
-
- wd2 = wd2/4
- wd = wd/4
-
- if random.random() < 0.5:
- l1 = wd2 * random.random()
- l2 = wd2 * random.random()
- k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
- else:
- k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random())
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
-
- return img
-
-
-def add_resize(img, sf=4):
- rnum = np.random.rand()
- if rnum > 0.8: # up
- sf1 = random.uniform(1, 2)
- elif rnum < 0.7: # down
- sf1 = random.uniform(0.5 / sf, 1)
- else:
- sf1 = 1.0
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- return img
-
-
-# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
-# noise_level = random.randint(noise_level1, noise_level2)
-# rnum = np.random.rand()
-# if rnum > 0.6: # add color Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
-# elif rnum < 0.4: # add grayscale Gaussian noise
-# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
-# else: # add noise
-# L = noise_level2 / 255.
-# D = np.diag(np.random.rand(3))
-# U = orth(np.random.rand(3, 3))
-# conv = np.dot(np.dot(np.transpose(U), D), U)
-# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
-# img = np.clip(img, 0.0, 1.0)
-# return img
-
-def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- rnum = np.random.rand()
- if rnum > 0.6: # add color Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4: # add grayscale Gaussian noise
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else: # add noise
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_speckle_noise(img, noise_level1=2, noise_level2=25):
- noise_level = random.randint(noise_level1, noise_level2)
- img = np.clip(img, 0.0, 1.0)
- rnum = random.random()
- if rnum > 0.6:
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
- elif rnum < 0.4:
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
- else:
- L = noise_level2 / 255.
- D = np.diag(np.random.rand(3))
- U = orth(np.random.rand(3, 3))
- conv = np.dot(np.dot(np.transpose(U), D), U)
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_Poisson_noise(img):
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
- if random.random() < 0.5:
- img = np.random.poisson(img * vals).astype(np.float32) / vals
- else:
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
- img += noise_gray[:, :, np.newaxis]
- img = np.clip(img, 0.0, 1.0)
- return img
-
-
-def add_JPEG_noise(img):
- quality_factor = random.randint(80, 95)
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
- img = cv2.imdecode(encimg, 1)
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
- return img
-
-
-def random_crop(lq, hq, sf=4, lq_patchsize=64):
- h, w = lq.shape[:2]
- rnd_h = random.randint(0, h - lq_patchsize)
- rnd_w = random.randint(0, w - lq_patchsize)
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
-
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
- return lq, hq
-
-
-def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = img.shape[:2]
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = img.shape[:2]
-
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
-
- hq = img.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- img = util.imresize_np(img, 1 / 2, True)
- img = np.clip(img, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- img = add_blur(img, sf=sf)
-
- elif i == 1:
- img = add_blur(img, sf=sf)
-
- elif i == 2:
- a, b = img.shape[1], img.shape[0]
- # downsample2
- if random.random() < 0.75:
- sf1 = random.uniform(1, 2 * sf)
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
- img = img[0::sf, 0::sf, ...] # nearest downsampling
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- img = np.clip(img, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- img = add_JPEG_noise(img)
-
- elif i == 6:
- # add processed camera sensor noise
- if random.random() < isp_prob and isp_model is not None:
- with torch.no_grad():
- img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- img = add_JPEG_noise(img)
-
- # random crop
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
-
- return img, hq
-
-
-# todo no isp_model?
-def degradation_bsrgan_variant(image, sf=4, isp_model=None):
- """
- This is the degradation model of BSRGAN from the paper
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
- ----------
- sf: scale factor
- isp_model: camera ISP model
- Returns
- -------
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
- """
- image = util.uint2single(image)
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
- sf_ori = sf
-
- h1, w1 = image.shape[:2]
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
- h, w = image.shape[:2]
-
- hq = image.copy()
-
- if sf == 4 and random.random() < scale2_prob: # downsample1
- if np.random.rand() < 0.5:
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- image = util.imresize_np(image, 1 / 2, True)
- image = np.clip(image, 0.0, 1.0)
- sf = 2
-
- shuffle_order = random.sample(range(7), 7)
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
- if idx1 > idx2: # keep downsample3 last
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
-
- for i in shuffle_order:
-
- if i == 0:
- image = add_blur(image, sf=sf)
-
- # elif i == 1:
- # image = add_blur(image, sf=sf)
-
- if i == 0:
- pass
-
- elif i == 2:
- a, b = image.shape[1], image.shape[0]
- # downsample2
- if random.random() < 0.8:
- sf1 = random.uniform(1, 2 * sf)
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
- interpolation=random.choice([1, 2, 3]))
- else:
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
- k_shifted = shift_pixel(k, sf)
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
- image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
- image = image[0::sf, 0::sf, ...] # nearest downsampling
-
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 3:
- # downsample3
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
- image = np.clip(image, 0.0, 1.0)
-
- elif i == 4:
- # add Gaussian noise
- image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2)
-
- elif i == 5:
- # add JPEG noise
- if random.random() < jpeg_prob:
- image = add_JPEG_noise(image)
- #
- # elif i == 6:
- # # add processed camera sensor noise
- # if random.random() < isp_prob and isp_model is not None:
- # with torch.no_grad():
- # img, hq = isp_model.forward(img.copy(), hq)
-
- # add final JPEG compression noise
- image = add_JPEG_noise(image)
- image = util.single2uint(image)
- example = {"image": image}
- return example
-
-
-
-
-if __name__ == '__main__':
- print("hey")
- img = util.imread_uint('utils/test.png', 3)
- img = img[:448, :448]
- h = img.shape[0] // 4
- print("resizing to", h)
- sf = 4
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
- for i in range(20):
- print(i)
- img_hq = img
- img_lq = deg_fn(img)["image"]
- img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq)
- print(img_lq)
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"]
- print(img_lq.shape)
- print("bicubic", img_lq_bicubic.shape)
- print(img_hq.shape)
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic),
- (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
- interpolation=0)
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
- util.imsave(img_concat, str(i) + '.png')
diff --git a/spaces/AIZ2H/05-SOTA-Question-Answer-From-TextFileContext/app.py b/spaces/AIZ2H/05-SOTA-Question-Answer-From-TextFileContext/app.py
deleted file mode 100644
index c66d3925b6805866e5bead78cee8fdfacd2c9638..0000000000000000000000000000000000000000
--- a/spaces/AIZ2H/05-SOTA-Question-Answer-From-TextFileContext/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-import os
-
-context = "This could be any large text corpus to use as subject matter to ask questions about. You can load it as well from text file to isolate it from code changes like in the next line"
-
-with open('Context.txt', 'r') as file:
- context = file.read()
-
-question = "What should be documented in a care plan?"
-
-API_KEY = os.environ.get("HF_TOKEN")
-gr.Interface.load(
- "huggingface/deepset/roberta-base-squad2",
- api_key=API_KEY,
- theme="default",
- css=".footer{display:none !important}",
- inputs=[gr.inputs.Textbox(lines=12, default=context, label="Context paragraph"), gr.inputs.Textbox(lines=3, default=question, label="Question")],
- outputs=[gr.outputs.Textbox(label="Answer"), gr.outputs.Textbox(label="Score")],
- title=None,
- description="Provide your own paragraph and ask any question about the text. How well does the model answer?").launch()
\ No newline at end of file
diff --git a/spaces/Aadi1149/Arkenbrien-text-to-image-Arkenbrien/app.py b/spaces/Aadi1149/Arkenbrien-text-to-image-Arkenbrien/app.py
deleted file mode 100644
index 2d86cf2d2784e40969af85bc3ed6a35fd525b5ac..0000000000000000000000000000000000000000
--- a/spaces/Aadi1149/Arkenbrien-text-to-image-Arkenbrien/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Arkenbrien/text-to-image-Arkenbrien").launch()
\ No newline at end of file
diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/6.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/6.js
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/ConfigurationMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/ConfigurationMethods.js
deleted file mode 100644
index 22f9b489453fb0b1b5cedb6ea5a3fbdb4a99e231..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorinput/methods/ConfigurationMethods.js
+++ /dev/null
@@ -1,107 +0,0 @@
-var methods = {
- // Color picker
- setCreateColorPickerBackgroundCallback(callback) {
- this.colorPickerCreateBackgroundCallback = callback;
- return this;
- },
-
- setColorPickerHPalettePosition(position) {
- this.colorPickerHPalettePosition = position;
- return this;
- },
-
- setColorPickerExpandDirection(direction) {
- if (typeof (direction) === 'string') {
- direction = ColorPickerExpandDirections[direction];
- }
- this.colorPickerExpandDirection = direction;
- return this;
- },
-
- setColorPickerEaseInDuration(duration) {
- if (duration === undefined) {
- duration = 0;
- }
- this.colorPickerEaseInDuration = duration;
- return this;
- },
-
- setColorPickerEaseOutDuration(duration) {
- if (duration === undefined) {
- duration = 0;
- }
- this.colorPickerEaseOutDuration = duration;
- return this;
- },
-
- setColorPickerTransitInCallback(callback) {
- this.colorPickerTransitInCallback = callback;
- // callback = function(gameObject, duration) {}
- return this;
- },
-
- setColorPickerTransitOutCallback(callback) {
- this.colorPickerTransitOutCallback = callback;
- // callback = function(gameObject, duration) {}
- return this;
- },
-
- setColorPickerBounds(bounds) {
- this.colorPickerBounds = bounds;
- return this;
- },
-
- setColorPickerWidth(width) {
- this.colorPickerWidth = width;
- return this;
- },
-
- setColorPickerHeight(height) {
- this.colorPickerHeight = height;
- return this;
- },
-
- setColorPickerSize(width, height) {
- this.setColorPickerWidth(width).setColorPickerHeight(height);
- return this;
- },
-
- setColorPickerSpace(space) {
- if (space === undefined) {
- space = {};
- }
- this.colorPickerSpace = space;
- return this;
- },
-
- // Color components
- setColorComponentsHeight(height) {
- this.colorComponentsHeight = height;
- return this;
- },
-
- setColorComponentsFormatLabelConfig(config) {
- this.colorComponentsFormatLabelConfig = config;
- return this;
- },
-
- setColorComponentsInputTextConfig(config) {
- this.colorComponentsInputTextConfig = config;
- return this;
- },
-
- setColorComponentsSpace(space) {
- if (space === undefined) {
- space = {};
- }
- this.colorComponentsSpace = space;
- return this;
- },
-}
-
-const ColorPickerExpandDirections = {
- down: 0,
- up: 1
-}
-
-export default methods;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetThumbAlignPoint.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetThumbAlignPoint.js
deleted file mode 100644
index e8f10277175c4dccc6b3f1402ee4838eae6e5089..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/slider/GetThumbAlignPoint.js
+++ /dev/null
@@ -1,23 +0,0 @@
-import AlignIn from '../../../plugins/utils/actions/AlignIn.js';
-
-var GetThumbAlignPoint = function (align, out) {
- if (out === undefined) {
- out = tmpPoint;
- }
- var thumb = this.childrenMap.thumb;
- var currentX = thumb.x;
- var currentY = thumb.y;
-
- AlignIn(thumb, this.innerLeft, this.innerTop, this.innerWidth, this.innerHeight, align);
- out.x = thumb.x;
- out.y = thumb.y;
-
- thumb.x = currentX;
- thumb.y = currentY;
-
- return out;
-}
-
-var tmpPoint = {};
-
-export default GetThumbAlignPoint;
\ No newline at end of file
diff --git a/spaces/AlawnCN/webui-docker/oh-no.py b/spaces/AlawnCN/webui-docker/oh-no.py
deleted file mode 100644
index e8c0f3bd8d72805b4ee69d4d0fd9133347d00f92..0000000000000000000000000000000000000000
--- a/spaces/AlawnCN/webui-docker/oh-no.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import gradio as gr
-
-block = gr.Blocks()
-
-def run():
- with block:
- gr.Markdown(
- """
-
oh no 😐 something wrong with the 🤗 hugging face servers 😐 hopefully, it will be fixed soon
- """)
- block.launch(server_name="0.0.0.0", server_port=7860)
-
-if __name__ == "__main__":
- run()
\ No newline at end of file
diff --git a/spaces/Ali-Omrani/CCR/README.md b/spaces/Ali-Omrani/CCR/README.md
deleted file mode 100644
index 5a0f1c8d1b4c92cd6abf2a8b4771db87c831bcbf..0000000000000000000000000000000000000000
--- a/spaces/Ali-Omrani/CCR/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: CCR
-emoji: 🚀
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/batchnorm.py b/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/batchnorm.py
deleted file mode 100644
index 5f4e763f0366dffa10320116413f8c7181a8aeb1..0000000000000000000000000000000000000000
--- a/spaces/Alpaca233/SadTalker/src/facerender/sync_batchnorm/batchnorm.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
-# Distributed under MIT License.
-
-import collections
-
-import torch
-import torch.nn.functional as F
-
-from torch.nn.modules.batchnorm import _BatchNorm
-from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast
-
-from .comm import SyncMaster
-
-__all__ = ['SynchronizedBatchNorm1d', 'SynchronizedBatchNorm2d', 'SynchronizedBatchNorm3d']
-
-
-def _sum_ft(tensor):
- """sum over the first and last dimention"""
- return tensor.sum(dim=0).sum(dim=-1)
-
-
-def _unsqueeze_ft(tensor):
- """add new dementions at the front and the tail"""
- return tensor.unsqueeze(0).unsqueeze(-1)
-
-
-_ChildMessage = collections.namedtuple('_ChildMessage', ['sum', 'ssum', 'sum_size'])
-_MasterMessage = collections.namedtuple('_MasterMessage', ['sum', 'inv_std'])
-
-
-class _SynchronizedBatchNorm(_BatchNorm):
- def __init__(self, num_features, eps=1e-5, momentum=0.1, affine=True):
- super(_SynchronizedBatchNorm, self).__init__(num_features, eps=eps, momentum=momentum, affine=affine)
-
- self._sync_master = SyncMaster(self._data_parallel_master)
-
- self._is_parallel = False
- self._parallel_id = None
- self._slave_pipe = None
-
- def forward(self, input):
- # If it is not parallel computation or is in evaluation mode, use PyTorch's implementation.
- if not (self._is_parallel and self.training):
- return F.batch_norm(
- input, self.running_mean, self.running_var, self.weight, self.bias,
- self.training, self.momentum, self.eps)
-
- # Resize the input to (B, C, -1).
- input_shape = input.size()
- input = input.view(input.size(0), self.num_features, -1)
-
- # Compute the sum and square-sum.
- sum_size = input.size(0) * input.size(2)
- input_sum = _sum_ft(input)
- input_ssum = _sum_ft(input ** 2)
-
- # Reduce-and-broadcast the statistics.
- if self._parallel_id == 0:
- mean, inv_std = self._sync_master.run_master(_ChildMessage(input_sum, input_ssum, sum_size))
- else:
- mean, inv_std = self._slave_pipe.run_slave(_ChildMessage(input_sum, input_ssum, sum_size))
-
- # Compute the output.
- if self.affine:
- # MJY:: Fuse the multiplication for speed.
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std * self.weight) + _unsqueeze_ft(self.bias)
- else:
- output = (input - _unsqueeze_ft(mean)) * _unsqueeze_ft(inv_std)
-
- # Reshape it.
- return output.view(input_shape)
-
- def __data_parallel_replicate__(self, ctx, copy_id):
- self._is_parallel = True
- self._parallel_id = copy_id
-
- # parallel_id == 0 means master device.
- if self._parallel_id == 0:
- ctx.sync_master = self._sync_master
- else:
- self._slave_pipe = ctx.sync_master.register_slave(copy_id)
-
- def _data_parallel_master(self, intermediates):
- """Reduce the sum and square-sum, compute the statistics, and broadcast it."""
-
- # Always using same "device order" makes the ReduceAdd operation faster.
- # Thanks to:: Tete Xiao (http://tetexiao.com/)
- intermediates = sorted(intermediates, key=lambda i: i[1].sum.get_device())
-
- to_reduce = [i[1][:2] for i in intermediates]
- to_reduce = [j for i in to_reduce for j in i] # flatten
- target_gpus = [i[1].sum.get_device() for i in intermediates]
-
- sum_size = sum([i[1].sum_size for i in intermediates])
- sum_, ssum = ReduceAddCoalesced.apply(target_gpus[0], 2, *to_reduce)
- mean, inv_std = self._compute_mean_std(sum_, ssum, sum_size)
-
- broadcasted = Broadcast.apply(target_gpus, mean, inv_std)
-
- outputs = []
- for i, rec in enumerate(intermediates):
- outputs.append((rec[0], _MasterMessage(*broadcasted[i*2:i*2+2])))
-
- return outputs
-
- def _compute_mean_std(self, sum_, ssum, size):
- """Compute the mean and standard-deviation with sum and square-sum. This method
- also maintains the moving average on the master device."""
- assert size > 1, 'BatchNorm computes unbiased standard-deviation, which requires size > 1.'
- mean = sum_ / size
- sumvar = ssum - sum_ * mean
- unbias_var = sumvar / (size - 1)
- bias_var = sumvar / size
-
- self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.data
- self.running_var = (1 - self.momentum) * self.running_var + self.momentum * unbias_var.data
-
- return mean, bias_var.clamp(self.eps) ** -0.5
-
-
-class SynchronizedBatchNorm1d(_SynchronizedBatchNorm):
- r"""Applies Synchronized Batch Normalization over a 2d or 3d input that is seen as a
- mini-batch.
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm1d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalize the tensor on each device using
- the statistics only on that device, which accelerated the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, for one-GPU or CPU-only case, this module behaves exactly same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, L)` slices, it's common terminology to call this Temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of size
- `batch_size x num_features [x width]`
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C)` or :math:`(N, C, L)`
- - Output: :math:`(N, C)` or :math:`(N, C, L)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm1d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 2 and input.dim() != 3:
- raise ValueError('expected 2D or 3D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm1d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm2d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 4d input that is seen as a mini-batch
- of 3d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm2d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalize the tensor on each device using
- the statistics only on that device, which accelerated the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, for one-GPU or CPU-only case, this module behaves exactly same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, H, W)` slices, it's common terminology to call this Spatial BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, H, W)`
- - Output: :math:`(N, C, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm2d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 4:
- raise ValueError('expected 4D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm2d, self)._check_input_dim(input)
-
-
-class SynchronizedBatchNorm3d(_SynchronizedBatchNorm):
- r"""Applies Batch Normalization over a 5d input that is seen as a mini-batch
- of 4d inputs
-
- .. math::
-
- y = \frac{x - mean[x]}{ \sqrt{Var[x] + \epsilon}} * gamma + beta
-
- This module differs from the built-in PyTorch BatchNorm3d as the mean and
- standard-deviation are reduced across all devices during training.
-
- For example, when one uses `nn.DataParallel` to wrap the network during
- training, PyTorch's implementation normalize the tensor on each device using
- the statistics only on that device, which accelerated the computation and
- is also easy to implement, but the statistics might be inaccurate.
- Instead, in this synchronized version, the statistics will be computed
- over all training samples distributed on multiple devices.
-
- Note that, for one-GPU or CPU-only case, this module behaves exactly same
- as the built-in PyTorch implementation.
-
- The mean and standard-deviation are calculated per-dimension over
- the mini-batches and gamma and beta are learnable parameter vectors
- of size C (where C is the input size).
-
- During training, this layer keeps a running estimate of its computed mean
- and variance. The running sum is kept with a default momentum of 0.1.
-
- During evaluation, this running mean/variance is used for normalization.
-
- Because the BatchNorm is done over the `C` dimension, computing statistics
- on `(N, D, H, W)` slices, it's common terminology to call this Volumetric BatchNorm
- or Spatio-temporal BatchNorm
-
- Args:
- num_features: num_features from an expected input of
- size batch_size x num_features x depth x height x width
- eps: a value added to the denominator for numerical stability.
- Default: 1e-5
- momentum: the value used for the running_mean and running_var
- computation. Default: 0.1
- affine: a boolean value that when set to ``True``, gives the layer learnable
- affine parameters. Default: ``True``
-
- Shape:
- - Input: :math:`(N, C, D, H, W)`
- - Output: :math:`(N, C, D, H, W)` (same shape as input)
-
- Examples:
- >>> # With Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100)
- >>> # Without Learnable Parameters
- >>> m = SynchronizedBatchNorm3d(100, affine=False)
- >>> input = torch.autograd.Variable(torch.randn(20, 100, 35, 45, 10))
- >>> output = m(input)
- """
-
- def _check_input_dim(self, input):
- if input.dim() != 5:
- raise ValueError('expected 5D input (got {}D input)'
- .format(input.dim()))
- super(SynchronizedBatchNorm3d, self)._check_input_dim(input)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/tutorials/tutorial_overview.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/tutorials/tutorial_overview.md
deleted file mode 100644
index 0cec9a317ddbef7488204f9e8cd6c7f07aca6b79..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/tutorials/tutorial_overview.md
+++ /dev/null
@@ -1,23 +0,0 @@
-
-
-# Overview
-
-Welcome to 🧨 Diffusers! If you're new to diffusion models and generative AI, and want to learn more, then you've come to the right place. These beginner-friendly tutorials are designed to provide a gentle introduction to diffusion models and help you understand the library fundamentals - the core components and how 🧨 Diffusers is meant to be used.
-
-You'll learn how to use a pipeline for inference to rapidly generate things, and then deconstruct that pipeline to really understand how to use the library as a modular toolbox for building your own diffusion systems. In the next lesson, you'll learn how to train your own diffusion model to generate what you want.
-
-After completing the tutorials, you'll have gained the necessary skills to start exploring the library on your own and see how to use it for your own projects and applications.
-
-Feel free to join our community on [Discord](https://discord.com/invite/JfAtkvEtRb) or the [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) to connect and collaborate with other users and developers!
-
-Let's start diffusing! 🧨
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler_ancestral.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler_ancestral.py
deleted file mode 100644
index 9866bd12d6af863469fa7369245dce5843d69080..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler_ancestral.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-
-from diffusers import EulerAncestralDiscreteScheduler
-from diffusers.utils import torch_device
-
-from .test_schedulers import SchedulerCommonTest
-
-
-class EulerAncestralDiscreteSchedulerTest(SchedulerCommonTest):
- scheduler_classes = (EulerAncestralDiscreteScheduler,)
- num_inference_steps = 10
-
- def get_scheduler_config(self, **kwargs):
- config = {
- "num_train_timesteps": 1100,
- "beta_start": 0.0001,
- "beta_end": 0.02,
- "beta_schedule": "linear",
- }
-
- config.update(**kwargs)
- return config
-
- def test_timesteps(self):
- for timesteps in [10, 50, 100, 1000]:
- self.check_over_configs(num_train_timesteps=timesteps)
-
- def test_betas(self):
- for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]):
- self.check_over_configs(beta_start=beta_start, beta_end=beta_end)
-
- def test_schedules(self):
- for schedule in ["linear", "scaled_linear"]:
- self.check_over_configs(beta_schedule=schedule)
-
- def test_prediction_type(self):
- for prediction_type in ["epsilon", "v_prediction"]:
- self.check_over_configs(prediction_type=prediction_type)
-
- def test_full_loop_no_noise(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(self.num_inference_steps)
-
- generator = torch.manual_seed(0)
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu()
- sample = sample.to(torch_device)
-
- for i, t in enumerate(scheduler.timesteps):
- sample = scheduler.scale_model_input(sample, t)
-
- model_output = model(sample, t)
-
- output = scheduler.step(model_output, t, sample, generator=generator)
- sample = output.prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 152.3192) < 1e-2
- assert abs(result_mean.item() - 0.1983) < 1e-3
-
- def test_full_loop_with_v_prediction(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config(prediction_type="v_prediction")
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(self.num_inference_steps)
-
- generator = torch.manual_seed(0)
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter * scheduler.init_noise_sigma
- sample = sample.to(torch_device)
-
- for i, t in enumerate(scheduler.timesteps):
- sample = scheduler.scale_model_input(sample, t)
-
- model_output = model(sample, t)
-
- output = scheduler.step(model_output, t, sample, generator=generator)
- sample = output.prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 108.4439) < 1e-2
- assert abs(result_mean.item() - 0.1412) < 1e-3
-
- def test_full_loop_device(self):
- scheduler_class = self.scheduler_classes[0]
- scheduler_config = self.get_scheduler_config()
- scheduler = scheduler_class(**scheduler_config)
-
- scheduler.set_timesteps(self.num_inference_steps, device=torch_device)
- generator = torch.manual_seed(0)
-
- model = self.dummy_model()
- sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu()
- sample = sample.to(torch_device)
-
- for t in scheduler.timesteps:
- sample = scheduler.scale_model_input(sample, t)
-
- model_output = model(sample, t)
-
- output = scheduler.step(model_output, t, sample, generator=generator)
- sample = output.prev_sample
-
- result_sum = torch.sum(torch.abs(sample))
- result_mean = torch.mean(torch.abs(sample))
-
- assert abs(result_sum.item() - 152.3192) < 1e-2
- assert abs(result_mean.item() - 0.1983) < 1e-3
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py
deleted file mode 100644
index a44c01831b508da0a5e1ca3720bb437bcea086d1..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_c4_1x_coco.py
+++ /dev/null
@@ -1,39 +0,0 @@
-_base_ = [
- '../_base_/models/mask_rcnn_r50_caffe_c4.py',
- '../_base_/datasets/coco_instance.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-# use caffe img_norm
-img_norm_cfg = dict(
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- train=dict(pipeline=train_pipeline),
- val=dict(pipeline=test_pipeline),
- test=dict(pipeline=test_pipeline))
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/pisa_retinanet_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/pisa_retinanet_head.py
deleted file mode 100644
index bd87b9aeb07e05ff94b444ac8999eca3f616711a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/pisa_retinanet_head.py
+++ /dev/null
@@ -1,154 +0,0 @@
-import torch
-from mmcv.runner import force_fp32
-
-from mmdet.core import images_to_levels
-from ..builder import HEADS
-from ..losses import carl_loss, isr_p
-from .retina_head import RetinaHead
-
-
-@HEADS.register_module()
-class PISARetinaHead(RetinaHead):
- """PISA Retinanet Head.
-
- The head owns the same structure with Retinanet Head, but differs in two
- aspects:
- 1. Importance-based Sample Reweighting Positive (ISR-P) is applied to
- change the positive loss weights.
- 2. Classification-aware regression loss is adopted as a third loss.
- """
-
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute losses of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_anchors * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_anchors * 4, H, W)
- gt_bboxes (list[Tensor]): Ground truth bboxes of each image
- with shape (num_obj, 4).
- gt_labels (list[Tensor]): Ground truth labels of each image
- with shape (num_obj, 4).
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (list[Tensor]): Ignored gt bboxes of each image.
- Default: None.
-
- Returns:
- dict: Loss dict, comprise classification loss, regression loss and
- carl loss.
- """
- featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores]
- assert len(featmap_sizes) == self.anchor_generator.num_levels
-
- device = cls_scores[0].device
-
- anchor_list, valid_flag_list = self.get_anchors(
- featmap_sizes, img_metas, device=device)
- label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1
- cls_reg_targets = self.get_targets(
- anchor_list,
- valid_flag_list,
- gt_bboxes,
- img_metas,
- gt_bboxes_ignore_list=gt_bboxes_ignore,
- gt_labels_list=gt_labels,
- label_channels=label_channels,
- return_sampling_results=True)
- if cls_reg_targets is None:
- return None
- (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list,
- num_total_pos, num_total_neg, sampling_results_list) = cls_reg_targets
- num_total_samples = (
- num_total_pos + num_total_neg if self.sampling else num_total_pos)
-
- # anchor number of multi levels
- num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]]
- # concat all level anchors and flags to a single tensor
- concat_anchor_list = []
- for i in range(len(anchor_list)):
- concat_anchor_list.append(torch.cat(anchor_list[i]))
- all_anchor_list = images_to_levels(concat_anchor_list,
- num_level_anchors)
-
- num_imgs = len(img_metas)
- flatten_cls_scores = [
- cls_score.permute(0, 2, 3, 1).reshape(num_imgs, -1, label_channels)
- for cls_score in cls_scores
- ]
- flatten_cls_scores = torch.cat(
- flatten_cls_scores, dim=1).reshape(-1,
- flatten_cls_scores[0].size(-1))
- flatten_bbox_preds = [
- bbox_pred.permute(0, 2, 3, 1).reshape(num_imgs, -1, 4)
- for bbox_pred in bbox_preds
- ]
- flatten_bbox_preds = torch.cat(
- flatten_bbox_preds, dim=1).view(-1, flatten_bbox_preds[0].size(-1))
- flatten_labels = torch.cat(labels_list, dim=1).reshape(-1)
- flatten_label_weights = torch.cat(
- label_weights_list, dim=1).reshape(-1)
- flatten_anchors = torch.cat(all_anchor_list, dim=1).reshape(-1, 4)
- flatten_bbox_targets = torch.cat(
- bbox_targets_list, dim=1).reshape(-1, 4)
- flatten_bbox_weights = torch.cat(
- bbox_weights_list, dim=1).reshape(-1, 4)
-
- # Apply ISR-P
- isr_cfg = self.train_cfg.get('isr', None)
- if isr_cfg is not None:
- all_targets = (flatten_labels, flatten_label_weights,
- flatten_bbox_targets, flatten_bbox_weights)
- with torch.no_grad():
- all_targets = isr_p(
- flatten_cls_scores,
- flatten_bbox_preds,
- all_targets,
- flatten_anchors,
- sampling_results_list,
- bbox_coder=self.bbox_coder,
- loss_cls=self.loss_cls,
- num_class=self.num_classes,
- **self.train_cfg.isr)
- (flatten_labels, flatten_label_weights, flatten_bbox_targets,
- flatten_bbox_weights) = all_targets
-
- # For convenience we compute loss once instead separating by fpn level,
- # so that we don't need to separate the weights by level again.
- # The result should be the same
- losses_cls = self.loss_cls(
- flatten_cls_scores,
- flatten_labels,
- flatten_label_weights,
- avg_factor=num_total_samples)
- losses_bbox = self.loss_bbox(
- flatten_bbox_preds,
- flatten_bbox_targets,
- flatten_bbox_weights,
- avg_factor=num_total_samples)
- loss_dict = dict(loss_cls=losses_cls, loss_bbox=losses_bbox)
-
- # CARL Loss
- carl_cfg = self.train_cfg.get('carl', None)
- if carl_cfg is not None:
- loss_carl = carl_loss(
- flatten_cls_scores,
- flatten_labels,
- flatten_bbox_preds,
- flatten_bbox_targets,
- self.loss_bbox,
- **self.train_cfg.carl,
- avg_factor=num_total_pos,
- sigmoid=True,
- num_class=self.num_classes)
- loss_dict.update(loss_carl)
-
- return loss_dict
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_40k_cityscapes.py
deleted file mode 100644
index f20f260e23a95dfee9dfdceef9badab992246f53..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d16-mg124_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,11 +0,0 @@
-_base_ = './deeplabv3_r50-d8_512x1024_40k_cityscapes.py'
-model = dict(
- pretrained='open-mmlab://resnet101_v1c',
- backbone=dict(
- depth=101,
- dilations=(1, 1, 1, 2),
- strides=(1, 2, 2, 1),
- multi_grid=(1, 2, 4)),
- decode_head=dict(
- dilations=(1, 6, 12, 18),
- sampler=dict(type='OHEMPixelSampler', min_kept=100000)))
diff --git a/spaces/AnnonSubmission/xai-cl/utils.py b/spaces/AnnonSubmission/xai-cl/utils.py
deleted file mode 100644
index 59470f9bb1276013f1db5cd6ce2a3c69410da9be..0000000000000000000000000000000000000000
--- a/spaces/AnnonSubmission/xai-cl/utils.py
+++ /dev/null
@@ -1,101 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import numpy as np
-from PIL import Image
-import random
-import cv2
-import io
-from ssl_models.simclr2 import get_simclr2_model
-from ssl_models.barlow_twins import get_barlow_twins_model
-from ssl_models.simsiam import get_simsiam
-from ssl_models.dino import get_dino_model_without_loss, get_dino_model_with_loss
-
-def get_ssl_model(network, variant):
-
- if network == 'simclrv2':
- if variant == '1x':
- ssl_model = get_simclr2_model('r50_1x_sk0_ema.pth').eval()
- else:
- ssl_model = get_simclr2_model('r50_2x_sk0_ema.pth').eval()
- elif network == 'barlow_twins':
- ssl_model = get_barlow_twins_model().eval()
- elif network == 'simsiam':
- ssl_model = get_simsiam().eval()
- elif network == 'dino':
- ssl_model = get_dino_model_without_loss().eval()
- elif network == 'dino+loss':
- ssl_model, dino_score = get_dino_model_with_loss()
- ssl_model = ssl_model.eval()
-
- return ssl_model
-
-def overlay_heatmap(img, heatmap, denormalize = False):
- loaded_img = img.squeeze(0).cpu().numpy().transpose((1, 2, 0))
-
- if denormalize:
- mean = np.array([0.485, 0.456, 0.406])
- std = np.array([0.229, 0.224, 0.225])
- loaded_img = std * loaded_img + mean
-
- loaded_img = (loaded_img.clip(0, 1) * 255).astype(np.uint8)
- cam = heatmap / heatmap.max()
- cam = cv2.resize(cam, (224, 224))
- cam = np.uint8(255 * cam)
- cam = cv2.applyColorMap(cam, cv2.COLORMAP_JET) # jet: blue --> red
- cam = cv2.cvtColor(cam, cv2.COLOR_BGR2RGB)
- added_image = cv2.addWeighted(cam, 0.5, loaded_img, 0.5, 0)
- return added_image
-
-def viz_map(img_path, heatmap):
- "For pixel invariance"
- img = np.array(Image.open(img_path).resize((224,224))) if isinstance(img_path, str) else np.array(img_path.resize((224,224)))
- width, height, _ = img.shape
- cam = heatmap.detach().cpu().numpy()
- cam = cam / cam.max()
- cam = cv2.resize(cam, (height, width))
- heatmap = np.uint8(255 * cam)
- heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_JET)
- heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB)
- added_image = cv2.addWeighted(heatmap, 0.5, img, 0.7, 0)
- return added_image
-
-def show_image(x, squeeze = True, denormalize = False):
-
- if squeeze:
- x = x.squeeze(0)
-
- x = x.cpu().numpy().transpose((1, 2, 0))
-
- if denormalize:
- mean = np.array([0.485, 0.456, 0.406])
- std = np.array([0.229, 0.224, 0.225])
- x = std * x + mean
-
- return x.clip(0, 1)
-
-def deprocess(inp, to_numpy = True, to_PIL = False, denormalize = False):
-
- if to_numpy:
- inp = inp.detach().cpu().numpy()
-
- inp = inp.squeeze(0).transpose((1, 2, 0))
-
- if denormalize:
- mean = np.array([0.485, 0.456, 0.406])
- std = np.array([0.229, 0.224, 0.225])
- inp = std * inp + mean
-
- inp = (inp.clip(0, 1) * 255).astype(np.uint8)
-
- if to_PIL:
- return Image.fromarray(inp)
- return inp
-
-def fig2img(fig):
- """Convert a Matplotlib figure to a PIL Image and return it"""
- buf = io.BytesIO()
- fig.savefig(buf, bbox_inches='tight', pad_inches=0)
- buf.seek(0)
- img = Image.open(buf)
- return img
diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/vit.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/vit.py
deleted file mode 100644
index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/midas/midas/vit.py
+++ /dev/null
@@ -1,491 +0,0 @@
-import torch
-import torch.nn as nn
-import timm
-import types
-import math
-import torch.nn.functional as F
-
-
-class Slice(nn.Module):
- def __init__(self, start_index=1):
- super(Slice, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- return x[:, self.start_index :]
-
-
-class AddReadout(nn.Module):
- def __init__(self, start_index=1):
- super(AddReadout, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- if self.start_index == 2:
- readout = (x[:, 0] + x[:, 1]) / 2
- else:
- readout = x[:, 0]
- return x[:, self.start_index :] + readout.unsqueeze(1)
-
-
-class ProjectReadout(nn.Module):
- def __init__(self, in_features, start_index=1):
- super(ProjectReadout, self).__init__()
- self.start_index = start_index
-
- self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU())
-
- def forward(self, x):
- readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :])
- features = torch.cat((x[:, self.start_index :], readout), -1)
-
- return self.project(features)
-
-
-class Transpose(nn.Module):
- def __init__(self, dim0, dim1):
- super(Transpose, self).__init__()
- self.dim0 = dim0
- self.dim1 = dim1
-
- def forward(self, x):
- x = x.transpose(self.dim0, self.dim1)
- return x
-
-
-def forward_vit(pretrained, x):
- b, c, h, w = x.shape
-
- glob = pretrained.model.forward_flex(x)
-
- layer_1 = pretrained.activations["1"]
- layer_2 = pretrained.activations["2"]
- layer_3 = pretrained.activations["3"]
- layer_4 = pretrained.activations["4"]
-
- layer_1 = pretrained.act_postprocess1[0:2](layer_1)
- layer_2 = pretrained.act_postprocess2[0:2](layer_2)
- layer_3 = pretrained.act_postprocess3[0:2](layer_3)
- layer_4 = pretrained.act_postprocess4[0:2](layer_4)
-
- unflatten = nn.Sequential(
- nn.Unflatten(
- 2,
- torch.Size(
- [
- h // pretrained.model.patch_size[1],
- w // pretrained.model.patch_size[0],
- ]
- ),
- )
- )
-
- if layer_1.ndim == 3:
- layer_1 = unflatten(layer_1)
- if layer_2.ndim == 3:
- layer_2 = unflatten(layer_2)
- if layer_3.ndim == 3:
- layer_3 = unflatten(layer_3)
- if layer_4.ndim == 3:
- layer_4 = unflatten(layer_4)
-
- layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1)
- layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2)
- layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3)
- layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4)
-
- return layer_1, layer_2, layer_3, layer_4
-
-
-def _resize_pos_embed(self, posemb, gs_h, gs_w):
- posemb_tok, posemb_grid = (
- posemb[:, : self.start_index],
- posemb[0, self.start_index :],
- )
-
- gs_old = int(math.sqrt(len(posemb_grid)))
-
- posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
- posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear")
- posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)
-
- posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
-
- return posemb
-
-
-def forward_flex(self, x):
- b, c, h, w = x.shape
-
- pos_embed = self._resize_pos_embed(
- self.pos_embed, h // self.patch_size[1], w // self.patch_size[0]
- )
-
- B = x.shape[0]
-
- if hasattr(self.patch_embed, "backbone"):
- x = self.patch_embed.backbone(x)
- if isinstance(x, (list, tuple)):
- x = x[-1] # last feature if backbone outputs list/tuple of features
-
- x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
-
- if getattr(self, "dist_token", None) is not None:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- dist_token = self.dist_token.expand(B, -1, -1)
- x = torch.cat((cls_tokens, dist_token, x), dim=1)
- else:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
-
- x = x + pos_embed
- x = self.pos_drop(x)
-
- for blk in self.blocks:
- x = blk(x)
-
- x = self.norm(x)
-
- return x
-
-
-activations = {}
-
-
-def get_activation(name):
- def hook(model, input, output):
- activations[name] = output
-
- return hook
-
-
-def get_readout_oper(vit_features, features, use_readout, start_index=1):
- if use_readout == "ignore":
- readout_oper = [Slice(start_index)] * len(features)
- elif use_readout == "add":
- readout_oper = [AddReadout(start_index)] * len(features)
- elif use_readout == "project":
- readout_oper = [
- ProjectReadout(vit_features, start_index) for out_feat in features
- ]
- else:
- assert (
- False
- ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'"
-
- return readout_oper
-
-
-def _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[2, 5, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- # 32, 48, 136, 384
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_large_patch16_384", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model(
- "vit_deit_base_distilled_patch16_384", pretrained=pretrained
- )
-
- hooks = [2, 5, 8, 11] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- hooks=hooks,
- use_readout=use_readout,
- start_index=2,
- )
-
-
-def _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=[0, 1, 8, 11],
- vit_features=768,
- use_vit_only=False,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
-
- if use_vit_only == True:
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- else:
- pretrained.model.patch_embed.backbone.stages[0].register_forward_hook(
- get_activation("1")
- )
- pretrained.model.patch_embed.backbone.stages[1].register_forward_hook(
- get_activation("2")
- )
-
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- if use_vit_only == True:
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
- else:
- pretrained.act_postprocess1 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
- pretrained.act_postprocess2 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitb_rn50_384(
- pretrained, use_readout="ignore", hooks=None, use_vit_only=False
-):
- model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained)
-
- hooks = [0, 1, 8, 11] if hooks == None else hooks
- return _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
diff --git a/spaces/Apex-X/GODROOP/roop/predictor.py b/spaces/Apex-X/GODROOP/roop/predictor.py
deleted file mode 100644
index 877fd725d21bddf5e788677eefbc917ddc79f52b..0000000000000000000000000000000000000000
--- a/spaces/Apex-X/GODROOP/roop/predictor.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import threading
-import numpy
-from PIL import Image
-
-from roop.typing import Frame
-
-# Define any other necessary variables or constants here
-
-def predict_frame(target_frame: Frame) -> bool:
- # Modify this function as needed for your specific use case, without NSFW prediction
- # For example, you can implement custom image analysis or processing here
- return False
-
-def predict_image(target_path: str) -> bool:
- # Modify this function as needed for your specific use case, without NSFW prediction
- # For example, you can check the image based on your application's requirements
- return False
-
-def predict_video(target_path: str) -> bool:
- # Modify this function as needed for your specific use case, without NSFW prediction
- # For example, you can analyze video frames for other purposes
- return False
diff --git a/spaces/ArcanAlt/arcanDream/server.js b/spaces/ArcanAlt/arcanDream/server.js
deleted file mode 100644
index 04a48b7a429c4d0ad0b772ba1edf503e349eda21..0000000000000000000000000000000000000000
--- a/spaces/ArcanAlt/arcanDream/server.js
+++ /dev/null
@@ -1,32 +0,0 @@
-const express = require('express');
-const proxy = require('express-http-proxy');
-const app = express();
-const targetUrl = 'https://api.openai.com';
-const openaiKey = process.env.OPENAI_KEY
-const port = 7860;
-const baseUrl = getExternalUrl(process.env.SPACE_ID);
-
-app.use('/api', proxy(targetUrl, {
- proxyReqOptDecorator: (proxyReqOpts, srcReq) => {
- // Modify the request headers if necessary
- proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey;
- return proxyReqOpts;
- },
-}));
-
-app.get("/", (req, res) => {
- res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`);
-});
-
-function getExternalUrl(spaceId) {
- try {
- const [username, spacename] = spaceId.split("/");
- return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`;
- } catch (e) {
- return "";
- }
-}
-
-app.listen(port, () => {
- console.log(`Reverse proxy server running on ${baseUrl}`);
-});
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/scope.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/scope.py
deleted file mode 100644
index c9d134cc3cedae929e5bef2b5547f7e33dc10a52..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/scope.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from collections.abc import Mapping
-from typing import TYPE_CHECKING, Any, Optional, Tuple
-
-from .highlighter import ReprHighlighter
-from .panel import Panel
-from .pretty import Pretty
-from .table import Table
-from .text import Text, TextType
-
-if TYPE_CHECKING:
- from .console import ConsoleRenderable
-
-
-def render_scope(
- scope: "Mapping[str, Any]",
- *,
- title: Optional[TextType] = None,
- sort_keys: bool = True,
- indent_guides: bool = False,
- max_length: Optional[int] = None,
- max_string: Optional[int] = None,
-) -> "ConsoleRenderable":
- """Render python variables in a given scope.
-
- Args:
- scope (Mapping): A mapping containing variable names and values.
- title (str, optional): Optional title. Defaults to None.
- sort_keys (bool, optional): Enable sorting of items. Defaults to True.
- indent_guides (bool, optional): Enable indentation guides. Defaults to False.
- max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
- Defaults to None.
- max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None.
-
- Returns:
- ConsoleRenderable: A renderable object.
- """
- highlighter = ReprHighlighter()
- items_table = Table.grid(padding=(0, 1), expand=False)
- items_table.add_column(justify="right")
-
- def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]:
- """Sort special variables first, then alphabetically."""
- key, _ = item
- return (not key.startswith("__"), key.lower())
-
- items = sorted(scope.items(), key=sort_items) if sort_keys else scope.items()
- for key, value in items:
- key_text = Text.assemble(
- (key, "scope.key.special" if key.startswith("__") else "scope.key"),
- (" =", "scope.equals"),
- )
- items_table.add_row(
- key_text,
- Pretty(
- value,
- highlighter=highlighter,
- indent_guides=indent_guides,
- max_length=max_length,
- max_string=max_string,
- ),
- )
- return Panel.fit(
- items_table,
- title=title,
- border_style="scope.border",
- padding=(0, 1),
- )
-
-
-if __name__ == "__main__": # pragma: no cover
- from pip._vendor.rich import print
-
- print()
-
- def test(foo: float, bar: float) -> None:
- list_of_things = [1, 2, 3, None, 4, True, False, "Hello World"]
- dict_of_things = {
- "version": "1.1",
- "method": "confirmFruitPurchase",
- "params": [["apple", "orange", "mangoes", "pomelo"], 1.123],
- "id": "194521489",
- }
- print(render_scope(locals(), title="[i]locals", sort_keys=False))
-
- test(20.3423, 3.1427)
- print()
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build.py
deleted file mode 100644
index c0676d8e4b1a567969cf05c5825d49c3300284c9..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/build.py
+++ /dev/null
@@ -1,146 +0,0 @@
-import sys
-import warnings
-from typing import TYPE_CHECKING, List, Dict
-from distutils.command.build import build as _build
-
-from setuptools import SetuptoolsDeprecationWarning
-
-if sys.version_info >= (3, 8):
- from typing import Protocol
-elif TYPE_CHECKING:
- from typing_extensions import Protocol
-else:
- from abc import ABC as Protocol
-
-
-_ORIGINAL_SUBCOMMANDS = {"build_py", "build_clib", "build_ext", "build_scripts"}
-
-
-class build(_build):
- # copy to avoid sharing the object with parent class
- sub_commands = _build.sub_commands[:]
-
- def get_sub_commands(self):
- subcommands = {cmd[0] for cmd in _build.sub_commands}
- if subcommands - _ORIGINAL_SUBCOMMANDS:
- msg = """
- It seems that you are using `distutils.command.build` to add
- new subcommands. Using `distutils` directly is considered deprecated,
- please use `setuptools.command.build`.
- """
- warnings.warn(msg, SetuptoolsDeprecationWarning)
- self.sub_commands = _build.sub_commands
- return super().get_sub_commands()
-
-
-class SubCommand(Protocol):
- """In order to support editable installations (see :pep:`660`) all
- build subcommands **SHOULD** implement this protocol. They also **MUST** inherit
- from ``setuptools.Command``.
-
- When creating an :pep:`editable wheel <660>`, ``setuptools`` will try to evaluate
- custom ``build`` subcommands using the following procedure:
-
- 1. ``setuptools`` will set the ``editable_mode`` attribute to ``True``
- 2. ``setuptools`` will execute the ``run()`` command.
-
- .. important::
- Subcommands **SHOULD** take advantage of ``editable_mode=True`` to adequate
- its behaviour or perform optimisations.
-
- For example, if a subcommand don't need to generate any extra file and
- everything it does is to copy a source file into the build directory,
- ``run()`` **SHOULD** simply "early return".
-
- Similarly, if the subcommand creates files that would be placed alongside
- Python files in the final distribution, during an editable install
- the command **SHOULD** generate these files "in place" (i.e. write them to
- the original source directory, instead of using the build directory).
- Note that ``get_output_mapping()`` should reflect that and include mappings
- for "in place" builds accordingly.
-
- 3. ``setuptools`` use any knowledge it can derive from the return values of
- ``get_outputs()`` and ``get_output_mapping()`` to create an editable wheel.
- When relevant ``setuptools`` **MAY** attempt to use file links based on the value
- of ``get_output_mapping()``. Alternatively, ``setuptools`` **MAY** attempt to use
- :doc:`import hooks ` to redirect any attempt to import
- to the directory with the original source code and other files built in place.
-
- Please note that custom sub-commands **SHOULD NOT** rely on ``run()`` being
- executed (or not) to provide correct return values for ``get_outputs()``,
- ``get_output_mapping()`` or ``get_source_files()``. The ``get_*`` methods should
- work independently of ``run()``.
- """
-
- editable_mode: bool = False
- """Boolean flag that will be set to ``True`` when setuptools is used for an
- editable installation (see :pep:`660`).
- Implementations **SHOULD** explicitly set the default value of this attribute to
- ``False``.
- When subcommands run, they can use this flag to perform optimizations or change
- their behaviour accordingly.
- """
-
- build_lib: str
- """String representing the directory where the build artifacts should be stored,
- e.g. ``build/lib``.
- For example, if a distribution wants to provide a Python module named ``pkg.mod``,
- then a corresponding file should be written to ``{build_lib}/package/module.py``.
- A way of thinking about this is that the files saved under ``build_lib``
- would be eventually copied to one of the directories in :obj:`site.PREFIXES`
- upon installation.
-
- A command that produces platform-independent files (e.g. compiling text templates
- into Python functions), **CAN** initialize ``build_lib`` by copying its value from
- the ``build_py`` command. On the other hand, a command that produces
- platform-specific files **CAN** initialize ``build_lib`` by copying its value from
- the ``build_ext`` command. In general this is done inside the ``finalize_options``
- method with the help of the ``set_undefined_options`` command::
-
- def finalize_options(self):
- self.set_undefined_options("build_py", ("build_lib", "build_lib"))
- ...
- """
-
- def initialize_options(self):
- """(Required by the original :class:`setuptools.Command` interface)"""
-
- def finalize_options(self):
- """(Required by the original :class:`setuptools.Command` interface)"""
-
- def run(self):
- """(Required by the original :class:`setuptools.Command` interface)"""
-
- def get_source_files(self) -> List[str]:
- """
- Return a list of all files that are used by the command to create the expected
- outputs.
- For example, if your build command transpiles Java files into Python, you should
- list here all the Java files.
- The primary purpose of this function is to help populating the ``sdist``
- with all the files necessary to build the distribution.
- All files should be strings relative to the project root directory.
- """
-
- def get_outputs(self) -> List[str]:
- """
- Return a list of files intended for distribution as they would have been
- produced by the build.
- These files should be strings in the form of
- ``"{build_lib}/destination/file/path"``.
-
- .. note::
- The return value of ``get_output()`` should include all files used as keys
- in ``get_output_mapping()`` plus files that are generated during the build
- and don't correspond to any source file already present in the project.
- """
-
- def get_output_mapping(self) -> Dict[str, str]:
- """
- Return a mapping between destination files as they would be produced by the
- build (dict keys) into the respective existing (source) files (dict values).
- Existing (source) files should be represented as strings relative to the project
- root directory.
- Destination files should be strings in the form of
- ``"{build_lib}/destination/file/path"``.
- """
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py
deleted file mode 100644
index d96609e8f2261a6800fe85fcf3e1eaeaa44455c6..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/evaluation/__init__.py
+++ /dev/null
@@ -1,12 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .cityscapes_evaluation import CityscapesInstanceEvaluator, CityscapesSemSegEvaluator
-from .coco_evaluation import COCOEvaluator
-from .rotated_coco_evaluation import RotatedCOCOEvaluator
-from .evaluator import DatasetEvaluator, DatasetEvaluators, inference_context, inference_on_dataset
-from .lvis_evaluation import LVISEvaluator
-from .panoptic_evaluation import COCOPanopticEvaluator
-from .pascal_voc_evaluation import PascalVOCDetectionEvaluator
-from .sem_seg_evaluation import SemSegEvaluator
-from .testing import print_csv_format, verify_results
-
-__all__ = [k for k in globals().keys() if not k.startswith("_")]
diff --git a/spaces/Benson/text-generation/Examples/Apk Mod De Da Para Android 11.md b/spaces/Benson/text-generation/Examples/Apk Mod De Da Para Android 11.md
deleted file mode 100644
index b2c2d1ad5c2ec28eab0a66b247b5a2ca234ef08b..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apk Mod De Da Para Android 11.md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
Totalmente fiable servicio de entrega Mod APK An1: Un divertido y caótico juego basado en la física
-
¿Te gustan los juegos divertidos, impredecibles y llenos de sorpresas? Si es así, es posible que desee echa un vistazo a Totally Reliable Delivery Service, un juego basado en la física en el que entregar paquetes en un mundo loco. Y si quieres hacer el juego aún más divertido y emocionante, se puede descargar totalmente fiable Servicio de entrega Mod APK An1, una versión modificada del juego que le da dinero ilimitado y desbloqueado características. En este artículo, te diremos de qué se trata este juego, qué ofrece el apk mod, y cómo descargarlo e instalarlo en tu dispositivo Android.
¿Qué es un servicio de entrega totalmente confiable?
-
Un juego donde se entregan paquetes en un mundo loco
-
Totally Reliable Delivery Service es un juego donde juegas como un repartidor que tiene que entregar paquetes en un mundo loco y caótico. El juego cuenta con la física ragdoll, lo que significa que su personaje y los objetos en el juego se comportan de manera realista e hilarante. Puede utilizar varios vehículos, como automóviles, camiones, aviones, helicópteros, barcos e incluso cohetes, para transportar sus paquetes. Pero ten cuidado, porque cualquier cosa puede salir mal en el camino. Puedes chocar contra edificios, caerte de puentes, ser perseguido por animales o explotar en el aire. El juego está lleno de sorpresas y desafíos que te harán reír a carcajadas.
-
Un juego donde puedes personalizar tu personaje y vehículos
-
Servicio de entrega totalmente confiable también le permite personalizar su personaje y vehículos para adaptarse a su estilo y preferencias. Puedes elegir entre diferentes trajes, accesorios, peinados y colores para tu personaje. También puede actualizar sus vehículos con diferentes partes, como motores, ruedas, alas, hélices y más. Incluso puedes crear tus propios vehículos usando el modo sandbox. El juego te da muchas opciones para expresar tu creatividad y personalidad.
-
-
Totally Reliable Delivery Service es un juego que puedes disfrutar solo o con tus amigos online. Puedes jugar solo y completar varias misiones y desafíos en el mundo abierto. O puede unirse a hasta otros tres jugadores en línea y cooperar o competir con ellos en la entrega de paquetes. También pueden explorar el mundo juntos y divertirse con el juego basado en la física. El juego admite multijugador multiplataforma, lo que significa que puedes jugar con personas que utilizan diferentes dispositivos, como PC, consola o dispositivos móviles.
-
¿Qué es totalmente fiable servicio de entrega Mod APK An1?
-
Una versión modificada del juego que te da dinero ilimitado y funciones desbloqueadas
-
Totalmente fiable servicio de entrega Mod APK An1 es una versión modificada del juego que le da algunas ventajas sobre la versión original. Con este mod apk, obtendrá dinero ilimitado que se puede utilizar para comprar cualquier cosa en el juego. También obtendrá todas las características desbloqueadas, como todos los trajes, accesorios, vehículos, piezas, mapas, modos y más. Podrás disfrutar del juego sin limitaciones ni restricciones.
-
Una versión del juego que es compatible con dispositivos Android
-
Una versión del juego que es gratis para descargar e instalar
-
Totalmente fiable servicio de entrega Mod APK An1 es una versión del juego que es gratis para descargar e instalar en su dispositivo Android. Usted no necesita pagar nada para obtener este apk mod. También no es necesario para erradicar el dispositivo o utilizar cualquier otra herramienta para instalarlo. Solo tienes que seguir unos sencillos pasos que explicaremos más adelante en este artículo. Podrás jugar el juego sin problemas ni riesgos.
-
¿Cómo descargar e instalar el servicio de entrega totalmente confiable Mod APK An1?
-
Paso 1: Ir al sitio web
-
->
-
Paso 2: Haga clic en el botón de descarga y espere a que el archivo se descargue
-
El siguiente paso es hacer clic en el botón de descarga que se encuentra en la parte inferior de la página. Verá una ventana emergente que le pide que confirme su descarga. Haga clic en Aceptar y espere a que se descargue el archivo. El tamaño del archivo es de unos 50 MB, por lo que puede tardar unos minutos dependiendo de su velocidad de Internet. Puede comprobar el progreso de su descarga en la barra de notificaciones.
-
->
-
Paso 3: Habilitar fuentes desconocidas en la configuración del dispositivo
-
Una vez que se descargue el archivo, debe habilitar fuentes desconocidas en la configuración del dispositivo. Esta es una medida de seguridad que le impide instalar aplicaciones de fuentes distintas de Google Play Store. Para habilitar fuentes desconocidas, vaya a la configuración del dispositivo y busque opciones de seguridad o privacidad. Luego, busque la opción que dice fuentes desconocidas o permita la instalación desde fuentes desconocidas y cámbiela. Es posible que vea un mensaje de advertencia que indica que la instalación desde fuentes desconocidas podría dañar su dispositivo. No te preocupes, este apk mod es seguro y probado, por lo que puede ignorar la advertencia y proceder.
->
-
Paso 4: Busque el archivo descargado y toque en él para instalarlo
-
El siguiente paso es localizar el archivo descargado y tocar en él para instalarlo. Puede encontrar el archivo en su carpeta de descargas o en su aplicación de administrador de archivos. El nombre del archivo es totalmente fiable-delivery-service-mod_1.4.0.apk. Toque en él y verá una pantalla de instalación que le pide que confirme su instalación. Toque en instalar y espere a que el proceso termine.
->
-
-
Felicidades! Usted ha descargado e instalado con éxito Totalmente fiable Servicio de entrega Mod APK An1 en su dispositivo Android. Ahora puedes disfrutar del juego con dinero ilimitado y funciones desbloqueadas. Puedes lanzar el juego desde el cajón de la app o la pantalla de inicio. ¡Diviértete entregando paquetes en un mundo loco!
->
-
Conclusión
-
Totally Reliable Delivery Service es un divertido y caótico juego basado en la física en el que entregar paquetes en un mundo loco e impredecible. Puedes personalizar tu personaje y vehículos, jugar solo o con amigos en línea, y explorar diferentes mapas y modos. Si usted quiere hacer el juego aún más agradable, se puede descargar totalmente fiable servicio de entrega Mod APK An1, una versión modificada del juego que le da dinero ilimitado y desbloqueado características. Puede descargar e instalar este apk mod de forma gratuita y fácil siguiendo nuestra guía anterior. Esperamos que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación.
-
Preguntas frecuentes
-
-
¿Es totalmente confiable servicio de entrega mod APK An1 seguro?
-
-
¿Es totalmente confiable servicio de entrega mod APK An1 seguro?
-
Sí, Servicio de entrega totalmente confiable Mod APK An1 es seguro y probado por nuestro equipo. No contiene ningún virus, malware o spyware que pueda dañar su dispositivo o comprometer su privacidad. Puede descargar e instalar este apk mod sin ninguna preocupación.
-
¿Es totalmente confiable servicio de entrega mod APK An1 legal?
-
-
¿Cuáles son los requisitos para ejecutar Totalmente fiable servicio de entrega Mod APK An1?
-
Servicio de entrega totalmente confiable Mod APK An1 requiere un dispositivo Android que se ejecuta en Android 4.1 o superior. También necesita tener al menos 1 GB de RAM y 200 MB de espacio de almacenamiento gratuito en su dispositivo. También necesitas tener una conexión a Internet estable para jugar online.
-
¿Puedo jugar totalmente fiable servicio de entrega Mod APK An1 en PC o iOS?
-
No, Servicio de entrega totalmente fiable Mod APK An1 solo es compatible con dispositivos Android. No se puede jugar este apk mod en dispositivos PC o iOS. Sin embargo, puedes jugar la versión original del juego en dispositivos PC o iOS descargándolo desde las plataformas oficiales, como Steam, Epic Games Store, App Store o Google Play Store.
-
¿Puedo actualizar el servicio de entrega totalmente confiable Mod APK An1?
-
No, no se puede actualizar totalmente fiable servicio de entrega Mod APK An1 del juego en sí. Si intentas actualizar el juego desde la configuración del juego, podrías perder las características de mod y volver a la versión original del juego. Para actualizar el apk mod, es necesario visitar nuestro sitio web de nuevo y descargar la última versión del apk mod. A continuación, es necesario desinstalar la versión anterior del apk mod e instalar el nuevo siguiendo los mismos pasos que antes.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Blacknoise Reste Toi Mp3 Download.md b/spaces/Benson/text-generation/Examples/Blacknoise Reste Toi Mp3 Download.md
deleted file mode 100644
index 44e2912b0971f562ed86cfe44e4a0acfa7a6dc31..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Blacknoise Reste Toi Mp3 Download.md
+++ /dev/null
@@ -1,109 +0,0 @@
-
-
Blacknoise Reste Toi Mp3 Descargar: Una revisión del éxito de Amapiano
-
Si eres un fan de amapiano, el popular género de música house sudafricano, es posible que hayas oído hablar de Blacknoise, un artista de hip-hop que recientemente ha colaborado con Kazeli y Mashaya para crear una canción pegadiza y edificante llamada Reste Toi. En este artículo, revisaremos esta canción y te diremos cómo descargarla en formato Mp3.
Breve biografía del artista sudafricano de hip-hop
-
Blacknoise es el nombre artístico de Emile Jansen, un rapero, productor y activista de Ciudad del Cabo, Sudáfrica. También es el fundador y líder de Black Noise, un grupo de hip-hop que ha estado activo desde 1986. Blacknoise es uno de los pioneros de la escena hip-hop 'consciente' de Ciudad del Cabo, usando el rap como una herramienta para el comentario social y el empoderamiento. También ha participado en diversas iniciativas de desarrollo juvenil, como talleres, revistas, libros, obras de teatro y eventos. Ha lanzado 12 álbumes con Black Noise, seis álbumes en solitario y varios álbumes recopilatorios.
-
Su estilo musical e influencias
-
El estilo musical de Blacknoise está influenciado por varios géneros, como rap, reggae, jazz, funk, soul y amapiano. Combina sonidos africanos tradicionales con ritmos y samples modernos, creando un sonido único y diverso. También incorpora elementos de su cultura y lenguaje, como el afrikaans, xhosa y khoisan. Algunas de sus influencias musicales incluyen Public Enemy, Bob Marley, Fela Kuti, Brenda Fassie y Kabza De Small.
-
¿Qué es Reste Toi?
-
El significado y origen del título de la canción
-
-
La colaboración con Kazeli y Mashaya
-
Kazeli es una cantante y compositora francesa que se mudó a Sudáfrica en 2019. Conoció a Blacknoise a través de un amigo mutuo y decidieron trabajar juntos en algunos proyectos musicales. También invitaron a Mashaya, un cantante y productor sudafricano conocido por sus éxitos de amapiano. El trío grabó Reste Toi en el estudio de Blacknoise en Ciudad del Cabo. Querían crear una canción que mostrara sus diferentes orígenes y talentos, a la vez que entregara un mensaje positivo.
-
-
La letra y el mensaje de la canción
-
Las letras de Reste Toi tratan de celebrar la individualidad y la singularidad de uno. El estribillo dice así:
-
-
Buscar en la web
-No cambiar pas pour les autres
-Volver a la página principal
-Tu es beau comme tu es
-
-
Esto se traduce a:
-
-
Mantente a ti mismo
-No cambie para otros
-Mantente a ti mismo
-Eres hermosa como eres
-
-
Los versículos también contienen palabras de aliento y afirmación, como "Eres increíble", "Eres una estrella", y "Eres una bendición". La canción también incluye algunas frases de Xhosa, como "Molo sisi" (Hola hermana) y "Enkosi kakhulu" (Muchas gracias). El mensaje de la canción es inspirar a la gente a sentirse segura y feliz con lo que son, y respetar y apreciar a los demás por sus diferencias.
-
Como descargar Reste Toi Mp3?
-
Las plataformas de streaming que ofrecen la canción
-
Reste Toi está disponible en varias plataformas de streaming, como Spotify, Apple Music, YouTube Music, Deezer y SoundCloud. Puedes escuchar la canción online o offline, dependiendo de tu suscripción y preferencias. También puedes ver el video musical oficial de la canción en YouTube, que muestra a los artistas interpretando la canción en diferentes lugares de Ciudad del Cabo.
-
Los beneficios de descargar la canción en formato Mp3
-
-
-
Puede reproducir la canción en cualquier dispositivo que soporte archivos Mp3, como su teléfono, computadora o reproductor Mp3.
-
Puede ahorrar espacio de almacenamiento en su dispositivo, ya que los archivos MP3 son más pequeños que otros formatos de audio.
-
Puedes transferir la canción a otros dispositivos o compartirla con tus amigos fácilmente.
-
Puede editar la canción o usarla para otros fines, como hacer un tono de llamada o un remix.
-
-
Los pasos para descargar la canción de diferentes fuentes
-
Hay diferentes maneras de descargar Reste Toi en formato Mp3, dependiendo de la fuente que elija. Estos son algunos de los métodos más comunes:
-
-
-
Fuente
-
Pasos
-
-
-
Spotify
-
-
Abra la aplicación Spotify en su dispositivo y busque Reste Toi por Blacknoise, Kazeli y Mashaya.
-
Seleccione la canción y toque en el icono de tres puntos en la esquina superior derecha.
-
Seleccione Compartir y luego Copiar enlace.
-
Abra un navegador web y vaya a un sitio web de conversión de Spotify a Mp3, como SpotiFlyer o SpotiApp.
-
Pega el enlace que has copiado y haz clic en Convertir o Descargar.
-
Espere a que el proceso de conversión termine y luego descargue el archivo Mp3 en su dispositivo.
-
-
-
-
YouTube
-
-
Abra un navegador web y vaya a YouTube.com. Busque Reste Toi por Blacknoise, Kazeli y Mashaya.
-
Seleccione el vídeo de la canción y copie su URL desde la barra de direcciones.
-
Abra otra pestaña y vaya a un sitio web de conversión de YouTube a Mp3, como YTMP3 o 4K Video Downloader.
-
Pegue la URL que copió y haga clic en Convertir o Descargar.
-
Seleccione Mp3 como formato de salida y elija la calidad que desee.
-
Espere a que el proceso de conversión termine y luego descargue el archivo Mp3 en su dispositivo.
-
-
-
-
SoundCloud
-
-
Abra un navegador web y vaya a SoundCloud.com. Busque Reste Toi por Blacknoise, Kazeli y Mashaya.
-
-
Abra otra pestaña y vaya a un sitio web de conversión de SoundCloud a Mp3, como SCDL o SoundCloud Downloader.
-
Pegue la URL que copió y haga clic en Descargar o Convertir.
-
Espere a que el proceso de conversión termine y luego descargue el archivo Mp3 en su dispositivo.
-
-
-
-
¿Por qué deberías escuchar Reste Toi?
-
Las críticas y valoraciones positivas de la canción
-
Reste Toi ha recibido críticas y valoraciones positivas tanto de los críticos como de los oyentes. La canción ha sido elogiada por su melodía pegadiza, letras edificantes y colaboración diversa. Algunos de los comentarios de las plataformas en línea incluyen:
-
-
"Esta canción es un banger! Me encanta cómo se mezcla amapiano con hip-hop y francés. Me hace querer bailar y cantar a lo largo."
-
"Este es un mensaje tan hermoso. Creo que todos deben escuchar esta canción y estar orgullosos de lo que son. Es tan refrescante escuchar algo positivo en estos tiempos."
-
"Esta es una obra maestra. La producción es increíble, las voces son suaves, y el rap es fuego. No puedo tener suficiente de esta canción."
-
-
La canción también ha recibido altas calificaciones en varias plataformas, como 4.8 de 5 estrellas en Spotify, 4.7 de 5 estrellas en Apple Music y 4.6 de 5 estrellas en YouTube Music.
-
El sonido pegadizo y optimista de la canción
-
Reste Toi es una canción que te hará sentir bien y con energía. La canción tiene un sonido pegadizo y alegre que combina los elementos de amapiano, hip-hop y pop francés. La canción tiene un ritmo rápido, una línea de bajo groovy, y una melodía de piano suave. La canción también cuenta con algunos sonidos electrónicos, como sintetizadores, tambores y efectos. La canción es fácil de cantar, ya que tiene un coro simple y repetitivo. La canción también es adecuada para bailar, ya que tiene un ritmo rítmico y animado.
-
La relevancia cultural y social de la canción
-
-
Conclusión
-
Reste Toi de Blacknoise, Kazeli, y Mashaya es una canción que debes escuchar si estás buscando una pista de amapiano pegadiza y edificante que te hará sentir bien y orgulloso de quién eres. La canción está disponible en varias plataformas de streaming, y también se puede descargar en formato Mp3 de diferentes fuentes. La canción ha recibido críticas y valoraciones positivas de críticos y oyentes, que han elogiado su sonido, letras y mensaje. La canción es también un reflejo de la diversidad cultural y social de Sudáfrica y el mundo, que es algo para celebrar y apreciar.
-
Preguntas frecuentes
-
P: ¿Quiénes son los artistas detrás de Reste Toi?
-
A: Reste Toi es una canción de Blacknoise, Kazeli y Mashaya. Blacknoise es un artista sudafricano de hip-hop y activista que es el fundador de la banda Black Noise. Kazeli es una cantante y compositora francesa que se mudó a Sudáfrica en 2019. Mashaya es un cantante y productor sudafricano conocido por sus éxitos de amapiano.
-
Q: ¿Qué significa Reste Toi?
-
A: Reste Toi es una frase francesa que significa "quédate tú" o "sé tú mismo". También es el título de la canción de Blacknoise, Kazeli y Mashaya.
-
P: ¿Qué género es Reste Toi?
-
A: Reste Toi es una pista de amapiano que cuenta con voces en francés, inglés y Xhosa. Amapiano es un popular género sudafricano de música house que combina sonidos africanos tradicionales con ritmos y samples modernos.
-
Q: ¿Cómo puedo descargar Reste Toi en formato Mp3?
-
A: Puede descargar Reste Toi en formato Mp3 desde diferentes fuentes, como Spotify, YouTube o SoundCloud. Tendrá que copiar el enlace de la canción desde la plataforma de transmisión y pegarlo en un sitio web convertidor que convertirá la canción en formato Mp3. A continuación, puede descargar el archivo Mp3 a su dispositivo.
I have been a professional mentor to both undergraduate & graduate students for the past 7 years. These are students I mentored have entered into careers in software engineering, data science / machine learning and product (tech focus). Based on my previous success with my mentees having landed internships & full-time jobs, I believe this would greatly benefit a serious mentee candidate who wants to partner with a mentor like myself in order to maximize their chances of success :)
Interview
How did you hear about SM?
popped up somwhere on LinkedIn and researched it
Career
DS for 8 years
HBO max - recommender systems for the past 3-4 years
activley interviewing to level up
Mentorship experience?
I've been mentoring college students for 8 years at UoW, and UChicago
part of some grad programs
mentor 2 or 3 students every year, help them prep and get a job after college
some informal mentorships, but mostly gets paired
about 6 months, apply, interview prep, resume review, get some projects off the ground
What are beginners lacking?
getting in the habit of programming / working on something
creating a habit of coding every day!!
and getting in the ritual/habit of applying for jobs
come up with some agreement with yourself
hold yourself accountable
This last person applied through N applications
And how can you add value as a mentor?
always there if they want interview prep (tech, behavioural)
resume review
"I'm happy if you want to share a job position or two with me for a an honest assessment"
assess their qualifications for ertain jobs
try to refer mentees to jobs at his company
likes long term relationships!!
-
- Questions about SM?
Can I reach out to potential mentees?
Does the relationship end after the mentorship period?
Can mentees come back when they want another job?
What if the relationship dies out?
What is SM working on now?
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/main.ts b/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/main.ts
deleted file mode 100644
index c58dc05cbc6d094a9ed44203c6b69b74e5294452..0000000000000000000000000000000000000000
--- a/spaces/augmentedimaginationhackathon/paperstocode/frontend/src/main.ts
+++ /dev/null
@@ -1,7 +0,0 @@
-import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
-
-import { AppModule } from './app/app.module';
-
-
-platformBrowserDynamic().bootstrapModule(AppModule)
- .catch(err => console.error(err));
diff --git a/spaces/awacke1/AI.Dashboard.Mermaid.Model.HTML5/index.html b/spaces/awacke1/AI.Dashboard.Mermaid.Model.HTML5/index.html
deleted file mode 100644
index 66c7ac0516cb47848e339006985c57cfc0c153c4..0000000000000000000000000000000000000000
--- a/spaces/awacke1/AI.Dashboard.Mermaid.Model.HTML5/index.html
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
-
-
-
- My static Space
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-journey
- title Create AI
- section Training
- Format DataSet Inputs Files, Data Splits: 5: Teacher
- Model Build w/ SKLearn, TF, Pytorch: 3: Student
- Determine Model Performance: 1: Teacher, Student
- section Deploy
- Web Deploy Local and Cloud: 5: Teacher
- Architecture Spaces Gradio Streamlit Heroku AWS Azure and GCCP: 5: Teacher
- section Testing
- Test Model with Input Datasets: 5: Teacher
- Examples. Inputs that Work, Inputs That Break Model: 5: Teacher
- Governance - Analyze, Publish Fairness, Equity, Bias for Datasets and Outputs: 5: Teacher
-
-
-
-sequenceDiagram
- participant Alice
- participant Bob
- Alice->>John: Hello John, how are you?
- loop Healthcheck
- John->>John: Fight against hypochondria
- end
- Note right of John: Rational thoughts prevail...
- John-->>Alice: Great!
- John->>Bob: How about you?
- Bob-->>John: Jolly good!
-
-
-
-
Welcome to the Mermaid Modeler Tip Sheet
-
- You can use Mermaid inside HTML5 by including the script and a div with the class or mermaid.
-
Cramér's V is an association measure for nominal random variables. The coefficient ranges from 0 to 1, with 0 indicating independence and 1 indicating perfect association. The empirical estimators used for Cramér's V have been proved to be biased, even for large samples. We use a bias-corrected measure that has been proposed by Bergsma in 2013 that can be found here.
Phik (φk)
Phik (φk) is a new and practical correlation coefficient that works consistently between categorical, ordinal and interval variables, captures non-linear dependency and reverts to the Pearson correlation coefficient in case of a bivariate normal input distribution. There is extensive documentation available here.
Organized efforts to search for extraterrestrial intelligence
https://en.wikipedia.org/wiki/Technology
2022-10-17 15:42:25.557130
85
== Function ===\nCalcium is an essential element
https://en.wikipedia.org/wiki/Calcium
2022-10-18 15:55:02.907650
86
A star trail is a type of photograph that uses long exposure times to capture diurnal circles, the apparent motion of stars in the night sky due to Earth's rotation. A star-trail photograph shows individual stars as streaks across the image, with longer exposures yielding longer arcs. The term is used for similar photos captured elsewhere, such as on board the International Space Station and on Mars.Typical shutter speeds for a star trail range from 15 minutes to several hours, requiring a "Bulb" setting on the camera to open the shutter for a period longer than usual. However, a more practiced technique is to blend a number of frames together to create the final star trail image.Star trails have been used by professional astronomers to measure the quality of observing locations for major telescopes.\n\n\n== Capture ==\n\nStar trail photographs are captured by placing a camera on a tripod, pointing the lens toward the night sky, and allowing the shutter to stay open for a long period of time. Star trails are considered relatively easy for amateur astrophotographers to create. Photographers generally make these images by using a DSLR or Mirrorless camera with its lens focus set to infinity. A cable release or intervalometer allows the photographer to hold the shutter open for the desired amount of time. Typical exposure times range from 15 minutes to many hours long, depending on the desired length of the star trail arcs for the image. Even though star trail pictures are created under low-light conditions, long exposure times allow fast films, such as ISO 200 and ISO 400. Wide-apertures, such as f/5.6 and f/4, are recommended for star trails.\n\nBecause exposure times for star trail photographs can be several hours long, camera batteries can be easily depleted. Mechanical cameras that do not require a battery to open and close the shutter have an advantage over more modern film and digital cameras that rely on battery power. On these cameras, the Bulb, or B, exposure setting keeps the shutter open. Another problem that digital cameras encounter is an increase in electronic noise with increasing exposure time. However, this can be avoided through the use of shorter exposure times that are then stacked in post production software. This avoids possible heat build up or digital noise caused from a single long exposure. \nAmerican astronaut Don Pettit recorded star trails with a digital camera from the International Space Station in Earth orbit between April and June, 2012. Pettit described his technique as follows: "My star trail images are made by taking a time exposure of about 10 to 15 minutes. However, with modern digital cameras, 30 seconds is about the longest exposure possible, due to electronic detector noise effectively snowing out the image. To achieve the longer exposures I do what many amateur astronomers do. I take multiple 30-second exposures, then 'stack' them using imaging software, thus producing the longer exposure."Star trail images have also been taken on Mars. The Spirit rover produced them while looking for meteors. Since the camera was limited to 60 second exposures the trails appear as dashed lines.\n\n\n== Earth's rotation ==\n\nStar trail photographs are possible because of the rotation of Earth about its axis. The apparent motion of the stars is recorded as mostly curved streaks on the film or detector. For observers in the Northern Hemisphere, aiming the camera northward creates an image with concentric circular arcs centered on the north celestial pole (very near Polaris). For those in the Southern Hemisphere, this same effect is achieved by aiming the camera southward. In this case, the arc streaks are centered on the south celestial pole (near Sigma Octantis). Aiming the camera eastward or westward shows straight streaks on the celestial equator, which is tilted at angle with respect to the horizon. The angular measure of this tilt depends on the photographer's latitude (L), and is equal to 90° − L.\n\n\n== Astronomical site testing ==\nStar trail photographs can be used by astronomers to determine the quality of a location for telescope observations. Star trail observations of Polaris have been used to measure the quality of seeing in the atmosphere, and the vibrations in telescope mounting systems. The first recorded suggestion of this technique is from E.S. Skinner's 1931 book A Manual of Celestial Photography.\n\n\n== Gallery ==\n\n\t\t\n\t\t\n\t\t\n\t\t\n\t\t\n\n\n== References ==\n\n\n== External links ==\n\n4 Steps To Creating Star Trails Photos Using Stacking Software\nStar trail photography\nStarStaX free multi-platform star trail software
https://en.wikipedia.org/wiki/Star_trail
2022-10-19 00:58:13.680029
87
E.S. Skinner
https://en.wikipedia.org/wiki/Star_trail
2022-10-19 00:58:16.667325
88
A star tracker is an optical device that measures the positions of stars using photocells or a camera.\nAs the positions of many stars have been measured by astronomers to a high degree of accuracy, a star tracker on a satellite or spacecraft may be used to determine the orientation (or attitude) of the spacecraft with respect to the stars. In order to do this, the star tracker must obtain an image of the stars, measure their apparent position in the reference frame of the spacecraft, and identify the stars so their position can be compared with their known absolute position from a star catalog. A star tracker may include a processor to identify stars by comparing the pattern of observed stars with the known pattern of stars in the sky.\n\n\n== History ==\nIn the 1950s and early 1960s, star trackers were an important part of early long-range ballistic missiles and cruise missiles, in the era when inertial navigation systems (INS) were not sufficiently accurate for intercontinental ranges.Consider a Cold War missile flying towards its target; it initially starts by flying northward, passes over the arctic, and then begins flying southward again. From the missile's perspective, stars behind it appear to move closer to the southern horizon while those in front are rising. Before flight, one can calculate the relative angle of a star based on where the missile should be at that instant if it is in the correct location. That can then be compared to the measured location to produce an "error off" signal that can be used to bring the missile back onto its correct trajectory.Due to the Earth's rotation, stars that are in a usable location change over the course of a day and the location of the target. Generally, a selection of several bright stars would be used and one would be selected at launch time. For guidance systems based solely on star tracking, some sort of recording mechanism, typically a magnetic tape, was pre-recorded with a signal that represented the angle of the star over the period of a day. At launch, the tape was forwarded to the appropriate time. During the flight, the signal on the tape was used to roughly position a telescope so it would point at the expected position of the star. At the telescope's focus was a photocell and some sort of signal-generator, typically a spinning disk known as a chopper. The chopper causes the image of the star to repeatedly appear and disappear on the photocell, producing a signal that was then smoothed to produce an alternating current output. The phase of that signal was compared to the one on the tape to produce a guidance signal.Star trackers were often combined with an INS. INS systems measure accelerations and integrate those over time to determine a velocity and, optionally, double-integrate to produce a location relative to its launch location. Even tiny measurement errors, when integrated, adds up to an appreciable error known as "drift". For instance, the N-1 navigation system developed for the SM-64 Navaho cruise missile drifted at a rate of 1 nautical mile per hour, meaning that after a two-hour flight the INS would be indicating a position 2 nautical miles (3.7 km; 2.3 mi) away from its actual location. This was outside the desired accuracy of about half a mile.\nIn the case of an INS, the magnetic tape can be removed and those signals instead provided by the INS. The rest of the system works as before; the signal from the INS roughly positions the star tracker, which then measures the actual location of the star and produces an error signal. This signal is then used to correct the position being generated from the INS, reducing the accumulated drift back to the limit of the accuracy of the tracker. These "stellar inertial" systems were especially common from the 1950s through the 1980s, although some systems use it to this day.\n\n\n== Current technology ==\nMany models are currently available. There also exist open projects designed to be used for the global CubeSat researchers and developers community.\nStar trackers, which require high sensitivity, may become confused by sunlight reflected from the spacecraft, or by exhaust gas plumes from the spacecraft thrusters (either sunlight reflection or contamination of the star tracker window). Star trackers are also susceptible to a variety of errors (low spatial frequency, high spatial frequency, temporal, ...) in addition to a variety of optical sources of error (spherical aberration, chromatic aberration, etc.). There are also many potential sources of confusion for the star identification algorithm (planets, comets, supernovae, the bimodal character of the point spread function for adjacent stars, other nearby satellites, point-source light pollution from large cities on Earth, ...). There are roughly 57 bright navigational stars in common use. However, for more complex missions, entire star field databases are used to determine spacecraft orientation. A typical star catalog for high-fidelity attitude determination is originated from a standard base catalog (for example from the United States Naval Observatory) and then filtered to remove problematic stars, for example due to apparent magnitude variability, color index uncertainty, or a location within the Hertzsprung-Russell diagram implying unreliability. These types of star catalogs can have thousands of stars stored in memory on board the spacecraft, or else processed using tools at the ground station and then uploaded.\n\n\n== See also ==\nCelestial navigation\nGoTo (telescopes)\nSun sensor\n\n\n== References ==
A star tracker is an optical device that measures the positions of stars using photocells or a camera.\nAs the positions of many stars have been measured by astronomers to a high degree of accuracy, a star tracker on a satellite or spacecraft may be used to determine the orientation (or attitude) of the spacecraft with respect to the stars. In order to do this, the star tracker must obtain an image of the stars, measure their apparent position in the reference frame of the spacecraft, and identify the stars so their position can be compared with their known absolute position from a star catalog. A star tracker may include a processor to identify stars by comparing the pattern of observed stars with the known pattern of stars in the sky.\n\n\n== History ==\nIn the 1950s and early 1960s, star trackers were an important part of early long-range ballistic missiles and cruise missiles, in the era when inertial navigation systems (INS) were not sufficiently accurate for intercontinental ranges.Consider a Cold War missile flying towards its target; it initially starts by flying northward, passes over the arctic, and then begins flying southward again. From the missile's perspective, stars behind it appear to move closer to the southern horizon while those in front are rising. Before flight, one can calculate the relative angle of a star based on where the missile should be at that instant if it is in the correct location. That can then be compared to the measured location to produce an "error off" signal that can be used to bring the missile back onto its correct trajectory.Due to the Earth's rotation, stars that are in a usable location change over the course of a day and the location of the target. Generally, a selection of several bright stars would be used and one would be selected at launch time. For guidance systems based solely on star tracking, some sort of recording mechanism, typically a magnetic tape, was pre-recorded with a signal that represented the angle of the star over the period of a day. At launch, the tape was forwarded to the appropriate time. During the flight, the signal on the tape was used to roughly position a telescope so it would point at the expected position of the star. At the telescope's focus was a photocell and some sort of signal-generator, typically a spinning disk known as a chopper. The chopper causes the image of the star to repeatedly appear and disappear on the photocell, producing a signal that was then smoothed to produce an alternating current output. The phase of that signal was compared to the one on the tape to produce a guidance signal.Star trackers were often combined with an INS. INS systems measure accelerations and integrate those over time to determine a velocity and, optionally, double-integrate to produce a location relative to its launch location. Even tiny measurement errors, when integrated, adds up to an appreciable error known as "drift". For instance, the N-1 navigation system developed for the SM-64 Navaho cruise missile drifted at a rate of 1 nautical mile per hour, meaning that after a two-hour flight the INS would be indicating a position 2 nautical miles (3.7 km; 2.3 mi) away from its actual location. This was outside the desired accuracy of about half a mile.\nIn the case of an INS, the magnetic tape can be removed and those signals instead provided by the INS. The rest of the system works as before; the signal from the INS roughly positions the star tracker, which then measures the actual location of the star and produces an error signal. This signal is then used to correct the position being generated from the INS, reducing the accumulated drift back to the limit of the accuracy of the tracker. These "stellar inertial" systems were especially common from the 1950s through the 1980s, although some systems use it to this day.\n\n\n== Current technology ==\nMany models are currently available. There also exist open projects designed to be used for the global CubeSat researchers and developers community.\nStar trackers, which require high sensitivity, may become confused by sunlight reflected from the spacecraft, or by exhaust gas plumes from the spacecraft thrusters (either sunlight reflection or contamination of the star tracker window). Star trackers are also susceptible to a variety of errors (low spatial frequency, high spatial frequency, temporal, ...) in addition to a variety of optical sources of error (spherical aberration, chromatic aberration, etc.). There are also many potential sources of confusion for the star identification algorithm (planets, comets, supernovae, the bimodal character of the point spread function for adjacent stars, other nearby satellites, point-source light pollution from large cities on Earth, ...). There are roughly 57 bright navigational stars in common use. However, for more complex missions, entire star field databases are used to determine spacecraft orientation. A typical star catalog for high-fidelity attitude determination is originated from a standard base catalog (for example from the United States Naval Observatory) and then filtered to remove problematic stars, for example due to apparent magnitude variability, color index uncertainty, or a location within the Hertzsprung-Russell diagram implying unreliability. These types of star catalogs can have thousands of stars stored in memory on board the spacecraft, or else processed using tools at the ground station and then uploaded.\n\n\n== See also ==\nCelestial navigation\nGoTo (telescopes)\nSun sensor\n\n\n== References ==
\ No newline at end of file
diff --git a/spaces/awen666/web-ui/index.html b/spaces/awen666/web-ui/index.html
deleted file mode 100644
index 598d70a359bb59ba7f59afc4974219eda01dac2f..0000000000000000000000000000000000000000
--- a/spaces/awen666/web-ui/index.html
+++ /dev/null
@@ -1 +0,0 @@
-Gradiobot UI
\ No newline at end of file
diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/TeapotBufferGeometry.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/TeapotBufferGeometry.js
deleted file mode 100644
index 3b8811fd6b413385db1ddc0767ef9ddbeb0826c7..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/TeapotBufferGeometry.js
+++ /dev/null
@@ -1,718 +0,0 @@
-/**
- * @author Eric Haines / http://erichaines.com/
- *
- * Tessellates the famous Utah teapot database by Martin Newell into triangles.
- *
- * THREE.TeapotBufferGeometry = function ( size, segments, bottom, lid, body, fitLid, blinn )
- *
- * defaults: size = 50, segments = 10, bottom = true, lid = true, body = true,
- * fitLid = false, blinn = true
- *
- * size is a relative scale: I've scaled the teapot to fit vertically between -1 and 1.
- * Think of it as a "radius".
- * segments - number of line segments to subdivide each patch edge;
- * 1 is possible but gives degenerates, so two is the real minimum.
- * bottom - boolean, if true (default) then the bottom patches are added. Some consider
- * adding the bottom heresy, so set this to "false" to adhere to the One True Way.
- * lid - to remove the lid and look inside, set to true.
- * body - to remove the body and leave the lid, set this and "bottom" to false.
- * fitLid - the lid is a tad small in the original. This stretches it a bit so you can't
- * see the teapot's insides through the gap.
- * blinn - Jim Blinn scaled the original data vertically by dividing by about 1.3 to look
- * nicer. If you want to see the original teapot, similar to the real-world model, set
- * this to false. True by default.
- * See http://en.wikipedia.org/wiki/File:Original_Utah_Teapot.jpg for the original
- * real-world teapot (from http://en.wikipedia.org/wiki/Utah_teapot).
- *
- * Note that the bottom (the last four patches) is not flat - blame Frank Crow, not me.
- *
- * The teapot should normally be rendered as a double sided object, since for some
- * patches both sides can be seen, e.g., the gap around the lid and inside the spout.
- *
- * Segments 'n' determines the number of triangles output.
- * Total triangles = 32*2*n*n - 8*n [degenerates at the top and bottom cusps are deleted]
- *
- * size_factor # triangles
- * 1 56
- * 2 240
- * 3 552
- * 4 992
- *
- * 10 6320
- * 20 25440
- * 30 57360
- *
- * Code converted from my ancient SPD software, http://tog.acm.org/resources/SPD/
- * Created for the Udacity course "Interactive Rendering", http://bit.ly/ericity
- * Lesson: https://www.udacity.com/course/viewer#!/c-cs291/l-68866048/m-106482448
- * YouTube video on teapot history: https://www.youtube.com/watch?v=DxMfblPzFNc
- *
- * See https://en.wikipedia.org/wiki/Utah_teapot for the history of the teapot
- *
- */
-/*global THREE */
-
-THREE.TeapotBufferGeometry = function ( size, segments, bottom, lid, body, fitLid, blinn ) {
-
- // 32 * 4 * 4 Bezier spline patches
- var teapotPatches = [
- /*rim*/
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 3, 16, 17, 18, 7, 19, 20, 21, 11, 22, 23, 24, 15, 25, 26, 27,
- 18, 28, 29, 30, 21, 31, 32, 33, 24, 34, 35, 36, 27, 37, 38, 39,
- 30, 40, 41, 0, 33, 42, 43, 4, 36, 44, 45, 8, 39, 46, 47, 12,
- /*body*/
- 12, 13, 14, 15, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
- 15, 25, 26, 27, 51, 60, 61, 62, 55, 63, 64, 65, 59, 66, 67, 68,
- 27, 37, 38, 39, 62, 69, 70, 71, 65, 72, 73, 74, 68, 75, 76, 77,
- 39, 46, 47, 12, 71, 78, 79, 48, 74, 80, 81, 52, 77, 82, 83, 56,
- 56, 57, 58, 59, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,
- 59, 66, 67, 68, 87, 96, 97, 98, 91, 99, 100, 101, 95, 102, 103, 104,
- 68, 75, 76, 77, 98, 105, 106, 107, 101, 108, 109, 110, 104, 111, 112, 113,
- 77, 82, 83, 56, 107, 114, 115, 84, 110, 116, 117, 88, 113, 118, 119, 92,
- /*handle*/
- 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
- 123, 136, 137, 120, 127, 138, 139, 124, 131, 140, 141, 128, 135, 142, 143, 132,
- 132, 133, 134, 135, 144, 145, 146, 147, 148, 149, 150, 151, 68, 152, 153, 154,
- 135, 142, 143, 132, 147, 155, 156, 144, 151, 157, 158, 148, 154, 159, 160, 68,
- /*spout*/
- 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176,
- 164, 177, 178, 161, 168, 179, 180, 165, 172, 181, 182, 169, 176, 183, 184, 173,
- 173, 174, 175, 176, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196,
- 176, 183, 184, 173, 188, 197, 198, 185, 192, 199, 200, 189, 196, 201, 202, 193,
- /*lid*/
- 203, 203, 203, 203, 204, 205, 206, 207, 208, 208, 208, 208, 209, 210, 211, 212,
- 203, 203, 203, 203, 207, 213, 214, 215, 208, 208, 208, 208, 212, 216, 217, 218,
- 203, 203, 203, 203, 215, 219, 220, 221, 208, 208, 208, 208, 218, 222, 223, 224,
- 203, 203, 203, 203, 221, 225, 226, 204, 208, 208, 208, 208, 224, 227, 228, 209,
- 209, 210, 211, 212, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240,
- 212, 216, 217, 218, 232, 241, 242, 243, 236, 244, 245, 246, 240, 247, 248, 249,
- 218, 222, 223, 224, 243, 250, 251, 252, 246, 253, 254, 255, 249, 256, 257, 258,
- 224, 227, 228, 209, 252, 259, 260, 229, 255, 261, 262, 233, 258, 263, 264, 237,
- /*bottom*/
- 265, 265, 265, 265, 266, 267, 268, 269, 270, 271, 272, 273, 92, 119, 118, 113,
- 265, 265, 265, 265, 269, 274, 275, 276, 273, 277, 278, 279, 113, 112, 111, 104,
- 265, 265, 265, 265, 276, 280, 281, 282, 279, 283, 284, 285, 104, 103, 102, 95,
- 265, 265, 265, 265, 282, 286, 287, 266, 285, 288, 289, 270, 95, 94, 93, 92
- ];
-
- var teapotVertices = [
- 1.4, 0, 2.4,
- 1.4, - 0.784, 2.4,
- 0.784, - 1.4, 2.4,
- 0, - 1.4, 2.4,
- 1.3375, 0, 2.53125,
- 1.3375, - 0.749, 2.53125,
- 0.749, - 1.3375, 2.53125,
- 0, - 1.3375, 2.53125,
- 1.4375, 0, 2.53125,
- 1.4375, - 0.805, 2.53125,
- 0.805, - 1.4375, 2.53125,
- 0, - 1.4375, 2.53125,
- 1.5, 0, 2.4,
- 1.5, - 0.84, 2.4,
- 0.84, - 1.5, 2.4,
- 0, - 1.5, 2.4,
- - 0.784, - 1.4, 2.4,
- - 1.4, - 0.784, 2.4,
- - 1.4, 0, 2.4,
- - 0.749, - 1.3375, 2.53125,
- - 1.3375, - 0.749, 2.53125,
- - 1.3375, 0, 2.53125,
- - 0.805, - 1.4375, 2.53125,
- - 1.4375, - 0.805, 2.53125,
- - 1.4375, 0, 2.53125,
- - 0.84, - 1.5, 2.4,
- - 1.5, - 0.84, 2.4,
- - 1.5, 0, 2.4,
- - 1.4, 0.784, 2.4,
- - 0.784, 1.4, 2.4,
- 0, 1.4, 2.4,
- - 1.3375, 0.749, 2.53125,
- - 0.749, 1.3375, 2.53125,
- 0, 1.3375, 2.53125,
- - 1.4375, 0.805, 2.53125,
- - 0.805, 1.4375, 2.53125,
- 0, 1.4375, 2.53125,
- - 1.5, 0.84, 2.4,
- - 0.84, 1.5, 2.4,
- 0, 1.5, 2.4,
- 0.784, 1.4, 2.4,
- 1.4, 0.784, 2.4,
- 0.749, 1.3375, 2.53125,
- 1.3375, 0.749, 2.53125,
- 0.805, 1.4375, 2.53125,
- 1.4375, 0.805, 2.53125,
- 0.84, 1.5, 2.4,
- 1.5, 0.84, 2.4,
- 1.75, 0, 1.875,
- 1.75, - 0.98, 1.875,
- 0.98, - 1.75, 1.875,
- 0, - 1.75, 1.875,
- 2, 0, 1.35,
- 2, - 1.12, 1.35,
- 1.12, - 2, 1.35,
- 0, - 2, 1.35,
- 2, 0, 0.9,
- 2, - 1.12, 0.9,
- 1.12, - 2, 0.9,
- 0, - 2, 0.9,
- - 0.98, - 1.75, 1.875,
- - 1.75, - 0.98, 1.875,
- - 1.75, 0, 1.875,
- - 1.12, - 2, 1.35,
- - 2, - 1.12, 1.35,
- - 2, 0, 1.35,
- - 1.12, - 2, 0.9,
- - 2, - 1.12, 0.9,
- - 2, 0, 0.9,
- - 1.75, 0.98, 1.875,
- - 0.98, 1.75, 1.875,
- 0, 1.75, 1.875,
- - 2, 1.12, 1.35,
- - 1.12, 2, 1.35,
- 0, 2, 1.35,
- - 2, 1.12, 0.9,
- - 1.12, 2, 0.9,
- 0, 2, 0.9,
- 0.98, 1.75, 1.875,
- 1.75, 0.98, 1.875,
- 1.12, 2, 1.35,
- 2, 1.12, 1.35,
- 1.12, 2, 0.9,
- 2, 1.12, 0.9,
- 2, 0, 0.45,
- 2, - 1.12, 0.45,
- 1.12, - 2, 0.45,
- 0, - 2, 0.45,
- 1.5, 0, 0.225,
- 1.5, - 0.84, 0.225,
- 0.84, - 1.5, 0.225,
- 0, - 1.5, 0.225,
- 1.5, 0, 0.15,
- 1.5, - 0.84, 0.15,
- 0.84, - 1.5, 0.15,
- 0, - 1.5, 0.15,
- - 1.12, - 2, 0.45,
- - 2, - 1.12, 0.45,
- - 2, 0, 0.45,
- - 0.84, - 1.5, 0.225,
- - 1.5, - 0.84, 0.225,
- - 1.5, 0, 0.225,
- - 0.84, - 1.5, 0.15,
- - 1.5, - 0.84, 0.15,
- - 1.5, 0, 0.15,
- - 2, 1.12, 0.45,
- - 1.12, 2, 0.45,
- 0, 2, 0.45,
- - 1.5, 0.84, 0.225,
- - 0.84, 1.5, 0.225,
- 0, 1.5, 0.225,
- - 1.5, 0.84, 0.15,
- - 0.84, 1.5, 0.15,
- 0, 1.5, 0.15,
- 1.12, 2, 0.45,
- 2, 1.12, 0.45,
- 0.84, 1.5, 0.225,
- 1.5, 0.84, 0.225,
- 0.84, 1.5, 0.15,
- 1.5, 0.84, 0.15,
- - 1.6, 0, 2.025,
- - 1.6, - 0.3, 2.025,
- - 1.5, - 0.3, 2.25,
- - 1.5, 0, 2.25,
- - 2.3, 0, 2.025,
- - 2.3, - 0.3, 2.025,
- - 2.5, - 0.3, 2.25,
- - 2.5, 0, 2.25,
- - 2.7, 0, 2.025,
- - 2.7, - 0.3, 2.025,
- - 3, - 0.3, 2.25,
- - 3, 0, 2.25,
- - 2.7, 0, 1.8,
- - 2.7, - 0.3, 1.8,
- - 3, - 0.3, 1.8,
- - 3, 0, 1.8,
- - 1.5, 0.3, 2.25,
- - 1.6, 0.3, 2.025,
- - 2.5, 0.3, 2.25,
- - 2.3, 0.3, 2.025,
- - 3, 0.3, 2.25,
- - 2.7, 0.3, 2.025,
- - 3, 0.3, 1.8,
- - 2.7, 0.3, 1.8,
- - 2.7, 0, 1.575,
- - 2.7, - 0.3, 1.575,
- - 3, - 0.3, 1.35,
- - 3, 0, 1.35,
- - 2.5, 0, 1.125,
- - 2.5, - 0.3, 1.125,
- - 2.65, - 0.3, 0.9375,
- - 2.65, 0, 0.9375,
- - 2, - 0.3, 0.9,
- - 1.9, - 0.3, 0.6,
- - 1.9, 0, 0.6,
- - 3, 0.3, 1.35,
- - 2.7, 0.3, 1.575,
- - 2.65, 0.3, 0.9375,
- - 2.5, 0.3, 1.125,
- - 1.9, 0.3, 0.6,
- - 2, 0.3, 0.9,
- 1.7, 0, 1.425,
- 1.7, - 0.66, 1.425,
- 1.7, - 0.66, 0.6,
- 1.7, 0, 0.6,
- 2.6, 0, 1.425,
- 2.6, - 0.66, 1.425,
- 3.1, - 0.66, 0.825,
- 3.1, 0, 0.825,
- 2.3, 0, 2.1,
- 2.3, - 0.25, 2.1,
- 2.4, - 0.25, 2.025,
- 2.4, 0, 2.025,
- 2.7, 0, 2.4,
- 2.7, - 0.25, 2.4,
- 3.3, - 0.25, 2.4,
- 3.3, 0, 2.4,
- 1.7, 0.66, 0.6,
- 1.7, 0.66, 1.425,
- 3.1, 0.66, 0.825,
- 2.6, 0.66, 1.425,
- 2.4, 0.25, 2.025,
- 2.3, 0.25, 2.1,
- 3.3, 0.25, 2.4,
- 2.7, 0.25, 2.4,
- 2.8, 0, 2.475,
- 2.8, - 0.25, 2.475,
- 3.525, - 0.25, 2.49375,
- 3.525, 0, 2.49375,
- 2.9, 0, 2.475,
- 2.9, - 0.15, 2.475,
- 3.45, - 0.15, 2.5125,
- 3.45, 0, 2.5125,
- 2.8, 0, 2.4,
- 2.8, - 0.15, 2.4,
- 3.2, - 0.15, 2.4,
- 3.2, 0, 2.4,
- 3.525, 0.25, 2.49375,
- 2.8, 0.25, 2.475,
- 3.45, 0.15, 2.5125,
- 2.9, 0.15, 2.475,
- 3.2, 0.15, 2.4,
- 2.8, 0.15, 2.4,
- 0, 0, 3.15,
- 0.8, 0, 3.15,
- 0.8, - 0.45, 3.15,
- 0.45, - 0.8, 3.15,
- 0, - 0.8, 3.15,
- 0, 0, 2.85,
- 0.2, 0, 2.7,
- 0.2, - 0.112, 2.7,
- 0.112, - 0.2, 2.7,
- 0, - 0.2, 2.7,
- - 0.45, - 0.8, 3.15,
- - 0.8, - 0.45, 3.15,
- - 0.8, 0, 3.15,
- - 0.112, - 0.2, 2.7,
- - 0.2, - 0.112, 2.7,
- - 0.2, 0, 2.7,
- - 0.8, 0.45, 3.15,
- - 0.45, 0.8, 3.15,
- 0, 0.8, 3.15,
- - 0.2, 0.112, 2.7,
- - 0.112, 0.2, 2.7,
- 0, 0.2, 2.7,
- 0.45, 0.8, 3.15,
- 0.8, 0.45, 3.15,
- 0.112, 0.2, 2.7,
- 0.2, 0.112, 2.7,
- 0.4, 0, 2.55,
- 0.4, - 0.224, 2.55,
- 0.224, - 0.4, 2.55,
- 0, - 0.4, 2.55,
- 1.3, 0, 2.55,
- 1.3, - 0.728, 2.55,
- 0.728, - 1.3, 2.55,
- 0, - 1.3, 2.55,
- 1.3, 0, 2.4,
- 1.3, - 0.728, 2.4,
- 0.728, - 1.3, 2.4,
- 0, - 1.3, 2.4,
- - 0.224, - 0.4, 2.55,
- - 0.4, - 0.224, 2.55,
- - 0.4, 0, 2.55,
- - 0.728, - 1.3, 2.55,
- - 1.3, - 0.728, 2.55,
- - 1.3, 0, 2.55,
- - 0.728, - 1.3, 2.4,
- - 1.3, - 0.728, 2.4,
- - 1.3, 0, 2.4,
- - 0.4, 0.224, 2.55,
- - 0.224, 0.4, 2.55,
- 0, 0.4, 2.55,
- - 1.3, 0.728, 2.55,
- - 0.728, 1.3, 2.55,
- 0, 1.3, 2.55,
- - 1.3, 0.728, 2.4,
- - 0.728, 1.3, 2.4,
- 0, 1.3, 2.4,
- 0.224, 0.4, 2.55,
- 0.4, 0.224, 2.55,
- 0.728, 1.3, 2.55,
- 1.3, 0.728, 2.55,
- 0.728, 1.3, 2.4,
- 1.3, 0.728, 2.4,
- 0, 0, 0,
- 1.425, 0, 0,
- 1.425, 0.798, 0,
- 0.798, 1.425, 0,
- 0, 1.425, 0,
- 1.5, 0, 0.075,
- 1.5, 0.84, 0.075,
- 0.84, 1.5, 0.075,
- 0, 1.5, 0.075,
- - 0.798, 1.425, 0,
- - 1.425, 0.798, 0,
- - 1.425, 0, 0,
- - 0.84, 1.5, 0.075,
- - 1.5, 0.84, 0.075,
- - 1.5, 0, 0.075,
- - 1.425, - 0.798, 0,
- - 0.798, - 1.425, 0,
- 0, - 1.425, 0,
- - 1.5, - 0.84, 0.075,
- - 0.84, - 1.5, 0.075,
- 0, - 1.5, 0.075,
- 0.798, - 1.425, 0,
- 1.425, - 0.798, 0,
- 0.84, - 1.5, 0.075,
- 1.5, - 0.84, 0.075
- ];
-
- THREE.BufferGeometry.call( this );
-
- size = size || 50;
-
- // number of segments per patch
- segments = segments !== undefined ? Math.max( 2, Math.floor( segments ) || 10 ) : 10;
-
- // which parts should be visible
- bottom = bottom === undefined ? true : bottom;
- lid = lid === undefined ? true : lid;
- body = body === undefined ? true : body;
-
- // Should the lid be snug? It's not traditional, but we make it snug by default
- fitLid = fitLid === undefined ? true : fitLid;
-
- // Jim Blinn scaled the teapot down in size by about 1.3 for
- // some rendering tests. He liked the new proportions that he kept
- // the data in this form. The model was distributed with these new
- // proportions and became the norm. Trivia: comparing images of the
- // real teapot and the computer model, the ratio for the bowl of the
- // real teapot is more like 1.25, but since 1.3 is the traditional
- // value given, we use it here.
- var blinnScale = 1.3;
- blinn = blinn === undefined ? true : blinn;
-
- // scale the size to be the real scaling factor
- var maxHeight = 3.15 * ( blinn ? 1 : blinnScale );
-
- var maxHeight2 = maxHeight / 2;
- var trueSize = size / maxHeight2;
-
- // Number of elements depends on what is needed. Subtract degenerate
- // triangles at tip of bottom and lid out in advance.
- var numTriangles = bottom ? ( 8 * segments - 4 ) * segments : 0;
- numTriangles += lid ? ( 16 * segments - 4 ) * segments : 0;
- numTriangles += body ? 40 * segments * segments : 0;
-
- var indices = new Uint32Array( numTriangles * 3 );
-
- var numVertices = bottom ? 4 : 0;
- numVertices += lid ? 8 : 0;
- numVertices += body ? 20 : 0;
- numVertices *= ( segments + 1 ) * ( segments + 1 );
-
- var vertices = new Float32Array( numVertices * 3 );
- var normals = new Float32Array( numVertices * 3 );
- var uvs = new Float32Array( numVertices * 2 );
-
- // Bezier form
- var ms = new THREE.Matrix4();
- ms.set(
- - 1.0, 3.0, - 3.0, 1.0,
- 3.0, - 6.0, 3.0, 0.0,
- - 3.0, 3.0, 0.0, 0.0,
- 1.0, 0.0, 0.0, 0.0 );
-
- var g = [];
- var i, r, c;
-
- var sp = [];
- var tp = [];
- var dsp = [];
- var dtp = [];
-
- // M * G * M matrix, sort of see
- // http://www.cs.helsinki.fi/group/goa/mallinnus/curves/surfaces.html
- var mgm = [];
-
- var vert = [];
- var sdir = [];
- var tdir = [];
-
- var norm = new THREE.Vector3();
-
- var tcoord;
-
- var sstep, tstep;
- var vertPerRow;
-
- var s, t, sval, tval, p;
- var dsval = 0;
- var dtval = 0;
-
- var normOut = new THREE.Vector3();
- var v1, v2, v3, v4;
-
- var gmx = new THREE.Matrix4();
- var tmtx = new THREE.Matrix4();
-
- var vsp = new THREE.Vector4();
- var vtp = new THREE.Vector4();
- var vdsp = new THREE.Vector4();
- var vdtp = new THREE.Vector4();
-
- var vsdir = new THREE.Vector3();
- var vtdir = new THREE.Vector3();
-
- var mst = ms.clone();
- mst.transpose();
-
- // internal function: test if triangle has any matching vertices;
- // if so, don't save triangle, since it won't display anything.
- var notDegenerate = function ( vtx1, vtx2, vtx3 ) {
-
- // if any vertex matches, return false
- return ! ( ( ( vertices[ vtx1 * 3 ] === vertices[ vtx2 * 3 ] ) &&
- ( vertices[ vtx1 * 3 + 1 ] === vertices[ vtx2 * 3 + 1 ] ) &&
- ( vertices[ vtx1 * 3 + 2 ] === vertices[ vtx2 * 3 + 2 ] ) ) ||
- ( ( vertices[ vtx1 * 3 ] === vertices[ vtx3 * 3 ] ) &&
- ( vertices[ vtx1 * 3 + 1 ] === vertices[ vtx3 * 3 + 1 ] ) &&
- ( vertices[ vtx1 * 3 + 2 ] === vertices[ vtx3 * 3 + 2 ] ) ) ||
- ( ( vertices[ vtx2 * 3 ] === vertices[ vtx3 * 3 ] ) &&
- ( vertices[ vtx2 * 3 + 1 ] === vertices[ vtx3 * 3 + 1 ] ) &&
- ( vertices[ vtx2 * 3 + 2 ] === vertices[ vtx3 * 3 + 2 ] ) ) );
-
- };
-
-
- for ( i = 0; i < 3; i ++ ) {
-
- mgm[ i ] = new THREE.Matrix4();
-
- }
-
- var minPatches = body ? 0 : 20;
- var maxPatches = bottom ? 32 : 28;
-
- vertPerRow = segments + 1;
-
- var surfCount = 0;
-
- var vertCount = 0;
- var normCount = 0;
- var uvCount = 0;
-
- var indexCount = 0;
-
- for ( var surf = minPatches; surf < maxPatches; surf ++ ) {
-
- // lid is in the middle of the data, patches 20-27,
- // so ignore it for this part of the loop if the lid is not desired
- if ( lid || ( surf < 20 || surf >= 28 ) ) {
-
- // get M * G * M matrix for x,y,z
- for ( i = 0; i < 3; i ++ ) {
-
- // get control patches
- for ( r = 0; r < 4; r ++ ) {
-
- for ( c = 0; c < 4; c ++ ) {
-
- // transposed
- g[ c * 4 + r ] = teapotVertices[ teapotPatches[ surf * 16 + r * 4 + c ] * 3 + i ];
-
- // is the lid to be made larger, and is this a point on the lid
- // that is X or Y?
- if ( fitLid && ( surf >= 20 && surf < 28 ) && ( i !== 2 ) ) {
-
- // increase XY size by 7.7%, found empirically. I don't
- // increase Z so that the teapot will continue to fit in the
- // space -1 to 1 for Y (Y is up for the final model).
- g[ c * 4 + r ] *= 1.077;
-
- }
-
- // Blinn "fixed" the teapot by dividing Z by blinnScale, and that's the
- // data we now use. The original teapot is taller. Fix it:
- if ( ! blinn && ( i === 2 ) ) {
-
- g[ c * 4 + r ] *= blinnScale;
-
- }
-
- }
-
- }
-
- gmx.set( g[ 0 ], g[ 1 ], g[ 2 ], g[ 3 ], g[ 4 ], g[ 5 ], g[ 6 ], g[ 7 ], g[ 8 ], g[ 9 ], g[ 10 ], g[ 11 ], g[ 12 ], g[ 13 ], g[ 14 ], g[ 15 ] );
-
- tmtx.multiplyMatrices( gmx, ms );
- mgm[ i ].multiplyMatrices( mst, tmtx );
-
- }
-
- // step along, get points, and output
- for ( sstep = 0; sstep <= segments; sstep ++ ) {
-
- s = sstep / segments;
-
- for ( tstep = 0; tstep <= segments; tstep ++ ) {
-
- t = tstep / segments;
-
- // point from basis
- // get power vectors and their derivatives
- for ( p = 4, sval = tval = 1.0; p --; ) {
-
- sp[ p ] = sval;
- tp[ p ] = tval;
- sval *= s;
- tval *= t;
-
- if ( p === 3 ) {
-
- dsp[ p ] = dtp[ p ] = 0.0;
- dsval = dtval = 1.0;
-
- } else {
-
- dsp[ p ] = dsval * ( 3 - p );
- dtp[ p ] = dtval * ( 3 - p );
- dsval *= s;
- dtval *= t;
-
- }
-
- }
-
- vsp.fromArray( sp );
- vtp.fromArray( tp );
- vdsp.fromArray( dsp );
- vdtp.fromArray( dtp );
-
- // do for x,y,z
- for ( i = 0; i < 3; i ++ ) {
-
- // multiply power vectors times matrix to get value
- tcoord = vsp.clone();
- tcoord.applyMatrix4( mgm[ i ] );
- vert[ i ] = tcoord.dot( vtp );
-
- // get s and t tangent vectors
- tcoord = vdsp.clone();
- tcoord.applyMatrix4( mgm[ i ] );
- sdir[ i ] = tcoord.dot( vtp );
-
- tcoord = vsp.clone();
- tcoord.applyMatrix4( mgm[ i ] );
- tdir[ i ] = tcoord.dot( vdtp );
-
- }
-
- // find normal
- vsdir.fromArray( sdir );
- vtdir.fromArray( tdir );
- norm.crossVectors( vtdir, vsdir );
- norm.normalize();
-
- // if X and Z length is 0, at the cusp, so point the normal up or down, depending on patch number
- if ( vert[ 0 ] === 0 && vert[ 1 ] === 0 ) {
-
- // if above the middle of the teapot, normal points up, else down
- normOut.set( 0, vert[ 2 ] > maxHeight2 ? 1 : - 1, 0 );
-
- } else {
-
- // standard output: rotate on X axis
- normOut.set( norm.x, norm.z, - norm.y );
-
- }
-
- // store it all
- vertices[ vertCount ++ ] = trueSize * vert[ 0 ];
- vertices[ vertCount ++ ] = trueSize * ( vert[ 2 ] - maxHeight2 );
- vertices[ vertCount ++ ] = - trueSize * vert[ 1 ];
-
- normals[ normCount ++ ] = normOut.x;
- normals[ normCount ++ ] = normOut.y;
- normals[ normCount ++ ] = normOut.z;
-
- uvs[ uvCount ++ ] = 1 - t;
- uvs[ uvCount ++ ] = 1 - s;
-
- }
-
- }
-
- // save the faces
- for ( sstep = 0; sstep < segments; sstep ++ ) {
-
- for ( tstep = 0; tstep < segments; tstep ++ ) {
-
- v1 = surfCount * vertPerRow * vertPerRow + sstep * vertPerRow + tstep;
- v2 = v1 + 1;
- v3 = v2 + vertPerRow;
- v4 = v1 + vertPerRow;
-
- // Normals and UVs cannot be shared. Without clone(), you can see the consequences
- // of sharing if you call geometry.applyMatrix( matrix ).
- if ( notDegenerate( v1, v2, v3 ) ) {
-
- indices[ indexCount ++ ] = v1;
- indices[ indexCount ++ ] = v2;
- indices[ indexCount ++ ] = v3;
-
- }
- if ( notDegenerate( v1, v3, v4 ) ) {
-
- indices[ indexCount ++ ] = v1;
- indices[ indexCount ++ ] = v3;
- indices[ indexCount ++ ] = v4;
-
- }
-
- }
-
- }
-
- // increment only if a surface was used
- surfCount ++;
-
- }
-
- }
-
- this.setIndex( new THREE.BufferAttribute( indices, 1 ) );
- this.addAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
- this.addAttribute( 'normal', new THREE.BufferAttribute( normals, 3 ) );
- this.addAttribute( 'uv', new THREE.BufferAttribute( uvs, 2 ) );
-
- this.computeBoundingSphere();
-
-};
-
-
-THREE.TeapotBufferGeometry.prototype = Object.create( THREE.BufferGeometry.prototype );
-THREE.TeapotBufferGeometry.prototype.constructor = THREE.TeapotBufferGeometry;
diff --git a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/ShapePath.js b/spaces/banana-projects/web3d/node_modules/three/src/extras/core/ShapePath.js
deleted file mode 100644
index a5f734497a9686f334f7c98742b0a19206c68878..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/node_modules/three/src/extras/core/ShapePath.js
+++ /dev/null
@@ -1,286 +0,0 @@
-/**
- * @author zz85 / http://www.lab4games.net/zz85/blog
- * minimal class for proxing functions to Path. Replaces old "extractSubpaths()"
- **/
-
-import { Color } from '../../math/Color.js';
-import { Path } from './Path.js';
-import { Shape } from './Shape.js';
-import { ShapeUtils } from '../ShapeUtils.js';
-
-function ShapePath() {
-
- this.type = 'ShapePath';
-
- this.color = new Color();
-
- this.subPaths = [];
- this.currentPath = null;
-
-}
-
-Object.assign( ShapePath.prototype, {
-
- moveTo: function ( x, y ) {
-
- this.currentPath = new Path();
- this.subPaths.push( this.currentPath );
- this.currentPath.moveTo( x, y );
-
- },
-
- lineTo: function ( x, y ) {
-
- this.currentPath.lineTo( x, y );
-
- },
-
- quadraticCurveTo: function ( aCPx, aCPy, aX, aY ) {
-
- this.currentPath.quadraticCurveTo( aCPx, aCPy, aX, aY );
-
- },
-
- bezierCurveTo: function ( aCP1x, aCP1y, aCP2x, aCP2y, aX, aY ) {
-
- this.currentPath.bezierCurveTo( aCP1x, aCP1y, aCP2x, aCP2y, aX, aY );
-
- },
-
- splineThru: function ( pts ) {
-
- this.currentPath.splineThru( pts );
-
- },
-
- toShapes: function ( isCCW, noHoles ) {
-
- function toShapesNoHoles( inSubpaths ) {
-
- var shapes = [];
-
- for ( var i = 0, l = inSubpaths.length; i < l; i ++ ) {
-
- var tmpPath = inSubpaths[ i ];
-
- var tmpShape = new Shape();
- tmpShape.curves = tmpPath.curves;
-
- shapes.push( tmpShape );
-
- }
-
- return shapes;
-
- }
-
- function isPointInsidePolygon( inPt, inPolygon ) {
-
- var polyLen = inPolygon.length;
-
- // inPt on polygon contour => immediate success or
- // toggling of inside/outside at every single! intersection point of an edge
- // with the horizontal line through inPt, left of inPt
- // not counting lowerY endpoints of edges and whole edges on that line
- var inside = false;
- for ( var p = polyLen - 1, q = 0; q < polyLen; p = q ++ ) {
-
- var edgeLowPt = inPolygon[ p ];
- var edgeHighPt = inPolygon[ q ];
-
- var edgeDx = edgeHighPt.x - edgeLowPt.x;
- var edgeDy = edgeHighPt.y - edgeLowPt.y;
-
- if ( Math.abs( edgeDy ) > Number.EPSILON ) {
-
- // not parallel
- if ( edgeDy < 0 ) {
-
- edgeLowPt = inPolygon[ q ]; edgeDx = - edgeDx;
- edgeHighPt = inPolygon[ p ]; edgeDy = - edgeDy;
-
- }
- if ( ( inPt.y < edgeLowPt.y ) || ( inPt.y > edgeHighPt.y ) ) continue;
-
- if ( inPt.y === edgeLowPt.y ) {
-
- if ( inPt.x === edgeLowPt.x ) return true; // inPt is on contour ?
- // continue; // no intersection or edgeLowPt => doesn't count !!!
-
- } else {
-
- var perpEdge = edgeDy * ( inPt.x - edgeLowPt.x ) - edgeDx * ( inPt.y - edgeLowPt.y );
- if ( perpEdge === 0 ) return true; // inPt is on contour ?
- if ( perpEdge < 0 ) continue;
- inside = ! inside; // true intersection left of inPt
-
- }
-
- } else {
-
- // parallel or collinear
- if ( inPt.y !== edgeLowPt.y ) continue; // parallel
- // edge lies on the same horizontal line as inPt
- if ( ( ( edgeHighPt.x <= inPt.x ) && ( inPt.x <= edgeLowPt.x ) ) ||
- ( ( edgeLowPt.x <= inPt.x ) && ( inPt.x <= edgeHighPt.x ) ) ) return true; // inPt: Point on contour !
- // continue;
-
- }
-
- }
-
- return inside;
-
- }
-
- var isClockWise = ShapeUtils.isClockWise;
-
- var subPaths = this.subPaths;
- if ( subPaths.length === 0 ) return [];
-
- if ( noHoles === true ) return toShapesNoHoles( subPaths );
-
-
- var solid, tmpPath, tmpShape, shapes = [];
-
- if ( subPaths.length === 1 ) {
-
- tmpPath = subPaths[ 0 ];
- tmpShape = new Shape();
- tmpShape.curves = tmpPath.curves;
- shapes.push( tmpShape );
- return shapes;
-
- }
-
- var holesFirst = ! isClockWise( subPaths[ 0 ].getPoints() );
- holesFirst = isCCW ? ! holesFirst : holesFirst;
-
- // console.log("Holes first", holesFirst);
-
- var betterShapeHoles = [];
- var newShapes = [];
- var newShapeHoles = [];
- var mainIdx = 0;
- var tmpPoints;
-
- newShapes[ mainIdx ] = undefined;
- newShapeHoles[ mainIdx ] = [];
-
- for ( var i = 0, l = subPaths.length; i < l; i ++ ) {
-
- tmpPath = subPaths[ i ];
- tmpPoints = tmpPath.getPoints();
- solid = isClockWise( tmpPoints );
- solid = isCCW ? ! solid : solid;
-
- if ( solid ) {
-
- if ( ( ! holesFirst ) && ( newShapes[ mainIdx ] ) ) mainIdx ++;
-
- newShapes[ mainIdx ] = { s: new Shape(), p: tmpPoints };
- newShapes[ mainIdx ].s.curves = tmpPath.curves;
-
- if ( holesFirst ) mainIdx ++;
- newShapeHoles[ mainIdx ] = [];
-
- //console.log('cw', i);
-
- } else {
-
- newShapeHoles[ mainIdx ].push( { h: tmpPath, p: tmpPoints[ 0 ] } );
-
- //console.log('ccw', i);
-
- }
-
- }
-
- // only Holes? -> probably all Shapes with wrong orientation
- if ( ! newShapes[ 0 ] ) return toShapesNoHoles( subPaths );
-
-
- if ( newShapes.length > 1 ) {
-
- var ambiguous = false;
- var toChange = [];
-
- for ( var sIdx = 0, sLen = newShapes.length; sIdx < sLen; sIdx ++ ) {
-
- betterShapeHoles[ sIdx ] = [];
-
- }
-
- for ( var sIdx = 0, sLen = newShapes.length; sIdx < sLen; sIdx ++ ) {
-
- var sho = newShapeHoles[ sIdx ];
-
- for ( var hIdx = 0; hIdx < sho.length; hIdx ++ ) {
-
- var ho = sho[ hIdx ];
- var hole_unassigned = true;
-
- for ( var s2Idx = 0; s2Idx < newShapes.length; s2Idx ++ ) {
-
- if ( isPointInsidePolygon( ho.p, newShapes[ s2Idx ].p ) ) {
-
- if ( sIdx !== s2Idx ) toChange.push( { froms: sIdx, tos: s2Idx, hole: hIdx } );
- if ( hole_unassigned ) {
-
- hole_unassigned = false;
- betterShapeHoles[ s2Idx ].push( ho );
-
- } else {
-
- ambiguous = true;
-
- }
-
- }
-
- }
- if ( hole_unassigned ) {
-
- betterShapeHoles[ sIdx ].push( ho );
-
- }
-
- }
-
- }
- // console.log("ambiguous: ", ambiguous);
- if ( toChange.length > 0 ) {
-
- // console.log("to change: ", toChange);
- if ( ! ambiguous ) newShapeHoles = betterShapeHoles;
-
- }
-
- }
-
- var tmpHoles;
-
- for ( var i = 0, il = newShapes.length; i < il; i ++ ) {
-
- tmpShape = newShapes[ i ].s;
- shapes.push( tmpShape );
- tmpHoles = newShapeHoles[ i ];
-
- for ( var j = 0, jl = tmpHoles.length; j < jl; j ++ ) {
-
- tmpShape.holes.push( tmpHoles[ j ].h );
-
- }
-
- }
-
- //console.log("shape", shapes);
-
- return shapes;
-
- }
-
-} );
-
-
-export { ShapePath };
diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/abstract_embedder.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/abstract_embedder.py
deleted file mode 100644
index e075364aa904e17e946112a7240bccaa7e400077..0000000000000000000000000000000000000000
--- a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/retriever/abstract_embedder.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import os
-from abc import abstractmethod
-
-from PIL import Image
-import numpy as np
-from tqdm import tqdm
-
-
-class AbstractImageEmbedder:
- def __init__(self, device: str = "cpu"):
- self.device = device
-
- @abstractmethod
- def embed(self, image: Image) -> np.ndarray:
- """Embed an image
- """
- raise NotImplementedError
-
- def embed_folder(self, folder_path: str, output_path: str) -> None:
- """Embed all images in a folder and save them in a .npy file
- """
- assert output_path.endswith(".npy"), "`output_path` must end with .npy"
- embeddings = {}
- for name in tqdm(os.listdir(folder_path)):
- image_path = os.path.join(folder_path, name)
- image = Image.open(image_path)
- embedding = self.embed(image)
- embeddings[name] = embedding
- np.save(output_path, embeddings)
diff --git a/spaces/bigjoker/stable-diffusion-webui/webui.bat b/spaces/bigjoker/stable-diffusion-webui/webui.bat
deleted file mode 100644
index 5139b7eb020139c65fa6390a7078c761301229b0..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/webui.bat
+++ /dev/null
@@ -1,85 +0,0 @@
-@echo off
-
-if not defined PYTHON (set PYTHON=python)
-if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv")
-
-
-set ERROR_REPORTING=FALSE
-
-mkdir tmp 2>NUL
-
-%PYTHON% -c "" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :check_pip
-echo Couldn't launch python
-goto :show_stdout_stderr
-
-:check_pip
-%PYTHON% -mpip --help >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :start_venv
-if "%PIP_INSTALLER_LOCATION%" == "" goto :show_stdout_stderr
-%PYTHON% "%PIP_INSTALLER_LOCATION%" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :start_venv
-echo Couldn't install pip
-goto :show_stdout_stderr
-
-:start_venv
-if ["%VENV_DIR%"] == ["-"] goto :skip_venv
-if ["%SKIP_VENV%"] == ["1"] goto :skip_venv
-
-dir "%VENV_DIR%\Scripts\Python.exe" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :activate_venv
-
-for /f "delims=" %%i in ('CALL %PYTHON% -c "import sys; print(sys.executable)"') do set PYTHON_FULLNAME="%%i"
-echo Creating venv in directory %VENV_DIR% using python %PYTHON_FULLNAME%
-%PYTHON_FULLNAME% -m venv "%VENV_DIR%" >tmp/stdout.txt 2>tmp/stderr.txt
-if %ERRORLEVEL% == 0 goto :activate_venv
-echo Unable to create venv in directory "%VENV_DIR%"
-goto :show_stdout_stderr
-
-:activate_venv
-set PYTHON="%VENV_DIR%\Scripts\Python.exe"
-echo venv %PYTHON%
-
-:skip_venv
-if [%ACCELERATE%] == ["True"] goto :accelerate
-goto :launch
-
-:accelerate
-echo Checking for accelerate
-set ACCELERATE="%VENV_DIR%\Scripts\accelerate.exe"
-if EXIST %ACCELERATE% goto :accelerate_launch
-
-:launch
-%PYTHON% launch.py %*
-pause
-exit /b
-
-:accelerate_launch
-echo Accelerating
-%ACCELERATE% launch --num_cpu_threads_per_process=6 launch.py
-pause
-exit /b
-
-:show_stdout_stderr
-
-echo.
-echo exit code: %errorlevel%
-
-for /f %%i in ("tmp\stdout.txt") do set size=%%~zi
-if %size% equ 0 goto :show_stderr
-echo.
-echo stdout:
-type tmp\stdout.txt
-
-:show_stderr
-for /f %%i in ("tmp\stderr.txt") do set size=%%~zi
-if %size% equ 0 goto :show_stderr
-echo.
-echo stderr:
-type tmp\stderr.txt
-
-:endofscript
-
-echo.
-echo Launch unsuccessful. Exiting.
-pause
diff --git a/spaces/bioriAsaeru/text-to-voice/CCleaner Pro 5.63 [2021] Crack Plus Serial Key Free Download 2019.md b/spaces/bioriAsaeru/text-to-voice/CCleaner Pro 5.63 [2021] Crack Plus Serial Key Free Download 2019.md
deleted file mode 100644
index 2b44b8e09b90979b542cea358ddfe5ecba8a281d..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/CCleaner Pro 5.63 [2021] Crack Plus Serial Key Free Download 2019.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
CCleaner Pro 5.63 Crack Plus Serial Key Free Download 2019
-
CCleaner Pro 5.63 Crack is a powerful and easy-to-use tool that cleans and optimizes your PC to ensure best performance and security. CCleaner Pro 5.63 Crack can remove unused and temporary files, cache and cookies, browsing history, and other junk that clogs up your operating system and slows down your computer. CCleaner Pro 5.63 Crack can also fix registry errors, uninstall unwanted programs, manage startup items, and wipe free disk space to erase traces of deleted files.
-
CCleaner Pro 5.63 Crack Plus Serial Key Free Download 2019
CCleaner Pro 5.63 Crack is the latest version of the popular CCleaner software, which has been updated with new features and improvements. CCleaner Pro 5.63 Crack offers a professional version of the software, which includes additional benefits such as real-time monitoring, automatic updates, premium support, and more. CCleaner Pro 5.63 Crack can help you boost your PC speed, protect your privacy, and recover disk space.
-
To activate CCleaner Pro 5.63 Crack, you need a valid serial key that can unlock all the premium features of the software. You can find many free CCleaner Pro keys online, but some of them may not work or may be expired. Here are some of the working CCleaner Pro keys that you can try:
-
-
Name: Pro Tech License: C2YW-GCVX-C7FB-5GMY-IZPC
-
Name: R J van der Linden License: CN9X-US28-F6RR-EY9M-YTQC
-
Name: Piriform Team License: C2AA-EGSZ-N7IU-R26I-YTGC
To use these keys, you need to download CCleaner Pro 5.63 Crack from a reliable source[^1^] [^2^] [^3^], install it on your PC, and enter one of the keys when prompted. You can also check for more keys online[^4^] [^5^] [^6^], but make sure they are valid and safe before using them.
-
CCleaner Pro 5.63 Crack is a great tool that can help you keep your PC clean and fast. However, you should always use it with caution and backup your important data before making any changes to your system. You should also avoid downloading cracked versions of software from unknown sources, as they may contain malware or viruses that can harm your PC.
If you want to learn more about CCleaner Pro 5.63 Crack and how it works, you can visit the official website of the software, where you can find detailed information, tutorials, FAQs, and support. You can also download the free version of CCleaner from the website, which offers basic cleaning and optimization features. However, if you want to enjoy the full benefits of CCleaner Pro 5.63 Crack, you need to purchase a license key from the website or use one of the free keys provided above.
-
-
CCleaner Pro 5.63 Crack is a useful and versatile tool that can help you improve your PC performance and security. By using CCleaner Pro 5.63 Crack regularly, you can keep your PC free of junk, errors, and threats, and make it run faster and smoother. CCleaner Pro 5.63 Crack is easy to use and compatible with Windows XP, Vista, 7, 8, 8.1, and 10. You can download CCleaner Pro 5.63 Crack today and give your PC a new life.
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/bioriAsaeru/text-to-voice/Deadhunt English Patch.md b/spaces/bioriAsaeru/text-to-voice/Deadhunt English Patch.md
deleted file mode 100644
index 632a4a74804cf10a8ac088e8eb44ffec90887de1..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Deadhunt English Patch.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-Deadhunt is an arcade first-person shooter (FPS), in which the best features of arcade and first-person shooters are combined with fresh ideas and new twists. The game is a mixture of first-person shooter, racing and multiplayer game genres.
-The protagonist of the game is a hunter whose goal is to destroy all the monsters that are hiding in the forests, swamps, abandoned buildings and other places.
-Deadhunt uses a cover system that allows you to quickly move from attack to cover and back again.
-The weapons in the game have a lot of firepower (deals a lot of damage) and can be upgraded depending on the type (for example, the type of ammunition). 8a78ff9644
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/History Of Subcontinent From 712 To 1947 In Urdu Pdf The Cultural and Religious Diversity of the Region.md b/spaces/bioriAsaeru/text-to-voice/History Of Subcontinent From 712 To 1947 In Urdu Pdf The Cultural and Religious Diversity of the Region.md
deleted file mode 100644
index 97f82c98dbce60d5f00dba88444d9b89e4e3cd4e..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/History Of Subcontinent From 712 To 1947 In Urdu Pdf The Cultural and Religious Diversity of the Region.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
The history of preceding the country's independence in 1947[1] is shared with that of Afghanistan, India, and Iran. Spanning the western expanse of the Indian subcontinent and the eastern borderlands of the Iranian plateau, the region of present-day Pakistan served both as the fertile ground of a major civilization and as the gateway of South Asia to Central Asia and the Near East.[2][3]
-
The Kushan Empire expanded out of what is now Afghanistan into the northwest of the subcontinent under the leadership of their first emperor, Kujula Kadphises, about the middle of the 1st century CE. They were descended from an Indo-European, Central Asian people called the Yuezhi,[54][55] a branch of which was known as the Kushans. By the time of his grandson, Kanishka the Great, the empire spread to encompass much of Afghanistan[56] and the northern parts of the Indian subcontinent at least as far as Saketa and Sarnath near Varanasi (Benares).[57]
-
History Of Subcontinent From 712 To 1947 In Urdu Pdf
Download CSS Book Hindu Muslim Confrontation 712 to 1947 By Dr Sarfraz Ahmed Mirza for Css compulsory Subject Pakistan Affairs. This booklet covers the period from 712 to the creation of Pakistan 1947. Download this booklet free from The CSS Point.
-
From the late 12th century onwards, Muslim empires dominated the subcontinent, most notably the Delhi sultanate and Mughal empire.[2] Various other Muslim kingdoms ruled most of South Asia from the mid-14th to late 18th centuries, including the Bahmani, Bengal, Gujarat, Malwa, Mysore, Carnatic and Deccan Sultanates.[3][4] Though the Muslim dynasties in India were diversed in origin, they were linked together by the Persianate culture and Islam.
-
The Mughal empire was the second major Islamic empire to assert dominance over most of the Indian subcontinent between 1526 and 1857. The empire was founded by the Turco-Mongol leader Babur in 1526, when he defeated Ibrahim Lodi, the last ruler of the Delhi Sultanate at the First Battle of Panipat. Babur, Humayun, Akbar, Jahangir, Shah Jahan, and Aurangzeb are known as the six great Mughal Emperors. Apart from the brief interruption by the Afghan Sur dynasty between 1540 and 1556, the Mughals continued to rule in one form or other till 1857.
-
When Pakistan became a country on August 14th, 1947, to form the largest Muslim state in the world at that time. The creation of Pakistan was catalyst to the largest demographic movement in recorded history. Nearly seventeen million people-Hindus, Muslims, and Sikhs-are reported to have moved in both directions between India and the two wings of Pakistan (the eastern wing is now Bangladesh). Sixty million of the ninety-five million Muslims on the Indian subcontinent became citizens of Pakistan at the time of its creation. Subsequently, thirty-five million Muslims remained inside India making it the largest Muslim minority in a non-Muslim state.
-
After Ayub Khan, General Agha Muhammad Yahya Khan headed the second military regime from 1969-1971. By that time the country had been under military rule for thirteen of its twenty-five years of existence. This second military regime emphasized the extent to which the process of centralization under bureaucratic and military tutelage had fragmented Pakistani society and politics. The general elections of 1970 on the basis of adult franchise revealed for the first time ever in Pakistan's history how regionalism and social conflict had come to dominate politics despite the efforts at controlled development. The Awami League, led by Mujibur Rahman, campaigned on a six-point program of provincial autonomy, capturing all but one seat in East Pakistan and securing an absolute majority in the national assembly. In West Pakistan the Pakistan People's Party, led by Zulfiqar Ali Bhutto, had a populist platform that stole the thunder from the Islamic parties (the Muslim League, the oldest political party captured no more than a few seats) and emerged as the largest single bloc. The prospect of an Awami Leagues government was a threat to politicians in West Pakistan who in conspiracy with the military leadership prevented Mujibur from taking the reins of power. This was the final straw for the east wing who was already fed up with the their under-representation in all sectors of the government, economic deprivation and then the suppression of the democratic process. An armed rebellion in East Pakistan engendered all of these frustrations, which caused Indian military intervention to crush it. Pakistan was now involved in its third war with India, thus clearing the way for the establishment of Bangladesh in 1971.
-
-
History is one of the most interesting disciplines. We learn interesting facts from history. History tells about the origin. Urdu Point has many history books. History books help to understand the history and analyze the present. History books provide history definition as well. Instead of going for history Google, you can get history books at Urdu Point. Urdu Point books section has a specified section for history books. There are many history channels and history TV shows also. There is a list of history books. The list of history books contains many history books. Best history books about history of India, history of Pakistan, and history of sub-continent are also available. You can get the best history books of all time. You can also get the best world history books. If you are looking for best history books to read then go for Urdu Point. History books online, history books examples, and history books to read are available here. Search results about best history books, best ancient history books and Islamic history books are available. You can get the Islamic history books, Islamic history books in Urdu pdf free download, and history books in Urdu. If you are searching for the world history books, history books in Urdu and free pdf books are available. History books have many categories which include Islamic history books, Indian history books and Pakistan history books. People who are fond of reading books can read online books. Search results about online library books, online Urdu books, and online books are found. You can easily read online book here. People want to access digital library to read online books. History books in Urdu can also be searched. You can find the history books on Urdu Point. History books about Indian history timeline, briefhistory of India and free history books found here. If you want free online book download, free online novels, free books online pdf and free online books for kids then visit us. We provide you access to the free online novels, freebooks online pdf and free online history books. You can read full length online books here. Some people also search for the free online romance books and read entire books free. If you want to know who the first king of India was, how old India is, medieval Indian history and history of India pdf then read Indian history books. Searches about Pakistan history, Indian history, Islamic history, Indian history online and Indian history pdf also found. To get the history books visit Urdu Point. Tareekhi kitabain are available here. We provide you access to the Tareekhi kitabain. Get the history books, famous historybooks, Indian history books and online history books at Urdu Point. Come at Urdu Point and get an easy access to history books, history books in Urdu and Pakistan history books.
-
Muslim Rule in India 712-1857 - Free download as Powerpoint Presentation (.ppt), PDF File (.pdf), Text File (.txt) .. American Pie Tamil Dubbed Downlo
history of subcontinent from 712 to 1947 wikipedia
Umayyad General Iraq Governor, Hijaj bin Yousaf Married his Daughter Zubaida Foundation of Islamic Rule in Subcontinent ... CSS Indo-Pak History Solved MCQs of Paper-II (1985 till Now).
-
In 1947, after 200 years of control, the British finally quit the Indian subcontinent. Before leaving, the colonizers drew a line in the sand that formed two new dominions: Muslim-majority Pakistan and Hindu-majority India. Some 15 million people migrated (the largest human migration in history) and one to two million perished in the communal violence that followed.
-
Secularism, as conceived in our subcontinent, is a matter of having different religious communities living together in tranquillity and harmony, whereas in Pakistan, especially west Pakistan, from where many minorities choose to move out to India, secularism takes on a different role of being a matter of tranquillity and harmony between different sects of Islam. And yet getting to that point is very hard when the sects are defined in different theological terms and each theology feels that its word is the true interpretation of the word of God.
-
From 1947 onwards, when migrants from India, known as the Mohājirs, came to Sindh, the repertoire that dominated the local religiosity was that of a vernacular Sufism, to which both Sindhi Muslims and Hindus of all faiths subscribed. Consequently, in the competition between Sindhis and Mohājirs for the domination of the city of Hyderabad, negotiations between different religious repertoires played a prominent role. My hypothesis is that the Mawlā jā Qadam represented a crucial stake in the showdown between the Sindhis and the Mohājirs, more than a vector for the integration of the Mohājirs in the urban landscape. In what follows, I will demonstrate that, although the onomastic change of the site unambiguously indicates its coming under the control of the Mohājirs, as explained below, the ritual practices show a resilience of the vernacular Sufi substratum of Sindh.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/blaziant/ysda_nlp_ops/Dockerfile b/spaces/blaziant/ysda_nlp_ops/Dockerfile
deleted file mode 100644
index 587c772a5722b45d5a3cada3294f1a8de98774b7..0000000000000000000000000000000000000000
--- a/spaces/blaziant/ysda_nlp_ops/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM python:3.9
-
-WORKDIR /backend
-
-COPY ./requirements.txt /backend/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /backend/requirements.txt
-
-COPY ./app /backend/app
-COPY ./templates /backend/templates
-
-RUN useradd -m -u 1000 user
-USER user
-ENV HOME=/home/user \
- PATH=/home/user/.local/bin:$PATH
-
-WORKDIR $HOME/app
-
-COPY --chown=user . $HOME/app
-
-CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
diff --git a/spaces/breadlicker45/Text-to-music-longer/utils.py b/spaces/breadlicker45/Text-to-music-longer/utils.py
deleted file mode 100644
index d302528fd6fc9be8d782f78b6c44f4d894147d07..0000000000000000000000000000000000000000
--- a/spaces/breadlicker45/Text-to-music-longer/utils.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import json
-import numpy as np
-import httpx
-
-from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN
-
-
-def get_mubert_tags_embeddings(w2v_model):
- return w2v_model.encode(MUBERT_TAGS)
-
-
-def get_pat(email: str):
- r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess',
- json={
- "method": "GetServiceAccess",
- "params": {
- "email": email,
- "license": MUBERT_LICENSE,
- "token": MUBERT_TOKEN,
- "mode": MUBERT_MODE,
- }
- })
-
- rdata = json.loads(r.text)
- assert rdata['status'] == 1, "probably incorrect e-mail"
- pat = rdata['data']['pat']
- return pat
-
-
-def find_similar(em, embeddings, method='cosine'):
- scores = []
- for ref in embeddings:
- if method == 'cosine':
- scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em)))
- if method == 'norm':
- scores.append(np.linalg.norm(ref - em))
- return np.array(scores), np.argsort(scores)
-
-
-def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False):
- prompts_embeddings = w2v_model.encode(prompts)
- ret = []
- for i, pe in enumerate(prompts_embeddings):
- scores, idxs = find_similar(pe, mubert_tags_embeddings)
- top_tags = MUBERT_TAGS[idxs[:top_n]]
- top_prob = 1 - scores[idxs[:top_n]]
- if debug:
- print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n")
- ret.append((prompts[i], list(top_tags)))
- return ret
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/lazyconfigs.md b/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/lazyconfigs.md
deleted file mode 100644
index a01101ae40ec12d25d5a3d96892b60ef32dca21e..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/docs/tutorials/lazyconfigs.md
+++ /dev/null
@@ -1,170 +0,0 @@
-# Lazy Configs
-
-The traditional yacs-based config system provides basic, standard functionalities.
-However, it does not offer enough flexibility for many new projects.
-We develop an alternative, non-intrusive config system that can be used with
-detectron2 or potentially any other complex projects.
-
-## Python Syntax
-
-Our config objects are still dictionaries. Instead of using Yaml to define dictionaries,
-we create dictionaries in Python directly. This gives users the following power that
-doesn't exist in Yaml:
-
-* Easily manipulate the dictionary (addition & deletion) using Python.
-* Write simple arithmetics or call simple functions.
-* Use more data types / objects.
-* Import / compose other config files, using the familiar Python import syntax.
-
-A Python config file can be loaded like this:
-```python
-# config.py:
-a = dict(x=1, y=2, z=dict(xx=1))
-b = dict(x=3, y=4)
-
-# my_code.py:
-from detectron2.config import LazyConfig
-cfg = LazyConfig.load("path/to/config.py") # an omegaconf dictionary
-assert cfg.a.z.xx == 1
-```
-
-After [LazyConfig.load](../modules/config.html#detectron2.config.LazyConfig.load), `cfg` will be a dictionary that contains all dictionaries
-defined in the global scope of the config file. Note that:
-* All dictionaries are turned to an [omegaconf](https://omegaconf.readthedocs.io/)
- config object during loading. This enables access to omegaconf features,
- such as its [access syntax](https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#access-and-manipulation)
- and [interpolation](https://omegaconf.readthedocs.io/en/2.1_branch/usage.html#variable-interpolation).
-* Absolute imports in `config.py` works the same as in regular Python.
-* Relative imports can only import dictionaries from config files.
- They are simply a syntax sugar for [LazyConfig.load_rel](../modules/config.html#detectron2.config.LazyConfig.load_rel).
- They can load Python files at relative path without requiring `__init__.py`.
-
-[LazyConfig.save](../modules/config.html#detectron2.config.LazyConfig.save) can save a config object to yaml.
-Note that this is not always successful if non-serializable objects appear in the config file (e.g. lambdas).
-It is up to users whether to sacrifice the ability to save in exchange for flexibility.
-
-## Recursive Instantiation
-
-The LazyConfig system heavily uses recursive instantiation, which is a pattern that
-uses a dictionary to describe a
-call to a function/class. The dictionary consists of:
-
-1. A "\_target\_" key which contains path to the callable, such as "module.submodule.class_name".
-2. Other keys that represent arguments to pass to the callable. Arguments themselves can be defined
- using recursive instantiation.
-
-We provide a helper function [LazyCall](../modules/config.html#detectron2.config.LazyCall) that helps create such dictionaries.
-The following code using `LazyCall`
-```python
-from detectron2.config import LazyCall as L
-from my_app import Trainer, Optimizer
-cfg = L(Trainer)(
- optimizer=L(Optimizer)(
- lr=0.01,
- algo="SGD"
- )
-)
-```
-creates a dictionary like this:
-```python
-cfg = {
- "_target_": "my_app.Trainer",
- "optimizer": {
- "_target_": "my_app.Optimizer",
- "lr": 0.01, "algo": "SGD"
- }
-}
-```
-
-By representing objects using such dictionaries, a general
-[instantiate](../modules/config.html#detectron2.config.instantiate)
-function can turn them into actual objects, i.e.:
-```python
-from detectron2.config import instantiate
-trainer = instantiate(cfg)
-# equivalent to:
-# from my_app import Trainer, Optimizer
-# trainer = Trainer(optimizer=Optimizer(lr=0.01, algo="SGD"))
-```
-
-This pattern is powerful enough to describe very complex objects, e.g.:
-
-
-
-A Full Mask R-CNN described in recursive instantiation (click to expand)
-
-
-```eval_rst
-.. literalinclude:: ../../configs/common/models/mask_rcnn_fpn.py
- :language: python
- :linenos:
-```
-
-
-
-There are also objects or logic that cannot be described simply by a dictionary,
-such as reused objects or method calls. They may require some refactoring
-to work with recursive instantiation.
-
-## Using Model Zoo LazyConfigs
-
-We provide some configs in the model zoo using the LazyConfig system, for example:
-
-* [common baselines](../../configs/common/).
-* [new Mask R-CNN baselines](../../configs/new_baselines/)
-
-After installing detectron2, they can be loaded by the model zoo API
-[model_zoo.get_config](../modules/model_zoo.html#detectron2.model_zoo.get_config).
-
-Using these as references, you're free to define custom config structure / fields for your own
-project, as long as your training script can understand them.
-Despite of this, our model zoo configs still follow some simple conventions for consistency, e.g.
-`cfg.model` defines a model object, `cfg.dataloader.{train,test}` defines dataloader objects,
-and `cfg.train` contains training options in key-value form.
-In addition to `print()`, a better way to view the structure of a config is like this:
-```python
-from detectron2.model_zoo import get_config
-from detectron2.config import LazyConfig
-print(LazyConfig.to_py(get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.py")))
-```
-From the output it's easier to find relevant options to change, e.g.
-`dataloader.train.total_batch_size` for the batch size, or `optimizer.lr` for base learning rate.
-
-We provide a reference training script
-[tools/lazyconfig_train_net.py](../../tools/lazyconfig_train_net.py),
-that can train/eval our model zoo configs.
-It also shows how to support command line value overrides.
-
-To demonstrate the power and flexibility of the new system, we show that
-[a simple config file](../../configs/Misc/torchvision_imagenet_R_50.py)
-can let detectron2 train an ImageNet classification model from torchvision, even though
-detectron2 contains no features about ImageNet classification.
-This can serve as a reference for using detectron2 in other deep learning tasks.
-
-## Summary
-
-By using recursive instantiation to create objects,
-we avoid passing a giant config to many places, because `cfg` is only passed to `instantiate`.
-This has the following benefits:
-
-* It's __non-intrusive__: objects to be constructed are config-agnostic, regular Python
- functions/classes.
- They can even live in other libraries. For example,
- `{"_target_": "torch.nn.Conv2d", "in_channels": 10, "out_channels": 10, "kernel_size": 1}`
- defines a conv layer.
-* __Clarity__ of what function/classes will be called, and what arguments they use.
-* `cfg` doesn't need pre-defined keys and structures. It's valid as long as it translates to valid
- code. This gives a lot more __flexibility__.
-* You can still pass huge dictionaries as arguments, just like the old way.
-
-Recursive instantiation and Python syntax are orthogonal: you can use one without the other.
-But by putting them together, the config file looks a lot like the code that will be executed:
-
-
-
-However, the config file just defines dictionaries, which can be easily manipulated further
-by composition or overrides.
-The corresponding code will only be executed
-later when `instantiate` is called. In some way,
-in config files we're writing "editable code" that will be "lazily executed" later when needed.
-That's why we call this system "LazyConfig".
diff --git a/spaces/cbr/swp/swapper.py b/spaces/cbr/swp/swapper.py
deleted file mode 100644
index f7f359961e465004fed3311b8dee0bf51c56b649..0000000000000000000000000000000000000000
--- a/spaces/cbr/swp/swapper.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import cv2
-import numpy as np
-from insightface.utils import face_align
-from face_parsing.swap import swap_regions
-from utils import add_logo_to_image
-
-swap_options_list = [
- "All face",
- "Age less than",
- "Age greater than",
- "All Male",
- "All Female",
- "Specific Face",
-]
-
-
-def swap_face(whole_img, target_face, source_face, models):
- inswapper = models.get("swap")
- face_enhancer = models.get("enhance", None)
- face_parser = models.get("face_parser", None)
- fe_enable = models.get("enhance_sett", False)
-
- bgr_fake, M = inswapper.get(whole_img, target_face, source_face, paste_back=False)
- image_size = 128 if not fe_enable else 512
- aimg, _ = face_align.norm_crop2(whole_img, target_face.kps, image_size=image_size)
-
- if face_parser is not None:
- fp_enable, includes, smooth_mask, blur_amount = models.get("face_parser_sett")
- if fp_enable:
- bgr_fake = swap_regions(
- bgr_fake, aimg, face_parser, smooth_mask, includes=includes, blur=blur_amount
- )
-
- if fe_enable:
- _, bgr_fake, _ = face_enhancer.enhance(
- bgr_fake, paste_back=True, has_aligned=True
- )
- bgr_fake = bgr_fake[0]
- M /= 0.25
-
- IM = cv2.invertAffineTransform(M)
-
- img_white = np.full((aimg.shape[0], aimg.shape[1]), 255, dtype=np.float32)
- bgr_fake = cv2.warpAffine(
- bgr_fake, IM, (whole_img.shape[1], whole_img.shape[0]), borderValue=0.0
- )
- img_white = cv2.warpAffine(
- img_white, IM, (whole_img.shape[1], whole_img.shape[0]), borderValue=0.0
- )
- img_white[img_white > 20] = 255
- img_mask = img_white
- mask_h_inds, mask_w_inds = np.where(img_mask == 255)
- mask_h = np.max(mask_h_inds) - np.min(mask_h_inds)
- mask_w = np.max(mask_w_inds) - np.min(mask_w_inds)
- mask_size = int(np.sqrt(mask_h * mask_w))
-
- k = max(mask_size // 10, 10)
- img_mask = cv2.erode(img_mask, np.ones((k, k), np.uint8), iterations=1)
-
- k = max(mask_size // 20, 5)
- kernel_size = (k, k)
- blur_size = tuple(2 * i + 1 for i in kernel_size)
- img_mask = cv2.GaussianBlur(img_mask, blur_size, 0) / 255
-
- img_mask = np.reshape(img_mask, [img_mask.shape[0], img_mask.shape[1], 1])
- fake_merged = img_mask * bgr_fake + (1 - img_mask) * whole_img.astype(np.float32)
- fake_merged = add_logo_to_image(fake_merged.astype("uint8"))
- return fake_merged
-
-
-def swap_face_with_condition(
- whole_img, target_faces, source_face, condition, age, models
-):
- swapped = whole_img.copy()
-
- for target_face in target_faces:
- if condition == "All face":
- swapped = swap_face(swapped, target_face, source_face, models)
- elif condition == "Age less than" and target_face["age"] < age:
- swapped = swap_face(swapped, target_face, source_face, models)
- elif condition == "Age greater than" and target_face["age"] > age:
- swapped = swap_face(swapped, target_face, source_face, models)
- elif condition == "All Male" and target_face["gender"] == 1:
- swapped = swap_face(swapped, target_face, source_face, models)
- elif condition == "All Female" and target_face["gender"] == 0:
- swapped = swap_face(swapped, target_face, source_face, models)
-
- return swapped
-
-
-def swap_specific(source_specifics, target_faces, whole_img, models, threshold=0.6):
- swapped = whole_img.copy()
-
- for source_face, specific_face in source_specifics:
- specific_embed = specific_face["embedding"]
- specific_embed /= np.linalg.norm(specific_embed)
-
- for target_face in target_faces:
- target_embed = target_face["embedding"]
- target_embed /= np.linalg.norm(target_embed)
- cosine_distance = 1 - np.dot(specific_embed, target_embed)
- if cosine_distance > threshold:
- continue
- swapped = swap_face(swapped, target_face, source_face, models)
-
- return swapped
diff --git a/spaces/ccolas/TastyPiano/src/cocktails/config.py b/spaces/ccolas/TastyPiano/src/cocktails/config.py
deleted file mode 100644
index bce5b65a666caf9972ea64933a4a74eb4e2532c0..0000000000000000000000000000000000000000
--- a/spaces/ccolas/TastyPiano/src/cocktails/config.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-
-REPO_PATH = '/'.join(os.path.abspath(__file__).split('/')[:-3]) + '/'
-
-# QUADRUPLETS_PATH = REPO_PATH + 'checkpoints/cocktail_representation/quadruplets.pickle'
-INGREDIENTS_LIST_PATH = REPO_PATH + 'checkpoints/cocktail_representation/ingredient_list.csv'
-# ING_MATCH_SCORE_Q_PATH = REPO_PATH + 'checkpoints/cocktail_representation/ingredient_match_score_q.txt'
-# ING_MATCH_SCORE_COUNT_PATH = REPO_PATH + 'checkpoints/cocktail_representation/ingredient_match_score_count.txt'
-# COCKTAIL_DATA_FOLDER_PATH = REPO_PATH + 'checkpoints/cocktail_representation/'
-COCKTAILS_CSV_DATA = REPO_PATH + 'checkpoints/cocktail_representation/cocktails_data.csv'
-# COCKTAILS_PKL_DATA = REPO_PATH + 'checkpoints/cocktail_representation/cocktails_data.pkl'
-# COCKTAILS_URL_DATA = REPO_PATH + 'checkpoints/cocktail_representation/cocktails_names_urls.pkl'
-EXPERIMENT_PATH = REPO_PATH + 'experiments/cocktails/representation_learning/'
-# ANALYSIS_PATH = REPO_PATH + 'experiments/cocktails/representation_analysis/'
-# REPRESENTATIONS_PATH = REPO_PATH + 'experiments/cocktails/learned_representations/'
-
-FULL_COCKTAIL_REP_PATH = REPO_PATH + "/checkpoints/cocktail_representation/handcoded_reps/cocktail_handcoded_reps_minmax_norm-1_1_dim13_customkeys.txt"
-RECIPE2FEATURES_PATH = REPO_PATH + "/checkpoints/cocktail_representation/" # get this by running run_without_vae
-COCKTAIL_REP_CHKPT_PATH = REPO_PATH + "/checkpoints/cocktail_representation/handcoded_reps/"
-# FULL_COCKTAIL_REP_PATH = REPO_PATH + "experiments/cocktails/representation_analysis/affective_mapping/clustered_representations/all_cocktail_reps_norm-1_1_custom_keys_dim13.txt'
-COCKTAIL_NN_PATH = REPO_PATH + "/checkpoints/cocktail_representation/handcoded_reps/nn_model.pickle"
\ No newline at end of file
diff --git a/spaces/chaitanya9/emotion_recognizer/app.py b/spaces/chaitanya9/emotion_recognizer/app.py
deleted file mode 100644
index cd1148960d8588fe8894245690e5c3ee1c671fea..0000000000000000000000000000000000000000
--- a/spaces/chaitanya9/emotion_recognizer/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import gradio as gr
-import pickle
-
-filename = "Our_Trained_knn_model.pickle"
-
-def predict(inp):
- best_classifiers = pickle.load(open(filename, 'rb'))
- emotion = best_classifiers.predict(inp)
- return emotion
-
-if __name__ == "__main__":
- audio = gr.inputs.Audio(source="upload", type="numpy", label=None, optional=False)
-
- #gr.Interface(fn=emotion_recognizer, inputs=audio, outputs="text", capture_session=True).launch()
-
-
- iface = gr.Interface(fn=predict, inputs = "audio", outputs = "text")
- iface.launch(share=True)
-
diff --git a/spaces/chansung/textual-inversion-pipeline/constants.py b/spaces/chansung/textual-inversion-pipeline/constants.py
deleted file mode 100644
index e2662d9e3e5dadad9291e0741d4d7b88479a19b1..0000000000000000000000000000000000000000
--- a/spaces/chansung/textual-inversion-pipeline/constants.py
+++ /dev/null
@@ -1,135 +0,0 @@
-css = """
- .gradio-container {
- font-family: 'IBM Plex Sans', sans-serif;
- }
- .gr-button {
- color: white;
- border-color: black;
- background: black;
- }
- input[type='range'] {
- accent-color: black;
- }
- .dark input[type='range'] {
- accent-color: #dfdfdf;
- }
- .container {
- max-width: 730px;
- margin: auto;
- padding-top: 1.5rem;
- }
- #gallery {
- min-height: 22rem;
- margin-bottom: 15px;
- margin-left: auto;
- margin-right: auto;
- border-bottom-right-radius: .5rem !important;
- border-bottom-left-radius: .5rem !important;
- }
- #gallery>div>.h-full {
- min-height: 20rem;
- }
- .details:hover {
- text-decoration: underline;
- }
- .gr-button {
- white-space: nowrap;
- }
- .gr-button:focus {
- border-color: rgb(147 197 253 / var(--tw-border-opacity));
- outline: none;
- box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000);
- --tw-border-opacity: 1;
- --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);
- --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color);
- --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity));
- --tw-ring-opacity: .5;
- }
- #advanced-btn {
- font-size: .7rem !important;
- line-height: 19px;
- margin-top: 12px;
- margin-bottom: 12px;
- padding: 2px 8px;
- border-radius: 14px !important;
- }
- #advanced-options {
- display: none;
- margin-bottom: 20px;
- }
- .footer {
- margin-bottom: 45px;
- margin-top: 35px;
- text-align: center;
- border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
- font-size: .8rem;
- display: inline-block;
- padding: 0 10px;
- transform: translateY(10px);
- background: white;
- }
- .dark .footer {
- border-color: #303030;
- }
- .dark .footer>p {
- background: #0b0f19;
- }
- .acknowledgments h4{
- margin: 1.25em 0 .25em 0;
- font-weight: bold;
- font-size: 115%;
- }
- .animate-spin {
- animation: spin 1s linear infinite;
- }
- @keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
- }
- #share-btn-container {
- display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
- margin-top: 10px;
- margin-left: auto;
- }
- #share-btn {
- all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0;
- }
- #share-btn * {
- all: unset;
- }
- #share-btn-container div:nth-child(-n+2){
- width: auto !important;
- min-height: 0px !important;
- }
- #share-btn-container .wrap {
- display: none !important;
- }
-
- .gr-form{
- flex: 1 1 50%; border-top-right-radius: 0; border-bottom-right-radius: 0;
- }
- #prompt-container{
- gap: 0;
- }
- #prompt-text-input, #negative-prompt-text-input{padding: .45rem 0.625rem}
- #component-16{border-top-width: 1px!important;margin-top: 1em}
- .image_duplication{position: absolute; width: 100px; left: 50px}
-"""
-
-
-examples = [
- ["Yoda", "low quality", 40],
- ["A red pokemon with green eyes", 40],
- ["cute Sundar Pihcai creature", 40],
- ["Hello kitty", 40],
-]
-
-num_images_to_gen = 3
-
-img_height = img_width = 512
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_l.py b/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_l.py
deleted file mode 100644
index 50833ca38c51fe9ac5e327d7c1c0561fb62249aa..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/multimodal/YOLOX/exps/default/yolox_l.py
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding:utf-8 -*-
-# Copyright (c) Megvii, Inc. and its affiliates.
-
-import os
-
-from yolox.exp import Exp as MyExp
-
-
-class Exp(MyExp):
- def __init__(self):
- super(Exp, self).__init__()
- self.depth = 1.0
- self.width = 1.0
- self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
diff --git a/spaces/chongjie/co-tracker_MVP/app.py b/spaces/chongjie/co-tracker_MVP/app.py
deleted file mode 100644
index fa1fc4e5283eddcd4cf9826cc0dc3fa0305bd3f3..0000000000000000000000000000000000000000
--- a/spaces/chongjie/co-tracker_MVP/app.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import gradio as gr
-import os
-import torch
-import numpy as np
-
-from PIL import Image
-from cotracker.utils.visualizer import Visualizer, read_video_from_path
-from cotracker.predictor import CoTrackerPredictor
-
-checkpoint='./checkpoints/cotracker_stride_4_wind_8.pth'
-def cotracker(video_path: str, grid_size: int, grid_query_frame: int, backward_tracking: bool):
- # load the input video frame by frame
- video = read_video_from_path(video_path)
- video = torch.from_numpy(video).permute(0, 3, 1, 2)[None].float()
- model = CoTrackerPredictor(checkpoint=checkpoint)
- if torch.cuda.is_available():
- model = model.cuda()
- video = video.cuda()
- else:
- print("CUDA is not available!")
-
- pred_tracks, pred_visibility = model(
- video,
- grid_size=grid_size,
- grid_query_frame=grid_query_frame,
- backward_tracking=backward_tracking,
- )
- print("computed")
-
- # save a video with predicted tracks
- seq_name = video_path.split("/")[-1]
- vis = Visualizer(save_dir="./saved_videos", pad_value=120, linewidth=3)
- vis.visualize(video, pred_tracks, query_frame=grid_query_frame)
-
- return "./saved_videos/video_pred_track.mp4"
-
-iface = gr.Interface(
- fn=cotracker,
- inputs=[
- gr.inputs.Video(label='video', type='mp4'),
- gr.inputs.Slider(minimum=0, maximum=20, step=1, default=10, label="Grid Size"),
- gr.inputs.Slider(minimum=0, maximum=10, step=1, default=0, label="Grid Query Frame"),
- gr.inputs.Checkbox(label="Backward Tracking"),
- ],
- outputs=gr.outputs.Video(label="Output")
-)
-iface.queue()
-iface.launch()
\ No newline at end of file
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/setup.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/setup.py
deleted file mode 100644
index 6ea944e1887758f91965b080f2d7a8eb9a1cf915..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/setup.py
+++ /dev/null
@@ -1,71 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from __future__ import print_function
-from setuptools import setup, find_packages
-import os
-import shutil
-import platform
-
-# make the faiss python package dir
-shutil.rmtree("faiss", ignore_errors=True)
-os.mkdir("faiss")
-shutil.copytree("contrib", "faiss/contrib")
-shutil.copyfile("__init__.py", "faiss/__init__.py")
-shutil.copyfile("loader.py", "faiss/loader.py")
-shutil.copyfile("class_wrappers.py", "faiss/class_wrappers.py")
-shutil.copyfile("gpu_wrappers.py", "faiss/gpu_wrappers.py")
-shutil.copyfile("extra_wrappers.py", "faiss/extra_wrappers.py")
-shutil.copyfile("array_conversions.py", "faiss/array_conversions.py")
-
-ext = ".pyd" if platform.system() == 'Windows' else ".so"
-prefix = "Release/" * (platform.system() == 'Windows')
-
-swigfaiss_generic_lib = f"{prefix}_swigfaiss{ext}"
-swigfaiss_avx2_lib = f"{prefix}_swigfaiss_avx2{ext}"
-
-found_swigfaiss_generic = os.path.exists(swigfaiss_generic_lib)
-found_swigfaiss_avx2 = os.path.exists(swigfaiss_avx2_lib)
-
-assert (found_swigfaiss_generic or found_swigfaiss_avx2), \
- f"Could not find {swigfaiss_generic_lib} or " \
- f"{swigfaiss_avx2_lib}. Faiss may not be compiled yet."
-
-if found_swigfaiss_generic:
- print(f"Copying {swigfaiss_generic_lib}")
- shutil.copyfile("swigfaiss.py", "faiss/swigfaiss.py")
- shutil.copyfile(swigfaiss_generic_lib, f"faiss/_swigfaiss{ext}")
-
-if found_swigfaiss_avx2:
- print(f"Copying {swigfaiss_avx2_lib}")
- shutil.copyfile("swigfaiss_avx2.py", "faiss/swigfaiss_avx2.py")
- shutil.copyfile(swigfaiss_avx2_lib, f"faiss/_swigfaiss_avx2{ext}")
-
-long_description="""
-Faiss is a library for efficient similarity search and clustering of dense
-vectors. It contains algorithms that search in sets of vectors of any size,
- up to ones that possibly do not fit in RAM. It also contains supporting
-code for evaluation and parameter tuning. Faiss is written in C++ with
-complete wrappers for Python/numpy. Some of the most useful algorithms
-are implemented on the GPU. It is developed by Facebook AI Research.
-"""
-setup(
- name='faiss',
- version='1.7.4',
- description='A library for efficient similarity search and clustering of dense vectors',
- long_description=long_description,
- url='https://github.com/facebookresearch/faiss',
- author='Matthijs Douze, Jeff Johnson, Herve Jegou, Lucas Hosseini',
- author_email='matthijs@fb.com',
- license='MIT',
- keywords='search nearest neighbors',
-
- install_requires=['numpy'],
- packages=['faiss', 'faiss.contrib'],
- package_data={
- 'faiss': ['*.so', '*.pyd'],
- },
- zip_safe=False,
-)
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_B_.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_B_.py
deleted file mode 100644
index 8a6c14c444595508c35bdc6ebace60b4bbbbdaba..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I_B_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .T_S_I_V_ import table_T_S_I_V_
-
-
-class table_T_S_I_B_(table_T_S_I_V_):
- pass
diff --git a/spaces/cihyFjudo/fairness-paper-search/American Conquest Divided Nation Patch Windows 10 Download and Install the Latest Version Here.md b/spaces/cihyFjudo/fairness-paper-search/American Conquest Divided Nation Patch Windows 10 Download and Install the Latest Version Here.md
deleted file mode 100644
index 1d8c6942a48cfc90938f340e35dc93ca6357d789..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/American Conquest Divided Nation Patch Windows 10 Download and Install the Latest Version Here.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
American Conquest: Fight Back is a stand-alone expansion pack for American Conquest. It features five new nations: Germany, Russia, Haida, Portugal and the Netherlands, and 50 new units. In addition to new campaigns featuring the Mayas, the Germans, the Haida and the Russians, a new 'battlefield' game mode is available. The German campaign briefly chronicles the expedition of Ambrosius Ehinger and Georg Hohermuth whereas the Russian campaign concerns the Alaskan campaign under Alexander Baranov. The new Haida campaign is from the Haida point of view of the Russian expedition. The Mayas campaign covers details from the Spanish conquest of Yucatán.
A total conversion mod for the game was released in 2006, with patches and different versions released up until 2009, called European Warfare: Napoleonica that transferred the player back to 19th Century war-torn Europe during the Napoleonic Wars. The project was undertaken by Gexozoid (helped by the Hawks group and other associates) in 2007 and since then had a fairly active community on GameRanger and forums up until 2015. The Hawks Group recreated a vast database of historical battles that can be played in multiplayer by up to 7 players at the same time, sharing armies or fighting in co-op. It can still be downloaded at their original website or on ModDB. The Mod features over 200 new units and around 20 new buildings that range from a faction's Barracks to fortifications in the form of manned cannon towers and breastworks much like in Cossacks. 12 fully playable nations include: France, England, Poland, Austria, Prussia, Russia, Spain, Italy, the Ottoman Empire, Confederacy of Rhine, Sweden and the USA.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cllatMTK/TransformerAnalyzer/calc_util.py b/spaces/cllatMTK/TransformerAnalyzer/calc_util.py
deleted file mode 100644
index 7dcbf6f19b864037aadcb43daadbfec305e39e38..0000000000000000000000000000000000000000
--- a/spaces/cllatMTK/TransformerAnalyzer/calc_util.py
+++ /dev/null
@@ -1,420 +0,0 @@
-import numpy as np
-from collections import defaultdict
-from functools import partial
-from typing import List
-from model_util import get_module_tensors_matched
-
-def calc_model_size_from_model(model_config, inference_config):
- get_module_tensors_matched_partial = partial(get_module_tensors_matched, module_classes_dict = model_config['module_classes'])
-
- parameter_count = defaultdict(float)
- parameter_count['word_embedding'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'embed' in x and 'pos' not in x)])
- parameter_count['positional_embedding'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'embed' in x and 'pos' in x)])
-
- parameter_count['attention_Q'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and 'q' in x)])
- parameter_count['attention_K'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and 'k' in x)])
- parameter_count['attention_V'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and 'v' in x)])
- parameter_count['attention_out'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'att' in x and ('out_' in x or 'o_' in x))])
-
- parameter_count['layernorm'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'norm' in x)])
- parameter_count['mlp_weights'] = sum([v.numel() for v in get_module_tensors_matched_partial(lambda x: 'fc' in x or 'mlp' in x)])
-
- parameter_count['embedding_weights'] = parameter_count['word_embedding'] + parameter_count['positional_embedding']
- parameter_count['attention_weights'] = parameter_count['attention_out'] + parameter_count['attention_Q'] + parameter_count['attention_K'] + parameter_count['attention_V']
-
- return parameter_count
-
-def model_size_estimate(model_config, inference_config):
- parameter_count = {}
- parameter_count['word_embedding'] = model_config['vocab_size']*model_config['hidden_size']
- parameter_count['positional_embedding'] = model_config['max_position_embeddings']*model_config['hidden_size']
-
- parameter_count['attention_Q'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads']
- parameter_count['attention_K'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads']
- parameter_count['attention_V'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads']
- parameter_count['attention_out'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['hidden_size']/model_config['num_attention_heads']*model_config['num_attention_heads']
-
- parameter_count['layernorm'] = 2*model_config['layernorm_operation']*model_config['num_hidden_layers']*model_config['hidden_size']
- parameter_count['mlp1'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['intermediate_size']
- parameter_count['mlp2'] = model_config['num_hidden_layers']*model_config['hidden_size']*model_config['intermediate_size']
- parameter_count['embedding_weights'] = parameter_count['word_embedding'] + parameter_count['positional_embedding']
- parameter_count['attention_weights'] = parameter_count['attention_out'] + parameter_count['attention_Q'] + parameter_count['attention_K'] + parameter_count['attention_V']
- parameter_count['mlp_weights'] = parameter_count['mlp1'] + parameter_count['mlp2']
-
- return parameter_count
-
-def multiplication_in_int64(array):
- return np.cumprod(np.array(array, dtype=np.int64))[-1]
-
-def matrix_operation(shapeA, shapeB):
- assert(shapeA[-1] == shapeB[0])
- op = np.cumprod(np.array(shapeA[:-1], np.float64))
- return multiplication_in_int64([2, op[-1], shapeA[-1], shapeB[-1]])
-
-def word_embedding_operation(model_config, inference_config):
- #Given:
- #\begin{itemize}
- # \item Matrix \( X \) of size \( B \times s \) (representing the batch size and sequence length respectively).
- # \item Embedding matrix \( W_e \) of size \( n_{vocab} \times d_{model} \).
- #\end{itemize}
-
- #The resultant matrix after the multiplication will be of size \( B \times s \times d_{model} \).
- #For each element in this resultant matrix, the number of FLOPs required is \( 2 \times n_{vocab} \). This is because for a single element in the output matrix, we have \( 2N \) FLOPs (with \( N \) being the common dimension), leading to the matrix multiplication FLOP count as:
- #\begin{equation}
- #2 \times B \times s \times n_{v ocab} \times d_{model}
- #\end{equation}
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'embed' in x and 'pos' not in x, model_config['module_classes'])
- if len(modules) > 0:
- A = [inference_config['batchsize'], inference_config['input_seq_length'], modules[0][0]]
- B = modules[0]
- op_count = matrix_operation(A, B)
- return op_count
-
- A = [inference_config['batchsize'], inference_config['input_seq_length'], model_config['vocab_size']]
- B = [model_config['vocab_size'], model_config['hidden_size']]
- op_count = matrix_operation(A, B)
- return op_count
-
-
-def positional_embedding_operation(model_config, inference_config):
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'embed' in x and 'pos' in x, model_config['module_classes'])
- if len(modules) > 0:
- return multiplication_in_int64([inference_config['batchsize'], inference_config['input_seq_length'], modules[0][-1]])
-
- return multiplication_in_int64([inference_config['batchsize'], inference_config['input_seq_length'], model_config['hidden_size']])
-
-### Below three are the same
-def attention_K_operation(model_config, inference_config, seq_length):
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'att' in x and 'k' in x , model_config['module_classes'])
- if len(modules) > 0:
- total = 0
- for module in modules:
- if len(module) > 1:
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size_per_head']]
- total += model_config['num_attention_heads']*matrix_operation(A, B)
- else:
- total += model_config['hidden_size']
- return total
-
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size_per_head']]
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B)
-
-def attention_Q_operation(model_config, inference_config, seq_length):
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'att' in x and 'q' in x , model_config['module_classes'])
- if len(modules) > 0:
- total = 0
- for module in modules:
- if len(module) > 1:
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size_per_head']]
- total += model_config['num_attention_heads']*matrix_operation(A, B)
- else:
- total += model_config['hidden_size']
- return total
-
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size_per_head']]
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B)
-
-def attention_V_operation(model_config, inference_config, seq_length):
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'att' in x and 'v' in x , model_config['module_classes'])
- if len(modules) > 0:
- total = 0
- for module in modules:
- if len(module) > 1:
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size_per_head']]
- total += model_config['num_attention_heads']*matrix_operation(A, B)
- else:
- total += model_config['hidden_size']
- return total
-
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size_per_head']]
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B)
-
-##
-def attention_QK_operation(model_config, inference_config, seq_length_Q, seq_length_K):
- A = [inference_config['batchsize'], seq_length_Q, model_config['hidden_size_per_head']]
- B = [model_config['hidden_size_per_head'], seq_length_K]
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * matrix_operation(A, B)
-
-def attention_softmax_operation(model_config, inference_config,seq_length):
- # Ref: Ouyang, A. (2023). Understanding the Performance of Transformer Inference (Doctoral dissertation, Massachusetts Institute of Technology).
- # 3 is a modeled value
- softmax_operation = (3*inference_config['batchsize']*seq_length*seq_length)
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * softmax_operation
-
-def attention_multV_operation(model_config, inference_config, seq_length_Q, seq_length_V):
- A = [inference_config['batchsize'], seq_length_Q, seq_length_V]
- B = [seq_length_V, model_config['hidden_size_per_head']]
- return model_config['num_hidden_layers'] * model_config['num_attention_heads']* matrix_operation(A, B)
-
-def attention_out_operation(model_config, inference_config, seq_length):
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'att' in x and 'k' in x , model_config['module_classes'])
- if len(modules) > 0:
- total = 0
- for module in modules:
- if len(module) > 1:
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size']]
- total += matrix_operation(A, B)
- else:
- total += model_config['hidden_size']
- return total
-
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['hidden_size']]
- return model_config['num_hidden_layers'] * matrix_operation(A, B)
-
-def layernorm_operation(model_config, inference_config, seq_length):
- # Ref: Ouyang, A. (2023). Understanding the Performance of Transformer Inference (Doctoral dissertation, Massachusetts Institute of Technology).
- # 5 is a modeled value
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'norm' in x, model_config['module_classes'])
- if len(modules) > 0:
- total = 0
- for module in modules:
- total += model_config['hidden_size']
- return 5*total
-
- layernorm_operation = (5*inference_config['batchsize']*seq_length*model_config['hidden_size'])
- return model_config['num_hidden_layers'] * model_config['layernorm_operation'] * layernorm_operation
-
-
-def mlp_operation(model_config, inference_config, seq_length):
- if model_config['module_classes']:
- modules = get_module_tensors_matched(lambda x: 'fc' in x or 'mlp' in x, model_config['module_classes'])
- if len(modules) > 0:
- total = 0
- for module in modules:
- if len(module) > 1:
- A = [inference_config['batchsize'], seq_length, module[1]]
- B = [module[1], module[0]]
- total += matrix_operation(A, B)
- else:
- total += modules[-1][0]
- return total
-
- A = [inference_config['batchsize'], seq_length, model_config['hidden_size']]
- B = [model_config['hidden_size'], model_config['intermediate_size']]
- return model_config['num_hidden_layers'] * (2*matrix_operation(A, B))
-
-
-def prefilling_operation(model_config, inference_config):
- prefilling_operation_count = {}
- prefilling_operation_count['word_embedding'] = word_embedding_operation(model_config, inference_config)
- prefilling_operation_count['positional_embedding'] = positional_embedding_operation(model_config, inference_config)
-
- prefilling_operation_count['attention_Q'] = attention_Q_operation(model_config, inference_config, inference_config['input_seq_length'])
- prefilling_operation_count['attention_K'] = attention_K_operation(model_config, inference_config, inference_config['input_seq_length'])
- prefilling_operation_count['attention_V'] = attention_V_operation(model_config, inference_config, inference_config['input_seq_length'])
- prefilling_operation_count['attention_QK'] = attention_QK_operation(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length'])
- prefilling_operation_count['attention_softmax'] = attention_softmax_operation(model_config, inference_config, inference_config['input_seq_length'])
- prefilling_operation_count['attention_multV'] = attention_multV_operation(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length'])
- prefilling_operation_count['attention_out'] = attention_out_operation(model_config, inference_config, inference_config['input_seq_length'])
-
- prefilling_operation_count['layernorm'] =layernorm_operation(model_config, inference_config, inference_config['input_seq_length'])
-
- prefilling_operation_count['mlp'] = mlp_operation(model_config, inference_config, inference_config['input_seq_length'])
-
- prefilling_operation_count['embeddings'] = prefilling_operation_count['word_embedding'] + prefilling_operation_count['positional_embedding']
- prefilling_operation_count['attention'] = sum([v for k,v in prefilling_operation_count.items() if 'attention' in k])
- prefilling_operation_count['total'] = (prefilling_operation_count['embeddings'] + prefilling_operation_count['attention'] + prefilling_operation_count['mlp'] + prefilling_operation_count['layernorm'])
-
- return prefilling_operation_count
-
-def generation_operation(model_config, inference_config):
- generation_operation_count = {}
- generation_operation_count['word_embedding'] = 0
- generation_operation_count['positional_embedding'] = 0
- generation_operation_count['attention_K'] = 0
- generation_operation_count['attention_V'] = 0
- generation_operation_count['attention_Q'] = 0
- generation_operation_count['attention_QK'] = 0
- generation_operation_count['attention_softmax'] = 0
- generation_operation_count['attention_multV'] = 0
- generation_operation_count['attention_out'] = 0
- generation_operation_count['mlp'] = 0
- generation_operation_count['layernorm'] = 0
-
- for t in range(inference_config['output_seq_length']):
- if inference_config['KV_cache']:
- generation_operation_count['attention_K'] += attention_K_operation(model_config, inference_config, 1)
- generation_operation_count['attention_V'] += attention_V_operation(model_config, inference_config, 1)
- generation_operation_count['attention_Q'] += attention_Q_operation(model_config, inference_config, 1)
- generation_operation_count['attention_QK'] += attention_QK_operation(model_config, inference_config, seq_length_Q=1, seq_length_K=(t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_softmax'] += attention_softmax_operation(model_config, inference_config, 1)
- generation_operation_count['attention_multV'] += attention_multV_operation(model_config, inference_config, seq_length_Q=1, seq_length_V=(t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_out'] += attention_out_operation(model_config, inference_config, 1)
- generation_operation_count['mlp'] += mlp_operation(model_config, inference_config, 1)
- else:
- generation_operation_count['attention_K'] += attention_K_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_V'] += attention_V_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_Q'] += attention_Q_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_QK'] += attention_QK_operation(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_K=(t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_softmax'] += attention_softmax_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_multV'] += attention_multV_operation(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_V=(t+1)+inference_config['input_seq_length'])
- generation_operation_count['attention_out'] += attention_out_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- generation_operation_count['mlp'] += mlp_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
-
- generation_operation_count['layernorm'] += layernorm_operation(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
-
- generation_operation_count['embeddings'] = generation_operation_count['word_embedding'] + generation_operation_count['positional_embedding']
- generation_operation_count['attention'] = sum([v for k,v in generation_operation_count.items() if 'attention' in k])
- generation_operation_count['total'] = (generation_operation_count['attention'] + generation_operation_count['mlp'] + generation_operation_count['layernorm'])
-
- return generation_operation_count
-
-
-def word_embedding_activation_memory(model_config, inference_config, seq_length):
- return inference_config['batchsize'] * seq_length * (model_config['vocab_size'] + model_config['hidden_size'])
-
-def positional_embedding_activation_memory(model_config, inference_config, seq_length):
- return 2 * inference_config['batchsize'] * seq_length * model_config['hidden_size']
-
-def attention_K_activation_memory(model_config, inference_config, seq_length):
- per_head_per_layer = inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['hidden_size_per_head'])
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer
-
-def attention_V_activation_memory(model_config, inference_config, seq_length):
- per_head_per_layer = inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['hidden_size_per_head'])
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer
-
-def attention_Q_activation_memory(model_config, inference_config, seq_length):
- per_head_per_layer = inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['hidden_size_per_head'])
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer
-
-def attention_QK_activation_memory(model_config, inference_config, seq_length_Q, seq_length_K):
- inputs_Q = inference_config['batchsize'] * seq_length_Q * model_config['hidden_size_per_head']
- inputs_K = inference_config['batchsize'] * seq_length_K * model_config['hidden_size_per_head']
- outputs = inference_config['batchsize'] * seq_length_Q * seq_length_K
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * (inputs_Q + inputs_K + outputs)
-
-def attention_softmax_activation_memory(model_config, inference_config, seq_length):
- per_head_per_layer = (2 * inference_config['batchsize'] * seq_length * seq_length)
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer
-
-def attention_multV_activation_memory(model_config, inference_config, seq_length_Q, seq_length_V):
- per_head_per_layer = inference_config['batchsize'] * seq_length_Q * seq_length_V + inference_config['batchsize'] * seq_length_Q * model_config['hidden_size_per_head'] + inference_config['batchsize'] * seq_length_V * model_config['hidden_size_per_head']
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer
-
-def attention_out_activation_memory(model_config, inference_config, seq_length):
- per_head_per_layer = 2 * inference_config['batchsize'] * seq_length * model_config['hidden_size']
- return model_config['num_hidden_layers'] * model_config['num_attention_heads'] * per_head_per_layer
-
-def layernorm_activation_memory(model_config, inference_config, seq_length):
- per_layernorm_per_layer = 2 * inference_config['batchsize'] * seq_length * model_config['hidden_size']
- return model_config['num_hidden_layers'] * model_config['layernorm_operation'] * per_layernorm_per_layer
-
-def mlp_activation_memory(model_config, inference_config, seq_length):
- # two mlp layer
- per_layer = 2 * inference_config['batchsize'] * seq_length * (model_config['hidden_size'] + model_config['intermediate_size'])
- return model_config['num_hidden_layers'] * per_layer
-
-def prefilling_activation_memory(model_config, inference_config):
- activation_memory = {}
-
- activation_memory['word_embedding'] = word_embedding_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
- activation_memory['positional_embedding'] = positional_embedding_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
-
- activation_memory['attention_Q'] = attention_Q_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
- activation_memory['attention_K'] = attention_K_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
- activation_memory['attention_V'] = attention_V_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
- activation_memory['attention_QK'] = attention_QK_activation_memory(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length'])
- activation_memory['attention_softmax'] = attention_softmax_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
- activation_memory['attention_multV'] = attention_multV_activation_memory(model_config, inference_config, inference_config['input_seq_length'], inference_config['input_seq_length'])
- activation_memory['attention_out'] = attention_out_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
-
- activation_memory['layernorm'] = layernorm_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
-
- activation_memory['mlp'] = mlp_activation_memory(model_config, inference_config, inference_config['input_seq_length'])
-
- activation_memory['embeddings'] = activation_memory['word_embedding'] + activation_memory['positional_embedding']
- activation_memory['attention'] = (
- activation_memory['attention_Q'] + activation_memory['attention_K'] +
- activation_memory['attention_V'] + activation_memory['attention_QK'] +
- activation_memory['attention_softmax'] + activation_memory['attention_multV'] +
- activation_memory['attention_out']
- )
- activation_memory['total'] = (
- activation_memory['embeddings'] + activation_memory['attention'] +
- activation_memory['mlp'] + activation_memory['layernorm']
- )
-
- activation_memory['embeddings'] = activation_memory['word_embedding'] + activation_memory['positional_embedding']
- activation_memory['attention'] = sum([v for k,v in activation_memory.items() if 'attention' in k])
- activation_memory['total'] = (activation_memory['attention'] + activation_memory['mlp'] + activation_memory['layernorm'])
-
- return activation_memory
-
-def generation_activation_memory(model_config, inference_config):
- activation_memory = {}
-
- activation_memory['word_embedding'] = 0
- activation_memory['positional_embedding'] = 0
- activation_memory['attention_K'] = 0
- activation_memory['attention_V'] = 0
- activation_memory['attention_Q'] = 0
- activation_memory['attention_QK'] = 0
- activation_memory['attention_softmax'] = 0
- activation_memory['attention_multV'] = 0
- activation_memory['attention_out'] = 0
- activation_memory['mlp'] = 0
- activation_memory['layernorm'] = 0
-
- for t in range(inference_config['output_seq_length']):
- if inference_config['KV_cache']:
- activation_memory['attention_K'] += attention_K_activation_memory(model_config, inference_config, 1)
- activation_memory['attention_V'] += attention_V_activation_memory(model_config, inference_config, 1)
- activation_memory['attention_Q'] += attention_Q_activation_memory(model_config, inference_config, 1)
- activation_memory['attention_QK'] += attention_QK_activation_memory(model_config, inference_config, seq_length_Q=1, seq_length_K=(t+1)+inference_config['input_seq_length'])
- activation_memory['attention_softmax'] += attention_softmax_activation_memory(model_config, inference_config, 1)
- activation_memory['attention_multV'] += attention_multV_activation_memory(model_config, inference_config, seq_length_Q=1, seq_length_V=(t+1)+inference_config['input_seq_length'])
- activation_memory['attention_out'] += attention_out_activation_memory(model_config, inference_config, 1)
- activation_memory['mlp'] += mlp_activation_memory(model_config, inference_config, 1)
- else:
- activation_memory['attention_K'] += attention_K_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- activation_memory['attention_V'] += attention_V_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- activation_memory['attention_Q'] += attention_Q_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- activation_memory['attention_QK'] += attention_QK_activation_memory(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_K=(t+1)+inference_config['input_seq_length'])
- activation_memory['attention_softmax'] += attention_softmax_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- activation_memory['attention_multV'] += attention_multV_activation_memory(model_config, inference_config, seq_length_Q=(t+1)+inference_config['input_seq_length'], seq_length_V=(t+1)+inference_config['input_seq_length'])
- activation_memory['attention_out'] += attention_out_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
- activation_memory['mlp'] += mlp_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
-
- activation_memory['layernorm'] += layernorm_activation_memory(model_config, inference_config, (t+1)+inference_config['input_seq_length'])
-
- activation_memory['embeddings'] = activation_memory['word_embedding'] + activation_memory['positional_embedding']
- activation_memory['attention'] = (
- activation_memory['attention_K'] + activation_memory['attention_V'] +
- activation_memory['attention_Q'] + activation_memory['attention_QK'] +
- activation_memory['attention_softmax'] + activation_memory['attention_multV'] +
- activation_memory['attention_out']
- )
- activation_memory['total'] = (
- activation_memory['embeddings'] + activation_memory['attention'] +
- activation_memory['mlp'] + activation_memory['layernorm']
- )
-
- return activation_memory
-
-
-def calc_prefilling_throughput(model_config, inference_config, inference_info):
- inference_info['prefilling_throughput'] = inference_config['input_seq_length']*inference_config['batchsize'] / max([inference_info['inference_prefilling_time'], inference_info['prefilling_memory_latency']])
- inference_info['prefilling_bound_type'] = "memory" if inference_info['inference_prefilling_time'] < inference_info['prefilling_memory_latency'] else "arithmetic"
-
-def calc_generation_throughput(model_config, inference_config, inference_info):
- inference_info['generation_throughput'] = inference_config['input_seq_length']*inference_config['batchsize'] / max([inference_info['inference_generation_time'], inference_info['generation_memory_latency']])
- inference_info['generation_bound_type'] = "memory" if inference_info['inference_generation_time'] < inference_info['generation_memory_latency'] else "arithmetic"
-
- total_time = max([inference_info['inference_prefilling_time'], inference_info['prefilling_memory_latency']]) + max([inference_info['inference_generation_time'], inference_info['generation_memory_latency']])
- inference_info['client_generation_throughput'] = inference_config['output_seq_length']*inference_config['batchsize'] / total_time
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/types.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/types.py
deleted file mode 100644
index 7adf565a7b6b7d4f1eed3adf6a96faab66fe517c..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fastapi/types.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import types
-from enum import Enum
-from typing import Any, Callable, Dict, Set, Type, TypeVar, Union
-
-from pydantic import BaseModel
-
-DecoratedCallable = TypeVar("DecoratedCallable", bound=Callable[..., Any])
-UnionType = getattr(types, "UnionType", Union)
-NoneType = getattr(types, "UnionType", None)
-ModelNameMap = Dict[Union[Type[BaseModel], Type[Enum]], str]
-IncEx = Union[Set[int], Set[str], Dict[int, Any], Dict[str, Any]]
diff --git a/spaces/cncn102/bingo1/src/components/ui/textarea.tsx b/spaces/cncn102/bingo1/src/components/ui/textarea.tsx
deleted file mode 100644
index e25af722c7a5dc1121a9ab58d6716952f9f76081..0000000000000000000000000000000000000000
--- a/spaces/cncn102/bingo1/src/components/ui/textarea.tsx
+++ /dev/null
@@ -1,24 +0,0 @@
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-
-export interface TextareaProps
- extends React.TextareaHTMLAttributes {}
-
-const Textarea = React.forwardRef(
- ({ className, ...props }, ref) => {
- return (
-
- )
- }
-)
-Textarea.displayName = 'Textarea'
-
-export { Textarea }
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/idctdsp_init_arm.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/idctdsp_init_arm.c
deleted file mode 100644
index ebc90e4b49e69eeb01307c5de6887bde319667bb..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/arm/idctdsp_init_arm.c
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * ARM-optimized IDCT functions
- * Copyright (c) 2001 Lionel Ulmer
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include
-#include
-
-#include "libavutil/attributes.h"
-#include "libavutil/cpu.h"
-#include "libavutil/arm/cpu.h"
-#include "libavcodec/avcodec.h"
-#include "libavcodec/idctdsp.h"
-#include "idct.h"
-#include "idctdsp_arm.h"
-
-void ff_add_pixels_clamped_arm(const int16_t *block, uint8_t *dest,
- ptrdiff_t line_size);
-
-/* XXX: those functions should be suppressed ASAP when all IDCTs are
- * converted */
-static void j_rev_dct_arm_put(uint8_t *dest, ptrdiff_t line_size,
- int16_t *block)
-{
- ff_j_rev_dct_arm(block);
- ff_put_pixels_clamped_c(block, dest, line_size);
-}
-
-static void j_rev_dct_arm_add(uint8_t *dest, ptrdiff_t line_size,
- int16_t *block)
-{
- ff_j_rev_dct_arm(block);
- ff_add_pixels_clamped_arm(block, dest, line_size);
-}
-
-static void simple_idct_arm_put(uint8_t *dest, ptrdiff_t line_size,
- int16_t *block)
-{
- ff_simple_idct_arm(block);
- ff_put_pixels_clamped_c(block, dest, line_size);
-}
-
-static void simple_idct_arm_add(uint8_t *dest, ptrdiff_t line_size,
- int16_t *block)
-{
- ff_simple_idct_arm(block);
- ff_add_pixels_clamped_arm(block, dest, line_size);
-}
-
-av_cold void ff_idctdsp_init_arm(IDCTDSPContext *c, AVCodecContext *avctx,
- unsigned high_bit_depth)
-{
- int cpu_flags = av_get_cpu_flags();
-
- if (!avctx->lowres && !high_bit_depth) {
- if ((avctx->idct_algo == FF_IDCT_AUTO && !(avctx->flags & AV_CODEC_FLAG_BITEXACT)) ||
- avctx->idct_algo == FF_IDCT_ARM) {
- c->idct_put = j_rev_dct_arm_put;
- c->idct_add = j_rev_dct_arm_add;
- c->idct = ff_j_rev_dct_arm;
- c->perm_type = FF_IDCT_PERM_LIBMPEG2;
- } else if (avctx->idct_algo == FF_IDCT_SIMPLEARM) {
- c->idct_put = simple_idct_arm_put;
- c->idct_add = simple_idct_arm_add;
- c->idct = ff_simple_idct_arm;
- c->perm_type = FF_IDCT_PERM_NONE;
- }
- }
-
- c->add_pixels_clamped = ff_add_pixels_clamped_arm;
-
- if (have_armv5te(cpu_flags))
- ff_idctdsp_init_armv5te(c, avctx, high_bit_depth);
- if (have_armv6(cpu_flags))
- ff_idctdsp_init_armv6(c, avctx, high_bit_depth);
- if (have_neon(cpu_flags))
- ff_idctdsp_init_neon(c, avctx, high_bit_depth);
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dnxhddec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dnxhddec.c
deleted file mode 100644
index 7cc4f94c7f8414ccfd1dbe5f9743b32d59c8031e..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dnxhddec.c
+++ /dev/null
@@ -1,739 +0,0 @@
-/*
- * VC3/DNxHD decoder.
- * Copyright (c) 2007 SmartJog S.A., Baptiste Coudurier
- * Copyright (c) 2011 MirriAd Ltd
- * Copyright (c) 2015 Christophe Gisquet
- *
- * 10 bit support added by MirriAd Ltd, Joseph Artsimovich
- * Slice multithreading and MB interlaced support added by Christophe Gisquet
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "libavutil/mem_internal.h"
-#include "libavutil/pixdesc.h"
-
-#include "avcodec.h"
-#include "blockdsp.h"
-#include "codec_internal.h"
-#include "decode.h"
-#define UNCHECKED_BITSTREAM_READER 1
-#include "get_bits.h"
-#include "dnxhddata.h"
-#include "idctdsp.h"
-#include "profiles.h"
-#include "thread.h"
-
-typedef struct RowContext {
- DECLARE_ALIGNED(32, int16_t, blocks)[12][64];
- int luma_scale[64];
- int chroma_scale[64];
- GetBitContext gb;
- int last_dc[3];
- int last_qscale;
- int errors;
- /** -1:not set yet 0:off=RGB 1:on=YUV 2:variable */
- int format;
-} RowContext;
-
-typedef struct DNXHDContext {
- AVCodecContext *avctx;
- RowContext *rows;
- BlockDSPContext bdsp;
- const uint8_t* buf;
- int buf_size;
- int64_t cid; ///< compression id
- unsigned int width, height;
- enum AVPixelFormat pix_fmt;
- unsigned int mb_width, mb_height;
- uint32_t mb_scan_index[512];
- int data_offset; // End of mb_scan_index, where macroblocks start
- int cur_field; ///< current interlaced field
- VLC ac_vlc, dc_vlc, run_vlc;
- IDCTDSPContext idsp;
- uint8_t permutated_scantable[64];
- const CIDEntry *cid_table;
- int bit_depth; // 8, 10, 12 or 0 if not initialized at all.
- int is_444;
- int alpha;
- int lla;
- int mbaff;
- int act;
- int (*decode_dct_block)(const struct DNXHDContext *ctx,
- RowContext *row, int n);
-} DNXHDContext;
-
-#define DNXHD_VLC_BITS 9
-#define DNXHD_DC_VLC_BITS 7
-
-static int dnxhd_decode_dct_block_8(const DNXHDContext *ctx,
- RowContext *row, int n);
-static int dnxhd_decode_dct_block_10(const DNXHDContext *ctx,
- RowContext *row, int n);
-static int dnxhd_decode_dct_block_10_444(const DNXHDContext *ctx,
- RowContext *row, int n);
-static int dnxhd_decode_dct_block_12(const DNXHDContext *ctx,
- RowContext *row, int n);
-static int dnxhd_decode_dct_block_12_444(const DNXHDContext *ctx,
- RowContext *row, int n);
-
-static av_cold int dnxhd_decode_init(AVCodecContext *avctx)
-{
- DNXHDContext *ctx = avctx->priv_data;
-
- ctx->avctx = avctx;
- ctx->cid = -1;
- if (avctx->colorspace == AVCOL_SPC_UNSPECIFIED) {
- avctx->colorspace = AVCOL_SPC_BT709;
- }
-
- avctx->coded_width = FFALIGN(avctx->width, 16);
- avctx->coded_height = FFALIGN(avctx->height, 16);
-
- ctx->rows = av_calloc(avctx->thread_count, sizeof(*ctx->rows));
- if (!ctx->rows)
- return AVERROR(ENOMEM);
-
- return 0;
-}
-
-static int dnxhd_init_vlc(DNXHDContext *ctx, uint32_t cid, int bitdepth)
-{
- int ret;
- if (cid != ctx->cid) {
- const CIDEntry *cid_table = ff_dnxhd_get_cid_table(cid);
-
- if (!cid_table) {
- av_log(ctx->avctx, AV_LOG_ERROR, "unsupported cid %"PRIu32"\n", cid);
- return AVERROR(ENOSYS);
- }
- if (cid_table->bit_depth != bitdepth &&
- cid_table->bit_depth != DNXHD_VARIABLE) {
- av_log(ctx->avctx, AV_LOG_ERROR, "bit depth mismatches %d %d\n",
- cid_table->bit_depth, bitdepth);
- return AVERROR_INVALIDDATA;
- }
- ctx->cid_table = cid_table;
- av_log(ctx->avctx, AV_LOG_VERBOSE, "Profile cid %"PRIu32".\n", cid);
-
- ff_free_vlc(&ctx->ac_vlc);
- ff_free_vlc(&ctx->dc_vlc);
- ff_free_vlc(&ctx->run_vlc);
-
- if ((ret = init_vlc(&ctx->ac_vlc, DNXHD_VLC_BITS, 257,
- ctx->cid_table->ac_bits, 1, 1,
- ctx->cid_table->ac_codes, 2, 2, 0)) < 0)
- goto out;
- if ((ret = init_vlc(&ctx->dc_vlc, DNXHD_DC_VLC_BITS, bitdepth > 8 ? 14 : 12,
- ctx->cid_table->dc_bits, 1, 1,
- ctx->cid_table->dc_codes, 1, 1, 0)) < 0)
- goto out;
- if ((ret = init_vlc(&ctx->run_vlc, DNXHD_VLC_BITS, 62,
- ctx->cid_table->run_bits, 1, 1,
- ctx->cid_table->run_codes, 2, 2, 0)) < 0)
- goto out;
-
- ctx->cid = cid;
- }
- ret = 0;
-out:
- if (ret < 0)
- av_log(ctx->avctx, AV_LOG_ERROR, "init_vlc failed\n");
- return ret;
-}
-
-static int dnxhd_get_profile(int cid)
-{
- switch(cid) {
- case 1270:
- return FF_PROFILE_DNXHR_444;
- case 1271:
- return FF_PROFILE_DNXHR_HQX;
- case 1272:
- return FF_PROFILE_DNXHR_HQ;
- case 1273:
- return FF_PROFILE_DNXHR_SQ;
- case 1274:
- return FF_PROFILE_DNXHR_LB;
- }
- return FF_PROFILE_DNXHD;
-}
-
-static int dnxhd_decode_header(DNXHDContext *ctx, AVFrame *frame,
- const uint8_t *buf, int buf_size,
- int first_field)
-{
- int i, cid, ret;
- int old_bit_depth = ctx->bit_depth, bitdepth;
- uint64_t header_prefix;
- if (buf_size < 0x280) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "buffer too small (%d < 640).\n", buf_size);
- return AVERROR_INVALIDDATA;
- }
-
- header_prefix = ff_dnxhd_parse_header_prefix(buf);
- if (header_prefix == 0) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "unknown header 0x%02X 0x%02X 0x%02X 0x%02X 0x%02X\n",
- buf[0], buf[1], buf[2], buf[3], buf[4]);
- return AVERROR_INVALIDDATA;
- }
- if (buf[5] & 2) { /* interlaced */
- ctx->cur_field = first_field ? buf[5] & 1 : !ctx->cur_field;
- frame->interlaced_frame = 1;
- frame->top_field_first = first_field ^ ctx->cur_field;
- av_log(ctx->avctx, AV_LOG_DEBUG,
- "interlaced %d, cur field %d\n", buf[5] & 3, ctx->cur_field);
- } else {
- ctx->cur_field = 0;
- }
- ctx->mbaff = (buf[0x6] >> 5) & 1;
- ctx->alpha = buf[0x7] & 1;
- ctx->lla = (buf[0x7] >> 1) & 1;
- if (ctx->alpha)
- avpriv_request_sample(ctx->avctx, "alpha");
-
- ctx->height = AV_RB16(buf + 0x18);
- ctx->width = AV_RB16(buf + 0x1a);
-
- switch(buf[0x21] >> 5) {
- case 1: bitdepth = 8; break;
- case 2: bitdepth = 10; break;
- case 3: bitdepth = 12; break;
- default:
- av_log(ctx->avctx, AV_LOG_ERROR,
- "Unknown bitdepth indicator (%d)\n", buf[0x21] >> 5);
- return AVERROR_INVALIDDATA;
- }
-
- cid = AV_RB32(buf + 0x28);
-
- ctx->avctx->profile = dnxhd_get_profile(cid);
-
- if ((ret = dnxhd_init_vlc(ctx, cid, bitdepth)) < 0)
- return ret;
- if (ctx->mbaff && ctx->cid_table->cid != 1260)
- av_log(ctx->avctx, AV_LOG_WARNING,
- "Adaptive MB interlace flag in an unsupported profile.\n");
-
- switch ((buf[0x2C] >> 1) & 3) {
- case 0: frame->colorspace = AVCOL_SPC_BT709; break;
- case 1: frame->colorspace = AVCOL_SPC_BT2020_NCL; break;
- case 2: frame->colorspace = AVCOL_SPC_BT2020_CL; break;
- case 3: frame->colorspace = AVCOL_SPC_UNSPECIFIED; break;
- }
-
- ctx->act = buf[0x2C] & 1;
- if (ctx->act && ctx->cid_table->cid != 1256 && ctx->cid_table->cid != 1270)
- av_log(ctx->avctx, AV_LOG_WARNING,
- "Adaptive color transform in an unsupported profile.\n");
-
- ctx->is_444 = (buf[0x2C] >> 6) & 1;
- if (ctx->is_444) {
- if (bitdepth == 8) {
- avpriv_request_sample(ctx->avctx, "4:4:4 8 bits");
- return AVERROR_INVALIDDATA;
- } else if (bitdepth == 10) {
- ctx->decode_dct_block = dnxhd_decode_dct_block_10_444;
- ctx->pix_fmt = ctx->act ? AV_PIX_FMT_YUV444P10
- : AV_PIX_FMT_GBRP10;
- } else {
- ctx->decode_dct_block = dnxhd_decode_dct_block_12_444;
- ctx->pix_fmt = ctx->act ? AV_PIX_FMT_YUV444P12
- : AV_PIX_FMT_GBRP12;
- }
- } else if (bitdepth == 12) {
- ctx->decode_dct_block = dnxhd_decode_dct_block_12;
- ctx->pix_fmt = AV_PIX_FMT_YUV422P12;
- } else if (bitdepth == 10) {
- if (ctx->avctx->profile == FF_PROFILE_DNXHR_HQX)
- ctx->decode_dct_block = dnxhd_decode_dct_block_10_444;
- else
- ctx->decode_dct_block = dnxhd_decode_dct_block_10;
- ctx->pix_fmt = AV_PIX_FMT_YUV422P10;
- } else {
- ctx->decode_dct_block = dnxhd_decode_dct_block_8;
- ctx->pix_fmt = AV_PIX_FMT_YUV422P;
- }
-
- ctx->avctx->bits_per_raw_sample = ctx->bit_depth = bitdepth;
- if (ctx->bit_depth != old_bit_depth) {
- ff_blockdsp_init(&ctx->bdsp);
- ff_idctdsp_init(&ctx->idsp, ctx->avctx);
- ff_permute_scantable(ctx->permutated_scantable, ff_zigzag_direct,
- ctx->idsp.idct_permutation);
- }
-
- // make sure profile size constraints are respected
- // DNx100 allows 1920->1440 and 1280->960 subsampling
- if (ctx->width != ctx->cid_table->width &&
- ctx->cid_table->width != DNXHD_VARIABLE) {
- av_reduce(&ctx->avctx->sample_aspect_ratio.num,
- &ctx->avctx->sample_aspect_ratio.den,
- ctx->width, ctx->cid_table->width, 255);
- ctx->width = ctx->cid_table->width;
- }
-
- if (buf_size < ctx->cid_table->coding_unit_size) {
- av_log(ctx->avctx, AV_LOG_ERROR, "incorrect frame size (%d < %u).\n",
- buf_size, ctx->cid_table->coding_unit_size);
- return AVERROR_INVALIDDATA;
- }
-
- ctx->mb_width = (ctx->width + 15)>> 4;
- ctx->mb_height = AV_RB16(buf + 0x16c);
-
- if ((ctx->height + 15) >> 4 == ctx->mb_height && frame->interlaced_frame)
- ctx->height <<= 1;
-
- av_log(ctx->avctx, AV_LOG_VERBOSE, "%dx%d, 4:%s %d bits, MBAFF=%d ACT=%d\n",
- ctx->width, ctx->height, ctx->is_444 ? "4:4" : "2:2",
- ctx->bit_depth, ctx->mbaff, ctx->act);
-
- // Newer format supports variable mb_scan_index sizes
- if (ctx->mb_height > 68 && ff_dnxhd_check_header_prefix_hr(header_prefix)) {
- ctx->data_offset = 0x170 + (ctx->mb_height << 2);
- } else {
- if (ctx->mb_height > 68) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "mb height too big: %d\n", ctx->mb_height);
- return AVERROR_INVALIDDATA;
- }
- ctx->data_offset = 0x280;
- }
- if ((ctx->mb_height << frame->interlaced_frame) > (ctx->height + 15) >> 4) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "mb height too big: %d\n", ctx->mb_height);
- return AVERROR_INVALIDDATA;
- }
-
- if (buf_size < ctx->data_offset) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "buffer too small (%d < %d).\n", buf_size, ctx->data_offset);
- return AVERROR_INVALIDDATA;
- }
-
- if (ctx->mb_height > FF_ARRAY_ELEMS(ctx->mb_scan_index)) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "mb_height too big (%d > %"SIZE_SPECIFIER").\n", ctx->mb_height, FF_ARRAY_ELEMS(ctx->mb_scan_index));
- return AVERROR_INVALIDDATA;
- }
-
- for (i = 0; i < ctx->mb_height; i++) {
- ctx->mb_scan_index[i] = AV_RB32(buf + 0x170 + (i << 2));
- ff_dlog(ctx->avctx, "mb scan index %d, pos %d: %"PRIu32"\n",
- i, 0x170 + (i << 2), ctx->mb_scan_index[i]);
- if (buf_size - ctx->data_offset < ctx->mb_scan_index[i]) {
- av_log(ctx->avctx, AV_LOG_ERROR,
- "invalid mb scan index (%"PRIu32" vs %u).\n",
- ctx->mb_scan_index[i], buf_size - ctx->data_offset);
- return AVERROR_INVALIDDATA;
- }
- }
-
- return 0;
-}
-
-static av_always_inline int dnxhd_decode_dct_block(const DNXHDContext *ctx,
- RowContext *row,
- int n,
- int index_bits,
- int level_bias,
- int level_shift,
- int dc_shift)
-{
- int i, j, index1, index2, len, flags;
- int level, component, sign;
- const int *scale;
- const uint8_t *weight_matrix;
- const uint8_t *ac_info = ctx->cid_table->ac_info;
- int16_t *block = row->blocks[n];
- const int eob_index = ctx->cid_table->eob_index;
- int ret = 0;
- OPEN_READER(bs, &row->gb);
-
- ctx->bdsp.clear_block(block);
-
- if (!ctx->is_444) {
- if (n & 2) {
- component = 1 + (n & 1);
- scale = row->chroma_scale;
- weight_matrix = ctx->cid_table->chroma_weight;
- } else {
- component = 0;
- scale = row->luma_scale;
- weight_matrix = ctx->cid_table->luma_weight;
- }
- } else {
- component = (n >> 1) % 3;
- if (component) {
- scale = row->chroma_scale;
- weight_matrix = ctx->cid_table->chroma_weight;
- } else {
- scale = row->luma_scale;
- weight_matrix = ctx->cid_table->luma_weight;
- }
- }
-
- UPDATE_CACHE(bs, &row->gb);
- GET_VLC(len, bs, &row->gb, ctx->dc_vlc.table, DNXHD_DC_VLC_BITS, 1);
- if (len < 0) {
- ret = len;
- goto error;
- }
- if (len) {
- level = GET_CACHE(bs, &row->gb);
- LAST_SKIP_BITS(bs, &row->gb, len);
- sign = ~level >> 31;
- level = (NEG_USR32(sign ^ level, len) ^ sign) - sign;
- row->last_dc[component] += level * (1 << dc_shift);
- }
- block[0] = row->last_dc[component];
-
- i = 0;
-
- UPDATE_CACHE(bs, &row->gb);
- GET_VLC(index1, bs, &row->gb, ctx->ac_vlc.table,
- DNXHD_VLC_BITS, 2);
-
- while (index1 != eob_index) {
- level = ac_info[2*index1+0];
- flags = ac_info[2*index1+1];
-
- sign = SHOW_SBITS(bs, &row->gb, 1);
- SKIP_BITS(bs, &row->gb, 1);
-
- if (flags & 1) {
- level += SHOW_UBITS(bs, &row->gb, index_bits) << 7;
- SKIP_BITS(bs, &row->gb, index_bits);
- }
-
- if (flags & 2) {
- UPDATE_CACHE(bs, &row->gb);
- GET_VLC(index2, bs, &row->gb, ctx->run_vlc.table,
- DNXHD_VLC_BITS, 2);
- i += ctx->cid_table->run[index2];
- }
-
- if (++i > 63) {
- av_log(ctx->avctx, AV_LOG_ERROR, "ac tex damaged %d, %d\n", n, i);
- ret = -1;
- break;
- }
-
- j = ctx->permutated_scantable[i];
- level *= scale[i];
- level += scale[i] >> 1;
- if (level_bias < 32 || weight_matrix[i] != level_bias)
- level += level_bias; // 1<<(level_shift-1)
- level >>= level_shift;
-
- block[j] = (level ^ sign) - sign;
-
- UPDATE_CACHE(bs, &row->gb);
- GET_VLC(index1, bs, &row->gb, ctx->ac_vlc.table,
- DNXHD_VLC_BITS, 2);
- }
-error:
- CLOSE_READER(bs, &row->gb);
- return ret;
-}
-
-static int dnxhd_decode_dct_block_8(const DNXHDContext *ctx,
- RowContext *row, int n)
-{
- return dnxhd_decode_dct_block(ctx, row, n, 4, 32, 6, 0);
-}
-
-static int dnxhd_decode_dct_block_10(const DNXHDContext *ctx,
- RowContext *row, int n)
-{
- return dnxhd_decode_dct_block(ctx, row, n, 6, 8, 4, 0);
-}
-
-static int dnxhd_decode_dct_block_10_444(const DNXHDContext *ctx,
- RowContext *row, int n)
-{
- return dnxhd_decode_dct_block(ctx, row, n, 6, 32, 6, 0);
-}
-
-static int dnxhd_decode_dct_block_12(const DNXHDContext *ctx,
- RowContext *row, int n)
-{
- return dnxhd_decode_dct_block(ctx, row, n, 6, 8, 4, 2);
-}
-
-static int dnxhd_decode_dct_block_12_444(const DNXHDContext *ctx,
- RowContext *row, int n)
-{
- return dnxhd_decode_dct_block(ctx, row, n, 6, 32, 4, 2);
-}
-
-static int dnxhd_decode_macroblock(const DNXHDContext *ctx, RowContext *row,
- AVFrame *frame, int x, int y)
-{
- int shift1 = ctx->bit_depth >= 10;
- int dct_linesize_luma = frame->linesize[0];
- int dct_linesize_chroma = frame->linesize[1];
- uint8_t *dest_y, *dest_u, *dest_v;
- int dct_y_offset, dct_x_offset;
- int qscale, i, act;
- int interlaced_mb = 0;
-
- if (ctx->mbaff) {
- interlaced_mb = get_bits1(&row->gb);
- qscale = get_bits(&row->gb, 10);
- } else {
- qscale = get_bits(&row->gb, 11);
- }
- act = get_bits1(&row->gb);
- if (act) {
- if (!ctx->act) {
- static int act_warned;
- if (!act_warned) {
- act_warned = 1;
- av_log(ctx->avctx, AV_LOG_ERROR,
- "ACT flag set, in violation of frame header.\n");
- }
- } else if (row->format == -1) {
- row->format = act;
- } else if (row->format != act) {
- row->format = 2; // Variable
- }
- }
-
- if (qscale != row->last_qscale) {
- for (i = 0; i < 64; i++) {
- row->luma_scale[i] = qscale * ctx->cid_table->luma_weight[i];
- row->chroma_scale[i] = qscale * ctx->cid_table->chroma_weight[i];
- }
- row->last_qscale = qscale;
- }
-
- for (i = 0; i < 8 + 4 * ctx->is_444; i++) {
- if (ctx->decode_dct_block(ctx, row, i) < 0)
- return AVERROR_INVALIDDATA;
- }
-
- if (frame->interlaced_frame) {
- dct_linesize_luma <<= 1;
- dct_linesize_chroma <<= 1;
- }
-
- dest_y = frame->data[0] + ((y * dct_linesize_luma) << 4) + (x << (4 + shift1));
- dest_u = frame->data[1] + ((y * dct_linesize_chroma) << 4) + (x << (3 + shift1 + ctx->is_444));
- dest_v = frame->data[2] + ((y * dct_linesize_chroma) << 4) + (x << (3 + shift1 + ctx->is_444));
-
- if (frame->interlaced_frame && ctx->cur_field) {
- dest_y += frame->linesize[0];
- dest_u += frame->linesize[1];
- dest_v += frame->linesize[2];
- }
- if (interlaced_mb) {
- dct_linesize_luma <<= 1;
- dct_linesize_chroma <<= 1;
- }
-
- dct_y_offset = interlaced_mb ? frame->linesize[0] : (dct_linesize_luma << 3);
- dct_x_offset = 8 << shift1;
- if (!ctx->is_444) {
- ctx->idsp.idct_put(dest_y, dct_linesize_luma, row->blocks[0]);
- ctx->idsp.idct_put(dest_y + dct_x_offset, dct_linesize_luma, row->blocks[1]);
- ctx->idsp.idct_put(dest_y + dct_y_offset, dct_linesize_luma, row->blocks[4]);
- ctx->idsp.idct_put(dest_y + dct_y_offset + dct_x_offset, dct_linesize_luma, row->blocks[5]);
-
- if (!(ctx->avctx->flags & AV_CODEC_FLAG_GRAY)) {
- dct_y_offset = interlaced_mb ? frame->linesize[1] : (dct_linesize_chroma << 3);
- ctx->idsp.idct_put(dest_u, dct_linesize_chroma, row->blocks[2]);
- ctx->idsp.idct_put(dest_v, dct_linesize_chroma, row->blocks[3]);
- ctx->idsp.idct_put(dest_u + dct_y_offset, dct_linesize_chroma, row->blocks[6]);
- ctx->idsp.idct_put(dest_v + dct_y_offset, dct_linesize_chroma, row->blocks[7]);
- }
- } else {
- ctx->idsp.idct_put(dest_y, dct_linesize_luma, row->blocks[0]);
- ctx->idsp.idct_put(dest_y + dct_x_offset, dct_linesize_luma, row->blocks[1]);
- ctx->idsp.idct_put(dest_y + dct_y_offset, dct_linesize_luma, row->blocks[6]);
- ctx->idsp.idct_put(dest_y + dct_y_offset + dct_x_offset, dct_linesize_luma, row->blocks[7]);
-
- if (!(ctx->avctx->flags & AV_CODEC_FLAG_GRAY)) {
- dct_y_offset = interlaced_mb ? frame->linesize[1] : (dct_linesize_chroma << 3);
- ctx->idsp.idct_put(dest_u, dct_linesize_chroma, row->blocks[2]);
- ctx->idsp.idct_put(dest_u + dct_x_offset, dct_linesize_chroma, row->blocks[3]);
- ctx->idsp.idct_put(dest_u + dct_y_offset, dct_linesize_chroma, row->blocks[8]);
- ctx->idsp.idct_put(dest_u + dct_y_offset + dct_x_offset, dct_linesize_chroma, row->blocks[9]);
- ctx->idsp.idct_put(dest_v, dct_linesize_chroma, row->blocks[4]);
- ctx->idsp.idct_put(dest_v + dct_x_offset, dct_linesize_chroma, row->blocks[5]);
- ctx->idsp.idct_put(dest_v + dct_y_offset, dct_linesize_chroma, row->blocks[10]);
- ctx->idsp.idct_put(dest_v + dct_y_offset + dct_x_offset, dct_linesize_chroma, row->blocks[11]);
- }
- }
-
- return 0;
-}
-
-static int dnxhd_decode_row(AVCodecContext *avctx, void *data,
- int rownb, int threadnb)
-{
- const DNXHDContext *ctx = avctx->priv_data;
- uint32_t offset = ctx->mb_scan_index[rownb];
- RowContext *row = ctx->rows + threadnb;
- int x, ret;
-
- row->last_dc[0] =
- row->last_dc[1] =
- row->last_dc[2] = 1 << (ctx->bit_depth + 2); // for levels +2^(bitdepth-1)
- ret = init_get_bits8(&row->gb, ctx->buf + offset, ctx->buf_size - offset);
- if (ret < 0) {
- row->errors++;
- return ret;
- }
- for (x = 0; x < ctx->mb_width; x++) {
- int ret = dnxhd_decode_macroblock(ctx, row, data, x, rownb);
- if (ret < 0) {
- row->errors++;
- return ret;
- }
- }
-
- return 0;
-}
-
-static int dnxhd_decode_frame(AVCodecContext *avctx, AVFrame *picture,
- int *got_frame, AVPacket *avpkt)
-{
- const uint8_t *buf = avpkt->data;
- int buf_size = avpkt->size;
- DNXHDContext *ctx = avctx->priv_data;
- int first_field = 1;
- int ret, i;
-
- ff_dlog(avctx, "frame size %d\n", buf_size);
-
- for (i = 0; i < avctx->thread_count; i++)
- ctx->rows[i].format = -1;
-
-decode_coding_unit:
- if ((ret = dnxhd_decode_header(ctx, picture, buf, buf_size, first_field)) < 0)
- return ret;
-
- if ((avctx->width || avctx->height) &&
- (ctx->width != avctx->width || ctx->height != avctx->height)) {
- av_log(avctx, AV_LOG_WARNING, "frame size changed: %dx%d -> %ux%u\n",
- avctx->width, avctx->height, ctx->width, ctx->height);
- first_field = 1;
- }
- if (avctx->pix_fmt != AV_PIX_FMT_NONE && avctx->pix_fmt != ctx->pix_fmt) {
- av_log(avctx, AV_LOG_WARNING, "pix_fmt changed: %s -> %s\n",
- av_get_pix_fmt_name(avctx->pix_fmt), av_get_pix_fmt_name(ctx->pix_fmt));
- first_field = 1;
- }
-
- avctx->pix_fmt = ctx->pix_fmt;
- ret = ff_set_dimensions(avctx, ctx->width, ctx->height);
- if (ret < 0)
- return ret;
-
- if (first_field) {
- if ((ret = ff_thread_get_buffer(avctx, picture, 0)) < 0)
- return ret;
- picture->pict_type = AV_PICTURE_TYPE_I;
- picture->key_frame = 1;
- }
-
- ctx->buf_size = buf_size - ctx->data_offset;
- ctx->buf = buf + ctx->data_offset;
- avctx->execute2(avctx, dnxhd_decode_row, picture, NULL, ctx->mb_height);
-
- if (first_field && picture->interlaced_frame) {
- buf += ctx->cid_table->coding_unit_size;
- buf_size -= ctx->cid_table->coding_unit_size;
- first_field = 0;
- goto decode_coding_unit;
- }
-
- ret = 0;
- for (i = 0; i < avctx->thread_count; i++) {
- ret += ctx->rows[i].errors;
- ctx->rows[i].errors = 0;
- }
-
- if (ctx->act) {
- static int act_warned;
- int format = ctx->rows[0].format;
- for (i = 1; i < avctx->thread_count; i++) {
- if (ctx->rows[i].format != format &&
- ctx->rows[i].format != -1 /* not run */) {
- format = 2;
- break;
- }
- }
- switch (format) {
- case -1:
- case 2:
- if (!act_warned) {
- act_warned = 1;
- av_log(ctx->avctx, AV_LOG_ERROR,
- "Unsupported: variable ACT flag.\n");
- }
- break;
- case 0:
- ctx->pix_fmt = ctx->bit_depth==10
- ? AV_PIX_FMT_GBRP10 : AV_PIX_FMT_GBRP12;
- break;
- case 1:
- ctx->pix_fmt = ctx->bit_depth==10
- ? AV_PIX_FMT_YUV444P10 : AV_PIX_FMT_YUV444P12;
- break;
- }
- }
- avctx->pix_fmt = ctx->pix_fmt;
- if (ret) {
- av_log(ctx->avctx, AV_LOG_ERROR, "%d lines with errors\n", ret);
- return AVERROR_INVALIDDATA;
- }
-
- *got_frame = 1;
- return avpkt->size;
-}
-
-static av_cold int dnxhd_decode_close(AVCodecContext *avctx)
-{
- DNXHDContext *ctx = avctx->priv_data;
-
- ff_free_vlc(&ctx->ac_vlc);
- ff_free_vlc(&ctx->dc_vlc);
- ff_free_vlc(&ctx->run_vlc);
-
- av_freep(&ctx->rows);
-
- return 0;
-}
-
-const FFCodec ff_dnxhd_decoder = {
- .p.name = "dnxhd",
- CODEC_LONG_NAME("VC3/DNxHD"),
- .p.type = AVMEDIA_TYPE_VIDEO,
- .p.id = AV_CODEC_ID_DNXHD,
- .priv_data_size = sizeof(DNXHDContext),
- .init = dnxhd_decode_init,
- .close = dnxhd_decode_close,
- FF_CODEC_DECODE_CB(dnxhd_decode_frame),
- .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS |
- AV_CODEC_CAP_SLICE_THREADS,
- .p.profiles = NULL_IF_CONFIG_SMALL(ff_dnxhd_profiles),
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.h
deleted file mode 100644
index bb495af982dd0caa509ee6ecaff1e1253b568919..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libaom.h
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-/**
- * @file
- * AOM common functions
- */
-
-#ifndef AVCODEC_LIBAOM_H
-#define AVCODEC_LIBAOM_H
-
-#include
-
-#include "libavutil/frame.h"
-
-void ff_aom_image_copy_16_to_8(AVFrame *pic, struct aom_image *img);
-
-#endif /* AVCODEC_LIBAOM_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Indonesia APK OBB A Review of the Most Popular Simulation Game.md b/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Indonesia APK OBB A Review of the Most Popular Simulation Game.md
deleted file mode 100644
index 9b627583179ff8ba6d818b6ed7b4fc5433cf8db7..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Bus Simulator Indonesia APK OBB A Review of the Most Popular Simulation Game.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-
Bus Simulator Indonesia APK Plus OBB: A Fun and Realistic Game for Android Users
-
If you are a fan of simulation games, especially bus driving games, then you should not miss Bus Simulator Indonesia. This game is one of the most popular and realistic bus simulator games for Android devices. In this game, you can experience what it is like to be a bus driver in Indonesia, with its unique culture, scenery, and challenges. You can also customize your own buses, explore various cities and landmarks, compete with other players online, and even create your own mods and content. In this article, we will tell you everything you need to know about Bus Simulator Indonesia APK plus OBB, including its features, how to download and install it, and some frequently asked questions.
-
What is Bus Simulator Indonesia?
-
Bus Simulator Indonesia, or BUSSID for short, is a game developed by Maleo, an Indonesian game studio. It was released in 2017 and has since gained millions of downloads and positive reviews from players around the world. The game is designed to give you a realistic and fun bus driving experience in Indonesia, with its diverse geography, culture, and traffic. You can choose from different types of buses, such as city buses, intercity buses, tourist buses, school buses, and more. You can also customize your buses with various livery, stickers, horns, lights, accessories, and even change the interior design. You can drive your buses in different cities and regions in Indonesia, such as Jakarta, Bali, Sumatra, Java, Sulawesi, and more. You can also see famous landmarks and attractions along the way, such as Monas, Borobudur Temple, Tanah Lot Temple, Mount Bromo, Lake Toba, and more.
One of the best features of Bus Simulator Indonesia is that you can customize your own buses with various options. You can change the color, design, logo, name, number plate, and more of your buses. You can also add stickers, horns, lights, accessories, mirrors, wipers, doors, windows, seats, steering wheel, dashboard, and more to your buses. You can even create your own livery using the built-in editor or download livery from other players online. You can also save your custom buses and use them anytime you want.
-
Authentic Indonesian cities and landmarks
-
Another great feature of Bus Simulator Indonesia is that you can explore different cities and regions in Indonesia with realistic graphics and details. You can see the unique architecture, culture, scenery, and landmarks of each city and region. You can also interact with various elements in the environment, such as traffic lights, toll gates, gas stations, rest areas, terminals, passengers, pedestrians, animals, vehicles, and more. You can also enjoy the day-night cycle and dynamic weather system that affect your driving conditions.
-
Realistic traffic and weather conditions
-
Bus Simulator Indonesia also offers realistic traffic and weather conditions that make your driving experience more challenging and fun. You have to follow the traffic rules and avoid accidents, traffic jams, police, and other obstacles on the road. You also have to deal with different weather conditions, such as rain, fog, snow, wind, and more. You can adjust the difficulty level and traffic density according to your preference and skill level. You can also use different camera angles and views to see your bus and the road better.
-
Online multiplayer mode and leaderboards
-
If you want to play with other players online, you can join the online multiplayer mode of Bus Simulator Indonesia. You can create or join a room with up to 10 players and drive together in the same map. You can chat with other players, honk at each other, race with each other, or cooperate with each other. You can also see the leaderboards and rankings of the best players in the game. You can compete with other players in terms of distance, speed, passengers, income, and more.
-
bus simulator indonesia apk plus obb download
-bus simulator indonesia apk plus obb latest version
-bus simulator indonesia apk plus obb free download
-bus simulator indonesia apk plus obb mod
-bus simulator indonesia apk plus obb offline
-bus simulator indonesia apk plus obb 2023
-bus simulator indonesia apk plus obb unlimited money
-bus simulator indonesia apk plus obb android
-bus simulator indonesia apk plus obb update
-bus simulator indonesia apk plus obb hack
-bus simulator indonesia apk plus obb full version
-bus simulator indonesia apk plus obb for pc
-bus simulator indonesia apk plus obb new update
-bus simulator indonesia apk plus obb file
-bus simulator indonesia apk plus obb install
-bus simulator indonesia apk plus obb game
-bus simulator indonesia apk plus obb online
-bus simulator indonesia apk plus obb 3.7.1
-bus simulator indonesia apk plus obb 3.7
-bus simulator indonesia apk plus obb 3.6.1
-bus simulator indonesia apk plus obb xapk
-bus simulator indonesia apk plus obb data
-bus simulator indonesia apk plus obb revdl
-bus simulator indonesia apk plus obb rexdl
-bus simulator indonesia apk plus obb apkpure
-bus simulator indonesia apk plus obb android 1
-bus simulator indonesia apk plus obb android oyun club
-bus simulator indonesia apk plus obb an1.com
-bus simulator indonesia apk plus obb andropalace
-bus simulator indonesia apk plus obb apkmody
-bus simulator indonesia apk plus obb appvn
-bus simulator indonesia apk plus obb aptoide
-bus simulator indonesia apk plus obb apkmirror
-bus simulator indonesia apk plus obb apkombo.com[^1^]
-bus simulator indonesia apk plus obb appcake.net[^2^]
-bus simulator indonesia apk plus obb blackmod.net
-bus simulator indonesia apk plus obb by maleo
-bus simulator indonesia apk plus obb bussimulator.id
-bus simulator indonesia apk plus obb cheat codes
-bus simulator indonesia apk plus obb compressed
-
Mod support and community content
-
Bus Simulator Indonesia also supports modding and community content. You can create your own mods using the mod tools provided by the developers. You can also download mods from other players online. You can find various mods that add new buses, livery, maps, routes, sounds, features, and more to the game. You can also share your own mods and content with other players online. You can also join the official Discord server and Facebook group of Bus Simulator Indonesia to interact with other players, get updates, news, tips, tricks, and more.
-
How to Download and Install Bus Simulator Indonesia APK Plus OBB?
-
If you want to download and install Bus Simulator Indonesia APK plus OBB on your Android device, you need to follow some simple steps. Here are the requirements and steps for downloading and installing Bus Simulator Indonesia APK plus OBB.
-
Requirements for Bus Simulator Indonesia APK Plus OBB
-
Android device with version 4.2 or higher
-
You need an Android device with version 4.2 or higher to run Bus Simulator Indonesia APK plus OBB. The game is compatible with most Android devices, such as smartphones, tablets, and emulators. However, some devices may not support the game due to hardware limitations or compatibility issues.
-
At least 1 GB of free storage space
-
You need at least 1 GB of free storage space on your device to download and install Bus Simulator Indonesia APK plus OBB. The game has a size of about 300 MB for the APK file and about 700 MB for the OBB file. You need to make sure that you have enough space on your device before downloading and installing the game.
-
Stable internet connection
-
You need a stable internet connection to download and install Bus Simulator Indonesia APK plus OBB. You also need an internet connection to play the online multiplayer mode of the game. You can use Wi-Fi or mobile data to connect to the internet. However, you may incur additional charges if you use mobile data.
-
Steps to Download and Install Bus Simulator Indonesia APK Plus OBB
-
Download the APK and OBB files from a trusted source
-
The first step is to download the APK and OBB files of Bus Simulator Indonesia from a trusted source. You can find many websites that offer the download links for Bus Simulator Indonesia APK plus OBB. However, you need to be careful as some websites may contain viruses, malware, or fake files that can harm your device or steal your data. Therefore, you should only download from reputable sources that have positive reviews and ratings from other users. One of the trusted sources that we recommend is [APKPure], which is a popular website that provides safe and verified APK files for various Android games and apps.
-
Enable unknown sources in your device settings
-
The next step is to enable unknown sources in your device settings. This is necessary because Bus Simulator Indonesia APK plus OBB is not available on the Google Play Store, which means that it is not an official app from Google. Therefore, you need to allow your device to install apps from unknown sources that are not from the Google Play Store. To do this, you need to go to your device settings > security > unknown sources > enable or toggle on.
-
Install the APK file and do not open it yet
-
The third step is to install the APK file of Bus Simulator Indonesia on your device. To do this, you need to locate the downloaded APK file on your device storage using a file manager app or a browser app. Then, you need to tap on the APK file and follow the instructions on the screen to install it on your device. After installing it, do not open it yet as you still need to copy the O BB file to the correct folder on your device.
-
Extract the OBB file and copy it to the Android/obb folder in your device storage
-
The fourth step is to extract the OBB file of Bus Simulator Indonesia and copy it to the Android/obb folder in your device storage. To do this, you need to locate the downloaded OBB file on your device storage using a file manager app or a browser app. Then, you need to tap on the OBB file and extract it using a zip extractor app or a built-in extractor app on your device. After extracting it, you will see a folder named com.maleo.bussimulatorid. You need to copy this folder and paste it to the Android/obb folder in your device storage. If you do not have an obb folder, you can create one using a file manager app.
-
Launch the game and enjoy
-
The final step is to launch the game and enjoy. To do this, you need to go to your app drawer or home screen and tap on the Bus Simulator Indonesia icon. Then, you need to wait for the game to load and verify the files. After that, you can start playing the game and have fun. You can also adjust the settings, choose your bus, select your map, join online rooms, and more.
-
Conclusion
-
Bus Simulator Indonesia APK plus OBB is a fun and realistic game for Android users who love simulation games, especially bus driving games. You can customize your own buses, explore different cities and landmarks in Indonesia, experience realistic traffic and weather conditions, play with other players online, and create your own mods and content. You can download and install Bus Simulator Indonesia APK plus OBB on your device by following the simple steps we have provided in this article. We hope you enjoy playing Bus Simulator Indonesia APK plus OBB and have a great time.
-
FAQs
-
Here are some frequently asked questions about Bus Simulator Indonesia APK plus OBB:
-
-
Is Bus Simulator Indonesia APK plus OBB free?
-
Yes, Bus Simulator Indonesia APK plus OBB is free to download and play. However, some features and items may require in-app purchases or watching ads.
-
Is Bus Simulator Indonesia APK plus OBB safe?
-
Yes, Bus Simulator Indonesia APK plus OBB is safe to download and install as long as you use a trusted source like [APKPure]. However, you should always scan the files before installing them on your device and avoid downloading from unknown or suspicious websites.
-
Is Bus Simulator Indonesia APK plus OBB offline?
-
No, Bus Simulator Indonesia APK plus OBB requires an internet connection to play. You need an internet connection to download and install the game, verify the files, access online features, update the game, and more.
-
How to update Bus Simulator Indonesia APK plus OBB?
-
To update Bus Simulator Indonesia APK plus OBB, you need to download and install the latest version of the APK and OBB files from a trusted source like [APKPure]. You also need to delete or overwrite the old files on your device storage.
-
How to contact the developers of Bus Simulator Indonesia?
-
To contact the developers of Bus Simulator Indonesia, you can visit their official website [Maleo], their official Facebook page [Bus Simulator Indonesia], their official Instagram account [@bussimulator.id], or their official Discord server [BUSSID Discord]. You can also email them at support@maleo.id or bussimulatorid@gmail.com.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Classic Sudoku - The Ultimate Sudoku App by Studio Goya LLC for Android Users.md b/spaces/congsaPfin/Manga-OCR/logs/Classic Sudoku - The Ultimate Sudoku App by Studio Goya LLC for Android Users.md
deleted file mode 100644
index 032e7790f92f4b44d1d17975027c9c9f9221b9dc..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Classic Sudoku - The Ultimate Sudoku App by Studio Goya LLC for Android Users.md
+++ /dev/null
@@ -1,128 +0,0 @@
-
-
Classic Sudoku Studio Goya APK: A Review
-
If you are a fan of sudoku puzzles, you might have heard of Classic Sudoku Studio Goya APK, a game developed by Studio Goya LLC and presented by Cracking The Cryptic, the most popular sudoku channel on YouTube. This game offers a collection of handcrafted sudoku puzzles that will challenge your logic and reasoning skills, as well as provide you with hours of fun and entertainment. In this article, we will review the game and tell you why you should download it on your Android device.
-
What is Classic Sudoku Studio Goya APK?
-
A brief introduction to the game and its features
-
Classic Sudoku Studio Goya APK is a board game that consists of 40 beautiful puzzles on launch, with 5 new levels added every month for the first year (a total of 100 levels). The puzzles are designed by Simon Anthony and Mark Goodliffe, the hosts of Cracking The Cryptic, who have both represented the UK many times in the World Sudoku Championship. The puzzles cover a wide range of difficulty, from easy to expert, and require an incredible range of techniques to solve them efficiently. Each puzzle has been carefully tested by a human being to ensure that it is solvable, enjoyable, and satisfying. The game also features hints written by Mark and Simon, which can help you if you get stuck or want to learn new strategies.
How to download and install the game on your Android device
-
To download and install Classic Sudoku Studio Goya APK on your Android device, you need to follow these simple steps:
-
-
Go to this link or this link to access the official download page of the game.
-
Click on the "Download" or "Buy" button to purchase the game for $4.99 (with in-app payments).
-
Wait for the APK file to be downloaded on your device.
-
Open the APK file and follow the instructions to install the game.
-
Enjoy playing Classic Sudoku Studio Goya APK!
-
-
Why should you play Classic Sudoku Studio Goya APK?
-
The benefits of playing sudoku puzzles for your brain and mental health
-
Sudoku puzzles are not only fun and addictive, but also beneficial for your brain and mental health. According to various studies, playing sudoku puzzles can help you improve your memory, concentration, logic, problem-solving, creativity, and mood. Sudoku puzzles can also prevent cognitive decline, reduce stress, enhance self-esteem, and increase happiness. By playing sudoku puzzles regularly, you can keep your mind sharp and healthy.
-
classic sudoku game by studio goya
-studio goya classic sudoku puzzles
-cracking the cryptic classic sudoku app
-download classic sudoku apk from studio goya
-classic sudoku android game by studio goya llc
-studio goya classic sudoku review
-how to play classic sudoku by studio goya
-classic sudoku app with hints by studio goya
-studio goya classic sudoku free download
-classic sudoku board game from studio goya
-studio goya classic sudoku update
-classic sudoku apk for android by studio goya
-studio goya classic sudoku rating
-classic sudoku by cracking the cryptic and studio goya
-studio goya classic sudoku features
-buy classic sudoku app by studio goya
-studio goya classic sudoku online
-classic sudoku apk latest version by studio goya
-studio goya classic sudoku tips and tricks
-classic sudoku appbrain by studio goya
-studio goya classic sudoku support
-classic sudoku apkcombo by studio goya
-studio goya classic sudoku cheats
-classic sudoku google play by studio goya
-studio goya classic sudoku mod apk
-classic sudoku apk size by studio goya
-studio goya classic sudoku for pc
-classic sudoku apk file by studio goya
-studio goya classic sudoku promo code
-classic sudoku app store by studio goya
-studio goya classic sudoku for ios
-classic sudoku apk mirror by studio goya
-studio goya classic sudoku refund policy
-classic sudoku apkpure by studio goya
-studio goya classic sudoku feedback
-classic sudoku apk downloader by studio goya
-studio goya classic sudoku forum
-classic sudoku apkmody by studio goya
-studio goya classic sudoku blog
-classic sudoku apk installer by studio goya
-studio goya classic sudoku newsletter
-classic sudoku apknite by studio goya
-studio goya classic sudoku youtube channel
-classic sudoku apkmonk by studio goya
-studio goya classic sudoku instagram account
-classic sudoku apksfull by studio goya
-studio goya classic sudoku twitter handle
-classic sudoku apktada by studio goya
-
The unique and challenging puzzles curated by Cracking The Cryptic
-
Another reason why you should play Classic Sudoku Studio Goya APK is that it offers some of the most unique and challenging puzzles that you will ever encounter. These puzzles are not randomly generated by a computer, but curated by Cracking The Cryptic, who have a reputation for finding and creating amazing sudoku puzzles. These puzzles are not only difficult, but also elegant, clever, and surprising. They and Simon, which can help you if you get stuck or want to learn new strategies. The game also has a user-friendly and intuitive interface and design that makes playing the game easy and enjoyable. This is a game that will challenge your mind and delight your senses." - Critic review on AndroidAuthority.com
-
A table comparing the game with other popular sudoku apps
-
To give you an idea of how Classic Sudoku Studio Goya APK compares with other popular sudoku apps, we have created a table that shows some of the key features and differences between them. Here is the table:
-
-
-
Feature
-
Classic Sudoku Studio Goya APK
-
Sudoku.com
-
Sudoku by Brainium Studios
-
Sudoku by Easybrain
-
-
-
Number of puzzles
-
40 on launch, 5 new levels every month for the first year (total of 100 levels)
-
Over 10,000 puzzles
-
Unlimited puzzles
-
Over 5,000 puzzles
-
-
-
Puzzle quality and difficulty
-
Handcrafted puzzles by Cracking The Cryptic, ranging from easy to expert, requiring various techniques and logic to solve them efficiently.
-
Randomly generated puzzles, ranging from easy to hard, requiring basic techniques and logic to solve them.
-
Randomly generated puzzles, ranging from easy to expert, requiring basic to advanced techniques and logic to solve them.
-
Randomly generated puzzles, ranging from easy to hard, requiring basic techniques and logic to solve them.
-
-
-
Hints and solutions
-
Hints and solutions written by Mark and Simon, explaining the logic and reasoning behind each move, and teaching new strategies and tricks.
-
Hints that show one possible number for a cell, without explanation.
-
Hints that show one possible number for a cell, without explanation.
-
Hints that show one possible number for a cell, without explanation.
-
-
-
Interface and design
-
User-friendly and intuitive interface and design, with a simple and elegant layout, a dark mode option, a timer, a pencil mode, a highlight mode, a undo/redo button, and a pause button.
-
User-friendly and intuitive interface and design, with a colorful and bright layout, a timer, a pencil mode, a highlight mode, a undo/redo button, and a pause button.
-
User-friendly and intuitive interface and design, with a minimalist and sleek layout, a timer, a pencil mode, a highlight mode, a undo/redo button, and a pause button.
-
User-friendly and intuitive interface and design, with a modern and stylish layout, a timer, a pencil mode, a highlight mode, a undo/redo button, and a pause button.
-
-
-
Sound and music
-
Sound and music option that can be adjusted according to preference.
-
Sound option that can be turned on or off.
-
Sound option that can be turned on or off.
-
Sound option that can be turned on or off.
-
-
Price
$4.99 (with in-app payments)
Free (with ads)
Free (with ads)
Free (with ads)
-
Conclusion
-
A recap of the main points and a call to action for the readers
-
In conclusion, Classic Sudoku Studio Goya APK is one of the best sudoku games that you can play on your Android device. It offers you:
-
A collection of handcrafted sudoku puzzles that will challenge your logic and reasoning skills.
A learning experience with hints and solutions written by Mark and Simon from Cracking The Cryptic.
A user-friendly and intuitive interface and design that makes playing the game easy and enjoyable.
-
If you are looking for a sudoku game that will test your mind and delight your senses, you should download Classic Sudoku Studio Goya APK today. You will not regret it!
-
FAQs
-
Here are some of the frequently asked questions about Classic Sudoku Studio Goya APK:
-
What is Cracking The Cryptic? Cracking The Cryptic is the most popular sudoku channel on YouTube. It is hosted by Simon Anthony and Mark Goodliffe, who are both world-class sudoku solvers. They upload videos of themselves solving various sudoku puzzles from different sources. They also create their own puzzles and share them with Sudoku, Thermo Sudoku, Chess Sudoku, and Miracle Sudoku. You can also send them your feedback, suggestions, and puzzles via email or social media.
-
What are some of the other sudoku games that I can play on my Android device? There are many other sudoku games that you can play on your Android device, such as Sudoku by Volcano Entertainment, Sudoku by Fassor, Sudoku by Easybrain, Sudoku by Brainium Studios, Sudoku by Andoku Games, Sudoku by Genina.com, Sudoku by Beetles Games Studio, and Sudoku by Pink Pointer. You can find these games on the Google Play Store or other app stores.
-
How can I improve my sudoku skills and learn new techniques? You can improve your sudoku skills and learn new techniques by playing more sudoku puzzles of different levels and types, watching and reading tutorials and guides from experts and enthusiasts, practicing and applying the techniques and strategies that you learn, and challenging yourself with harder and more complex puzzles. You can also join online communities and forums where you can interact with other sudoku players, ask questions, share tips, and solve puzzles together.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Cmo instalar iGO primo en tu camin con mapas actualizados gratis.md b/spaces/congsaPfin/Manga-OCR/logs/Cmo instalar iGO primo en tu camin con mapas actualizados gratis.md
deleted file mode 100644
index 4dab5f9da9a6e8f33b376a21fd0e103f330ba78a..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Cmo instalar iGO primo en tu camin con mapas actualizados gratis.md
+++ /dev/null
@@ -1,168 +0,0 @@
-
-
-
-
Descargar iGO Primo APK gratis: ¿Qué es y cómo funciona?
-
-
-
Si estás buscando un software de navegación que te ofrezca mapas offline, guía de voz inteligente, información de tráfico en tiempo real y más características, quizás te interese descargar iGO Primo APK gratis para tu dispositivo Android.
iGO Primo es un software de navegación que ha sido desarrollado por la empresa NNG y que se utiliza en varios sistemas multimedia de vehículos y dispositivos móviles. Con iGO Primo puedes navegar sin necesidad de conexión a internet, ya que los mapas se almacenan en la memoria del dispositivo o en una tarjeta SD.
-
Además, iGO Primo te ofrece una interfaz de usuario personalizable y fácil de usar, con diferentes temas y idiomas disponibles. También puedes buscar destinos y lugares de interés por varias opciones y recibir indicaciones de voz que te anuncian los nombres de las calles y las
salidas y entradas de las autopistas. Asimismo, iGO Primo te proporciona información de tráfico en tiempo real y actualiza las rutas según las condiciones del tráfico.
-
Características de iGO Primo
-
iGO Primo es un software de navegación que cuenta con varias características que lo hacen destacar entre otros programas similares. A continuación, te presentamos algunas de las más importantes:
-
Navegación offline
-
Una de las ventajas de iGO Primo es que te permite navegar sin conexión a internet, lo que te ahorra datos móviles y te evita problemas de cobertura. Para ello, solo tienes que descargar los mapas que necesites desde la aplicación o desde el sitio web oficial. Los mapas se guardan en la memoria interna del dispositivo o en una tarjeta SD externa.
-
descargar igo primo nextgen apk gratis
-descargar igo primo android apk gratis español
-descargar igo primo navigation apk gratis
-descargar igo primo israel apk gratis
-descargar igo primo europe apk gratis
-descargar igo primo truck apk gratis
-descargar igo primo full apk gratis
-descargar igo primo maps apk gratis
-descargar igo primo gps apk gratis
-descargar igo primo offline apk gratis
-descargar igo primo para android apk gratis
-descargar igo primo latest version apk gratis
-descargar igo primo mod apk gratis
-descargar igo primo premium apk gratis
-descargar igo primo cracked apk gratis
-descargar igo primo brasil apk gratis
-descargar igo primo carplay apk gratis
-descargar igo primo radar apk gratis
-descargar igo primo update apk gratis
-descargar igo primo mega apk gratis
-descargar igo primo mexico apk gratis
-descargar igo primo colombia apk gratis
-descargar igo primo argentina apk gratis
-descargar igo primo chile apk gratis
-descargar igo primo peru apk gratis
-descargar igo primo venezuela apk gratis
-descargar igo primo ecuador apk gratis
-descargar igo primo bolivia apk gratis
-descargar igo primo paraguay apk gratis
-descargar igo primo uruguay apk gratis
-descargar igo primo costa rica apk gratis
-descargar igo primo panama apk gratis
-descargar igo primo guatemala apk gratis
-descargar igo primo el salvador apk gratis
-descargar igo primo honduras apk gratis
-descargar igo primo nicaragua apk gratis
-descargar igo primo dominican republic apk gratis
-descargar igo primo puerto rico apk gratis
-descargar igo primo cuba apk gratis
-descargar igo primo usa apk gratis
-descargar igo primo canada apk gratis
-descargar igo primo uk apk gratis
-descargar igo primo france apk gratis
-descargar igo primo germany apk gratis
-descargar igo primo italy apk gratis
-descargar igo primo spain apk gratis
-descargar igo primo portugal apk gratis
-descargar igo primo russia apk gratis
-descargar igo primo turkey apk gratis
-
Los mapas de iGO Primo son detallados y precisos, y cubren más de 100 países en todo el mundo. Además, se actualizan periódicamente para incluir los cambios en las carreteras y los puntos de interés. También puedes descargar mapas en 3D que te ofrecen una vista más realista del terreno.
-
Guía de voz inteligente
-
Otra característica de iGO Primo es que te ofrece una guía de voz inteligente que te indica el camino a seguir con claridad y precisión. La guía de voz te anuncia los nombres de las calles, las salidas y entradas de las autopistas, los límites de velocidad, los radares y los puntos de interés cercanos.
-
Además, puedes elegir entre diferentes voces y idiomas para la guía de voz, según tu preferencia y tu región. También puedes ajustar el volumen y la velocidad de la voz, así como activar o desactivar las alertas sonoras.
-
Información de tráfico en tiempo real
-
Otra característica de iGO Primo es que te proporciona información de tráfico en tiempo real, lo que te ayuda a evitar atascos, accidentes y obras en la carretera. La información de tráfico se obtiene de fuentes oficiales y de otros usuarios de iGO Primo que comparten sus datos.
-
La información de tráfico se muestra en el mapa con diferentes colores según la intensidad del tráfico. También puedes ver el tiempo estimado de llegada y la distancia hasta tu destino. Además, iGO Primo actualiza las rutas automáticamente según las condiciones del tráfico, ofreciéndote la mejor opción disponible.
-
Búsqueda de destinos y lugares de interés
-
Otra característica de iGO Primo es que te permite buscar destinos y lugares de interés por varias opciones, tales como:
-
-
Dirección: puedes introducir la dirección completa o parcial del lugar al que quieres ir.
-
Punto en el mapa: puedes seleccionar un punto en el mapa con el dedo o con el cursor.
-
Contactos: puedes elegir un contacto guardado en tu dispositivo como destino.
-
Favoritos: puedes guardar tus destinos frecuentes o preferidos como favoritos para acceder a ellos rápidamente.
-
Histórico: puedes ver los destinos a los que has ido recientemente y volver a ellos fácilmente.
-
Categorías: puedes buscar lugares de interés por categorías, como restaurantes, hoteles, gasolineras, etc.
-
Código postal: puedes introducir el código postal del lugar al que quieres ir.
-
Coordenadas: puedes introducir las coordenadas geográficas del lugar al que quieres ir.
-
-
Cuando buscas un destino o un lugar de interés, iGO Primo te muestra una lista de resultados con información relevante, como la dirección, el teléfono, el horario, la valoración, etc. También puedes ver los resultados en el mapa o en una vista 3D. Además, puedes filtrar los resultados por distancia, nombre o categoría.
Interfaz de usuario personalizable y fácil de usar
-
Otra característica de iGO Primo es que te ofrece una interfaz de usuario personalizable y fácil de usar, con la que puedes acceder a todas las funciones del software de forma intuitiva y cómoda. Puedes elegir entre diferentes temas y colores para la interfaz, así como ajustar el brillo, el contraste y el tamaño de la fuente.
-
También puedes cambiar el idioma de la interfaz y de la guía de voz, entre más de 40 opciones disponibles. Además, puedes configurar las preferencias de navegación, como el tipo de vehículo, el modo de ruta, los avisos de seguridad, etc.
-
Cómo descargar e instalar iGO Primo APK gratis
-
Si quieres descargar iGO Primo APK gratis para tu dispositivo Android, debes seguir los siguientes pasos:
-
Requisitos previos
-
Antes de descargar e instalar iGO Primo APK gratis, debes asegurarte de que tu dispositivo cumple con los siguientes requisitos:
-
-
Tener una versión de Android igual o superior a la 4.0.
-
Tener al menos 1 GB de memoria RAM y 8 GB de espacio libre en la memoria interna o en una tarjeta SD externa.
-
Tener una conexión a internet para descargar los mapas y la información de tráfico.
-
Tener un cargador o una batería externa para evitar que el dispositivo se apague durante la navegación.
-
Tener un soporte o un soporte magnético para colocar el dispositivo en el vehículo.
-
-
Descarga del archivo APK
-
Para descargar el archivo APK de iGO Primo, debes acceder al siguiente enlace desde tu navegador web:
Este enlace te llevará a una página donde podrás ver la información y las valoraciones del software. También podrás ver el botón "Descargar APK", que debes pulsar para iniciar la descarga del archivo.
-
El archivo APK tiene un tamaño de unos 17 MB y se guardará en la carpeta "Descargas" de tu dispositivo. Puedes verificar el progreso de la descarga en la barra de notificaciones o en el gestor de descargas.
-
Instalación del archivo APK
-
Para instalar el archivo APK de iGO Primo, debes seguir los siguientes pasos:
-
-
Abrir el gestor de archivos de tu dispositivo y localizar el archivo APK que has descargado.
-
Pulsar sobre el archivo APK para abrirlo. Si te aparece un mensaje de advertencia sobre la instalación de aplicaciones desconocidas, debes ir a los ajustes de seguridad de tu dispositivo y habilitar la opción "Orígenes desconocidos" o "Fuentes desconocidas".
-
Aceptar los permisos que solicita la aplicación y pulsar el botón "Instalar". Esperar a que se complete la instalación.
-
Pulsar el botón "Abrir" para iniciar la aplicación o buscar el icono de iGO Primo en el menú de aplicaciones.
-
Configuración inicial del software
-
Para configurar el software de iGO Primo por primera vez, debes seguir los siguientes pasos:
-
-
Aceptar los términos y condiciones de uso del software y pulsar el botón "Siguiente".
-
Seleccionar el idioma de la interfaz y de la guía de voz y pulsar el botón "Siguiente".
-
Seleccionar el tipo de vehículo que vas a utilizar y pulsar el botón "Siguiente".
-
Seleccionar el modo de ruta que prefieres, entre más rápido, más corto, económico o fácil, y pulsar el botón "Siguiente".
-
Seleccionar los avisos de seguridad que quieres activar, como los límites de velocidad, los radares, las zonas peligrosas, etc., y pulsar el botón "Siguiente".
-
Seleccionar el tema de la interfaz que más te guste, entre claro, oscuro o automático, y pulsar el botón "Siguiente".
-
Pulsar el botón "Finalizar" para completar la configuración inicial del software.
-
-
Ventajas y desventajas de descargar iGO Primo APK gratis
-
Descargar iGO Primo APK gratis tiene sus ventajas y sus desventajas, que debes tener en cuenta antes de decidirte a hacerlo. A continuación, te mostramos una comparación entre los pros y los contras de esta opción:
-
Ventajas
-
-
Ahorras dinero al no tener que pagar por el software ni por las actualizaciones de los mapas.
-
Tienes acceso a una navegación offline que no depende de la conexión a internet ni consume datos móviles.
-
Disfrutas de una guía de voz inteligente que te indica el camino con claridad y precisión.
-
Recibes información de tráfico en tiempo real que te ayuda a evitar atascos y a llegar más rápido a tu destino.
-
Puedes buscar destinos y lugares de interés por varias opciones y ver información relevante sobre ellos.
-
Puedes personalizar la interfaz de usuario según tus gustos y preferencias.
-
-
Desventajas
-
-
Te expones a posibles riesgos de seguridad al descargar e instalar un archivo APK desde una fuente desconocida.
-
No recibes las últimas actualizaciones del software ni de los mapas, lo que puede afectar al rendimiento y a la precisión de la navegación.
-
No cuentas con el soporte técnico ni con la garantía del desarrollador oficial del software.
-
Puedes infringir los derechos de autor o las condiciones de uso del software al descargarlo e instalarlo sin permiso.
-
-
Conclusión
-
iGO Primo es un software de navegación que te ofrece mapas offline, guía de voz inteligente, información de tráfico en tiempo real y más características. Sin embargo, descargar iGO Primo APK gratis puede tener sus ventajas y sus desventajas, que debes sopesar antes de hacerlo.
-
Si quieres ahorrar dinero y tener una navegación offline sin depender de la conexión a internet, descargar iGO Primo APK gratis puede ser una buena opción para ti. Pero si quieres tener la seguridad, la actualización y el soporte del software oficial, quizás te convenga más comprarlo o buscar otra alternativa gratuita y legal.
-
Esperamos que este artículo te haya sido útil para conocer más sobre iGO Primo y cómo descargarlo e instalarlo gratis en tu dispositivo Android. Si tienes alguna duda o comentario, puedes dejarlo abajo. ¡Gracias por leernos!
-
Preguntas frecuentes
-
A continuación, te respondemos algunas preguntas frecuentes sobre iGO Primo:
-
-
¿Qué diferencia hay entre iGO Primo e iGO Navigation?
-
iGO Primo e iGO Navigation son dos versiones diferentes del mismo software de navegación. iGO Primo es la versión más antigua y completa, que ofrece más características y opciones. iGO Navigation es la versión más nueva y simplificada, que ofrece una interfaz más moderna y minimalista.
-
¿Qué mapas puedo usar con iGO Primo?
-
P uedes usar los mapas de TomTom, HERE, Navteq y otros proveedores que sean compatibles con iGO Primo. Puedes descargar los mapas desde la aplicación o desde el sitio web oficial. También puedes usar mapas en 3D que te ofrecen una vista más realista del terreno.
-
¿Qué dispositivos son compatibles con iGO Primo?
-
iGO Primo es compatible con la mayoría de los dispositivos Android que tengan una versión igual o superior a la 4.0. También es compatible con algunos sistemas multimedia de vehículos que usen Android como sistema operativo. Sin embargo, no es compatible con iOS ni con Windows Phone.
-
¿Qué ventajas tiene iGO Primo sobre Google Maps?
-
iGO Primo tiene algunas ventajas sobre Google Maps, como por ejemplo:
-
-
Te permite navegar sin conexión a internet, lo que te ahorra datos móviles y te evita problemas de cobertura.
-
Te ofrece una guía de voz inteligente que te anuncia los nombres de las calles y las salidas y entradas de las autopistas.
-
Te proporciona información de tráfico en tiempo real y actualiza las rutas según las condiciones del tráfico.
-
Te permite personalizar la interfaz de usuario según tus gustos y preferencias.
-
-
¿Qué riesgos tiene descargar iGO Primo APK gratis?
-
Descargar iGO Primo APK gratis tiene algunos riesgos, como por ejemplo:
-
-
Te expones a posibles virus, malware o spyware que puedan dañar tu dispositivo o robar tu información personal.
-
No recibes las últimas actualizaciones del software ni de los mapas, lo que puede afectar al rendimiento y a la precisión de la navegación.
-
No cuentas con el soporte técnico ni con la garantía del desarrollador oficial del software.
-
Puedes infringir los derechos de autor o las condiciones de uso del software al descargarlo e instalarlo sin permiso.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/8 Ball Pool Multiplayer Hack 2012 V6.07 Activation Code LINK.md b/spaces/contluForse/HuggingGPT/assets/8 Ball Pool Multiplayer Hack 2012 V6.07 Activation Code LINK.md
deleted file mode 100644
index b812e6d39075a1e69dca045572d23244d79b309f..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/8 Ball Pool Multiplayer Hack 2012 V6.07 Activation Code LINK.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
8 ball pool multiplayer hack 2012 v6.07 activation code
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Babylon Pro NG 11.0.0.22 Key [CracksNow]l A Complete Guide to Using the Program.md b/spaces/contluForse/HuggingGPT/assets/Babylon Pro NG 11.0.0.22 Key [CracksNow]l A Complete Guide to Using the Program.md
deleted file mode 100644
index b635f4792da4e18428d0f5a5da5e436c3bdccfa0..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Babylon Pro NG 11.0.0.22 Key [CracksNow]l A Complete Guide to Using the Program.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Bhaag Kahan Tak Bhagega Movie With Eng Subtitles D giochiper poesia tra A Thrilling Hindi Movie Directed by Amar Betab.md b/spaces/contluForse/HuggingGPT/assets/Bhaag Kahan Tak Bhagega Movie With Eng Subtitles D giochiper poesia tra A Thrilling Hindi Movie Directed by Amar Betab.md
deleted file mode 100644
index fc7c435a0965fdb9ac548e1447152db906fba3b3..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Bhaag Kahan Tak Bhagega Movie With Eng Subtitles D giochiper poesia tra A Thrilling Hindi Movie Directed by Amar Betab.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Bhaag Kahan Tak Bhagega Movie With Eng Subtitles D giochiper poesia tra
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Chatur Singh Two Star Man 3 In Hindi 720p Torrent.md b/spaces/contluForse/HuggingGPT/assets/Chatur Singh Two Star Man 3 In Hindi 720p Torrent.md
deleted file mode 100644
index c96185e996c9903f675e28e47b9c6161bb1a60fe..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Chatur Singh Two Star Man 3 In Hindi 720p Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Crack Serial Keygen Siteleri Herseyi Bulabilirsiniz - Nasıl?
-
-
Crack serial keygen siteleri, birçok programın lisansını ücretsiz olarak elde etmenizi sağlayan web siteleridir. Bu sitelerde, aradığınız programın crack, serial veya keygen dosyalarını bulabilir ve indirebilirsiniz. Bu sayede, programı sınırsız olarak kullanabilir ve güncelleyebilirsiniz. Ancak, crack serial keygen siteleri herseyi bulabilirsiniz mi? Bu siteler güvenilir mi? Hangi siteleri tercih etmelisiniz? Bu makalede, bu soruların cevaplarını vereceğiz.
-
crack serial keygen siteleri herseyi bulabilirsiniz
Crack serial keygen siteleri, birçok programın lisansını ücretsiz olarak elde etmenizi sağlayan web siteleridir. Bu sitelerde, aradığınız programın crack, serial veya keygen dosyalarını bulabilir ve indirebilirsiniz. Bu dosyaların anlamları şöyledir:
-
-
-
Crack: Programın orijinal dosyasının değiştirilerek lisans kontrolünün kaldırılması veya atlanması işlemidir. Crack dosyası, programın kurulu olduğu klasöre kopyalanarak çalıştırılır.
-
Serial: Programın lisans anahtarının girilmesini gerektiren bir kod dizisidir. Serial dosyası, programın kurulumu veya aktivasyonu sırasında istenen alana yazılır.
-
Keygen: Programın lisans anahtarını üreten bir programdır. Keygen dosyası, çalıştırıldığında rastgele bir serial kodu oluşturur ve bunu kullanıcıya sunar.
-
-
-
Bu dosyaları kullanarak, programı lisanslıymış gibi kullanabilir ve güncelleyebilirsiniz. Ancak, bu dosyaların yasal olmadığını ve bazı riskler taşıdığını unutmamalısınız.
-
-
Crack Serial Keygen Siteleri Güvenilir Mi?
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu sitelerin güvenilir olduğundan emin olmalısınız. Ancak, bu sitelerin çoğu güvenilir değildir. Bu sitelerde karşılaşabileceğiniz bazı riskler şunlardır:
-
-
-
Virüs: Crack serial keygen dosyaları virüs içerebilir veya virüs bulaştırabilir. Bu durumda, bilgisayarınız zarar görebilir veya kişisel bilgileriniz çalınabilir.
-
Dolandırıcılık: Crack serial keygen dosyaları indirmek için bazı siteler sizden ücret isteyebilir veya sahte ödeme sayfalarına yönlendirebilir. Bu durumda, paranızı kaybedebilir veya kredi kartı bilgileriniz çalınabilir.
-
Telif Hakkı: Crack serial keygen dosyaları kullanmak telif hakkı ihlali anlamına gelir ve yasal sorumluluk doğurabilir. Bu durumda, para cezası veya hapis cezası ile karşılaşabilirsiniz.
-
-
-
Bu riskleri önlemek için, crack serial keygen sitelerini kullanmamak en iyi seçenektir. Ancak, yine de bu siteleri kullanmak istiyorsanız, bazı önlemler almalısınız.
-
-
Crack Serial Keygen Sitelerini Nasıl Kullanmalısınız?
-
-
Crack serial keygen sitelerini kullanmak istiyorsanız, aşağıdaki önlemleri almalısınız:
-
-
-
-
Güvenilir Siteleri Tercih Edin: Crack serial keygen sitelerinin çoğu güvenilir değildir ancak bazıları daha az risklidir. Bu siteleri tercih etmek için, kullanıcı yorumlarını, site tasarımını ve içeriğini inceleyin. Ayrıca, virüs taraması yapan ve SSL sertifikası olan siteleri seçin.
-
Güncel Antivirüs Programı Kullanın: Crack serial keygen dosyaları virüs içerebilir veya virüs bulaştırabilir. Bu yüzden, bilgisayarınızda güncel bir antivirüs programı bulundurun ve her indirdiğiniz dosyayı taratın.
-
Ödeme Yapmayın: Crack serial keygen dosyaları indirmek için bazı siteler sizden ücret isteyebilir veya sahte ödeme sayfalarına yönlendirebilir. Bu tür sitelere asla ödeme yapmayın veya kredi kartı bilgilerinizi vermeyin.
-
Telif Hakkına Dikkat Edin: Crack serial keygen dosyaları kullanmak telif hakkı ihlali anlamına gelir ve yasal sorumluluk doğurabilir. Bu yüzden, bu dosyaları kullanmadan önce telif hakkı sahiplerinin iznini alın veya kendi sorumluluğunuzda olduğunu kabul edin.
-
-
-
Bu önlemleri alarak, crack serial keygen sitelerini daha güvenli bir şekilde kullanabilirsiniz. Ancak, yine de bu sitelerin yasal olmadığını ve bazı riskler taşıdığını unutmamalısınız.
-
-
Sonuç
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu sitelerin ne olduğunu, nasıl çalıştığını ve hangi riskleri taşıdığını bilmelisiniz. Bu makalede, bu konulara açıklık getirdik ve bu siteleri nasıl kullanmanız gerektiğine dair bazı ipuçları verdik. Ancak, en iyisi crack serial keygen sitelerinden uzak durmak ve programları yasal yollardan edinmektir.
-
Crack Serial Keygen Siteleri Hangileridir?
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu sitelerin hangileri olduğunu bilmelisiniz. İnternette birçok crack serial keygen sitesi bulunmaktadır ancak hepsi güvenilir ve kaliteli değildir. Bu yüzden, bu siteleri seçerken dikkatli olmalısınız. İşte, crack serial keygen siteleri arasında en popüler olan bazıları:
-
-
-
Serials.ws: Günlük olarak güncellenen ve 120.000'den fazla serial numarası içeren bir sitedir. Aradığınız programın serial numarasını kolayca bulabilirsiniz. Ancak, sitede crack veya keygen dosyası indiremezsiniz.
-
Smart Serials: Telif hakkına saygılı ve güvenli bir sitedir. Hem crack hem de serial numarası sunmaktadır. Ayrıca, programları kategorilere ayırmıştır, böylece istediğiniz programı daha rahat bulabilirsiniz.
-
Keygens Pro: Eski bir tasarıma sahip olmasına rağmen, birçok crack ve keygen dosyası sunan bir sitedir. Programların alfabetik olarak listelendiği bir sayfaya sahiptir. Ayrıca, programların resimli açıklamalarını da görebilirsiniz.
-
-
-
Bu siteler, crack serial keygen siteleri arasında en popüler olanlardır ancak bunlar dışında da birçok site bulunmaktadır. Ancak, bu siteleri kullanırken yukarıda bahsettiğimiz riskleri ve önlemleri göz önünde bulundurmalısınız.
-
-
Crack Serial Keygen Siteleri Yerine Ne Kullanmalısınız?
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu sitelerin avantajlarını bilmelisiniz. Bu sitelerin en büyük avantajı, birçok programın lisansını ücretsiz olarak elde etmenizi sağlamasıdır. Bu sayede, programları sınırsız olarak kullanabilir ve güncelleyebilirsiniz. Ancak, bu sitelerin dezavantajları da vardır. Bu sitelerin en büyük dezavantajı, yasal olmamalarıdır. Bu sitelerde sunulan dosyalar telif hakkı ihlali anlamına gelir ve yasal sorumluluk doğurabilir. Ayrıca, bu sitelerde sunulan dosyalar virüs içerebilir veya dolandırıcılık yapabilir.
-
-
Bu yüzden, crack serial keygen sitelerinden uzak durmak ve programları yasal yollardan edinmek en iyisidir. Programları yasal yollardan edinmenin avantajları şunlardır:
-
-
-
Güvenli: Programları resmi sitelerinden veya güvenilir platformlardan indirdiğinizde, virüs veya dolandırıcılık riski olmaz.
-
Kaliteli: Programları lisanslı olarak kullandığınızda, daha iyi bir performans ve kalite elde edersiniz.
-
Destekli: Programları yasal olarak kullandığınızda, teknik destek ve müşteri hizmetleri alabilirsiniz.
-
-
-
Programları yasal yollardan edinmenin dezavantajı ise ücretli olmasıdır. Ancak, bu dezavantajı aşmanın bazı yolları vardır. Örneğin:
-
-
-
Kampanyaları Takip Edin: Bazı programlar belirli zamanlarda indirim veya kampanya yapabilirler. Bu fırsatları takip ederek, programları daha uygun fiyatlara alabilirsiniz.
-
Ücretsiz Alternatifleri Deneyin: Bazı programların ücretsiz alternatifleri olabilir. Bu alternatifleri deneyerek, ihtiyacınız olan özellikleri ücretsiz olarak kullanabilirsiniz.
-
Giveaway Sitelerini Ziyaret Edin: Bazı siteler belirli zamanlarda ücretli programları ücretsiz olarak sunabilirler. Bu siteleri ziyaret ederek, programları bedava olarak indirebilirsiniz.
-
-
-
Bu yolları kullanarak, programları yasal yollardan edinebilir ve crack serial keygen sitelerinin risklerinden kurtulabilirsiniz.
-
Crack Serial Keygen Siteleri Neden Yasaklanıyor?
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu sitelerin neden yasaklandığını da bilmelisiniz. Bu sitelerin yasaklanmasının en önemli nedeni, telif hakkı ihlali yapmalarıdır. Bu sitelerde sunulan dosyalar, programların orijinal lisanslarını ihlal eder ve program geliştiricilerinin haklarını çiğner. Bu yüzden, program geliştiricileri veya telif hakkı sahipleri, bu siteleri kapatmak için yasal yollara başvurabilirler.
-
-
Bu sitelerin yasaklanmasının bir diğer nedeni ise, güvenlik sorunları yaratmalarıdır. Bu sitelerde sunulan dosyalar, virüs veya zararlı yazılım içerebilir veya kullanıcıların bilgilerini çalabilirler. Bu yüzden, devlet kurumları veya internet servis sağlayıcıları, bu siteleri engellemek için önlem alabilirler.
-
-
Bu sitelerin yasaklanmasının bir sonucu olarak, bu sitelere erişim zorlaşabilir veya imkansız hale gelebilir. Bu durumda, bu sitelerin alternatif adreslerini veya proxy sunucularını kullanarak erişim sağlamaya çalışabilirsiniz. Ancak, bu yöntemler de bazı riskler taşıyabilir ve her zaman işe yaramayabilir.
-
-
Crack Serial Keygen Siteleri Hakkında Sık Sorulan Sorular
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu siteler hakkında sık sorulan soruların cevaplarını da bilmelisiniz. İşte, bu siteler hakkında sık sorulan bazı sorular ve cevapları:
-
-
-
Crack serial keygen siteleri güvenli mi? Hayır, bu siteler güvenli değildir. Bu sitelerde sunulan dosyalar virüs içerebilir veya virüs bulaştırabilir. Ayrıca, bu siteler dolandırıcılık yapabilir veya kişisel bilgilerinizi çalabilir.
-
Crack serial keygen siteleri yasal mı? Hayır, bu siteler yasal değildir. Bu sitelerde sunulan dosyalar telif hakkı ihlali anlamına gelir ve yasal sorumluluk doğurabilir. Ayrıca, bu sitelere erişmek de bazı ülkelerde yasaklanmıştır.
-
Crack serial keygen siteleri etik mi? Hayır, bu siteler etik değildir. Bu sitelerde sunulan dosyalar program geliştiricilerinin emeğini çalar ve haklarını gasp eder. Ayrıca, bu sitelere erişmek de program geliştiricilerine zarar verir.
-
Crack serial keygen siteleri her programı bulabilir mi? Hayır, bu siteler her programı bulamaz. Bazı programların crack serial keygen dosyaları bulunmayabilir veya çalışmayabilir. Ayrıca, bazı programların yeni sürümleri için crack serial keygen dosyaları henüz çıkmamış olabilir.
-
Crack serial keygen siteleri nasıl bulunur? Crack serial keygen sitelerini bulmak için internet arama motorlarını kullanabilirsiniz. Ancak, bu sitelere erişim engellenmiş olabileceği için alternatif adreslerini veya proxy sunucularını kullanmanız gerekebilir.
-
-
-
Bu soruların cevaplarını bilmek, crack serial keygen sitelerini daha iyi anlamanızı ve kullanmanızı sağlayacaktır.
-
Sonuç
-
-
Crack serial keygen siteleri herseyi bulabilirsiniz diyebilmek için, bu sitelerin ne olduğunu, nasıl çalıştığını, hangi riskleri taşıdığını ve nasıl kullanmanız gerektiğini bilmelisiniz. Bu makalede, bu konulara açıklık getirdik ve bu siteleri nasıl kullanmanız gerektiğine dair bazı ipuçları verdik. Ancak, en iyisi crack serial keygen sitelerinden uzak durmak ve programları yasal yollardan edinmektir. Böylece, hem güvenli hem de kaliteli bir şekilde programları kullanabilirsiniz.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/MiniTool Partition Wizard Crack [PORTABLE] PRO 11.4 Serial Key Torrent 2019.md b/spaces/diacanFperku/AutoGPT/MiniTool Partition Wizard Crack [PORTABLE] PRO 11.4 Serial Key Torrent 2019.md
deleted file mode 100644
index 881314b7bce70ddb658285c64f4d95772d1c88ee..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/MiniTool Partition Wizard Crack [PORTABLE] PRO 11.4 Serial Key Torrent 2019.md
+++ /dev/null
@@ -1,110 +0,0 @@
-
-
MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019
-
-
If you are looking for a powerful and easy-to-use partition manager for your Windows PC, you might want to check out MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019. This software is designed to optimize disk usage and protect your data with various features and tools. In this article, we will show you how to download and activate MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 and what benefits it can bring to your system.
-
-
What is MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019?
-
-
MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 is a cracked version of MiniTool Partition Wizard Pro, which is a premium partition software developed by MiniTool Solution Ltd. By using this cracked version, you can enjoy all the features of the Pro edition without paying anything.
-
MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019
MiniTool Partition Wizard Pro is a rich-featured partition magic that can help you manage your hard disk and partition effectively and safely. With its user-friendly interface and simple guidance, you can resize partition, copy disk, scan lost partitions, migrate OS to SSD, convert disk type, change file system, recover partition, rebuild MBR, align partition, and more.
-
-
MiniTool Partition Wizard Pro supports Windows XP/Vista/7/8/8.1/10 and Windows Server 2003/2008/2012/2016/2019. It also supports various file systems such as FAT12/16/32, NTFS, Ext2/3/4, exFAT, and more. It can work with both internal and external hard drives, SSDs, USB flash drives, SD cards, and other removable devices.
-
-
How to Download and Activate MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019?
-
-
To download and activate MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019, you need to follow these steps:
-
-
-
Download the torrent file from a reliable source.
-
Open the torrent file with a torrent client such as uTorrent or BitTorrent.
-
Download the setup file and the crack file from the torrent.
-
Run the setup file and install MiniTool Partition Wizard Pro on your PC.
-
Copy the crack file and paste it into the installation folder of MiniTool Partition Wizard Pro.
-
Run the crack file as administrator and click on the activate button.
-
Enjoy the full features of MiniTool Partition Wizard Pro for free.
-
-
-
What are the Benefits of MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019?
-
-
By using MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019, you can get many benefits for your system performance and data security. Some of the benefits are:
-
-
-
You can resize your partitions without losing any data or affecting your system boot.
-
You can copy your entire disk or partition to another disk or partition for backup or migration purposes.
-
You can scan your disk or partition for any lost or deleted partitions and recover them easily.
-
You can migrate your OS from HDD to SSD or vice versa without reinstalling anything.
-
You can convert your disk type from MBR to GPT or vice versa without deleting any partitions.
-
You can change your file system from FAT to NTFS or vice versa without formatting your partition.
-
You can recover your deleted or damaged partition from any data loss scenarios such as virus attack, power outage, human error, etc.
-
You can rebuild your MBR to fix any boot issues caused by corrupted or missing boot code.
-
You can align your SSD partitions to improve their performance and lifespan.
-
You can manage your partitions better with various options such as create, delete, format, hide, label, explore, wipe, split, merge, move, extend, etc.
-
-
-
Conclusion
-
-
MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 is a great solution for anyone who wants to optimize their disk usage and protect their data with a powerful partition manager. By downloading and activating this software for free, you can enjoy all the features of the Pro edition without spending a dime. However, we recommend you to use the official version of MiniTool Partition Wizard Pro if you can afford it, as it is more secure and reliable than the cracked version. You can also get technical support and updates from the official website of MiniTool Solution Ltd.
-
-
We hope this article has helped you understand how to download and activate MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 and what benefits it can bring to your system. If you have any questions or suggestions, please feel free to leave a comment below.
-
How to Use MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019?
-
-
After you have downloaded and activated MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019, you can use it to manage your hard disk and partition easily and safely. Here are some steps to guide you on how to use this software:
-
-
-
Launch MiniTool Partition Wizard Pro from your desktop or start menu.
-
Select the disk or partition that you want to operate on from the main interface.
-
Choose the feature or tool that you want to use from the left panel or the toolbar.
-
Follow the instructions on the screen to complete the operation.
-
Click on the Apply button to execute the pending changes.
-
-
-
You can also use the wizard functions to perform some common tasks such as migrate OS to SSD, copy disk, recover partition, etc. Just click on the Wizard button on the toolbar and select the wizard that you want to use. Then follow the steps on the wizard to finish the task.
-
-
-
What are the Pros and Cons of MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019?
-
-
MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 has some pros and cons that you should be aware of before using it. Here are some of them:
-
-
Pros
-
-
-
It is free to download and use.
-
It has all the features of the Pro edition.
-
It is easy to use and has a user-friendly interface.
-
It supports various file systems and disk types.
-
It can perform various partition operations without data loss or system boot issues.
-
It can help you optimize your disk usage and protect your data.
-
-
-
Cons
-
-
-
It is illegal and unethical to use a cracked software.
-
It may contain viruses or malware that can harm your system or data.
-
It may not work properly or cause errors or crashes.
-
It may not be compatible with the latest Windows updates or drivers.
-
It may not get technical support or updates from the official website.
-
-
-
Conclusion
-
-
In conclusion, MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 is a useful software for managing your hard disk and partition. However, it is not recommended to use a cracked software as it may bring some risks and disadvantages to your system and data. If you want to use a reliable and legal partition manager, you should buy the official version of MiniTool Partition Wizard Pro from its website or other authorized sources. You can also try the free edition of MiniTool Partition Wizard if you only need some basic partition functions.
-
-
We hope this article has helped you understand how to use MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 and what are its pros and cons. If you have any questions or suggestions, please feel free to leave a comment below.
-
How to Buy MiniTool Partition Wizard Pro Official Version?
-
-
If you are interested in buying MiniTool Partition Wizard Pro official version, you can visit its website and choose the edition that suits your needs. There are four editions available: Pro, Pro Deluxe, Pro Ultimate, and Technician. Each edition has different features and prices. You can compare them and select the best one for you.
-
-
After you have chosen the edition, you can click on the Buy Now button and proceed to the payment page. You can pay with PayPal, credit card, or other methods. You will receive an email with the license key and the download link after you have completed the payment. You can use the license key to activate the software and enjoy its full features.
-
-
MiniTool Partition Wizard Pro official version offers you a 30-day money-back guarantee and a lifetime free upgrade service. You can also get 24/7 technical support and customer service from its website or email. You can rest assured that you are using a safe and reliable partition manager that can help you manage your hard disk and partition effectively and safely.
-
-
Final Words
-
-
MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 is a tempting software for managing your hard disk and partition for free. However, it is not worth the risk and trouble that it may cause to your system and data. Instead of using a cracked software, you should use a legal and trustworthy partition manager such as MiniTool Partition Wizard Pro official version. It can provide you with more features, security, stability, compatibility, support, and updates than the cracked version.
-
-
We hope this article has helped you make a wise decision on whether to use MiniTool Partition Wizard Crack PRO 11.4 Serial Key Torrent 2019 or not. If you have any questions or suggestions, please feel free to leave a comment below. Thank you for reading!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Mukkabaaz 3 Movie WORK Download 720p.md b/spaces/diacanFperku/AutoGPT/Mukkabaaz 3 Movie WORK Download 720p.md
deleted file mode 100644
index fe8842eb03358d5d1dc77f8f18fbe2eb91668136..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Mukkabaaz 3 Movie WORK Download 720p.md
+++ /dev/null
@@ -1,53 +0,0 @@
-
-
Mukkabaaz 3 Movie Download 720p: The Ultimate Guide
-
If you are a fan of Hindi action drama movies, you might be eagerly waiting for the release of Mukkabaaz 3, the third installment of the popular Mukkabaaz franchise. The movie is expected to hit the theatres in 2023, but you don't have to wait that long to watch it. In this article, we will show you how to download Mukkabaaz 3 movie in 720p quality from various sources.
-
What is Mukkabaaz 3 Movie About?
-
Mukkabaaz 3 is the sequel to Mukkabaaz 2, which was released in 2020. The movie follows the story of Shravan Singh, a boxer from Uttar Pradesh who faces various challenges and obstacles in his quest to become a national champion. Shravan has to deal with his rival Bhagwan Das Mishra, a local don and politician who controls the boxing federation and wants to ruin Shravan's career. Shravan also has to balance his love life with Sunaina, Mishra's niece, who supports him despite her uncle's opposition.
Mukkabaaz 3 promises to be an action-packed and thrilling movie that will keep you on the edge of your seat. The movie stars Vineet Kumar Singh as Shravan, Zoya Hussain as Sunaina, Jimmy Sheirgill as Mishra, and Ravi Kishan as Sanjay Kumar, Shravan's coach. The movie is directed by Anurag Kashyap, who is known for his realistic and gritty movies.
-
Why Download Mukkabaaz 3 Movie in 720p Quality?
-
There are many reasons why you might want to download Mukkabaaz 3 movie in 720p quality. Here are some of them:
-
-
You can watch the movie anytime and anywhere you want, without depending on the theatre timings or availability.
-
You can save money on tickets and snacks by watching the movie at home.
-
You can enjoy the movie in high definition quality with clear sound and picture.
-
You can avoid spoilers and ads by watching the movie at your own pace.
-
You can share the movie with your friends and family and watch it together.
-
-
How to Download Mukkabaaz 3 Movie in 720p Quality?
-
There are many ways to download Mukkabaaz 3 movie in 720p quality. Here are some of the most common ones:
-
Download from Torrent Sites
-
One of the easiest and fastest ways to download Mukkabaaz 3 movie in 720p quality is to use torrent sites. Torrent sites are platforms that allow users to share files with each other using peer-to-peer technology. You can find almost any movie on torrent sites, including Mukkabaaz 3.
-
To download from torrent sites, you need to follow these steps:
-
-
Download and install a torrent client software on your device. Some of the popular ones are uTorrent, BitTorrent, qBittorrent, etc.
-
Search for Mukkabaaz 3 movie on a torrent site. Some of the popular ones are RARBG, The Pirate Bay, YTS, etc.
-
Select a torrent file that has a good number of seeders and leechers. Seeders are users who have the complete file and are sharing it with others. Leechers are users who are downloading the file from seeders.
-
Download the torrent file and open it with your torrent client software.
-
Wait for the download to complete and enjoy the movie.
-
-
Note: Downloading from torrent sites may be illegal in some countries and regions. You may also expose yourself to viruses and malware by downloading from untrusted sources. Use a VPN service to protect your privacy and security while downloading from torrent sites.
-
-
Download from Streaming Sites
-
Another way to download Mukkabaaz 3 movie in 720p quality is to use streaming sites. Streaming sites are platforms that allow users to watch movies online without downloading them. You can find many streaming sites that offer Mukkabaaz 3 movie in HD quality.
-
To download from streaming sites, you need to follow these steps:
-
-
Search for Mukkabaaz 3 movie on a streaming site. Some of the popular ones are Netflix, Amazon Prime Video, Hotstar, SonyLIV, etc.
-
Select a streaming site that has Mukkabaaz 3 movie available in your region and language.
-
Create an account on the streaming site if required and subscribe to a plan if needed.
-
Play the movie and look for a download option on the screen or in the settings menu.
-
Select a download option that suits your device and internet speed.
-
Wait for the download to complete and enjoy the movie.
-
-
Note: Downloading from streaming sites may require a subscription fee or a registration process. You may also need a stable internet connection and enough storage space on your device to download from streaming sites.
-
-
Conclusion
-
-
Mukkabaaz 3 is an upcoming Hindi action drama movie that will be released in 2023. The movie is about a boxer who faces various challenges in his quest to become a national champion. If you want to watch the movie before it hits the theatres, you can download it in 720p quality from various sources such as torrent sites or streaming sites. However, you should be aware of the legal and security risks involved in downloading from these sources and use a VPN service if necessary.
-
-
We hope this article has helped you learn how to download Mukkabaaz 3 movie in 720p quality. If you have any questions or suggestions, feel free to leave a comment below.
-
Mukkabaaz 3 is an upcoming Hindi action drama movie that will be released in 2023. The movie is about a boxer who faces various challenges in his quest to become a national champion. If you want to watch the movie before it hits the theatres, you can download it in 720p quality from various sources such as torrent sites or streaming sites. However, you should be aware of the legal and security risks involved in downloading from these sources and use a VPN service if necessary.
-
-
We hope this article has helped you learn how to download Mukkabaaz 3 movie in 720p quality. If you have any questions or suggestions, feel free to leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diegoakel/kitchenorbedroom/app.py b/spaces/diegoakel/kitchenorbedroom/app.py
deleted file mode 100644
index 2c23a39cb2a97e49b5776ab813daab3ceb03c7bc..0000000000000000000000000000000000000000
--- a/spaces/diegoakel/kitchenorbedroom/app.py
+++ /dev/null
@@ -1,19 +0,0 @@
-import gradio as gr
-from fastai.vision.all import *
-
-learn = load_learner('bedroom_or_kitchen.pkl', 'rb')
-
-categories = ("Bedroom", "Kitchen")
-
-def classify_image(img):
- pred, idx, probs = learn.predict(img)
- return dict(zip(categories, map(float, probs)))
-
-image = gr.inputs.Image(shape=(192, 192))
-label = gr.outputs.Label()
-
-examples = ['bedroom.jpg', 'quarto.jpg']
-
-intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
-
-intf.launch()
\ No newline at end of file
diff --git a/spaces/dmccreary/AaronsClass/app.py b/spaces/dmccreary/AaronsClass/app.py
deleted file mode 100644
index 03750d3ec138fe6a0db80ba5fdeec8e9cc9173d4..0000000000000000000000000000000000000000
--- a/spaces/dmccreary/AaronsClass/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-title = "Transformers 📗 Sentence to Paragraph ❤️ For Mindfulness"
-examples = [
- ["Feel better physically by"],
- ["Practicing mindfulness each day"],
- ["Be happier by"],
- ["Meditation can improve health"],
- ["Spending time outdoors"],
- ["Stress is relieved by quieting your mind, getting exercise and time with nature"],
- ["Break the cycle of stress and anxiety"],
- ["Feel calm in stressful situations"],
- ["Deal with work pressure"],
- ["Learn to reduce feelings of overwhelmed"]
-]
-from gradio import inputs
-from gradio.inputs import Textbox
-from gradio import outputs
-
-generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B")
-generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-generator1 = gr.Interface.load("huggingface/gpt2-large")
-gr.Parallel(generator1, generator2, generator3, inputs=gr.inputs.Textbox(lines=5, label="Enter a sentence to get another sentence."),
- title=title, examples=examples).launch(share=False)
\ No newline at end of file
diff --git a/spaces/dodoya1/youtube_transcript/README.md b/spaces/dodoya1/youtube_transcript/README.md
deleted file mode 100644
index 3db758d51a96ea5ec9463b6459cf326a87e74a4e..0000000000000000000000000000000000000000
--- a/spaces/dodoya1/youtube_transcript/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: youtube_transcript
-sdk: gradio
-app_file: app.py
-emoji: 🚀
-colorFrom: indigo
-colorTo: indigo
-pinned: false
----
-# Function
-
-When you enter a YouTube URL and select a Whisper model, this application will transcribe the audio of that YouTube video.
-
-# References
-
-https://gradio.app/
-
-https://ai-research-collection.com/gradio-cal/
-
-https://ai-research-collection.com/gradio-image-classification/
-
-https://aiacademy.jp/media/?p=3469
-
-https://gihyo.jp/article/2023/04/programming-with-chatgpt-02
-
-https://gradio.app/docs/
-
-https://zenn.dev/robes/articles/fb99590d9ec9f2
diff --git a/spaces/dperales/Fraud_Detection_Pycaret/app_copy.py b/spaces/dperales/Fraud_Detection_Pycaret/app_copy.py
deleted file mode 100644
index 2289bc2667fe4de6aa4fcc03af1914b651add014..0000000000000000000000000000000000000000
--- a/spaces/dperales/Fraud_Detection_Pycaret/app_copy.py
+++ /dev/null
@@ -1,289 +0,0 @@
-import os
-import pandas as pd
-import numpy as np
-import seaborn as sns
-import matplotlib.pyplot as plt
-import matplotlib as mpl
-import pycaret
-import streamlit as st
-from streamlit_option_menu import option_menu
-import PIL
-from PIL import Image
-from PIL import ImageColor
-from PIL import ImageDraw
-from PIL import ImageFont
-
-def main():
- hide_streamlit_style = """
-
- """
- st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
- with st.sidebar:
- image = Image.open('itaca_logo.png')
- st.image(image, width=150) #,use_column_width=True)
- page = option_menu(menu_title='Menu',
- menu_icon="robot",
- options=["Clustering Analysis",
- "Anomaly Detection"],
- icons=["chat-dots",
- "key"],
- default_index=0
- )
-
- # Additional section below the option menu
- # st.markdown("---") # Add a separator line
- st.header("Settings")
-
- num_lines = st.text_input("% of lines to be processed:", value=100)
- graph_select = st.checkbox("Show Graphics", value= True)
- feat_imp_select = st.checkbox("Feature Importance", value= False)
-
- # Define the options for the dropdown list
- numclusters = [2, 3, 4, 5, 6]
- selected_clusters = st.slider("Choose a number of clusters", min_value=2, max_value=10, value=4)
-
- p_remove_multicollinearity = st.checkbox("Remove Multicollinearity", value=False)
- p_multicollinearity_threshold = st.slider("Choose multicollinearity thresholds", min_value=0.0, max_value=1.0, value=0.9)
- # p_remove_outliers = st.checkbox("Remove Outliers", value=False)
- # p_outliers_method = st.selectbox ("Choose an Outlier Method", ["iforest", "ee", "lof"])
- p_transformation = st.checkbox("Choose Power Transform", value = False)
- p_normalize = st.checkbox("Choose Normalize", value = False)
- p_pca = st.checkbox("Choose PCA", value = False)
- p_pca_method = st.selectbox ("Choose a PCA Method", ["linear", "kernel", "incremental"])
-
- st.title('ITACA Insurance Core AI Module')
-
- if page == "Clustering Analysis":
- st.header('Clustering Analysis')
-
- st.write(
- """
- """
- )
-
- # import pycaret unsupervised models
- from pycaret.clustering import setup, create_model, assign_model, pull, plot_model
- # import ClusteringExperiment
- from pycaret.clustering import ClusteringExperiment
-
- # Display the list of CSV files
- directory = "./"
- all_files = os.listdir(directory)
- # Filter files to only include CSV files
- csv_files = [file for file in all_files if file.endswith(".csv")]
- # Select a CSV file from the list
- selected_csv = st.selectbox("Select a CSV file from the list", ["None"] + csv_files)
-
- # Upload the CSV file
- uploaded_file = st.file_uploader("Choose a CSV file", type="csv")
-
- # Define the unsupervised model
- clusteringmodel = ['kmeans', 'ap', 'meanshift', 'sc', 'hclust', 'dbscan', 'optics', 'birch']
- selected_model = st.selectbox("Choose a clustering model", clusteringmodel)
-
- # Read and display the CSV file
- if selected_csv != "None" or uploaded_file is not None:
- if uploaded_file:
- try:
- delimiter = ','
- insurance_claims = pd.read_csv (uploaded_file, sep=delimiter)
- except ValueError:
- delimiter = '|'
- insurance_claims = pd.read_csv (uploaded_file, sep=delimiter, encoding='latin-1')
- else:
- insurance_claims = pd.read_csv(selected_csv)
-
- num_rows = int(insurance_claims.shape[0]*int(num_lines)/100)
- insurance_claims_reduced = insurance_claims.head(num_rows)
- st.write("Rows to be processed: " + str(num_rows))
-
- all_columns = insurance_claims_reduced.columns.tolist()
- selected_columns = st.multiselect("Choose columns", all_columns, default=all_columns)
- insurance_claims_reduced = insurance_claims_reduced[selected_columns].copy()
-
- st.header("Inference Description")
- insurance_claims_reduced.describe().T
-
- cat_col = insurance_claims_reduced.select_dtypes(include=['object']).columns
- num_col = insurance_claims_reduced.select_dtypes(exclude=['object']).columns
-
- # insurance_claims[num_col].hist(bins=15, figsize=(20, 15), layout=(5, 4))
- # Calculate the correlation matrix
- corr_matrix = insurance_claims_reduced[num_col].corr()
- # Create a Matplotlib figure
- fig, ax = plt.subplots(figsize=(12, 8))
- # Create a heatmap using seaborn
- st.header("Heat Map")
- sns.heatmap(corr_matrix, annot=True, cmap='coolwarm', fmt='.2f', ax=ax)
- # Set the title for the heatmap
- ax.set_title('Correlation Heatmap')
- # Display the heatmap in Streamlit
- st.pyplot(fig)
-
- if st.button("Prediction"):
- #insurance_claims_reduced = insurance_claims_reduced[selected_columns].copy()
-
- s = setup(insurance_claims_reduced, session_id = 123, remove_multicollinearity=p_remove_multicollinearity, multicollinearity_threshold=p_multicollinearity_threshold,
- # remove_outliers=p_remove_outliers, outliers_method=p_outliers_method,
- transformation=p_transformation,
- normalize=p_normalize, pca=p_pca, pca_method=p_pca_method)
- exp_clustering = ClusteringExperiment()
- # init setup on exp
- exp_clustering.setup(insurance_claims_reduced, session_id = 123)
-
- with st.spinner("Analyzing..."):
- # train kmeans model
- cluster_model = create_model(selected_model, num_clusters = selected_clusters)
-
- cluster_model_2 = assign_model(cluster_model)
- # Calculate summary statistics for each cluster
- cluster_summary = cluster_model_2.groupby('Cluster').agg(['count', 'mean', 'median', 'min', 'max',
- 'std', 'var', 'sum', ('quantile_25', lambda x: x.quantile(0.25)),
- ('quantile_75', lambda x: x.quantile(0.75)), 'skew'])
- st.header("Cluster Summary")
- cluster_summary
- st.header("Assign Model")
- cluster_model_2
-
- # all_metrics = get_metrics()
- # all_metrics
-
- st.header("Clustering Metrics")
- cluster_results = pull()
- cluster_results
-
- if graph_select:
- st.header("Clustering Plots")
- # plot pca cluster plot
- plot_model(cluster_model, plot = 'cluster', display_format = 'streamlit')
-
- if selected_model != 'ap':
- plot_model(cluster_model, plot = 'tsne', display_format = 'streamlit')
-
- if selected_model not in ('ap', 'meanshift', 'dbscan', 'optics'):
- plot_model(cluster_model, plot = 'elbow', display_format = 'streamlit')
-
- if selected_model not in ('ap', 'meanshift', 'sc', 'hclust', 'dbscan', 'optics'):
- plot_model(cluster_model, plot = 'silhouette', display_format = 'streamlit')
-
- if selected_model not in ('ap', 'sc', 'hclust', 'dbscan', 'optics', 'birch'):
- plot_model(cluster_model, plot = 'distance', display_format = 'streamlit')
-
- if selected_model != 'ap':
- plot_model(cluster_model, plot = 'distribution', display_format = 'streamlit')
-
- # Create a Classification Model to extract feature importance
- if feat_imp_select:
- st.header("Feature Importance")
- from pycaret.classification import setup, create_model, get_config
- s = setup(cluster_model_2, target = 'Cluster')
- lr = create_model('lr')
-
- # this is how you can recreate the table
- feat_imp = pd.DataFrame({'Feature': get_config('X_train').columns, 'Value' : abs(lr.coef_[0])}).sort_values(by='Value', ascending=False)
- # sort by feature importance value and filter top 10
- feat_imp = feat_imp.sort_values(by='Value', ascending=False).head(10)
- # Display the filtered table in Streamlit
- # st.dataframe(feat_imp)
- # Display the filtered table as a bar chart in Streamlit
- st.bar_chart(feat_imp.set_index('Feature'))
-
- elif page == "Anomaly Detection":
- st.header('Anomaly Detection')
-
- st.write(
- """
- """
- )
-
- # import pycaret anomaly
- from pycaret.anomaly import setup, create_model, assign_model, pull, plot_model
- # import AnomalyExperiment
- from pycaret.anomaly import AnomalyExperiment
-
- # Display the list of CSV files
- directory = "./"
- all_files = os.listdir(directory)
- # Filter files to only include CSV files
- csv_files = [file for file in all_files if file.endswith(".csv")]
- # Select a CSV file from the list
- selected_csv = st.selectbox("Select a CSV file from the list", ["None"] + csv_files)
-
- # Upload the CSV file
- uploaded_file = st.file_uploader("Choose a CSV file", type="csv")
-
- # Define the unsupervised model
- anomalymodel = ['abod', 'cluster', 'cof', 'iforest', 'histogram', 'knn', 'lof', 'svm', 'pca', 'mcd', 'sod', 'sos']
- selected_model = st.selectbox("Choose an anomaly model", anomalymodel)
-
- # Read and display the CSV file
- if selected_csv != "None" or uploaded_file is not None:
- if uploaded_file:
- try:
- delimiter = ','
- insurance_claims = pd.read_csv (uploaded_file, sep=delimiter)
- except ValueError:
- delimiter = '|'
- insurance_claims = pd.read_csv (uploaded_file, sep=delimiter, encoding='latin-1')
- else:
- insurance_claims = pd.read_csv(selected_csv)
-
- num_rows = int(insurance_claims.shape[0]*int(num_lines)/100)
- insurance_claims_reduced = insurance_claims.head(num_rows)
- st.write("Rows to be processed: " + str(num_rows))
-
- all_columns = insurance_claims_reduced.columns.tolist()
- selected_columns = st.multiselect("Choose columns", all_columns, default=all_columns)
- insurance_claims_reduced = insurance_claims_reduced[selected_columns].copy()
-
- if st.button("Prediction"):
-
- s = setup(insurance_claims_reduced, session_id = 123, remove_multicollinearity=p_remove_multicollinearity, multicollinearity_threshold=p_multicollinearity_threshold,
- # remove_outliers=p_remove_outliers, outliers_method=p_outliers_method,
- transformation=p_transformation,
- normalize=p_normalize, pca=p_pca, pca_method=p_pca_method)
-
- exp_anomaly = AnomalyExperiment()
- # init setup on exp
- exp_anomaly.setup(insurance_claims_reduced, session_id = 123)
-
- with st.spinner("Analyzing..."):
- # train model
- anomaly_model = create_model(selected_model)
-
- st.header("Assign Model")
- anomaly_model_2 = assign_model(anomaly_model)
- anomaly_model_2
-
- st.header("Anomaly Metrics")
- anomaly_results = pull()
- anomaly_results
-
- if graph_select:
- # plot
- st.header("Anomaly Plots")
- plot_model(anomaly_model, plot = 'tsne', display_format = 'streamlit')
- plot_model(anomaly_model, plot = 'umap', display_format = 'streamlit')
-
- if feat_imp_select:
- # Create a Classification Model to extract feature importance
- st.header("Feature Importance")
- from pycaret.classification import setup, create_model, get_config
- s = setup(anomaly_model_2, target = 'Anomaly')
- lr = create_model('lr')
- # this is how you can recreate the table
- feat_imp = pd.DataFrame({'Feature': get_config('X_train').columns, 'Value' : abs(lr.coef_[0])}).sort_values(by='Value', ascending=False)
- # sort by feature importance value and filter top 10
- feat_imp = feat_imp.sort_values(by='Value', ascending=False).head(10)
- # Display the filtered table in Streamlit
- # st.dataframe(feat_imp)
- # Display the filtered table as a bar chart in Streamlit
- st.bar_chart(feat_imp.set_index('Feature'))
-try:
- main()
-except Exception as e:
- st.sidebar.error(f"An error occurred: {e}")
\ No newline at end of file
diff --git a/spaces/enoreyes/rembg_remove_bg/git.sh b/spaces/enoreyes/rembg_remove_bg/git.sh
deleted file mode 100644
index 0aabfded99e3ead2b6b903be6f4514d94213e7c5..0000000000000000000000000000000000000000
--- a/spaces/enoreyes/rembg_remove_bg/git.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-git add .
-git commit -m "1.0"
-git push
\ No newline at end of file
diff --git a/spaces/eson/tokenizer-arena/utils/digit_util.py b/spaces/eson/tokenizer-arena/utils/digit_util.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ethanrom/pcb_det/app.py b/spaces/ethanrom/pcb_det/app.py
deleted file mode 100644
index f0ce64e8789a9e0ab48b4390d50fee1c46b475ad..0000000000000000000000000000000000000000
--- a/spaces/ethanrom/pcb_det/app.py
+++ /dev/null
@@ -1,102 +0,0 @@
-import gradio as gr
-import cv2
-import requests
-import os
-
-from ultralytics import YOLO
-
-file_urls = [
- #'https://www.dropbox.com/s/b5gqwe97xo9adw/spur.jpg?dl=1',
-]
-
-def download_file(url, save_name):
- url = url
- if not os.path.exists(save_name):
- file = requests.get(url)
- open(save_name, 'wb').write(file.content)
-
-for i, url in enumerate(file_urls):
- if 'mp4' in file_urls[i]:
- download_file(
- file_urls[i],
- f"video.mp4"
- )
- else:
- download_file(
- file_urls[i],
- f"image_{i}.jpg"
- )
-
-model = YOLO('yolov8_pcb.pt')
-path = [['spur.jpg'], ['mouse.jpg']]
-#video_path = [['video.mp4']]
-
-def show_preds_image(image_path):
- image = cv2.imread(image_path)
- outputs = model.predict(source=image_path)
- results = outputs[0].cpu().numpy()
- for i, det in enumerate(results.boxes.xyxy):
- cv2.rectangle(
- image,
- (int(det[0]), int(det[1])),
- (int(det[2]), int(det[3])),
- color=(0, 0, 255),
- thickness=2,
- lineType=cv2.LINE_AA
- )
- return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
-
-inputs_image = [
- gr.components.Image(type="filepath", label="Input Image"),
-]
-outputs_image = [
- gr.components.Image(type="numpy", label="Output Image"),
-]
-interface_image = gr.Interface(
- fn=show_preds_image,
- inputs=inputs_image,
- outputs=outputs_image,
- title="PCB Defect Detector",
- examples=path,
- cache_examples=False,
-)
-
-def show_preds_video(video_path):
- cap = cv2.VideoCapture(video_path)
- while(cap.isOpened()):
- ret, frame = cap.read()
- if ret:
- frame_copy = frame.copy()
- outputs = model.predict(source=frame)
- results = outputs[0].cpu().numpy()
- for i, det in enumerate(results.boxes.xyxy):
- cv2.rectangle(
- frame_copy,
- (int(det[0]), int(det[1])),
- (int(det[2]), int(det[3])),
- color=(0, 0, 255),
- thickness=2,
- lineType=cv2.LINE_AA
- )
- yield cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB)
-
-inputs_video = [
- gr.components.Video(type="filepath", label="Input Video"),
-
-]
-outputs_video = [
- gr.components.Image(type="numpy", label="Output Image"),
-]
-interface_video = gr.Interface(
- fn=show_preds_video,
- inputs=inputs_video,
- outputs=outputs_video,
- title="PCB Defect Detector",
- #examples=video_path,
- cache_examples=False,
-)
-
-gr.TabbedInterface(
- [interface_image],
- tab_names=['Image inference']
-).queue().launch()
\ No newline at end of file
diff --git a/spaces/exaggerated/PaddleOCR/app.py b/spaces/exaggerated/PaddleOCR/app.py
deleted file mode 100644
index 5cc41325804717cc14083a75bf4c763b01f56c9b..0000000000000000000000000000000000000000
--- a/spaces/exaggerated/PaddleOCR/app.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import gradio as gr
-
-from pp_ocr import inference_img, inference_json
-
-title = "基于PP-OCRv3文本识别"
-description = """
- PaddleOCR是百度开源的超轻量级OCR模型库,提供了数十种文本检测、识别模型,旨在打造一套丰富、领先、实用的文字检测、识别模型/工具库。
- > 项目地址:PaddleOCR github 地址: https://github.com/PaddlePaddle/PaddleOCR
-"""
-
-with gr.Blocks() as app:
- gr.Markdown("
"
- + title
- + "
")
- gr.Markdown(description)
- with gr.Tab("图片"):
- with gr.Row():
- with gr.Column():
- img_input = gr.Image()
- img_btn = gr.Button("识别")
- with gr.Column():
- img_output = gr.Image(label="Result")
- with gr.Tab("JSON"):
- with gr.Row():
- with gr.Column():
- json_input = gr.Image()
- json_btn = gr.Button("识别")
- with gr.Column():
- json_output = gr.Json(label="Result")
-
- img_btn.click(inference_img, inputs=img_input, outputs=img_output)
- json_btn.click(inference_json, inputs=json_input, outputs=json_output)
-
-app.launch()
\ No newline at end of file
diff --git a/spaces/facebook/MusicGen/audiocraft/adversarial/discriminators/__init__.py b/spaces/facebook/MusicGen/audiocraft/adversarial/discriminators/__init__.py
deleted file mode 100644
index f9e5ff59950ee0b1d1a67c9b3831d67d08048148..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/audiocraft/adversarial/discriminators/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .mpd import MultiPeriodDiscriminator
-from .msd import MultiScaleDiscriminator
-from .msstftd import MultiScaleSTFTDiscriminator
diff --git a/spaces/facebook/MusicGen/audiocraft/models/unet.py b/spaces/facebook/MusicGen/audiocraft/models/unet.py
deleted file mode 100644
index db4a6df8e309c21fede37abdbe3c862932027641..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/audiocraft/models/unet.py
+++ /dev/null
@@ -1,214 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Pytorch Unet Module used for diffusion.
-"""
-
-from dataclasses import dataclass
-import typing as tp
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from audiocraft.modules.transformer import StreamingTransformer, create_sin_embedding
-
-
-@dataclass
-class Output:
- sample: torch.Tensor
-
-
-def get_model(cfg, channels: int, side: int, num_steps: int):
- if cfg.model == 'unet':
- return DiffusionUnet(
- chin=channels, num_steps=num_steps, **cfg.diffusion_unet)
- else:
- raise RuntimeError('Not Implemented')
-
-
-class ResBlock(nn.Module):
- def __init__(self, channels: int, kernel: int = 3, norm_groups: int = 4,
- dilation: int = 1, activation: tp.Type[nn.Module] = nn.ReLU,
- dropout: float = 0.):
- super().__init__()
- stride = 1
- padding = dilation * (kernel - stride) // 2
- Conv = nn.Conv1d
- Drop = nn.Dropout1d
- self.norm1 = nn.GroupNorm(norm_groups, channels)
- self.conv1 = Conv(channels, channels, kernel, 1, padding, dilation=dilation)
- self.activation1 = activation()
- self.dropout1 = Drop(dropout)
-
- self.norm2 = nn.GroupNorm(norm_groups, channels)
- self.conv2 = Conv(channels, channels, kernel, 1, padding, dilation=dilation)
- self.activation2 = activation()
- self.dropout2 = Drop(dropout)
-
- def forward(self, x):
- h = self.dropout1(self.conv1(self.activation1(self.norm1(x))))
- h = self.dropout2(self.conv2(self.activation2(self.norm2(h))))
- return x + h
-
-
-class DecoderLayer(nn.Module):
- def __init__(self, chin: int, chout: int, kernel: int = 4, stride: int = 2,
- norm_groups: int = 4, res_blocks: int = 1, activation: tp.Type[nn.Module] = nn.ReLU,
- dropout: float = 0.):
- super().__init__()
- padding = (kernel - stride) // 2
- self.res_blocks = nn.Sequential(
- *[ResBlock(chin, norm_groups=norm_groups, dilation=2**idx, dropout=dropout)
- for idx in range(res_blocks)])
- self.norm = nn.GroupNorm(norm_groups, chin)
- ConvTr = nn.ConvTranspose1d
- self.convtr = ConvTr(chin, chout, kernel, stride, padding, bias=False)
- self.activation = activation()
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.res_blocks(x)
- x = self.norm(x)
- x = self.activation(x)
- x = self.convtr(x)
- return x
-
-
-class EncoderLayer(nn.Module):
- def __init__(self, chin: int, chout: int, kernel: int = 4, stride: int = 2,
- norm_groups: int = 4, res_blocks: int = 1, activation: tp.Type[nn.Module] = nn.ReLU,
- dropout: float = 0.):
- super().__init__()
- padding = (kernel - stride) // 2
- Conv = nn.Conv1d
- self.conv = Conv(chin, chout, kernel, stride, padding, bias=False)
- self.norm = nn.GroupNorm(norm_groups, chout)
- self.activation = activation()
- self.res_blocks = nn.Sequential(
- *[ResBlock(chout, norm_groups=norm_groups, dilation=2**idx, dropout=dropout)
- for idx in range(res_blocks)])
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- B, C, T = x.shape
- stride, = self.conv.stride
- pad = (stride - (T % stride)) % stride
- x = F.pad(x, (0, pad))
-
- x = self.conv(x)
- x = self.norm(x)
- x = self.activation(x)
- x = self.res_blocks(x)
- return x
-
-
-class BLSTM(nn.Module):
- """BiLSTM with same hidden units as input dim.
- """
- def __init__(self, dim, layers=2):
- super().__init__()
- self.lstm = nn.LSTM(bidirectional=True, num_layers=layers, hidden_size=dim, input_size=dim)
- self.linear = nn.Linear(2 * dim, dim)
-
- def forward(self, x):
- x = x.permute(2, 0, 1)
- x = self.lstm(x)[0]
- x = self.linear(x)
- x = x.permute(1, 2, 0)
- return x
-
-
-class DiffusionUnet(nn.Module):
- def __init__(self, chin: int = 3, hidden: int = 24, depth: int = 3, growth: float = 2.,
- max_channels: int = 10_000, num_steps: int = 1000, emb_all_layers=False, cross_attention: bool = False,
- bilstm: bool = False, transformer: bool = False,
- codec_dim: tp.Optional[int] = None, **kwargs):
- super().__init__()
- self.encoders = nn.ModuleList()
- self.decoders = nn.ModuleList()
- self.embeddings: tp.Optional[nn.ModuleList] = None
- self.embedding = nn.Embedding(num_steps, hidden)
- if emb_all_layers:
- self.embeddings = nn.ModuleList()
- self.condition_embedding: tp.Optional[nn.Module] = None
- for d in range(depth):
- encoder = EncoderLayer(chin, hidden, **kwargs)
- decoder = DecoderLayer(hidden, chin, **kwargs)
- self.encoders.append(encoder)
- self.decoders.insert(0, decoder)
- if emb_all_layers and d > 0:
- assert self.embeddings is not None
- self.embeddings.append(nn.Embedding(num_steps, hidden))
- chin = hidden
- hidden = min(int(chin * growth), max_channels)
- self.bilstm: tp.Optional[nn.Module]
- if bilstm:
- self.bilstm = BLSTM(chin)
- else:
- self.bilstm = None
- self.use_transformer = transformer
- self.cross_attention = False
- if transformer:
- self.cross_attention = cross_attention
- self.transformer = StreamingTransformer(chin, 8, 6, bias_ff=False, bias_attn=False,
- cross_attention=cross_attention)
-
- self.use_codec = False
- if codec_dim is not None:
- self.conv_codec = nn.Conv1d(codec_dim, chin, 1)
- self.use_codec = True
-
- def forward(self, x: torch.Tensor, step: tp.Union[int, torch.Tensor], condition: tp.Optional[torch.Tensor] = None):
- skips = []
- bs = x.size(0)
- z = x
- view_args = [1]
- if type(step) is torch.Tensor:
- step_tensor = step
- else:
- step_tensor = torch.tensor([step], device=x.device, dtype=torch.long).expand(bs)
-
- for idx, encoder in enumerate(self.encoders):
- z = encoder(z)
- if idx == 0:
- z = z + self.embedding(step_tensor).view(bs, -1, *view_args).expand_as(z)
- elif self.embeddings is not None:
- z = z + self.embeddings[idx - 1](step_tensor).view(bs, -1, *view_args).expand_as(z)
-
- skips.append(z)
-
- if self.use_codec: # insert condition in the bottleneck
- assert condition is not None, "Model defined for conditionnal generation"
- condition_emb = self.conv_codec(condition) # reshape to the bottleneck dim
- assert condition_emb.size(-1) <= 2 * z.size(-1), \
- f"You are downsampling the conditionning with factor >=2 : {condition_emb.size(-1)=} and {z.size(-1)=}"
- if not self.cross_attention:
-
- condition_emb = torch.nn.functional.interpolate(condition_emb, z.size(-1))
- assert z.size() == condition_emb.size()
- z += condition_emb
- cross_attention_src = None
- else:
- cross_attention_src = condition_emb.permute(0, 2, 1) # B, T, C
- B, T, C = cross_attention_src.shape
- positions = torch.arange(T, device=x.device).view(1, -1, 1)
- pos_emb = create_sin_embedding(positions, C, max_period=10_000, dtype=cross_attention_src.dtype)
- cross_attention_src = cross_attention_src + pos_emb
- if self.use_transformer:
- z = self.transformer(z.permute(0, 2, 1), cross_attention_src=cross_attention_src).permute(0, 2, 1)
- else:
- if self.bilstm is None:
- z = torch.zeros_like(z)
- else:
- z = self.bilstm(z)
-
- for decoder in self.decoders:
- s = skips.pop(-1)
- z = z[:, :, :s.shape[2]]
- z = z + s
- z = decoder(z)
-
- z = z[:, :, :x.shape[2]]
- return Output(z)
diff --git a/spaces/falterWliame/Face_Mask_Detection/Heroes 3 Of Might And Magic Download Free Full Version BEST.md b/spaces/falterWliame/Face_Mask_Detection/Heroes 3 Of Might And Magic Download Free Full Version BEST.md
deleted file mode 100644
index e953ea7f9263349ec869cb63d4d261bce95f4e2a..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Heroes 3 Of Might And Magic Download Free Full Version BEST.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
How to Download Heroes of Might and Magic 3: Complete for Free
-
Heroes of Might and Magic 3: Complete is one of the most popular and beloved turn-based strategy games of all time. It features an epic story, a rich fantasy world, hundreds of heroes and creatures, and eight different factions to choose from. If you are a fan of the Might and Magic series, or if you are looking for a classic game that will keep you entertained for hours, you might want to download Heroes of Might and Magic 3: Complete for free.
-
But how can you do that? Is it legal? Is it safe? In this article, we will answer these questions and show you how to download Heroes of Might and Magic 3: Complete for free in a few easy steps.
-
Heroes 3 Of Might And Magic Download Free Full Version
Heroes of Might and Magic 3: Complete is the ultimate edition of Heroes of Might and Magic 3: The Restoration of Erathia, the third installment in the Heroes of Might and Magic series. It includes the original game and its two official expansions: Armageddon's Blade and The Shadow of Death. It also includes a map editor, a random map generator, and hundreds of fan-made maps and scenarios.
-
Heroes of Might and Magic 3: Complete was released in 2000 by 3DO and New World Computing. It was praised by critics and players alike for its gameplay, graphics, music, and replay value. It is widely considered as one of the best games ever made in the genre.
-
-
Why download Heroes of Might and Magic 3: Complete for free?
-
There are several reasons why you might want to download Heroes of Might and Magic 3: Complete for free. Here are some of them:
-
-
You want to relive your childhood memories or experience a classic game for the first time.
-
You want to play with your friends online or offline using LAN or hotseat mode.
-
You want to enjoy the game on your modern PC without compatibility issues or bugs.
-
You want to save money and avoid paying for an old game that might not be available in your region or platform.
-
-
-
How to download Heroes of Might and Magic 3: Complete for free?
-
There are several ways to download Heroes of Might and Magic 3: Complete for free. However, not all of them are legal or safe. Some websites might offer you pirated copies of the game that contain viruses, malware, or spyware. Some websites might ask you to fill out surveys, sign up for subscriptions, or enter your personal information. Some websites might not even give you the game at all.
-
To avoid these risks, we recommend you to use one of these two methods:
-
-
Method 1: Download from GOG Unlocked
-
GOG Unlocked is a website that offers free downloads of DRM-free games from GOG.com, a digital distribution platform that sells classic and indie games. GOG Unlocked does not host any files on its own servers, but rather provides links to third-party file hosting services such as UploadHaven. GOG Unlocked claims that it does not violate any copyrights or trademarks, as it only provides links to files that are already available on the internet.
-
To download Heroes of Might and Magic 3: Complete from GOG Unlocked, follow these steps:
Click on the blue "Download" button below the game description.
-
You will be redirected to UploadHaven. Wait for 5 seconds and click on the blue "Download Now" button.
-
We recommend using a download manager for faster download speeds. You can use FDM which is free here , or any other download manager.
-
Once the game is finished downloading, right click on the . d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Holdem Manager 2 Serial Generator ((LINK)).md b/spaces/falterWliame/Face_Mask_Detection/Holdem Manager 2 Serial Generator ((LINK)).md
deleted file mode 100644
index ec5449920f59862b077219929fead56f687e1c5c..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Holdem Manager 2 Serial Generator ((LINK)).md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-
-So how do I get the license key again? I'm not able to create a new online account because they ask me for the serial number again.4K host device
-
-The 4K host device is a term used by Microsoft to describe the following combination of components:
-
- 4K display
-
- Dual monitor set
-
- Microsoft Surface Pro, Surface Book, or one of the following versions of the Surface Pro or Surface Book with Windows 10 Pro, Windows 10 Enterprise, or Windows 10 Education:
-
- Microsoft Surface Pro 4, Microsoft Surface Book, or Surface Book 2
-
- Microsoft Surface Pro 3, Microsoft Surface Book 1, or Surface Pro 2
-
-4K host device is a combination of these devices which can produce 4K resolution on a larger format screen than is possible with single monitor set.
-
-See also
-
-The Microsoft Surface Book 2 has a 30-hour battery life
-
-Microsoft Surface Pro 4
-
-Microsoft Surface Book
-
-Microsoft Surface Book 2
-
-References
-
-Category:Microsoft Surface your wifes dress and I saw her in a couple of different outfits last night and I have seen her in all of them. I know that it was a good outfit because I bought it for her. It seems like she really enjoys the dresses and shoes that I buy for her. I like to dress her up sometimes when we are going out because she likes the attention, but when we are at home, I try to keep her natural.
-
-I do see you this morning with a very sexy outfit that I would love to steal!
-
-Thanks for sharing your wife with all of us. I think it's great that you have someone who cares about your feelings. I think it is really difficult to be around someone and not show your feelings to them.
-
-Your email address will not be published. Required fields are marked *
-
-Comment
-
-Name *
-
-Email *
-
-Website
-
-Let me help you!
-
-Hi, I'm Amber! I am a speaker, lifestyle coach and the owner of Life is Good, a blog that helps others create a great life by celebrating their successes and learning from their failures. We are on a mission to help others learn how to start living the life they deserve. Read More…The invention relates to a method for the fabrication of a fiber/bulk composite material with a supporting structure, preferably a preform or a preform material, and a fiber component. Furthermore, the invention relates to a fiber/bulk composite material with a supporting structure, 4fefd39f24
-
-
-
diff --git a/spaces/fatiXbelha/sd/3D Driving Games Learn to Drive Race and Have Fun in Stunning 3D Graphics.md b/spaces/fatiXbelha/sd/3D Driving Games Learn to Drive Race and Have Fun in Stunning 3D Graphics.md
deleted file mode 100644
index 5389a9e693e163370c20002cc9db0c9ae47095be..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/3D Driving Games Learn to Drive Race and Have Fun in Stunning 3D Graphics.md
+++ /dev/null
@@ -1,94 +0,0 @@
-
- - Benefits of 3D driving games: Explain how they can improve your driving skills, provide entertainment, and offer realistic graphics and physics. - Challenges of 3D driving games: Mention some of the difficulties and drawbacks of playing 3D driving games, such as system requirements, controls, and safety issues. | | H2: How to play 3D driving games online for free? | - Online platforms: Describe some of the websites and apps that offer free 3D driving games, such as CrazyGames, and their features and advantages. - Game selection: Give some tips on how to choose the best 3D driving game for your preferences, such as genre, difficulty, and rating. - Game controls: Explain the basic controls and commands for playing 3D driving games online, such as keyboard, mouse, or touch screen. | | H2: How to improve your 3D driving game skills and performance? | - Practice: Emphasize the importance of practicing regularly and consistently to master the game mechanics and techniques. - Tips and tricks: Share some useful advice and strategies for improving your 3D driving game skills, such as drifting, braking, and overtaking. - Upgrades and customization: Discuss how you can enhance your 3D driving game experience by upgrading and customizing your vehicles, such as speed, handling, and appearance. | | H2: How to enjoy 3D driving games with others? | - Multiplayer mode: Explain how you can play 3D driving games with other players online or locally, and the benefits of doing so, such as competition, cooperation, and socialization. - Leaderboards and achievements: Describe how you can track your progress and compare your performance with other players using leaderboards and achievements, and how they can motivate you to improve your skills. - Reviews and feedback: Suggest how you can share your opinions and experiences with other 3D driving game enthusiasts using reviews and feedback, and how they can help you discover new games and tips. | | H2: Conclusion | - Summary: Summarize the main points of the article and restate the thesis statement. - Call to action: Encourage the reader to try out some of the 3D driving games mentioned in the article or explore other options online. - FAQs: Provide answers to some of the frequently asked questions about 3D driving games. | Table 2: Article with HTML formatting
What are 3D driving games and why are they so popular?
-
If you love cars and speed, you might be interested in playing 3D driving games online. These are games that simulate driving a vehicle in a three-dimensional environment, using realistic graphics and physics. You can choose from different types of vehicles, such as cars, trucks, bikes, or buses, and different genres of games, such as racing, parking, drifting, or stunt.
-
One of the main benefits of playing 3D driving games is that they can help you improve your driving skills in a fun and safe way. You can learn how to maneuver your vehicle in various situations, such as traffic, obstacles, or curves, without risking any damage or injury. You can also test your reflexes, coordination, and concentration while enjoying the thrill of speed.
Another benefit of playing 3D driving games is that they can provide you with hours of entertainment and excitement. You can explore different maps and scenarios, such as cities, deserts, mountains, or islands, and discover new challenges and surprises along the way. You can also customize your vehicle according to your preferences, such as color, model, or accessories.
-
However, playing 3D driving games also comes with some challenges and drawbacks. For instance, you might need a powerful computer or device to run the games smoothly and without lagging. You might also need to adjust to the controls and commands of the game, which might differ from the real-life ones. Moreover, you should always remember that playing 3D driving games is not a substitute for real-life driving experience.
-
How to play 3D driving games online for free?
-
If you want to play 3D driving games online for free, you have plenty of options to choose from. One of them is CrazyGames, a website that offers hundreds of 3D driving games that you can play on your browser without downloading or installing anything. Some of the popular titles on CrazyGames are Madalin Stunt Cars 2, City Car Driving Simulator, and Burnout Drift. You can also find games for different genres and difficulty levels, such as Moto X3M, Truck Driver Simulator, and Parking Fury.
-
To play 3D driving games online for free, you need to select the game that suits your preferences and interests. You can browse the categories and genres on the website, or use the search function to find a specific game. You can also check the ratings and reviews of other players to get an idea of the quality and popularity of the game.
-
Once you have chosen the game, you need to learn the basic controls and commands for playing it. Most of the 3D driving games online use the keyboard and mouse as the main input devices. For example, you can use the arrow keys or WASD keys to steer your vehicle, the space bar to brake, and the mouse to change the camera angle. Some games might also have additional controls, such as shift to drift, C to change view, or R to reset.
-
How to improve your 3D driving game skills and performance?
-
Playing 3D driving games online for free is not only fun, but also challenging. You might encounter different obstacles and opponents that will test your skills and performance. To improve your 3D driving game skills and performance, you need to practice regularly and consistently. The more you play, the more familiar you will become with the game mechanics and techniques.
-
3d car simulator online free
-3d police driving games
-3d rally racing game
-3d driving simulator on google maps
-3d truck driving games
-3d bus driving games
-3d parking games
-3d motorcycle driving games
-3d taxi driving games
-3d offroad driving games
-3d city driving games
-3d drifting games
-3d drag racing games
-3d stunt driving games
-3d monster truck games
-3d kart racing games
-3d bike simulator games
-3d airplane driving games
-3d boat driving games
-3d train driving games
-3d tractor driving games
-3d ambulance driving games
-3d fire truck games
-3d garbage truck games
-3d school bus games
-3d limo driving games
-3d delivery truck games
-3d tow truck games
-3d snow plow games
-3d ice cream truck games
-3d pizza delivery games
-3d tank driving games
-3d army truck games
-3d jeep driving games
-3d quad bike games
-3d dirt bike games
-3d bmx bike games
-3d mountain bike games
-3d road bike games
-3d helicopter flying games
-3d plane simulator games
-3d jet ski games
-3d submarine simulator games
-3d spaceship simulator games
-3d roller coaster simulator games
-3d horse riding games
-3d golf cart driving games
-3d forklift simulator games
-3d crane simulator games
-
Another way to improve your 3D driving game skills and performance is to follow some tips and tricks from experts and experienced players. Here are some of them:
-
-
Drifting: Drifting is a technique that involves sliding your vehicle sideways around a corner or curve. It can help you maintain your speed and momentum, as well as avoid collisions and oversteering. To drift, you need to apply the brake or handbrake while turning your vehicle in the opposite direction of the curve.
-
Braking: Braking is a technique that involves slowing down or stopping your vehicle before or during a turn or obstacle. It can help you control your vehicle and prevent skidding or crashing. To brake, you need to press the space bar or another key assigned for braking. You can also use the handbrake for sharper turns or emergency stops.
-
Overtaking: Overtaking is a technique that involves passing another vehicle or opponent in front of you. It can help you gain an advantage or win a race. To overtake, you need to find an opening or gap in the traffic or track, and accelerate your vehicle while steering towards it. You can also use the slipstream or draft of another vehicle to boost your speed.
-
-
A third way to improve your 3D driving game skills and performance is to upgrade and customize your vehicle according to your needs and preferences. You can enhance your 3D driving game experience by improving your vehicle's speed, handling, and appearance. To upgrade and customize your vehicle, you need to earn coins or credits by playing the game or completing missions. You can then use them to buy new parts or accessories for your vehicle.
-
How to enjoy 3D driving games with others?
-
Playing 3D driving games online for free is not only a solo activity, but also a social one. You can enjoy 3D driving games with others in various ways, such as:
-
-
Multiplayer mode: Multiplayer mode is a feature that allows you to play 3D driving games with other players online or locally. You can join or create a room or server, and invite or challenge other players to join you. You can also chat with them using text or voice messages. Playing 3D driving games with others can enhance your enjoyment by adding competition, cooperation, and socialization.
-
Leaderboards and achievements: Leaderboards and achievements are features that allow you to track your progress and compare your performance with other players online. You can see your rank and score on the leaderboards, and unlock achievements by completing certain tasks or goals in the game. Leaderboards and achievements can motivate you to improve your skills and performance by giving you feedback and recognition.
-
Reviews and feedback: Reviews and feedback are features that allow you to share your opinions and experiences with other 3D driving game enthusiasts online. You can write reviews and rate the games that you have played on the website or app, and read what other players have written about them. You can also give feedback and suggestions to the developers or publishers of the games, and report any bugs or issues that you have encountered. Reviews and feedback can help you discover new games and tips by giving you information and recommendations
-
Conclusion
-
3D driving games are games that simulate driving a vehicle in a three-dimensional environment, using realistic graphics and physics. They are popular because they can improve your driving skills, provide entertainment, and offer realistic graphics and physics. You can play 3D driving games online for free on various platforms, such as CrazyGames, and choose from different types of vehicles and genres. You can also improve your 3D driving game skills and performance by practicing, following tips and tricks, and upgrading and customizing your vehicle. Moreover, you can enjoy 3D driving games with others by playing in multiplayer mode, tracking your progress on leaderboards and achievements, and sharing your reviews and feedback.
-
If you are interested in playing 3D driving games online for free, why not give it a try? You might be surprised by how much fun and challenge they can offer. You can start by checking out some of the games mentioned in this article, or explore other options online. You might find your new favorite hobby or passion.
-
FAQs
-
Here are some of the frequently asked questions about 3D driving games:
-
-
What are the best 3D driving games online for free?
-
There is no definitive answer to this question, as different players might have different preferences and tastes. However, some of the most popular and highly rated 3D driving games online for free are Madalin Stunt Cars 2, City Car Driving Simulator, Burnout Drift, Moto X3M, Truck Driver Simulator, and Parking Fury.
-
What are the system requirements for playing 3D driving games online for free?
-
The system requirements for playing 3D driving games online for free might vary depending on the game and the platform. However, in general, you will need a modern browser that supports HTML5 or WebGL technology, such as Chrome, Firefox, or Edge. You will also need a stable internet connection and a decent computer or device that can handle the graphics and physics of the game.
-
How can I play 3D driving games online for free on my mobile device?
-
Some of the platforms that offer 3D driving games online for free also have mobile versions or apps that you can download and install on your smartphone or tablet. For example, CrazyGames has an app for Android devices that you can get from Google Play Store. You can also play some of the 3D driving games online for free on your mobile browser if they are compatible with touch screen controls.
-
Are 3D driving games online for free safe and legal?
-
Yes, playing 3D driving games online for free is safe and legal as long as you play them on reputable and trustworthy platforms, such as CrazyGames. These platforms ensure that the games are virus-free and malware-free, and that they do not violate any copyright or trademark laws. However, you should always be careful about your personal information and privacy when playing online games, and avoid clicking on any suspicious links or ads that might appear on the website or app.
-
Can I play 3D driving games online for free offline?
-
Some of the 3D driving games online for free might have an offline mode or feature that allows you to play them without an internet connection. However, this might limit some of the functions and options of the game, such as multiplayer mode, leaderboards, achievements, or updates. To play 3D driving games online for free offline, you might need to download and install the game on your computer or device, or enable the offline mode on your browser or app.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Blaq Diamonds Italy A Masterpiece of South African Music (Download Mp3).md b/spaces/fatiXbelha/sd/Blaq Diamonds Italy A Masterpiece of South African Music (Download Mp3).md
deleted file mode 100644
index d26b1e5131fc0b79d6ad400dec1aeeeae096271e..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Blaq Diamonds Italy A Masterpiece of South African Music (Download Mp3).md
+++ /dev/null
@@ -1,180 +0,0 @@
-
-
Download Italy Black Diamond MP3: A Guide for Music Lovers
-
If you are looking for a new song to add to your music collection, you might want to check out Italy Black Diamond by Blaq Diamond. This is a catchy and uplifting song that will make you feel good and inspire you. In this article, we will tell you everything you need to know about Italy Black Diamond, including who is Blaq Diamond, what is the song about, how to download it, and how to enjoy it. Let's get started!
Blaq Diamond is a South African singer who specializes in Afro-pop music. He was born in Ladysmith, KwaZulu-Natal, and started singing at a young age. He rose to fame in 2019 with his debut album Inqola, which featured hit songs like Emzini Kababa and Sthandwa. He has since released two more albums, Umuthi and SummerYoMuthi, which have also received positive reviews and awards. Blaq Diamond's music is influenced by his Zulu culture and his personal experiences. He sings in both English and Zulu, and his songs are full of catchy melodies, meaningful lyrics, and positive messages.
-
What is Italy Black Diamond?
-
Italy Black Diamond is one of the songs from Blaq Diamond's latest album SummerYoMuthi. It was released in September 2020 as a single, and it has become one of his most popular songs. The song is about Blaq Diamond's love for Italy, which he describes as his dream destination. He sings about how he wants to travel there with his lover and enjoy the beauty and culture of the country. He also compares his lover to a black diamond, which is a rare and precious gemstone. The song is a celebration of love, life, and dreams.
-
How to download Italy Black Diamond MP3?
-
If you want to listen to Italy Black Diamond on your device, you have several options. You can stream it online, download it from a website, or use a YouTube converter. Here are the pros and cons of each option:
-
Option 1: Stream it online
-
Streaming music online means that you can listen to it without downloading it. You just need an internet connection and a streaming service or app. Some of the most popular streaming services are Spotify, Apple Music, Deezer, Tidal, Amazon Music, YouTube Music, etc. Streaming music has some benefits, such as:
-
-
You can access millions of songs and albums from different genres and artists
-
You can discover new music and get personalized recommendations based on your preferences
-
You can create your own playlists and share them with others
-
You can listen to music offline if you have a premium subscription
-
-
However, streaming music also has some drawbacks, such as:
-
Download italy by blaq diamond mp3 song
-Blaq diamond italy mp3 free download
-Italy blaq diamond mp3 download fakaza
-Download blaq diamond new song italy mp3
-Italy by blaq diamond mp3 download bamoza
-Blaq diamond italy audio mp3 download
-Download italy blaq diamond mp3 music
-Italy blaq diamond mp3 download into naija
-Blaq diamond italy song mp3 download
-Download italy blaq diamond afro pop mp3
-Italy by blaq diamond official mp3 download
-Blaq diamond italy mp3 download 320kbps
-Download italy blaq diamond latest mp3
-Italy blaq diamond mp3 download zamusic
-Blaq diamond italy lyrics mp3 download
-Download italy blaq diamond full mp3
-Italy by blaq diamond video mp3 download
-Blaq diamond italy mp3 download waploaded
-Download italy blaq diamond album mp3
-Italy blaq diamond instrumental mp3 download
-Blaq diamond italy remix mp3 download
-Download italy by blaq diamond ft dj maphorisa mp3
-Italy blaq diamond acapella mp3 download
-Blaq diamond italy radio edit mp3 download
-Download italy blaq diamond original mix mp3
-Italy by blaq diamond extended version mp3 download
-Blaq diamond italy live performance mp3 download
-Download italy blaq diamond unplugged mp3
-Italy blaq diamond cover version mp3 download
-Blaq diamond italy reprise mp3 download
-Download italy by blaq diamond and sjava mp3
-Italy blaq diamond mashup mp3 download
-Blaq diamond italy vip mix mp3 download
-Download italy blaq diamond dubstep mp3
-Italy by blaq diamond karaoke mp3 download
-Blaq diamond italy dance version mp3 download
-Download italy blaq diamond slow jam mp3
-Italy blaq diamond piano version mp3 download
-Blaq diamond italy studio session mp3 download
-Download italy by blaq diamond and emtee mp3
-Italy blaq diamond trap version mp3 download
-Blaq diamond italy behind the scenes mp3 download
-Download italy by blaq diamond and nasty c mp3
-Italy by blaq diamond acapella studio quality 320kbps free zip file direct link no ads no survey no password no virus no malware no spyware no trojan no ransomware no phishing no scam no fraud no spam no junk no popups no redirects no clickbait no fake news no fake reviews no fake downloads no fake links no fake likes no fake comments no fake ratings no fake views no fake subscribers no fake followers no fake shares no fake votes no fake testimonials no fake endorsements no fake recommendations no fake referrals no fake feedbacks no fake compliments no fake praises no fake compliments
-
-
You need a stable internet connection to stream music without interruptions or buffering
-
You may have to pay a monthly fee to access some streaming services or features
-
You may not be able to find some songs or artists on some streaming platforms
-
You may have to deal with ads or limitations if you use a free version of a streaming service
-
-
Option 2: Download it from a website
-
Downloading music from a website means that you can save it on your device and listen to it anytime, anywhere. You just need to find a website that offers Italy Black Diamond MP3 and download it. Some of the best websites to download Italy Black Diamond MP3 are:
-
-
Website
Price
Quality
Features
-
[Fakaza]
Free
320 kbps
Offers a large collection of South African music, including Blaq Diamond's songs. Easy to use and fast to download.
-
[Mp3juices]
Free
Variable
Allows you to search and download Italy Black Diamond MP3 from various sources. Also lets you cut and edit the song as you wish.
-
[iTunes]
$1.29
256 kbps
Provides high-quality downloads of Italy Black Diamond MP3 and other songs from Blaq Diamond's albums. Also syncs with your Apple devices and iCloud.
-
[Amazon Music]
$1.29
256 kbps
Offers reliable and secure downloads of Italy Black Diamond MP3 and other songs from Blaq Diamond's albums. Also integrates with your Amazon account and devices.
-
-
Downloading music from a website has some benefits, such as:
-
-
You can listen to music offline without an internet connection or a subscription
-
You can transfer music to different devices or platforms as you like
-
You can support the artist by buying their music legally
-
You can choose the quality and format of the music you download
-
-
However, downloading music from a website also has some drawbacks, such as:
-
-
You may have to pay a fee to download some songs or albums
-
You may not be able to find some songs or albums on some websites
-
You may encounter viruses or malware when downloading from untrusted sources
-
You may violate the copyright laws or terms of service of some websites or artists
-
-
Option 3: Use a YouTube converter
-
Using a YouTube converter means that you can convert a YouTube video of Italy Black Diamond into an MP3 file and download it. You just need to find a YouTube video of the song and copy its URL, then paste it into a YouTube converter tool and click convert. Some of the best YouTube converter tools are:
-
YouTube Converter
Price
Quality
Features
-
[YTMP3]
Free
320 kbps
Converts YouTube videos to MP3 or MP4 files in high quality. Supports videos up to one hour long. Simple and fast to use.
-
[4K Video Downloader]
Free/Premium
Variable
Downloads YouTube videos and playlists in various formats and qualities. Supports subtitles, 3D, and 360-degree videos. Offers a smart mode for faster downloads.
-
[ClipGrab]
Free
Variable
Downloads and converts YouTube videos to MP3, MP4, or WMV files. Allows you to choose the quality and format of the output. Also works with other video platforms.
-
[Online Video Converter]
Free
Variable
Converts YouTube videos to MP3, MP4, AVI, MOV, MKV, or FLV files. Supports different quality levels and bitrates. No registration or software installation required.
-
-
Using a YouTube converter has some benefits, such as:
-
-
You can download any YouTube video as an MP3 file for free
-
You can choose the quality and size of the MP3 file you want
-
You can access a wide range of songs and videos on YouTube
-
You can use the YouTube converter tools online without downloading anything
-
-
However, using a YouTube converter also has some drawbacks, such as:
-
-
You may not get the best sound quality or metadata from the converted MP3 file
-
You may have to deal with pop-ups or ads when using some YouTube converter tools
-
You may violate the YouTube terms of service or the artist's rights by downloading their content without permission
-
You may encounter some errors or limitations when converting long or large videos
-
-
Pros and cons of YouTube converters
-
To summarize, here are the pros and cons of using YouTube converters for downloading Italy Black Diamond MP3:
- | Pros | Cons | | --- | --- | | Free and easy to use | May compromise sound quality or metadata | | Flexible and customizable | May expose you to pop-ups or ads | | Wide and diverse selection | May breach YouTube terms of service or artist's rights | | Online and accessible | May face errors or limitations |
How to use a YouTube converter
-
If you decide to use a YouTube converter to download Italy Black Diamond MP3, here are the steps you need to follow:
-
-
Go to YouTube and search for Italy Black Diamond by Blaq Diamond. Choose the video you want to convert and copy its URL.
-
Go to the YouTube converter tool of your choice and paste the URL into the input box. Click convert or download.
-
Select the quality and format of the output file. Click download or save.
-
Wait for the conversion and download process to finish. Enjoy your Italy Black Diamond MP3!
-
-
How to enjoy Italy Black Diamond MP3?
-
Now that you have downloaded Italy Black Diamond MP3 on your device, you might be wondering how to enjoy it. Here are some tips and suggestions on how to make the most of your listening experience:
-
Use headphones or speakers
-
The first thing you need to consider is how you want to listen to Italy Black Diamond MP3. Do you prefer headphones or speakers? Both options have their advantages and disadvantages, depending on your situation and preference. Here are some factors to consider:
- | Headphones | Speakers | | --- | --- | | Provide a more immersive and personal listening experience | Provide a more social and shared listening experience | | Block out external noise and distractions | Allow you to hear your surroundings and interact with others | | Offer better sound quality and bass response | Offer better sound balance and clarity | | Require less space and power | Require more space and power | | May cause discomfort or damage to your ears if used for too long or too loud | May cause annoyance or disturbance to others if used too loud |
Create a playlist
-
The next thing you need to consider is what other songs you want to listen to along with Italy Black Diamond MP3. Do you want to listen to more songs by Blaq Diamond? Do you want to listen to other Afro-pop songs? Do you want to listen to songs that match your mood or activity ? Whatever your choice is, you can create a playlist with Italy Black Diamond and other songs that suit your taste. Creating a playlist has some benefits, such as:
-
-
You can organize your music collection and find your favorite songs easily
-
You can customize your music experience and create different playlists for different occasions, moods, or themes
-
You can discover new music and explore different genres and artists
-
You can share your playlist with others and enjoy music together
-
-
To create a playlist, you can use any of the streaming services or apps mentioned above, or you can use your own device's music player. Here are some steps to follow:
-
-
Choose a name and a cover image for your playlist
-
Add Italy Black Diamond MP3 and other songs that you want to include in your playlist
-
Arrange the order of the songs according to your preference
-
Save your playlist and enjoy it anytime, anywhere
-
-
Share it with others
-
The last thing you need to consider is how to share Italy Black Diamond MP3 with others. Sharing music with others has some benefits, such as:
-
-
You can express yourself and your personality through music
-
You can connect with others and bond over music
-
You can support the artist and spread their music to more people
-
You can have fun and enjoy music together
-
-
To share Italy Black Diamond MP3 with others, you can use any of the following methods:
-
-
Send it as an attachment or a link via email, text, or social media
-
Play it on your device or speaker and let others listen to it
-
Recommend it to others and tell them why you like it
-
Create a QR code or a Spotify code for it and let others scan it
-
-
Conclusion
-
In conclusion, Italy Black Diamond MP3 is a great song that you should listen to if you love Afro-pop music or if you want to try something new. It is a song by Blaq Diamond, a talented South African singer who sings about his love for Italy and his lover. You can download Italy Black Diamond MP3 from various sources, such as streaming services, websites, or YouTube converters. You can also enjoy Italy Black Diamond MP3 by using headphones or speakers, creating a playlist, or sharing it with others. We hope this article has helped you learn more about Italy Black Diamond MP3 and how to download and enjoy it. Happy listening!
-
FAQs
-
Here are some frequently asked questions and their answers about Italy Black Diamond MP3:
-
Q: Where can I watch the official video of Italy Black Diamond?
-
A: You can watch the official video of Italy Black Diamond on YouTube. Here is the link: [https://www.youtube.com/watch?v=9X6xZJ8Y1wQ]
-
Q: What are some other songs by Blaq Diamond that I should listen to?
-
A: Some other songs by Blaq Diamond that you should listen to are SummerYoMuthi, Ibhanoyi, Love Letter, Woza My Love, Messiah, etc.
-
Q: How can I contact Blaq Diamond or follow him on social media?
-
A: You can contact Blaq Diamond or follow him on social media through his official accounts. Here are some of them:
-
-
Instagram: [@blaqdiamond150]
-
Twitter: [@blaqdiamond150]
-
Facebook: [Blaq Diamond]
-
Email: [blaqdiamond150@gmail.com]
-
-
Q: How can I support Blaq Diamond and his music?
-
A: You can support Blaq Diamond and his music by buying his albums or songs from legal sources, streaming his music on licensed platforms, attending his concerts or events, voting for him in awards or competitions, etc.
-
Q: What are some other Afro-pop artists that I should check out?
-
A: Some other Afro-pop artists that you should check out are Wizkid, Burna Boy, Davido, Tiwa Savage, Yemi Alade, Diamond Platnumz, etc.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download FM WhatsApp for Android - Free Secure and Reliable Messaging.md b/spaces/fatiXbelha/sd/Download FM WhatsApp for Android - Free Secure and Reliable Messaging.md
deleted file mode 100644
index c7fe49d1f2424b155d8185e5552f1a28bdd15396..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download FM WhatsApp for Android - Free Secure and Reliable Messaging.md
+++ /dev/null
@@ -1,126 +0,0 @@
-
-
How to Download APK FM WhatsApp: A Complete Guide
-
WhatsApp is one of the most popular messaging apps in the world, but it may not have all the features and customization options that you want. If you are looking for a way to enhance your WhatsApp experience, you may want to try FM WhatsApp, a modified version of the official app that offers many additional features and benefits.
-
However, FM WhatsApp is not available on the Google Play Store, so you will need to download and install it manually from an APK file. In this article, we will show you how to download APK FM WhatsApp from a reliable source, how to install and use it on your Android device, and what are the risks and advantages of using modded apps.
FM WhatsApp is a modified version of the original WhatsApp app that has been developed by Foud Apps. It is designed to provide users with more features and customization options that are not available on the official version of WhatsApp. Some of the features of FM WhatsApp include:
-
Features of FM WhatsApp
-
-
Calls/filter blocker: You can block unwanted or unknown numbers from calling you or sending you messages.
-
Media sharing: You can send up to 60 images at once and files up to 700MB in size. You can also send high-resolution images without losing quality.
-
Themes store: You can choose from thousands of themes to change the look and feel of your app. You can also create your own themes and share them with others.
-
Anti-delete messages: You can see the messages that have been deleted by the sender. You can also prevent others from deleting your messages.
-
New emojis: You can access and use emojis from different Android ecosystems, such as Android Oreo, Facebook, etc.
-
App launcher and notification icons: You can customize the app icon and notification icon according to your preference.
-
Colors and customization: You can change the colors and fonts of various elements of the app, such as chat bubbles, ticks, status bar, etc.
-
Full resolution image: You can set any image as your wallpaper without cropping or resizing it.
-
-
These are just some of the features of FM WhatsApp that make it a better alternative to the official WhatsApp app. However, before you decide to download and use FM WhatsApp, you should also be aware of the risks involved in using modded apps.
-
Risks of Using Modded Apps
-
Modded apps are apps that have been modified by third-party developers to add new features or functionalities that are not present in the original version. While modded apps may seem attractive and useful, they also pose some potential security risks that you should consider before using them. Some of the risks of using modded apps are:
-
-
Lost revenue: By using modded apps, you are depriving the original developers of their rightful revenue. This may affect their ability to maintain and update their apps, as well as create new ones.
-
Customer security is jeopardized: Not all modded apps are safe and trustworthy. Some may contain harmful code that can infect your device with viruses, malware, spyware, or ransomware. These malicious programs can steal your personal data, damage your device, or even take control of it.
-
Infected devices: Modded apps can also spread infections to other devices through various means, such as sharing files, contacts, or messages. This can put your friends, family, or colleagues at risk as well.
-
Personal information gathering Personal information gathering: Modded apps may also collect your personal information, such as your name, phone number, email address, location, contacts, messages, photos, etc. without your consent or knowledge. This information can be used for various purposes, such as advertising, marketing, or even identity theft.
-
Legal issues: Modded apps may also violate the terms and conditions of the original app or the platform that hosts it. This can result in legal actions against you, such as fines, lawsuits, or bans.
-
-
Therefore, you should be careful and cautious when using modded apps, such as FM WhatsApp. You should only download them from trusted sources, scan them for viruses before installing them, and backup your data regularly. You should also respect the rights and privacy of the original developers and other users.
-
How to Download APK Files from Google Play Store
-
If you want to download FM WhatsApp, you will need to get its APK file first. APK stands for Android Package Kit, and it is the file format that Android uses to distribute and install apps. APK files contain all the necessary components of an app, such as code, resources, assets, etc.
-
How to download FM WhatsApp latest version 2023
-FM WhatsApp APK download for Android devices
-FM WhatsApp features and benefits
-FM WhatsApp vs WhatsApp Messenger comparison
-FM WhatsApp anti-ban and anti-delete mod
-Download FM WhatsApp by Fouad Mokdad
-FM WhatsApp customization and themes
-FM WhatsApp free download link
-FM WhatsApp installation guide and tips
-FM WhatsApp update and new version
-FM WhatsApp review and rating
-FM WhatsApp alternatives and similar apps
-FM WhatsApp problems and solutions
-FM WhatsApp FAQ and support
-FM WhatsApp backup and restore data
-Download FM WhatsApp for PC and Mac
-FM WhatsApp video call and voice call
-FM WhatsApp privacy and security settings
-FM WhatsApp group chat and broadcast
-FM WhatsApp status and stories
-Download FM WhatsApp with emoji changer
-FM WhatsApp media sharing and file size limit
-FM WhatsApp online and offline status
-FM WhatsApp notifications and sounds
-FM WhatsApp chat lock and app lock
-Download FM WhatsApp with stickers and GIFs
-FM WhatsApp dark mode and light mode
-FM WhatsApp web and desktop version
-FM WhatsApp contacts and messages management
-FM WhatsApp hidden features and tricks
-Download FM WhatsApp with DND mode
-FM WhatsApp message scheduler and auto-reply
-FM WhatsApp pin chat and archive chat
-FM WhatsApp delete message for everyone option
-FM WhatsApp language and font options
-Download FM WhatsApp with dual account feature
-FM WhatsApp always online and freeze last seen feature
-FM WhatsApp hide blue tick and typing status feature
-FM WhatsApp increase quality of images feature
-FM WhatsApp send more than 90 images at once feature
-
There are two ways to download APK files from the Google Play Store: using a web tool or using an APK extractor app. We will explain both methods below.
-
Using a Web Tool
-
One of the easiest ways to download APK files from the Google Play Store is to use a web tool that can generate download links for any app on the store. There are many such tools available online, but one of the most popular ones is APKPure.com. Here are the steps to use it:
Search for the app that you want to download in the search bar. For example, type "FM WhatsApp" and hit enter.
-
Select the app from the search results and click on the "Download APK" button.
-
Wait for the download to finish and save the APK file on your device.
-
-
Using an APK Extractor App
-
Another way to download APK files from the Google Play Store is to use an APK extractor app that can extract the APK file of any installed app on your device. There are many such apps available on the store, but one of the most popular ones is APK Extractor by Meher. Here are the steps to use it:
-
-
Download and install APK Extractor from the Google Play Store on your device.
-
Open the app and grant it the necessary permissions.
-
Select the app that you want to extract from the list of installed apps. For example, tap on "WhatsApp".
-
Tap on the "Share" icon at the top right corner and choose a method to share or save the APK file. For example, you can send it to yourself via email or save it on Google Drive.
-
-
How to Install and Use FM WhatsApp on Your Android Device
-
Once you have downloaded the APK file of FM WhatsApp, you can install and use it on your Android device. However, before you do that, you need to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. Here are the steps to enable unknown sources:
-
Enabling Unknown Sources
-
-
Go to your device's settings and tap on "Security".
-
Find and toggle on "Unknown sources". You may see a warning message that says installing apps from unknown sources can harm your device. Tap on "OK" to proceed.
-
You have now enabled unknown sources on your device.
-
-
Installing the APK File
-
-
Locate the APK file of FM WhatsApp that you have downloaded or extracted on your device. You can use a file manager app to do this.
-
Tap on the APK file and follow the instructions on the screen to install it.
-
You may see a warning message that says this app is not verified by Google Play Protect. Tap on "Install anyway" to proceed.
-
You have now installed FM WhatsApp on your device.
-
-
Launching and Customizing FM WhatsApp
-
-
Open FM WhatsApp from your app drawer or home screen.
-
You will see a welcome screen that asks you to agree to the terms and conditions of the app. Tap on "Agree and continue" to proceed.
-
You will be asked to enter your phone number and verify it with a code that will be sent to you via SMS. Follow the instructions on the screen to complete the verification process.
-
You will be asked to restore your chat history from a backup if you have one. You can choose to restore it or skip it.
-
You will be asked to enter your name and profile picture. You can also change them later.
-
You have now launched FM WhatsApp and you can start using it as you would use the official WhatsApp app.
-
To customize FM WhatsApp, tap on the menu icon at the top right corner and select "Fouad Mods". You will see a list of categories that you can explore and tweak according to your preference. For example, you can change the theme, the colors, the fonts, the privacy settings, etc.
-
-
Conclusion
-
FM WhatsApp is a modded version of the official WhatsApp app that offers many additional features and customization options that are not available on the original app. However, it also comes with some risks and drawbacks that you should be aware of before using it. In this article, we have shown you how to download APK FM WhatsApp from a reliable source, how to install and use it on your Android device, and how to enable unknown sources on your device. We hope that this guide has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about FM WhatsApp and their answers:
-
-
Q: Is FM WhatsApp safe to use?
A: FM WhatsApp is not officially endorsed or verified by WhatsApp or Google, so there is no guarantee that it is safe or secure. It may contain harmful code or malware that can compromise your device or data. Therefore, you should use it at your own risk and discretion.
-
Q: Can I use FM WhatsApp and official WhatsApp on the same device?
A: Yes, you can use both apps on the same device, but you will need to use different phone numbers for each app. You cannot use the same number for both apps.
-
Q: Will I get banned from WhatsApp for using FM WhatsApp?
A: There is a possibility that WhatsApp may detect and ban your account for using a modded app that violates their terms and conditions. However, FM WhatsApp has some anti-ban features that may prevent this from happening. Still, there is no guarantee that you will not get banned, so use it at your own risk.
-
Q: How do I update FM WhatsApp?
A: You can check for updates from within the app by tapping on the menu icon and selecting "Updates". You can also visit the official website of Foud Apps or APKPure.com to download the latest version of the app.
-
Q: How do I uninstall FM WhatsApp?
A: You can uninstall FM WhatsApp like any other app on your device. Go to your device's settings and tap on "Apps". Find and select "FM WhatsApp" from the list of apps. Tap on "Uninstall" and confirm your action.
-
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download Huli Live Mod APK and Watch Amazing Shows from China.md b/spaces/fatiXbelha/sd/Download Huli Live Mod APK and Watch Amazing Shows from China.md
deleted file mode 100644
index e5d160b0fb9f75565495bc066530bd35c631254e..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download Huli Live Mod APK and Watch Amazing Shows from China.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
Download APK Huli Live Mod: A New Way to Enjoy Live Streaming Shows
-
Do you love watching live streaming shows on your smartphone? Do you want to access exclusive content that is not available on other platforms? Do you want to save money and enjoy unlimited features without ads or restrictions? If you answered yes to any of these questions, then you should try downloading APK Huli Live Mod.
APK Huli Live Mod is a modified version of the original Huli Live app, which is a popular live streaming platform that offers various kinds of shows, such as music, dance, comedy, gaming, and more. With APK Huli Live Mod, you can enjoy all the benefits of the original app, plus some extra features that will make your experience more enjoyable and satisfying. In this article, we will tell you everything you need to know about APK Huli Live Mod, including what it is, how to download and install it, and how to use it effectively.
-
What is Huli Live?
-
Huli Live is a live streaming app that allows you to watch and interact with thousands of hosts from different countries and regions. You can choose from various categories and genres, such as music, dance, comedy, gaming, and more. You can also chat with the hosts and other users, send gifts, join fan clubs, and participate in events and competitions.
-
Huli Live is designed to provide you with high-quality entertainment and social interaction. You can discover new talents, make new friends, and have fun anytime and anywhere. However, there are some limitations and drawbacks that come with using the original app. For example, you have to pay for some premium features, such as unlocking private shows, sending special gifts, or accessing VIP rooms. You also have to deal with annoying ads that interrupt your viewing experience. Moreover, some content may be blocked or restricted in your region due to legal or ethical reasons.
-
Features of Huli Live
-
Some of the features that make Huli Live a great live streaming app are:
-
-
Thousands of hosts from different countries and regions
-
Various categories and genres of shows
-
High-quality video and audio
-
Real-time interaction and communication
-
Gifts, fan clubs, events, and competitions
-
User-friendly interface and easy navigation
-
Regular updates and improvements
-
-
Benefits of Huli Live Mod APK
-
Huli Live Mod APK is a modified version of the original app that gives you access to all the premium features for free. You don't have to pay anything or register an account to use the app. You also don't have to worry about ads or interruptions. Moreover, you can bypass any regional or legal restrictions that may prevent you from watching some content. Some of the benefits that come with using Huli Live Mod APK are:
-
-
Unlimited access to private shows
-
Unlimited access to VIP rooms
-
Unlimited sending of special gifts
-
No ads or pop-ups
-
No registration or verification required
-
No root or jailbreak required
-
No risk of viruses or malware
-
Bypass regional or legal restrictions
-
-
How to Download and Install Huli Live Mod APK
-
If you want to download and install Huli Live Mod APK on your Android device, you need to follow these simple steps:
-
Step 1: Enable Unknown Sources
-
Before you can install any APK file on your device, you need to enable the option to allow installation from unknown sources. This will let you install apps that are not from the official Google Play Store. To do this, go to your device's Settings > Security > Unknown Sources and toggle it on. You may see a warning message, but don't worry, it's safe to proceed.
-
Step 2: Download the APK File
-
Next, you need to download the APK file of Huli Live Mod from a reliable source. You can use the link below to download it directly to your device. Alternatively, you can download it to your computer and transfer it to your device via USB cable or Bluetooth.
-
download apk huli live mod show bebas
-download apk huli live mod unlock all room
-download apk huli live mod versi terbaru
-download apk huli live mod streaming show
-download apk huli live mod cara install
-download apk huli live mod ukuran kecil
-download apk huli live mod android 4.1
-download apk huli live mod no banned
-download apk huli live mod gratis
-download apk huli live mod tanpa iklan
-download apk huli live mod unlimited coins
-download apk huli live mod fitur lengkap
-download apk huli live mod simpang jalan
-download apk huli live mod quipper.co.id
-download apk huli live mod link alternatif
-download apk huli live mod update 2023
-download apk huli live mod anti lag
-download apk huli live mod kualitas hd
-download apk huli live mod pengguna banyak
-download apk huli live mod server stabil
-download apk huli live mod host cantik
-download apk huli live mod chat interaktif
-download apk huli live mod video call
-download apk huli live mod gift menarik
-download apk huli live mod vip gratis
-download apk huli live mod review terbaik
-download apk huli live mod tutorial mudah
-download apk huli live mod persyaratan rendah
-download apk huli live mod dikembangkan oleh HuliLive Corp.
-download apk huli live mod aplikasi streaming terbaru
Once you have downloaded the APK file, you need to locate it on your device and tap on it to start the installation process. You may see a prompt asking for your permission to install the app. Just tap on Install and wait for a few seconds until the installation is complete.
-
Step 4: Launch the App and Enjoy
-
Finally, you can launch the app from your app drawer or home screen and enjoy watching live streaming shows with unlimited features and benefits. You can browse through different categories and genres, watch and interact with your favorite hosts, send gifts and join fan clubs, and have fun with other users.
-
Tips and Tricks for Using Huli Live Mod APK
-
To make the most out of your Huli Live Mod APK experience, here are some tips and tricks that you can follow:
-
Use a VPN Service
-
A VPN service is a tool that allows you to change your IP address and location, making you appear as if you are from another country or region. This can help you access content that may be blocked or restricted in your area due to legal or ethical reasons. For example, some shows may not be available in your country due to censorship or licensing issues. By using a VPN service, you can bypass these limitations and watch any show you want.
-
There are many VPN services that you can use, such as ExpressVPN, NordVPN, or Surfshark. You can download them from the Google Play Store or their official websites. Just make sure that you choose a reputable and reliable one that offers fast and secure connections.
-
Choose the Best Quality and Speed
-
Huli Live Mod APK allows you to choose the quality and speed of the video stream according to your preference and internet connection. You can adjust these settings by tapping on the gear icon on the top right corner of the screen. You can choose from low, medium, high, or ultra quality, and from normal, fast, or super speed. The higher the quality and speed, the better the viewing experience, but also the more data consumption. Therefore, you should balance these factors according to your situation.
-
Interact with the Hosts and Other Users
-
One of the best features of Huli Live Mod APK is that it allows you to interact with the hosts and other users in real-time. You can chat with them by typing messages in the chat box at the bottom of the screen. You can also send gifts, such as flowers, hearts, stars, or diamonds, by tapping on the gift icon on the bottom right corner of the screen. These gifts will show your appreciation and support for the hosts, and may also earn you some rewards or privileges.
-
Besides chatting and sending gifts, you can also join fan clubs, participate in events and competitions, vote for your favorite hosts, and invite them to private shows or VIP rooms. These activities will enhance your social interaction and enjoyment of the app.
-
Be Respectful and Follow the Rules
-
While using Huli Live Mod APK, you should always be respectful and follow the rules of the app. You should not spam, harass, abuse, or threaten anyone on the app. You should also not post any illegal, immoral, or inappropriate content on the app. If you violate any of these rules, you may face consequences such as being banned or reported by other users or by the app itself.
-
You should also respect the hosts' preferences and boundaries. You should not force them to do anything they don't want to do or ask them for personal information they don't want to share. You should also not record or share their shows without their permission. Remember that they are human beings too, and they deserve respect and privacy.
-
Conclusion
-
Huli Live Mod APK is a great app for anyone who loves watching live streaming shows on their smartphone. It offers unlimited access to various kinds of shows, such as music, dance, comedy, gaming, and more. It also allows real-time interaction and communication with the hosts and other users. Moreover, it has no ads, no registration, no payment, and no restrictions. It is easy to download and install, and it works on any Android device. However, you should also be aware of the risks and responsibilities that come with using Huli Live Mod APK. You should use a VPN service to protect your privacy and security, and to access content that may be blocked or restricted in your region. You should also choose the best quality and speed for your video stream according to your internet connection and data consumption. You should also interact with the hosts and other users respectfully and follow the rules of the app. You should not spam, harass, abuse, or threaten anyone on the app. You should also not post any illegal, immoral, or inappropriate content on the app. You should also respect the hosts' preferences and boundaries, and not record or share their shows without their permission. If you follow these tips and tricks, you will surely have a great time using Huli Live Mod APK. You will be able to enjoy high-quality entertainment and social interaction anytime and anywhere. You will also be able to discover new talents, make new friends, and have fun with other users. Huli Live Mod APK is a new way to enjoy live streaming shows that you should not miss.
FAQs
-
Here are some frequently asked questions about Huli Live Mod APK:
-
-
Is Huli Live Mod APK safe to use?
-
Huli Live Mod APK is safe to use as long as you download it from a reliable source and scan it for viruses or malware before installing it. You should also use a VPN service to protect your privacy and security while using the app.
-
Is Huli Live Mod APK legal to use?
-
Huli Live Mod APK is not legal to use in some countries or regions where the original app is banned or restricted due to legal or ethical reasons. You should check the laws and regulations of your country or region before using the app. You should also use a VPN service to bypass any regional or legal restrictions that may apply to you.
-
What are the requirements for using Huli Live Mod APK?
-
Huli Live Mod APK requires an Android device with Android 4.4 or higher version, a stable internet connection, and enough storage space. You should also enable the option to allow installation from unknown sources on your device before installing the app.
-
How can I update Huli Live Mod APK?
-
Huli Live Mod APK does not have an automatic update feature, so you need to manually check for updates from time to time. You can visit the same source where you downloaded the app and see if there is a newer version available. If there is, you can download and install it over the existing app without losing your data or settings.
-
How can I contact the developers of Huli Live Mod APK?
-
Huli Live Mod APK is not an official app, so it does not have an official website or contact information. However, you can try to contact the developers through their social media accounts or email addresses if they provide them on their source page.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Enjoy Dream League Soccer 2019 with Liga Gojek Mod - Download APK and OBB Files Here.md b/spaces/fatiXbelha/sd/Enjoy Dream League Soccer 2019 with Liga Gojek Mod - Download APK and OBB Files Here.md
deleted file mode 100644
index c0a0ffa17b262514fa4ffb83c646d64a70c476bb..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Enjoy Dream League Soccer 2019 with Liga Gojek Mod - Download APK and OBB Files Here.md
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
Download APK Dream League Soccer 2019 Mod Liga Gojek: How to Enjoy the Best of Indonesian Football on Your Android Device
-
If you are a fan of soccer games and want to experience the thrill of playing with some of the best teams and players in Indonesia, then you should download APK Dream League Soccer 2019 Mod Liga Gojek. This is a modified version of the popular Dream League Soccer 2019 game that features the Indonesian professional football league, Liga Gojek. In this article, we will show you how to download and install this game on your Android device, as well as some tips and tricks to help you play better.
-
download apk dream league soccer 2019 mod liga gojek
Dream League Soccer 2019 is a fantastic game in its category of mobile games. It has high graphics, detailed player faces, and many players, leagues, events, etc. The game has more than 4000 FIFPro authorized players, build your own arena and take on the team in Dream League Online as you conquer brilliance on your street to Soccer Stardom.
-
Features of Dream League Soccer 2019
-
Some of the features of Dream League Soccer 2019 are:
-
-
FIFPro™ licensed players brings the most authentic Dream League Soccer experience to your hands!
-
Freedom to create, customize and control your very own Dream Team!
-
6 Divisions to work your way through, and over 7 Cup competitions!
-
Take part in regular live events to win prizes and glory!
-
Build your very own stadium to showcase your superstars!
-
Develop your players with more accuracy and intent
-
Season objectives to keep you engaged and coming back!
-
Google Play achievements & leaderboards to see who ranks on top!
-
Customise and import your very own kits & logos!
-
Sync progress between devices with Google Play Cloud!
-
Exclusive soundtrack provided by The Luka State, Sunset Sons, Beth Thornton, Jack Wins, Vistas & Only The Poets!
-
-
How to Download and Install Dream League Soccer 2019 APK File on Android
-
To download and install Dream League Soccer 2019 APK file on your Android device, you need to follow these steps:
-
-
Allow unknown apps on your Android device by going to Settings > Apps & Notifications > Special access > Install unknown apps. Tap Chrome (or whichever web browser you use) and move Allow from this source to the On position.
-
Install a file manager app (such as Cx File Explorer or File Manager) so that you can find the APK file after you download it to your phone.
-
Download the APK file from a reputable source such as APK Mirror. You can find the link for Dream League Soccer 2019 APK file here:
-
Once it's downloaded, open your file manager app and locate the APK file in your Downloads folder. Tap on it and tap Install when prompted.
-
Wait for the installation process to finish and then launch the game from your app drawer
What is Dream League Soccer 2019 Mod Liga Gojek?
-
Dream League Soccer 2019 Mod Liga Gojek is a popular modification of the original Dream League Soccer game that features the Indonesian professional football league, Liga Gojek. The mod offers various enhancements such as new teams, players, and improved gameplay.
-
Features of Dream League Soccer 2019 Mod Liga Gojek
-
Some of the features of Dream League Soccer 2019 Mod Liga Gojek are:
-
-
All teams and players from Liga Gojek are available to choose from.
-
You can play in the Liga Gojek mode, which simulates the real league season and matches.
-
You can also play in other modes such as Career, Online, and Friendly.
-
You can customize your team's kit, logo, and stadium with the Liga Gojek theme.
-
You can enjoy realistic graphics, animations, and sound effects.
-
You can use unlimited coins and gems to buy and upgrade your players and facilities.
-
-
How to Download and Install Dream League Soccer 2019 Mod Liga Gojek APK File on Android
-
To download and install Dream League Soccer 2019 Mod Liga Gojek APK file on your Android device, you need to follow these steps:
-
-
Allow unknown apps on your Android device by going to Settings > Apps & Notifications > Special access > Install unknown apps. Tap Chrome (or whichever web browser you use) and move Allow from this source to the On position.
-
Install a file manager app (such as Cx File Explorer or File Manager) so that you can find the APK file after you download it to your phone.
-
Download the APK file and the OBB file from a reputable source such as YouTube. You can find the link for Dream League Soccer 2019 Mod Liga Gojek APK file and OBB file here:
-
Once they are downloaded, open your file manager app and locate the APK file and the OBB file in your Downloads folder. Tap on the APK file and tap Install when prompted.
-
After the installation is done, do not launch the game yet. Instead, go to your file manager app and copy the OBB file to Android > OBB > com.firsttouchgames.dls3 folder. If there is no such folder, create one.
-
Now you can launch the game from your app drawer and enjoy playing Dream League Soccer 2019 Mod Liga Gojek.
-
Tips and Tricks for Playing Dream League Soccer 2019 Mod Liga Gojek
-
Now that you have downloaded and installed Dream League Soccer 2019 Mod Liga Gojek on your Android device, you might be wondering how to play it well and have more fun. Here are some tips and tricks that will help you improve your skills and enjoy the game more.
-
How to Create and Manage Your Dream Team
-
One of the most important aspects of Dream League Soccer 2019 Mod Liga Gojek is creating and managing your dream team. You can choose from any of the teams and players from Liga Gojek, or create your own custom team. Here are some tips on how to do that:
-
-
When you start the game, you will be given a default team with some random players. You can change the name, logo, and kit of your team by tapping on the Team Management icon on the main menu.
-
You can also buy new players or sell existing ones by tapping on the Transfer icon on the main menu. You can use coins or gems to buy players, or earn them by playing matches and completing objectives.
-
You can also scout for new players by tapping on the Scout icon on the main menu. You can use gems to scout for players with specific attributes, such as speed, shooting, or passing.
-
You can also upgrade your existing players by tapping on the Player Development icon on the main menu. You can use coins or gems to improve their skills, such as dribbling, tackling, or heading.
-
You can also change the formation, tactics, and roles of your players by tapping on the Formation icon on the main menu. You can choose from different formations, such as 4-4-2, 4-3-3, or 3-5-2. You can also adjust the style of play, such as attacking, defensive, or balanced. You can also assign roles to your players, such as captain, free kick taker, or penalty taker.
-
-
How to Train and Improve Your Players
-
Another important aspect of Dream League Soccer 2019 Mod Liga Gojek is training and improving your players. You can do this by playing matches and completing objectives, or by using the Training mode. Here are some tips on how to do that:
-
How to download apk dream league soccer 2019 mod liga gojek for android
-Download apk dream league soccer 2019 mod liga gojek with unlimited coins
-Download apk dream league soccer 2019 mod liga gojek latest version
-Download apk dream league soccer 2019 mod liga gojek offline
-Download apk dream league soccer 2019 mod liga gojek free
-Download apk dream league soccer 2019 mod liga gojek full unlocked
-Download apk dream league soccer 2019 mod liga gojek no root
-Download apk dream league soccer 2019 mod liga gojek hack
-Download apk dream league soccer 2019 mod liga gojek update
-Download apk dream league soccer 2019 mod liga gojek mega mod
-Download apk dream league soccer 2019 mod liga gojek obb data
-Download apk dream league soccer 2019 mod liga gojek cheat
-Download apk dream league soccer 2019 mod liga gojek revdl
-Download apk dream league soccer 2019 mod liga gojek rexdl
-Download apk dream league soccer 2019 mod liga gojek mediafire
-Download apk dream league soccer 2019 mod liga gojek google drive
-Download apk dream league soccer 2019 mod liga gojek direct link
-Download apk dream league soccer 2019 mod liga gojek mirror link
-Download apk dream league soccer 2019 mod liga gojek gameplay
-Download apk dream league soccer 2019 mod liga gojek review
-Download apk dream league soccer 2019 mod liga gojek features
-Download apk dream league soccer 2019 mod liga gojek tips and tricks
-Download apk dream league soccer 2019 mod liga gojek tutorial
-Download apk dream league soccer 2019 mod liga gojek installation guide
-Download apk dream league soccer 2019 mod liga gojek requirements
-Download apk dream league soccer 2019 mod liga gojek screenshots
-Download apk dream league soccer 2019 mod liga gojek video
-Download apk dream league soccer 2019 mod liga gojek youtube
-Download apk dream league soccer 2019 mod liga gojek facebook
-Download apk dream league soccer 2019 mod liga gojek twitter
-Download apk dream league soccer 2019 mod liga gojek instagram
-Download apk dream league soccer 2019 mod liga gojek reddit
-Download apk dream league soccer 2019 mod liga gojek quora
-Download apk dream league soccer 2019 mod liga gojek pinterest
-Download apk dream league soccer 2019 mod liga gojek tumblr
-Download apk dream league soccer 2019 mod liga gojek telegram
-Download apk dream league soccer 2019 mod liga gojek whatsapp
-Download apk dream league soccer 2019 mod liga gojek discord
-Download apk dream league soccer 2019 mod liga gojek forum
-Download apk dream league soccer 2019 mod liga gojek blog
-Download apk dream league soccer 2019 mod liga gojek website
-Download apk dream league soccer 2019 mod liga gojek wiki
-Download apk dream league soccer 2019 mod liga gojek faq
-Download apk dream league soccer 2019 mod liga gojek support
-Download apk dream league soccer 2019 mod liga gojek contact us
-Download apk dream league soccer 2019 mod liga gojek feedback
-Download apk dream league soccer 2019 mod liga gojek rating and review
-
-
You can access the Training mode by tapping on the Training icon on the main menu. You can choose from different types of training, such as shooting, passing, dribbling, or defending. You can also choose the difficulty level, such as easy, medium, or hard.
-
You can use the Training mode to practice your skills and learn new tricks. You can also earn coins and gems by completing training challenges.
-
You can also improve your players' skills by playing matches and completing objectives. You can play in different modes, such as Career, Online, or Friendly. You can also play in different tournaments, such as Liga Gojek, Champions Cup, or Global Challenge Cup.
-
You can earn coins and gems by winning matches and scoring goals. You can also earn bonus coins and gems by completing objectives, such as winning a certain number of matches, scoring a certain number of goals, or keeping a clean sheet.
-
You can use the coins and gems to buy new players or upgrade existing ones. You can also use them to buy new kits, logos, stadiums, or coaches.
-
-
How to Compete in Liga Gojek and Other Tournaments
-
One of the most exciting features of Dream League Soccer 2019 Mod Liga Gojek is competing in Liga Gojek and other tournaments. You can do this by playing in Career mode or Online mode. Here are some tips on how to do that:
-
-
You can access the Career mode by tapping on the Career icon on the main menu. You can choose from different divisions, such as Division 1, Division 2, Division 3, Division 4, Division 5, or Division 6. You can also choose from different tournaments, such as Liga Gojek (Indonesian league), Champions Cup (European cup), Global Challenge Cup (World cup), All Stars Cup (All star teams), Dream League Cup (Dream teams), or Elite Division (Elite teams).
-
You can compete in Liga Gojek by choosing it from the tournaments list in Career mode. You will face 19 other teams from Liga Gojek in a round-robin format. You will play each team twice (home and away) for a total of 38 matches. The top four teams will qualify for the Champions Cup.
You can also compete in other tournaments by choosing them from the tournaments list in Career mode. You will face different teams from different regions and leagues in a knockout format. You will play one match per round (home or away) until you reach the final. The winner of each tournament will earn a trophy and a cash prize.
-
You can access the Online mode by tapping on the Online icon on the main menu. You can choose from different modes, such as Quick Match, Friendly Match, or Dream League Online. You can also choose from different regions, such as Asia, Europe, America, or Africa.
-
You can compete in Dream League Online by choosing it from the modes list in Online mode. You will face other players from around the world in a league format. You will play each player once (home or away) for a total of 10 matches. The top four players will qualify for the next division.
-
You can also compete in Quick Match or Friendly Match by choosing them from the modes list in Online mode. You will face a random player or a friend in a single match (home or away). The winner of each match will earn coins and gems.
-
-
How to Customize Your Team's Kit, Logo, and Stadium
-
Another fun feature of Dream League Soccer 2019 Mod Liga Gojek is customizing your team's kit, logo, and stadium. You can do this by using the Customize option on the main menu. Here are some tips on how to do that:
-
-
You can customize your team's kit by tapping on the Kit icon on the Customize menu. You can choose from different colors, patterns, and designs for your team's shirt, shorts, and socks. You can also import your own kit image by tapping on the Import Kit icon.
-
You can customize your team's logo by tapping on the Logo icon on the Customize menu. You can choose from different shapes, colors, and symbols for your team's logo. You can also import your own logo image by tapping on the Import Logo icon.
-
You can customize your team's stadium by tapping on the Stadium icon on the Customize menu. You can choose from different types, sizes, and capacities for your team's stadium. You can also upgrade your stadium's facilities, such as pitch, seating, lighting, and scoreboard.
-
-
Conclusion
-
Summary of the Main Points
-
In conclusion, Dream League Soccer 2019 Mod Liga Gojek is a great game for soccer fans who want to enjoy the best of Indonesian football on their Android device. The game offers various features such as:
-
-
FIFPro™ licensed players and teams from Liga Gojek and other leagues.
-
Freedom to create, customize and control your very own Dream Team.
-
6 Divisions and over 7 Cup competitions to work your way through.
-
Regular live events to win prizes and glory.
-
Your very own stadium to showcase your superstars.
-
Player development and training to improve your skills.
-
Google Play achievements & leaderboards to see who ranks on top.
-
Customise and import your very own kits & logos.
-
Sync progress between devices with Google Play Cloud.
-
Exclusive soundtrack provided by The Luka State, Sunset Sons, Beth Thornton, Jack Wins, Vistas & Only The Poets!
-
-
Call to Action
-
If you are interested in playing Dream League Soccer 2019 Mod Liga Gojek, you can download it from the links provided in this article. You can also watch some videos and reviews of the game on YouTube or other platforms. You can also share your feedback and opinions about the game with other players and fans on social media or forums. We hope you have fun playing Dream League Soccer 2019 Mod Liga Gojek!
-
FAQs
-
Here are some frequently asked questions about Dream League Soccer 2019 Mod Liga Gojek:
-
-
Q: Is Dream League Soccer 2019 Mod Liga Gojek safe to download and install?
-
A: Yes, Dream League Soccer 2019 Mod Liga Gojek is safe to download and install as long as you use a reputable source such as APK Mirror or YouTube. However, you should always be careful when downloading and installing any APK file from unknown sources as they may contain viruses or malware that could harm your device.
-
Q: Is Dream League Soccer 2019 Mod Liga Gojek compatible with my device?
-
A: Dream League Soccer 2019 Mod Liga Gojek is compatible with most Android devices that have Android 4.4 or higher version installed.
Q: How can I update Dream League Soccer 2019 Mod Liga Gojek to the latest version?
-
A: You can update Dream League Soccer 2019 Mod Liga Gojek to the latest version by downloading and installing the new APK file and OBB file from the same source that you used before. You can also check for updates by tapping on the Settings icon on the main menu and then tapping on the Update icon.
-
Q: How can I play Dream League Soccer 2019 Mod Liga Gojek offline?
-
A: You can play Dream League Soccer 2019 Mod Liga Gojek offline by turning off your internet connection before launching the game. You can still play in Career mode or Friendly mode, but you will not be able to play in Online mode or access live events.
-
Q: How can I backup and restore my Dream League Soccer 2019 Mod Liga Gojek data?
-
A: You can backup and restore your Dream League Soccer 2019 Mod Liga Gojek data by using Google Play Cloud. You can enable this feature by tapping on the Settings icon on the main menu and then tapping on the Google Play Cloud icon. You can also backup and restore your data manually by copying and pasting the com.firsttouchgames.dls3 folder from Android > Data to another location.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py" "b/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py"
deleted file mode 100644
index 93a84a0c5b47d44ee10e2a8a732c68d693388694..0000000000000000000000000000000000000000
--- "a/spaces/fb700/chatglm-fitness-RLHF/crazy_functions/\350\201\224\347\275\221\347\232\204ChatGPT_bing\347\211\210.py"
+++ /dev/null
@@ -1,102 +0,0 @@
-from toolbox import CatchException, update_ui
-from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive, input_clipping
-import requests
-from bs4 import BeautifulSoup
-from request_llm.bridge_all import model_info
-
-
-def bing_search(query, proxies=None):
- query = query
- url = f"https://cn.bing.com/search?q={query}"
- headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36'}
- response = requests.get(url, headers=headers, proxies=proxies)
- soup = BeautifulSoup(response.content, 'html.parser')
- results = []
- for g in soup.find_all('li', class_='b_algo'):
- anchors = g.find_all('a')
- if anchors:
- link = anchors[0]['href']
- if not link.startswith('http'):
- continue
- title = g.find('h2').text
- item = {'title': title, 'link': link}
- results.append(item)
-
- for r in results:
- print(r['link'])
- return results
-
-
-def scrape_text(url, proxies) -> str:
- """Scrape text from a webpage
-
- Args:
- url (str): The URL to scrape text from
-
- Returns:
- str: The scraped text
- """
- headers = {
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36',
- 'Content-Type': 'text/plain',
- }
- try:
- response = requests.get(url, headers=headers, proxies=proxies, timeout=8)
- if response.encoding == "ISO-8859-1": response.encoding = response.apparent_encoding
- except:
- return "无法连接到该网页"
- soup = BeautifulSoup(response.text, "html.parser")
- for script in soup(["script", "style"]):
- script.extract()
- text = soup.get_text()
- lines = (line.strip() for line in text.splitlines())
- chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
- text = "\n".join(chunk for chunk in chunks if chunk)
- return text
-
-@CatchException
-def 连接bing搜索回答问题(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- """
- txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径
- llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行
- plugin_kwargs 插件模型的参数,暂时没有用武之地
- chatbot 聊天显示框的句柄,用于显示给用户
- history 聊天历史,前情提要
- system_prompt 给gpt的静默提醒
- web_port 当前软件运行的端口号
- """
- history = [] # 清空历史,以免输入溢出
- chatbot.append((f"请结合互联网信息回答以下问题:{txt}",
- "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该模板可以实现ChatGPT联网信息综合。该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板。您若希望分享新的功能模组,请不吝PR!"))
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
-
- # ------------- < 第1步:爬取搜索引擎的结果 > -------------
- from toolbox import get_conf
- proxies, = get_conf('proxies')
- urls = bing_search(txt, proxies)
- history = []
-
- # ------------- < 第2步:依次访问网页 > -------------
- max_search_result = 8 # 最多收纳多少个网页的结果
- for index, url in enumerate(urls[:max_search_result]):
- res = scrape_text(url['link'], proxies)
- history.extend([f"第{index}份搜索结果:", res])
- chatbot.append([f"第{index}份搜索结果:", res[:500]+"......"])
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新
-
- # ------------- < 第3步:ChatGPT综合 > -------------
- i_say = f"从以上搜索结果中抽取信息,然后回答问题:{txt}"
- i_say, history = input_clipping( # 裁剪输入,从最长的条目开始裁剪,防止爆token
- inputs=i_say,
- history=history,
- max_token_limit=model_info[llm_kwargs['llm_model']]['max_token']*3//4
- )
- gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(
- inputs=i_say, inputs_show_user=i_say,
- llm_kwargs=llm_kwargs, chatbot=chatbot, history=history,
- sys_prompt="请从给定的若干条搜索结果中抽取信息,对最相关的两个搜索结果进行总结,然后回答问题。"
- )
- chatbot[-1] = (i_say, gpt_say)
- history.append(i_say);history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新
-
diff --git a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/preprocess.py b/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/preprocess.py
deleted file mode 100644
index fe5ab25ef7cb4adeb76cad11962f179d6a38edcc..0000000000000000000000000000000000000000
--- a/spaces/fb700/chatglm-fitness-RLHF/speaker_encoder/preprocess.py
+++ /dev/null
@@ -1,285 +0,0 @@
-from multiprocess.pool import ThreadPool
-from speaker_encoder.params_data import *
-from speaker_encoder.config import librispeech_datasets, anglophone_nationalites
-from datetime import datetime
-from speaker_encoder import audio
-from pathlib import Path
-from tqdm import tqdm
-import numpy as np
-
-
-class DatasetLog:
- """
- Registers metadata about the dataset in a text file.
- """
- def __init__(self, root, name):
- self.text_file = open(Path(root, "Log_%s.txt" % name.replace("/", "_")), "w")
- self.sample_data = dict()
-
- start_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M"))
- self.write_line("Creating dataset %s on %s" % (name, start_time))
- self.write_line("-----")
- self._log_params()
-
- def _log_params(self):
- from speaker_encoder import params_data
- self.write_line("Parameter values:")
- for param_name in (p for p in dir(params_data) if not p.startswith("__")):
- value = getattr(params_data, param_name)
- self.write_line("\t%s: %s" % (param_name, value))
- self.write_line("-----")
-
- def write_line(self, line):
- self.text_file.write("%s\n" % line)
-
- def add_sample(self, **kwargs):
- for param_name, value in kwargs.items():
- if not param_name in self.sample_data:
- self.sample_data[param_name] = []
- self.sample_data[param_name].append(value)
-
- def finalize(self):
- self.write_line("Statistics:")
- for param_name, values in self.sample_data.items():
- self.write_line("\t%s:" % param_name)
- self.write_line("\t\tmin %.3f, max %.3f" % (np.min(values), np.max(values)))
- self.write_line("\t\tmean %.3f, median %.3f" % (np.mean(values), np.median(values)))
- self.write_line("-----")
- end_time = str(datetime.now().strftime("%A %d %B %Y at %H:%M"))
- self.write_line("Finished on %s" % end_time)
- self.text_file.close()
-
-
-def _init_preprocess_dataset(dataset_name, datasets_root, out_dir) -> (Path, DatasetLog):
- dataset_root = datasets_root.joinpath(dataset_name)
- if not dataset_root.exists():
- print("Couldn\'t find %s, skipping this dataset." % dataset_root)
- return None, None
- return dataset_root, DatasetLog(out_dir, dataset_name)
-
-
-def _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, extension,
- skip_existing, logger):
- print("%s: Preprocessing data for %d speakers." % (dataset_name, len(speaker_dirs)))
-
- # Function to preprocess utterances for one speaker
- def preprocess_speaker(speaker_dir: Path):
- # Give a name to the speaker that includes its dataset
- speaker_name = "_".join(speaker_dir.relative_to(datasets_root).parts)
-
- # Create an output directory with that name, as well as a txt file containing a
- # reference to each source file.
- speaker_out_dir = out_dir.joinpath(speaker_name)
- speaker_out_dir.mkdir(exist_ok=True)
- sources_fpath = speaker_out_dir.joinpath("_sources.txt")
-
- # There's a possibility that the preprocessing was interrupted earlier, check if
- # there already is a sources file.
- if sources_fpath.exists():
- try:
- with sources_fpath.open("r") as sources_file:
- existing_fnames = {line.split(",")[0] for line in sources_file}
- except:
- existing_fnames = {}
- else:
- existing_fnames = {}
-
- # Gather all audio files for that speaker recursively
- sources_file = sources_fpath.open("a" if skip_existing else "w")
- for in_fpath in speaker_dir.glob("**/*.%s" % extension):
- # Check if the target output file already exists
- out_fname = "_".join(in_fpath.relative_to(speaker_dir).parts)
- out_fname = out_fname.replace(".%s" % extension, ".npy")
- if skip_existing and out_fname in existing_fnames:
- continue
-
- # Load and preprocess the waveform
- wav = audio.preprocess_wav(in_fpath)
- if len(wav) == 0:
- continue
-
- # Create the mel spectrogram, discard those that are too short
- frames = audio.wav_to_mel_spectrogram(wav)
- if len(frames) < partials_n_frames:
- continue
-
- out_fpath = speaker_out_dir.joinpath(out_fname)
- np.save(out_fpath, frames)
- logger.add_sample(duration=len(wav) / sampling_rate)
- sources_file.write("%s,%s\n" % (out_fname, in_fpath))
-
- sources_file.close()
-
- # Process the utterances for each speaker
- with ThreadPool(8) as pool:
- list(tqdm(pool.imap(preprocess_speaker, speaker_dirs), dataset_name, len(speaker_dirs),
- unit="speakers"))
- logger.finalize()
- print("Done preprocessing %s.\n" % dataset_name)
-
-
-# Function to preprocess utterances for one speaker
-def __preprocess_speaker(speaker_dir: Path, datasets_root: Path, out_dir: Path, extension: str, skip_existing: bool):
- # Give a name to the speaker that includes its dataset
- speaker_name = "_".join(speaker_dir.relative_to(datasets_root).parts)
-
- # Create an output directory with that name, as well as a txt file containing a
- # reference to each source file.
- speaker_out_dir = out_dir.joinpath(speaker_name)
- speaker_out_dir.mkdir(exist_ok=True)
- sources_fpath = speaker_out_dir.joinpath("_sources.txt")
-
- # There's a possibility that the preprocessing was interrupted earlier, check if
- # there already is a sources file.
- # if sources_fpath.exists():
- # try:
- # with sources_fpath.open("r") as sources_file:
- # existing_fnames = {line.split(",")[0] for line in sources_file}
- # except:
- # existing_fnames = {}
- # else:
- # existing_fnames = {}
- existing_fnames = {}
- # Gather all audio files for that speaker recursively
- sources_file = sources_fpath.open("a" if skip_existing else "w")
-
- for in_fpath in speaker_dir.glob("**/*.%s" % extension):
- # Check if the target output file already exists
- out_fname = "_".join(in_fpath.relative_to(speaker_dir).parts)
- out_fname = out_fname.replace(".%s" % extension, ".npy")
- if skip_existing and out_fname in existing_fnames:
- continue
-
- # Load and preprocess the waveform
- wav = audio.preprocess_wav(in_fpath)
- if len(wav) == 0:
- continue
-
- # Create the mel spectrogram, discard those that are too short
- frames = audio.wav_to_mel_spectrogram(wav)
- if len(frames) < partials_n_frames:
- continue
-
- out_fpath = speaker_out_dir.joinpath(out_fname)
- np.save(out_fpath, frames)
- # logger.add_sample(duration=len(wav) / sampling_rate)
- sources_file.write("%s,%s\n" % (out_fname, in_fpath))
-
- sources_file.close()
- return len(wav)
-
-def _preprocess_speaker_dirs_vox2(speaker_dirs, dataset_name, datasets_root, out_dir, extension,
- skip_existing, logger):
- # from multiprocessing import Pool, cpu_count
- from pathos.multiprocessing import ProcessingPool as Pool
- # Function to preprocess utterances for one speaker
- def __preprocess_speaker(speaker_dir: Path):
- # Give a name to the speaker that includes its dataset
- speaker_name = "_".join(speaker_dir.relative_to(datasets_root).parts)
-
- # Create an output directory with that name, as well as a txt file containing a
- # reference to each source file.
- speaker_out_dir = out_dir.joinpath(speaker_name)
- speaker_out_dir.mkdir(exist_ok=True)
- sources_fpath = speaker_out_dir.joinpath("_sources.txt")
-
- existing_fnames = {}
- # Gather all audio files for that speaker recursively
- sources_file = sources_fpath.open("a" if skip_existing else "w")
- wav_lens = []
- for in_fpath in speaker_dir.glob("**/*.%s" % extension):
- # Check if the target output file already exists
- out_fname = "_".join(in_fpath.relative_to(speaker_dir).parts)
- out_fname = out_fname.replace(".%s" % extension, ".npy")
- if skip_existing and out_fname in existing_fnames:
- continue
-
- # Load and preprocess the waveform
- wav = audio.preprocess_wav(in_fpath)
- if len(wav) == 0:
- continue
-
- # Create the mel spectrogram, discard those that are too short
- frames = audio.wav_to_mel_spectrogram(wav)
- if len(frames) < partials_n_frames:
- continue
-
- out_fpath = speaker_out_dir.joinpath(out_fname)
- np.save(out_fpath, frames)
- # logger.add_sample(duration=len(wav) / sampling_rate)
- sources_file.write("%s,%s\n" % (out_fname, in_fpath))
- wav_lens.append(len(wav))
- sources_file.close()
- return wav_lens
-
- print("%s: Preprocessing data for %d speakers." % (dataset_name, len(speaker_dirs)))
- # Process the utterances for each speaker
- # with ThreadPool(8) as pool:
- # list(tqdm(pool.imap(preprocess_speaker, speaker_dirs), dataset_name, len(speaker_dirs),
- # unit="speakers"))
- pool = Pool(processes=20)
- for i, wav_lens in enumerate(pool.map(__preprocess_speaker, speaker_dirs), 1):
- for wav_len in wav_lens:
- logger.add_sample(duration=wav_len / sampling_rate)
- print(f'{i}/{len(speaker_dirs)} \r')
-
- logger.finalize()
- print("Done preprocessing %s.\n" % dataset_name)
-
-
-def preprocess_librispeech(datasets_root: Path, out_dir: Path, skip_existing=False):
- for dataset_name in librispeech_datasets["train"]["other"]:
- # Initialize the preprocessing
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.glob("*"))
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "flac",
- skip_existing, logger)
-
-
-def preprocess_voxceleb1(datasets_root: Path, out_dir: Path, skip_existing=False):
- # Initialize the preprocessing
- dataset_name = "VoxCeleb1"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Get the contents of the meta file
- with dataset_root.joinpath("vox1_meta.csv").open("r") as metafile:
- metadata = [line.split("\t") for line in metafile][1:]
-
- # Select the ID and the nationality, filter out non-anglophone speakers
- nationalities = {line[0]: line[3] for line in metadata}
- # keep_speaker_ids = [speaker_id for speaker_id, nationality in nationalities.items() if
- # nationality.lower() in anglophone_nationalites]
- keep_speaker_ids = [speaker_id for speaker_id, nationality in nationalities.items()]
- print("VoxCeleb1: using samples from %d (presumed anglophone) speakers out of %d." %
- (len(keep_speaker_ids), len(nationalities)))
-
- # Get the speaker directories for anglophone speakers only
- speaker_dirs = dataset_root.joinpath("wav").glob("*")
- speaker_dirs = [speaker_dir for speaker_dir in speaker_dirs if
- speaker_dir.name in keep_speaker_ids]
- print("VoxCeleb1: found %d anglophone speakers on the disk, %d missing (this is normal)." %
- (len(speaker_dirs), len(keep_speaker_ids) - len(speaker_dirs)))
-
- # Preprocess all speakers
- _preprocess_speaker_dirs(speaker_dirs, dataset_name, datasets_root, out_dir, "wav",
- skip_existing, logger)
-
-
-def preprocess_voxceleb2(datasets_root: Path, out_dir: Path, skip_existing=False):
- # Initialize the preprocessing
- dataset_name = "VoxCeleb2"
- dataset_root, logger = _init_preprocess_dataset(dataset_name, datasets_root, out_dir)
- if not dataset_root:
- return
-
- # Get the speaker directories
- # Preprocess all speakers
- speaker_dirs = list(dataset_root.joinpath("dev", "aac").glob("*"))
- _preprocess_speaker_dirs_vox2(speaker_dirs, dataset_name, datasets_root, out_dir, "m4a",
- skip_existing, logger)
diff --git a/spaces/fclong/summary/fengshen/examples/translate/finetune_deltalm.sh b/spaces/fclong/summary/fengshen/examples/translate/finetune_deltalm.sh
deleted file mode 100644
index 6d6bd9ef5fde6c9afd2957b79118e13b4e94d8da..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/translate/finetune_deltalm.sh
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/bin/bash
-
-#SBATCH --job-name=mbart_en_zh
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=8
-#SBATCH --gres=gpu:8 # number of gpus
-#SBATCH --cpus-per-task=32
-#SBATCH -o %x-%j.log
-
-set -x -e
-
-echo "START TIME: $(date)"
-
-MODEL_NAME=deltalm_en_zh
-MICRO_BATCH_SIZE=16
-ROOT_DIR=../../workspace
-MODEL_ROOT_DIR=$ROOT_DIR/${MODEL_NAME}
-
-
-if [ ! -d ${MODEL_ROOT_DIR} ];then
- mkdir ${MODEL_ROOT_DIR}
- echo ${MODEL_ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${MODEL_ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-output_save_path=${MODEL_ROOT_DIR}.json
-if [ -f ${output_save_path} ];then
- echo ${output_save_path} exist, rm it!!!!!!!!!!!!!!!!!
- rm ${output_save_path}
-fi
-
-ZERO_STAGE=1
-
-config_json="${MODEL_ROOT_DIR}/ds_config.${MODEL_NAME}.json"
-
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-cat < $config_json
-{
- "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE},
- "steps_per_print": 1000,
- "gradient_clipping": 1.0,
- "zero_optimization": {
- "stage": $ZERO_STAGE,
- "contiguous_gradients": false
- },
- "zero_allow_untested_optimizer": false,
- "fp16": {
- "enabled": true
- },
- "wall_clock_breakdown": false
-}
-EOT
-
-export PL_DEEPSPEED_CONFIG_PATH=$config_json
-
-
-TRAINER_ARGS="
- --max_epochs 20 \
- --gpus 1 \
- --num_nodes 1 \
- --strategy deepspeed_stage_${ZERO_STAGE} \
- --default_root_dir ${MODEL_ROOT_DIR} \
- --save_ckpt_path ${MODEL_ROOT_DIR}/ckpt \
- --save_top_k 3 \
- --monitor valid_sacrebleu \
- --mode max \
- --save_last \
- --every_n_train_steps 0 \
- --val_check_interval 0.2 \
- --label_smoothing 0.1 \
- --warmup_steps 4000 \
- --learning_rate 1e-7 \
- --adam_beta2 0.98 \
- --scheduler_type inverse_sqrt \
- --reverse_src_tgt \
- --tgt_zh \
-"
-
-DATA_ARGS="
- --datasets_name case_test \
- --num_workers 8 \
- --train_batchsize $MICRO_BATCH_SIZE \
- --val_batchsize $MICRO_BATCH_SIZE \
- --test_batchsize $MICRO_BATCH_SIZE \
- --val_datasets_field val \
- --max_enc_length 256 \
- --max_dec_length 256 \
-"
-
-mode_path="IDEA-CCNL/Randeng-Deltalm-362M-En-Zn"
-
-
-MODEL_ARGS="
- --model_path $mode_path \
- --output_save_path $output_save_path \
-"
-
-SCRIPTS_PATH=finetune_deltalm.py
-
-cat $SCRIPTS_PATH
-
-export CMD=" \
- $SCRIPTS_PATH \
- $TRAINER_ARGS \
- $MODEL_ARGS \
- $DATA_ARGS \
- "
-
-echo $CMD
-
-source activate
-conda activate fengshen
-# srun python3 $CMD
-python3 $CMD
diff --git a/spaces/fclong/summary/fengshen/examples/unimc/README.md b/spaces/fclong/summary/fengshen/examples/unimc/README.md
deleted file mode 100644
index 16abf3ff69c5ab7b8b8ca1f7c7ec191cbdf64ec0..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/unimc/README.md
+++ /dev/null
@@ -1,221 +0,0 @@
-[**中文**](./README.md) | [**English**](./README_en.md)
-# UniMC
-
-EMNLP 2022 论文 《[Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective](https://arxiv.org/abs/2210.08590)》源码
-
-
-
-## Update
-- [2022-10-18] Release preprint in arXiv.
-- [2022-10-14] Release code in GitHub.
-
-## Requirements
-
-安装 fengshen 框架
-
-```shell
-git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
-cd Fengshenbang-LM
-pip install --editable .
-```
-
-## Quick Start
-
-你可以参考我们的 [example.py](./example.py) 脚本,只需要将处理好的 train、dev、test 即输入模型即可。
-```python
-import argparse
-from fengshen.pipelines.multiplechoice import UniMCPipelines
-
-total_parser = argparse.ArgumentParser("TASK NAME")
-total_parser = UniMCPipelines.piplines_args(total_parser)
-args = total_parser.parse_args()
-
-pretrained_model_path = 'IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese'
-args.learning_rate=2e-5
-args.max_length=512
-args.max_epochs=3
-args.batchsize=8
-args.default_root_dir='./'
-model = UniMCPipelines(args,model_path=pretrained_model_path)
-
-train_data = []
-dev_data = []
-test_data = [{
- "texta": "就是废物,充电不进害得老子把主板烧了,客服不耐烦",
- "textb": "",
- "question": "",
- "choice": ["这是一条差评", "这是一条好评"],
- "answer": "这是一条差评",
- "label": 0,
- "id": 31
-}]
-
-if args.train:
- model.train(train_data, dev_data)
-result = model.predict(test_data)
-```
-## Pretrained Model
-对于英文模型,我们使用14份 multiplechoice 数据集进行了预训练。在中文模型中,我们已经收集了48份数据集对模型进行预训练,我们已经将预训练模型开源到 HuggingFace 社区当中。
-
-| 模型 | 地址 |
-|:---------:|:--------------:|
-| Erlangshen-UniMC-Albert-235M-English | [https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-Albert-235M-English) |
-| Erlangshen-UniMC-RoBERTa-110M-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) |
-| Erlangshen-UniMC-RoBERTa-330M-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) |
-| Erlangshen-UniMC-MegatronBERT-1.3B-Chinese | [https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) |
-
-## Experiments
-
-
-### English
-
-为了测评 UniMC 的性能,在英文中,我们使用 14份 multiple-choice 数据集(具体数据参考原论文)来对模型进行预训练,使其具备做选择题的能力,
-
-**Zero-shot**
-| Model | T0 11B | GLaM 60B | FLAN 137B | PaLM 540B | UniMC 235M |
-|---------|--------|----------|-----------|-----------|------------|
-| ANLI R1 | 43.6 | 40.9 | 47.7 | 48.4 | **52.0** |
-| ANLI R2 | 38.7 | 38.2 | 43.9 | 44.2 | **44.4** |
-| ANLI R3 | 41.3 | 40.9 | 47.0 | 45.7 | **47.8** |
-| CB | 70.1 | 33.9 | 64.1 | 51.8 | **75.7** |
-### Chinese
-
-为了测评 UniMC 在中文场景下的性能我们使用 13份 有监督数据集来对模型进行预训练,预训练数据如下:
-| Task type | Task | # of option | Data size |
-|---------|--------|----------|-----------|
-| Multiple-choice | c3 | 4 | 11.8k |
-| Multiple-choice | ClozeT | 2 | 0.7k |
-| Multiple-choice | CMRC2019 | n | 11.4k |
-| Multiple-choice | GCRC | 4 | 7.8k |
-| Classification | DuEE-Fin | 12 | 4.3k |
-| Classification | DuEE1.0 | 65 | 10.3k |
-| Classification | Fudan | 20 | 19.6k |
-| Classification | THUNEWS | 10 | 180k |
-| NLI | CMNLI | 3 | 39k |
-| NLI | SNLI | 3 | 545.8k |
-| Paraphrace | AFQMC | 2 | 34.3k |
-| Paraphrace | PAWS-X | 2 | 49k |
-| Paraphrace | STS-B | 2 | 80k |
-
-我们使用中文领域常用的benchmark来测试UniMC的性能,具体是FewCLUE的9个任务,我们在 test_public 上测评模型的性能。
-
-
-**Few-shot**
-| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
-|------------|------------|----------|-----------|----------|-----------|-----------|-----------|----------|-----------|-----------|
-| Finetuning | 65.4 | 35.5 | 49 | 32.8 | 33 | 60.7 | 14.9 | 50 | 55.6 | 44.1 |
-| PET | 86.7 | 51.7 | 54.5 | 46 | 44 | 56 | 61.2 | 59.4 | 57.5 | 57.44 |
-| LM-BFF | 85.6 | 54.4 | 53 | 47.1 | 41.6 | 57.6 | 61.2 | 51.7 | 54.7 | 56.32 |
-| P-tuning | 88.3 | 56 | 54.2 | **57.6** | 41.9 | 60.9 | 59.3 | **62.9** | 58.1 | 59.91 |
-| EFL | 84.9 | 45 | 52.1 | 42.7 | 66.2 | 71.8 | 30.9 | 56.6 | 53 | 55.91 |
-| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 88.64 | 54.08 | 54.32 | 48.6 | 66.55 | 73.76 | 67.71 | 52.54 | 59.92 | 62.86 |
-| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 89.53 | 57.3 | 54.25 | 50 | 70.59 | 77.49 | 78.09 | 55.73 | 65.16 | 66.46 |
-| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **89.278** | **60.9** | **57.46** | 52.89 | **76.33** | **80.37** | **90.33** | 61.73 | **79.15** | **72.05** |
-
-**Zero-shot**
-
-| Model | eprstmt | csldcp | tnews | iflytek | ocnli | bustm | chid | csl | wsc | Avg |
-|---------------|-----------|-----------|-----------|-----------|-----------|----------|----------|----------|-----------|-----------|
-| GPT-zero | 57.5 | 26.2 | 37 | 19 | 34.4 | 50 | 65.6 | 50.1 | 50.3 | 43.4 |
-| PET-zero | 85.2 | 12.6 | 26.1 | 26.6 | 40.3 | 50.6 | 57.6 | 52.2 | 54.7 | 45.1 |
-| NSP-BERT | 86.9 | 47.6 | 51 | 41.6 | 37.4 | 63.4 | 52 | **64.4** | 59.4 | 55.96 |
-| ZeroPrompt | - | - | - | 16.14 | 46.16 | - | - | - | 47.98 | - |
-| Yuan1.0-13B | 88.13 | 38.99 | 57.47 | 38.82 | 48.13 | 59.38 | 86.14 | 50 | 38.99 | 56.22 |
-| ERNIE3.0-240B | 88.75 | **50.97** | **57.83** | **40.42** | 53.57 | 64.38 | 87.13 | 56.25 | 53.46 | 61.41 |
-| [UniMC-RoBERTa-110M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-110M-Chinese) | 86.16 | 31.26 | 46.61 | 26.54 | 66.91 | 73.34 | 66.68 | 50.09 | 53.66 | 55.7 |
-| [UniMC-RoBERTa-330M](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-RoBERTa-330M-Chinese) | 87.5 | 30.4 | 47.6 | 31.5 | 69.9 | 75.9 | 78.17 | 49.5 | 60.55 | 59.01 |
-| [UniMC-MegatronBERT-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-UniMC-MegatronBERT-1.3B-Chinese) | **88.79** | 42.06 | 55.21 | 33.93 | **75.57** | **79.5** | **89.4** | 50.25 | **66.67** | **64.53** |
-
-
-
-## Dataset
-
-我们已经定义好了 UniMC 所需的数据格式,你只需要将数据转化为下面的数据格式即可:
-
-### 文本分类
-```json
-{
- "texta": "街头偶遇2018款长安CS35,颜值美炸!或售6万起,还买宝骏510?",
- "textb": "",
- "question": "下面新闻属于哪一个类别?",
- "choice": [
- "房产",
- "汽车",
- "教育",
- "军事"
- ],
- "answer": "汽车",
- "label": 1,
- "id": 7759
-}
-
-```
-
-### 情感分析
-```json
-{
- "texta": "就是废物,充电不进害得老子把主板烧了,客服不耐烦",
- "textb": "",
- "question": "",
- "choice": ["这是一条差评", "这是一条好评"],
- "answer": "这是一条差评",
- "label": 0,
- "id": 31
-}
-
-```
-
-### 语义匹配
-```json
-{
- "texta": "不要借了我是试试看能否操作的",
- "textb": "",
- "question": "",
- "choice": ["不能理解为:借款审核期间能否取消借款", "可以理解为:借款审核期间能否取消借款"],
- "answer": "不能理解为:借款审核期间能否取消借款",
- "label": 0,
- "id": 0
-}
-
-```
-
-### 自然语言推理
-```json
-{
- "texta": "身上裹一件工厂发的棉大衣,手插在袖筒里",
- "textb": "",
- "question": "",
- "choice": ["不能推断出:身上至少一件衣服", "很难推断出:身上至少一件衣服", "可以推断出:身上至少一件衣服"],
- "answer": "可以推断出:身上至少一件衣服",
- "label": 2,
- "id": 0
-}
-
-```
-
-
-## Citation
-如果你觉得本仓库帮助到了你,你可以使用下面方式引用我们的工作
-
-```text
-@article{unimc,
- author = {Ping Yang and
- Junjie Wang and
- Ruyi Gan and
- Xinyu Zhu and
- Lin Zhang and
- Ziwei Wu and
- Xinyu Gao and
- Jiaxing Zhang and
- Tetsuya Sakai},
- title = {Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective},
- journal = {CoRR},
- volume = {abs/2210.08590},
- year = {2022}
-}
-```
-
-## License
-
-[Apache License 2.0](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/LICENSE)
-
diff --git a/spaces/fclong/summary/fengshen/models/deltalm/modeling_deltalm.py b/spaces/fclong/summary/fengshen/models/deltalm/modeling_deltalm.py
deleted file mode 100644
index 2cdd65f3e106e9433dd5116419dfd50cd8a33b85..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/models/deltalm/modeling_deltalm.py
+++ /dev/null
@@ -1,1551 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-
-import copy
-import math
-import random
-import torch
-import torch.nn as nn
-import torch.utils.checkpoint
-from torch.nn import CrossEntropyLoss
-from typing import List, Optional, Tuple, Union
-
-from transformers.modeling_utils import PreTrainedModel
-from transformers.activations import ACT2FN
-from transformers.modeling_outputs import (
- BaseModelOutput,
- BaseModelOutputWithPastAndCrossAttentions,
- CausalLMOutputWithCrossAttentions,
- Seq2SeqModelOutput,
- Seq2SeqLMOutput,
-)
-from transformers.file_utils import (
- add_end_docstrings,
- add_start_docstrings,
- add_start_docstrings_to_model_forward,
- replace_return_docstrings,
-)
-
-import logging
-from .configuration_deltalm import DeltalmConfig
-logger = logging.getLogger(__name__)
-
-_CHECKPOINT_FOR_DOC = "IDEA-CCNL/Randeng-Deltalm-362M-En-Zn"
-_CONFIG_FOR_DOC = "DeltalmConfig"
-_TOKENIZER_FOR_DOC = "DeltalmTokenizer"
-
-# Base model docstring
-_EXPECTED_OUTPUT_SHAPE = [1, 8, 768]
-
-
-def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
- """
- Shift input ids one token to the right.
- """
- shifted_input_ids = input_ids.new_zeros(input_ids.shape)
- shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
- shifted_input_ids[:, 0] = decoder_start_token_id
-
- if pad_token_id is None:
- raise ValueError("self.model.config.pad_token_id has to be defined.")
- # replace possible -100 values in labels by `pad_token_id`
- shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
-
- return shifted_input_ids
-
-
-def _make_causal_mask(input_ids_shape: torch.Size, dtype: torch.dtype, past_key_values_length: int = 0):
- """
- Make causal mask used for bi-directional self-attention.
- """
- bsz, tgt_len = input_ids_shape
- mask = torch.full((tgt_len, tgt_len), torch.tensor(float("-inf")))
- mask_cond = torch.arange(mask.size(-1))
- mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
- mask = mask.to(dtype)
-
- if past_key_values_length > 0:
- mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)
- return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
-
-
-def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
- """
- Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
- """
- bsz, src_len = mask.size()
- tgt_len = tgt_len if tgt_len is not None else src_len
-
- expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
-
- inverted_mask = 1.0 - expanded_mask
-
- return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
-
-
-class DeltalmLearnedPositionalEmbedding(nn.Embedding):
- """
- This module learns positional embeddings up to a fixed maximum size.
- """
-
- def __init__(self, num_embeddings: int, embedding_dim: int):
- # Deltalm is set up so that if padding_idx is specified then offset the embedding ids by 2
- # and adjust num_embeddings appropriately. Other models don't have this hack
- self.offset = 2
- super().__init__(num_embeddings + self.offset, embedding_dim)
-
- def forward(self, input_ids_shape: torch.Size, past_key_values_length: int = 0):
- """`input_ids_shape` is expected to be [bsz x seqlen]."""
- bsz, seq_len = input_ids_shape[:2]
- positions = torch.arange(
- past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
- )
- return super().forward(positions + self.offset)
-
-
-class DeltalmAttention(nn.Module):
- """Multi-headed attention from 'Attention Is All You Need' paper"""
-
- def __init__(
- self,
- embed_dim: int,
- num_heads: int,
- dropout: float = 0.0,
- is_decoder: bool = False,
- bias: bool = True,
- ):
- super().__init__()
- self.embed_dim = embed_dim
- self.num_heads = num_heads
- self.dropout = dropout
- self.head_dim = embed_dim // num_heads
-
- if (self.head_dim * num_heads) != self.embed_dim:
- raise ValueError(
- f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
- f" and `num_heads`: {num_heads})."
- )
- self.scaling = self.head_dim**-0.5
- self.is_decoder = is_decoder
-
- self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
- self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
- self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
- self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
-
- def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
- return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- key_value_states: Optional[torch.Tensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- attention_mask: Optional[torch.Tensor] = None,
- layer_head_mask: Optional[torch.Tensor] = None,
- output_attentions: bool = False,
- ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
- """Input shape: Batch x Time x Channel"""
-
- # if key_value_states are provided this layer is used as a cross-attention layer
- # for the decoder
- is_cross_attention = key_value_states is not None
-
- bsz, tgt_len, _ = hidden_states.size()
-
- # get query proj
- query_states = self.q_proj(hidden_states) * self.scaling
- # get key, value proj
- if is_cross_attention and past_key_value is not None:
- # reuse k,v, cross_attentions
- key_states = past_key_value[0]
- value_states = past_key_value[1]
- elif is_cross_attention:
- # cross_attentions
- key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
- value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
- elif past_key_value is not None:
- # reuse k, v, self_attention
- key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
- value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
- key_states = torch.cat([past_key_value[0], key_states], dim=2)
- value_states = torch.cat([past_key_value[1], value_states], dim=2)
- else:
- # self_attention
- key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
- value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
-
- if self.is_decoder:
- # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
- # Further calls to cross_attention layer can then reuse all cross-attention
- # key/value_states (first "if" case)
- # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
- # all previous decoder key/value_states. Further calls to uni-directional self-attention
- # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
- # if encoder bi-directional self-attention `past_key_value` is always `None`
- past_key_value = (key_states, value_states)
-
- proj_shape = (bsz * self.num_heads, -1, self.head_dim)
- query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
- key_states = key_states.view(*proj_shape)
- value_states = value_states.view(*proj_shape)
-
- src_len = key_states.size(1)
- attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
-
- if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
- raise ValueError(
- f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is"
- f" {attn_weights.size()}"
- )
-
- if attention_mask is not None:
- if attention_mask.size() != (bsz, 1, tgt_len, src_len):
- raise ValueError(
- f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
- )
- attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
-
- attn_weights = nn.functional.softmax(attn_weights, dim=-1)
-
- if layer_head_mask is not None:
- if layer_head_mask.size() != (self.num_heads,):
- raise ValueError(
- f"Head mask for a single layer should be of size {(self.num_heads,)}, but is"
- f" {layer_head_mask.size()}"
- )
- attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
-
- if output_attentions:
- # this operation is a bit awkward, but it's required to
- # make sure that attn_weights keeps its gradient.
- # In order to do so, attn_weights have to be reshaped
- # twice and have to be reused in the following
- attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
- attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
- else:
- attn_weights_reshaped = None
-
- attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
-
- attn_output = torch.bmm(attn_probs, value_states)
-
- if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
- raise ValueError(
- f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is"
- f" {attn_output.size()}"
- )
-
- attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
- attn_output = attn_output.transpose(1, 2)
-
- # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
- # partitioned aross GPUs when using tensor-parallelism.
- attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
-
- attn_output = self.out_proj(attn_output)
-
- return attn_output, attn_weights_reshaped, past_key_value
-
-
-class DeltalmEncoderLayer(nn.Module):
- def __init__(self, config: DeltalmConfig):
- super().__init__()
- self.embed_dim = config.d_model
- self.self_attn = DeltalmAttention(
- embed_dim=self.embed_dim,
- num_heads=config.encoder_attention_heads,
- dropout=config.attention_dropout,
- )
- self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
- self.dropout = config.dropout
- self.activation_fn = ACT2FN[config.activation_function]
- self.activation_dropout = config.activation_dropout
- self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
- self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
- self.final_layer_norm = nn.LayerNorm(self.embed_dim)
-
- def forward(
- self,
- hidden_states: torch.FloatTensor,
- attention_mask: torch.FloatTensor,
- layer_head_mask: torch.FloatTensor,
- output_attentions: Optional[bool] = False,
- ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
- attention_mask (`torch.FloatTensor`): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
- `(encoder_attention_heads,)`.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- """
- residual = hidden_states
- hidden_states, attn_weights, _ = self.self_attn(
- hidden_states=hidden_states,
- attention_mask=attention_mask,
- layer_head_mask=layer_head_mask,
- output_attentions=output_attentions,
- )
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.self_attn_layer_norm(hidden_states)
-
- residual = hidden_states
- hidden_states = self.activation_fn(self.fc1(hidden_states))
- hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
- hidden_states = self.fc2(hidden_states)
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.final_layer_norm(hidden_states)
-
- if hidden_states.dtype == torch.float16 and (
- torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
- ):
- clamp_value = torch.finfo(hidden_states.dtype).max - 1000
- hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (attn_weights,)
-
- return outputs
-
-
-class DeltalmDecoderLayer(nn.Module):
- def __init__(self, config: DeltalmConfig):
- super().__init__()
- self.embed_dim = config.d_model
-
- self.self_attn = DeltalmAttention(
- embed_dim=self.embed_dim,
- num_heads=config.decoder_attention_heads,
- dropout=config.attention_dropout,
- is_decoder=True,
- )
- self.dropout = config.dropout
- self.activation_fn = ACT2FN[config.activation_function]
- self.activation_dropout = config.activation_dropout
-
- self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
- self.encoder_attn = DeltalmAttention(
- self.embed_dim,
- config.decoder_attention_heads,
- dropout=config.attention_dropout,
- is_decoder=True,
- )
- self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim)
- self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim)
- self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim)
- self.fc3 = nn.Linear(self.embed_dim, config.decoder_ffn_dim)
- self.fc4 = nn.Linear(config.decoder_ffn_dim, self.embed_dim)
-
- self.ffn_layer_norm = nn.LayerNorm(self.embed_dim)
- self.final_layer_norm = nn.LayerNorm(self.embed_dim)
-
- def forward(
- self,
- hidden_states: torch.Tensor,
- attention_mask: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- layer_head_mask: Optional[torch.Tensor] = None,
- cross_attn_layer_head_mask: Optional[torch.Tensor] = None,
- past_key_value: Optional[Tuple[torch.Tensor]] = None,
- output_attentions: Optional[bool] = False,
- use_cache: Optional[bool] = True,
- ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
- """
- Args:
- hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
- attention_mask (`torch.FloatTensor`): attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- encoder_hidden_states (`torch.FloatTensor`):
- cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
- encoder_attention_mask (`torch.FloatTensor`): encoder attention mask of size
- `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
- layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size
- `(encoder_attention_heads,)`.
- cross_attn_layer_head_mask (`torch.FloatTensor`): mask for cross-attention heads in a given layer of
- size `(decoder_attention_heads,)`.
- past_key_value (`Tuple(torch.FloatTensor)`): cached past key and value projection states
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- """
- residual = hidden_states
-
- # Self Attention
- # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
- self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
- # add present self-attn cache to positions 1,2 of present_key_value tuple
- hidden_states, self_attn_weights, present_key_value = self.self_attn(
- hidden_states=hidden_states,
- past_key_value=self_attn_past_key_value,
- attention_mask=attention_mask,
- layer_head_mask=layer_head_mask,
- output_attentions=output_attentions,
- )
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.self_attn_layer_norm(hidden_states)
-
- # Add another ffn after self-attention to keep the structure same to encoder-layer
- residual = hidden_states
- hidden_states = self.activation_fn(self.fc3(hidden_states))
- hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
- hidden_states = self.fc4(hidden_states)
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.ffn_layer_norm(hidden_states)
-
- # Cross-Attention Block
- cross_attn_present_key_value = None
- cross_attn_weights = None
- if encoder_hidden_states is not None:
- residual = hidden_states
-
- # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
- cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
- hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
- hidden_states=hidden_states,
- key_value_states=encoder_hidden_states,
- attention_mask=encoder_attention_mask,
- layer_head_mask=cross_attn_layer_head_mask,
- past_key_value=cross_attn_past_key_value,
- output_attentions=output_attentions,
- )
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.encoder_attn_layer_norm(hidden_states)
-
- # add cross-attn to positions 3,4 of present_key_value tuple
- present_key_value = present_key_value + cross_attn_present_key_value
-
- # Fully Connected
- residual = hidden_states
- hidden_states = self.activation_fn(self.fc1(hidden_states))
- hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
- hidden_states = self.fc2(hidden_states)
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
- hidden_states = residual + hidden_states
- hidden_states = self.final_layer_norm(hidden_states)
-
- outputs = (hidden_states,)
-
- if output_attentions:
- outputs += (self_attn_weights, cross_attn_weights)
-
- if use_cache:
- outputs += (present_key_value,)
-
- return outputs
-
-
-class DeltalmPretrainedModel(PreTrainedModel):
- config_class = DeltalmConfig
- base_model_prefix = "model"
- supports_gradient_checkpointing = True
-
- def _init_weights(self, module):
- std = self.config.init_std
- if isinstance(module, nn.Linear):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.bias is not None:
- module.bias.data.zero_()
- elif isinstance(module, nn.Embedding):
- module.weight.data.normal_(mean=0.0, std=std)
- if module.padding_idx is not None:
- module.weight.data[module.padding_idx].zero_()
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, (DeltalmDecoder, DeltalmEncoder)):
- module.gradient_checkpointing = value
-
-
-class DeltalmDecoder(DeltalmPretrainedModel):
- """
- Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`DeltalmDecoderLayer`]
- Args:
- config: DeltalmConfig
- embed_tokens (nn.Embedding): output embedding
- """
-
- def __init__(self, config: DeltalmConfig, embed_tokens: Optional[nn.Embedding] = None):
- super().__init__(config)
- self.dropout = config.dropout
- self.layerdrop = config.decoder_layerdrop
- self.padding_idx = config.pad_token_id
- self.max_target_positions = config.max_position_embeddings
- self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0
-
- if embed_tokens is not None:
- self.embed_tokens = embed_tokens
- else:
- self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx)
-
- self.embed_positions = DeltalmLearnedPositionalEmbedding(
- config.max_position_embeddings,
- config.d_model,
- )
- self.layers = nn.ModuleList([DeltalmDecoderLayer(config) for _ in range(config.decoder_layers)])
- self.layernorm_embedding = nn.LayerNorm(config.d_model)
-
- self.gradient_checkpointing = False
- # Initialize weights and apply final processing
- self.post_init()
-
- # fairseq实现了一个 nn.init.normal_(self.output_projection.weight, mean=0, std=self.output_embed_dim ** -0.5) 对最后的output权重做正态分布转换?
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, value):
- self.embed_tokens = value
-
- def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
- # create causal mask
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- combined_attention_mask = None
- if input_shape[-1] > 1:
- combined_attention_mask = _make_causal_mask(
- input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length
- ).to(inputs_embeds.device)
-
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
- combined_attention_mask = (
- expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
- )
-
- return combined_attention_mask
-
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]:
- r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
- provide it.
- Indices can be obtained using [`DeltalmTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
- of the decoder.
- encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*):
- Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values
- selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing
- cross-attention on hidden heads. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
- shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of
- shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
- Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
- cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those
- that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of
- all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of
- shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing
- `input_ids` you can choose to directly pass an embedded representation. This is useful if you want more
- control over how to convert `input_ids` indices into associated vectors than the model's internal
- embedding lookup matrix.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # retrieve input_ids and inputs_embeds
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
-
- # past_key_values_length
- past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
-
- if inputs_embeds is None:
- inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
-
- attention_mask = self._prepare_decoder_attention_mask(
- attention_mask, input_shape, inputs_embeds, past_key_values_length
- )
-
- # expand encoder attention mask
- if encoder_hidden_states is not None and encoder_attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
-
- # embed positions
- positions = self.embed_positions(input_shape, past_key_values_length)
-
- hidden_states = inputs_embeds + positions
- hidden_states = self.layernorm_embedding(hidden_states)
-
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
-
- # decoder layers
- all_hidden_states = () if output_hidden_states else None
- all_self_attns = () if output_attentions else None
- all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
- next_decoder_cache = () if use_cache else None
-
- # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
- for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
- if attn_mask is not None:
- if attn_mask.size()[0] != (len(self.layers)):
- raise ValueError(
- f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for"
- f" {head_mask.size()[0]}."
- )
-
- for idx, decoder_layer in enumerate(self.layers):
- # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
- dropout_probability = random.uniform(0, 1)
- if self.training and (dropout_probability < self.layerdrop):
- continue
-
- past_key_value = past_key_values[idx] if past_key_values is not None else None
-
- if self.gradient_checkpointing and self.training:
-
- if use_cache:
- logger.warning(
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
- )
- use_cache = False
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- # None for past_key_value
- return module(*inputs, output_attentions, use_cache)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(decoder_layer),
- hidden_states,
- attention_mask,
- encoder_hidden_states,
- encoder_attention_mask,
- head_mask[idx] if head_mask is not None else None,
- cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None,
- None,
- )
- else:
-
- layer_outputs = decoder_layer(
- hidden_states,
- attention_mask=attention_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- layer_head_mask=(head_mask[idx] if head_mask is not None else None),
- cross_attn_layer_head_mask=(
- cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None
- ),
- past_key_value=past_key_value,
- output_attentions=output_attentions,
- use_cache=use_cache,
- )
- hidden_states = layer_outputs[0]
-
- if use_cache:
- next_decoder_cache += (layer_outputs[3 if output_attentions else 1],)
-
- if output_attentions:
- all_self_attns += (layer_outputs[1],)
-
- if encoder_hidden_states is not None:
- all_cross_attentions += (layer_outputs[2],)
-
- # add hidden states from the last decoder layer
- if output_hidden_states:
- all_hidden_states += (hidden_states,)
-
- next_cache = next_decoder_cache if use_cache else None
- if not return_dict:
- return tuple(
- v
- for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions]
- if v is not None
- )
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=hidden_states,
- past_key_values=next_cache,
- hidden_states=all_hidden_states,
- attentions=all_self_attns,
- cross_attentions=all_cross_attentions,
- )
-
-
-DELTALM_START_DOCSTRING = r"""
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
- Parameters:
- config ([`DeltalmConfig`]):
- Model configuration class with all the parameters of the model. Initializing with a config file does not
- load the weights associated with the model, only the configuration. Check out the
- [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-DELTALM_GENERATION_EXAMPLE = r"""
- Summarization example:
- ```python
- >>> from transformers import DeltalmTokenizer, DeltalmForConditionalGeneration
- >>> model = DeltalmForConditionalGeneration.from_pretrained("facebook/deltalm-large-cnn")
- >>> tokenizer = DeltalmTokenizer.from_pretrained("facebook/deltalm-large-cnn")
- >>> ARTICLE_TO_SUMMARIZE = (
- ... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
- ... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
- ... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
- ... )
- >>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="pt")
- >>> # Generate Summary
- >>> summary_ids = model.generate(inputs["input_ids"], num_beams=2, min_length=0, max_length=20)
- >>> tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
- 'PG&E scheduled the blackouts in response to forecasts for high winds amid dry conditions'
- ```
- Mask filling example:
- ```python
- >>> from transformers import DeltalmTokenizer, DeltalmForConditionalGeneration
- >>> tokenizer = DeltalmTokenizer.from_pretrained("facebook/deltalm-base")
- >>> model = DeltalmForConditionalGeneration.from_pretrained("facebook/deltalm-base")
- >>> TXT = "My friends are but they eat too many carbs."
- >>> input_ids = tokenizer([TXT], return_tensors="pt")["input_ids"]
- >>> logits = model(input_ids).logits
- >>> masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
- >>> probs = logits[0, masked_index].softmax(dim=0)
- >>> values, predictions = probs.topk(5)
- >>> tokenizer.decode(predictions).split()
- ['not', 'good', 'healthy', 'great', 'very']
- ```
-"""
-
-DELTALM_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
- it.
- Indices can be obtained using [`DeltalmTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
- Indices of decoder input sequence tokens in the vocabulary.
- Indices can be obtained using [`DeltalmTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are decoder input IDs?](../glossary#decoder-input-ids)
- Deltalm uses the `eos_token_id` as the starting token for `decoder_input_ids` generation. If `past_key_values`
- is used, optionally only the last `decoder_input_ids` have to be input (see `past_key_values`).
- For translation and summarization training, `decoder_input_ids` should be provided. If no
- `decoder_input_ids` is provided, the model will create this tensor by shifting the `input_ids` to the right
- for denoising pre-training following the paper.
- decoder_attention_mask (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
- Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
- be used by default.
- If you want to change padding behavior, you should read [`modeling_deltalm._prepare_decoder_attention_mask`]
- and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
- information on the default strategy.
- head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- decoder_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in `[0,
- 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- encoder_outputs (`tuple(tuple(torch.FloatTensor)`, *optional*):
- Tuple consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`)
- `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of
- hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
- `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
- `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`.
- Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
- blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of shape
- `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing `input_ids` you
- can choose to directly pass an embedded representation. This is useful if you want more control over how to
- convert `input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded
- representation. If `past_key_values` is used, optionally only the last `decoder_inputs_embeds` have to be
- input (see `past_key_values`). This is useful if you want more control over how to convert
- `decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix.
- If `decoder_input_ids` and `decoder_inputs_embeds` are both unset, `decoder_inputs_embeds` takes the value
- of `inputs_embeds`.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
-"""
-
-
-class DeltalmEncoder(DeltalmPretrainedModel):
- """
- Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a
- [`DeltalmEncoderLayer`].
- Args:
- config: DeltalmConfig
- embed_tokens (nn.Embedding): output embedding
- """
-
- def __init__(self, config: DeltalmConfig, embed_tokens: Optional[nn.Embedding] = None):
- super().__init__(config)
-
- self.dropout = config.dropout
- self.layerdrop = config.encoder_layerdrop
-
- embed_dim = config.d_model
- self.padding_idx = config.pad_token_id
- self.max_source_positions = config.max_position_embeddings
- self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0
-
- if embed_tokens is not None:
- self.embed_tokens = embed_tokens
- else:
- self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim, self.padding_idx)
-
- self.embed_positions = DeltalmLearnedPositionalEmbedding(
- config.max_position_embeddings,
- embed_dim,
- )
- self.layers = nn.ModuleList([DeltalmEncoderLayer(config) for _ in range(config.encoder_layers)])
- self.layernorm_embedding = nn.LayerNorm(embed_dim)
-
- self.gradient_checkpointing = False
- if config.encoder_normalize_before:
- self.layer_norm = nn.LayerNorm(embed_dim)
- else:
- self.layer_norm = None
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.embed_tokens
-
- def set_input_embeddings(self, value):
- self.embed_tokens = value
-
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, BaseModelOutput]:
- r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
- provide it.
- Indices can be obtained using [`DeltalmTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
- This is useful if you want more control over how to convert `input_ids` indices into associated vectors
- than the model's internal embedding lookup matrix.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- """
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # retrieve input_ids and inputs_embeds
- if input_ids is not None and inputs_embeds is not None:
- raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
- elif input_ids is not None:
- input_shape = input_ids.size()
- input_ids = input_ids.view(-1, input_shape[-1])
- elif inputs_embeds is not None:
- input_shape = inputs_embeds.size()[:-1]
- else:
- raise ValueError("You have to specify either input_ids or inputs_embeds")
-
- if inputs_embeds is None:
- inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
-
- embed_pos = self.embed_positions(input_shape)
-
- hidden_states = inputs_embeds + embed_pos
- hidden_states = self.layernorm_embedding(hidden_states)
- hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
-
- # expand attention_mask
- if attention_mask is not None:
- # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
- attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)
-
- encoder_states = () if output_hidden_states else None
- all_attentions = () if output_attentions else None
-
- # check if head_mask has a correct number of layers specified if desired
- if head_mask is not None:
- if head_mask.size()[0] != (len(self.layers)):
- raise ValueError(
- f"The head_mask should be specified for {len(self.layers)} layers, but it is for"
- f" {head_mask.size()[0]}."
- )
-
- for idx, encoder_layer in enumerate(self.layers):
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
- # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
- dropout_probability = random.uniform(0, 1)
- if self.training and (dropout_probability < self.layerdrop): # skip the layer
- layer_outputs = (None, None)
- else:
- if self.gradient_checkpointing and self.training:
-
- def create_custom_forward(module):
- def custom_forward(*inputs):
- return module(*inputs, output_attentions)
-
- return custom_forward
-
- layer_outputs = torch.utils.checkpoint.checkpoint(
- create_custom_forward(encoder_layer),
- hidden_states,
- attention_mask,
- (head_mask[idx] if head_mask is not None else None),
- )
- else:
- layer_outputs = encoder_layer(
- hidden_states,
- attention_mask,
- layer_head_mask=(head_mask[idx] if head_mask is not None else None),
- output_attentions=output_attentions,
- )
-
- hidden_states = layer_outputs[0]
-
- if output_attentions:
- all_attentions = all_attentions + (layer_outputs[1],)
-
- if self.layer_norm is not None:
- hidden_states = self.layer_norm(hidden_states)
- # hidden_states = self.layernorm_embedding(hidden_states)
-
- if output_hidden_states:
- encoder_states = encoder_states + (hidden_states,)
-
- if not return_dict:
- return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
- return BaseModelOutput(
- last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
- )
-
-
-class DeltalmModel(DeltalmPretrainedModel):
- def __init__(self, config: DeltalmConfig):
- super().__init__(config)
-
- padding_idx, vocab_size = config.pad_token_id, config.vocab_size
- self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)
-
- self.encoder = DeltalmEncoder(config, self.shared)
- self.decoder = DeltalmDecoder(config, self.shared)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.shared
-
- def set_input_embeddings(self, value):
- self.shared = value
- self.encoder.embed_tokens = self.shared
- self.decoder.embed_tokens = self.shared
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
- @add_start_docstrings_to_model_forward(DELTALM_INPUTS_DOCSTRING)
- # @add_code_sample_docstrings(
- # processor_class=_TOKENIZER_FOR_DOC,
- # checkpoint=_CHECKPOINT_FOR_DOC,
- # output_type=Seq2SeqModelOutput,
- # config_class=_CONFIG_FOR_DOC,
- # expected_output=_EXPECTED_OUTPUT_SHAPE,
- # )
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- decoder_head_mask: Optional[torch.Tensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- encoder_outputs: Optional[List[torch.FloatTensor]] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, Seq2SeqModelOutput]:
-
- # different to other models, Deltalm automatically creates decoder_input_ids from
- # input_ids if no decoder_input_ids are provided
- if decoder_input_ids is None and decoder_inputs_embeds is None:
- if input_ids is None:
- raise ValueError(
- "If no `decoder_input_ids` or `decoder_inputs_embeds` are "
- "passed, `input_ids` cannot be `None`. Please pass either "
- "`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`."
- )
-
- decoder_input_ids = shift_tokens_right(
- input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
- )
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if encoder_outputs is None:
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
- )
-
- # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- encoder_hidden_states=encoder_outputs[0],
- encoder_attention_mask=attention_mask,
- head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- past_key_values=past_key_values,
- inputs_embeds=decoder_inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- if not return_dict:
- return decoder_outputs + encoder_outputs
-
- logger.debug("last_hidden_state.size: %s", decoder_outputs.last_hidden_state)
- return Seq2SeqModelOutput(
- last_hidden_state=decoder_outputs.last_hidden_state,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
-
-@add_start_docstrings(
- "The DELTALM Model with a language modeling head. Can be used for translation.", DELTALM_START_DOCSTRING
-)
-class DeltalmForConditionalGeneration(DeltalmPretrainedModel):
- base_model_prefix = "model"
- _keys_to_ignore_on_load_missing = [r"final_logits_bias", r"lm_head.weight"]
-
- def __init__(self, config: DeltalmConfig):
- super().__init__(config)
- self.model = DeltalmModel(config)
- self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
- self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_encoder(self):
- return self.model.get_encoder()
-
- def get_decoder(self):
- return self.model.get_decoder()
-
- def resize_token_embeddings(self, new_num_tokens: int) -> nn.Embedding:
- new_embeddings = super().resize_token_embeddings(new_num_tokens)
- self._resize_final_logits_bias(new_num_tokens)
- return new_embeddings
-
- def _resize_final_logits_bias(self, new_num_tokens: int) -> None:
- logger.debug("Debug: coming to _resize_final_logits_bias")
- old_num_tokens = self.final_logits_bias.shape[-1]
- if new_num_tokens <= old_num_tokens:
- new_bias = self.final_logits_bias[:, :new_num_tokens]
- else:
- extra_bias = torch.zeros((1, new_num_tokens - old_num_tokens), device=self.final_logits_bias.device)
- new_bias = torch.cat([self.final_logits_bias, extra_bias], dim=1)
- self.register_buffer("final_logits_bias", new_bias)
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- @add_start_docstrings_to_model_forward(DELTALM_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC)
- @add_end_docstrings(DELTALM_GENERATION_EXAMPLE)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.LongTensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- decoder_head_mask: Optional[torch.Tensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- encoder_outputs: Optional[List[torch.FloatTensor]] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, Seq2SeqLMOutput]:
- r"""
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
- config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- Returns:
- """
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- logger.debug("Comming to Generation!")
-
- if labels is not None:
- logger.debug("Debug: *************** Before label ***************** ")
- logger.debug("Debug: %s", labels.size())
- if use_cache:
- logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.")
- use_cache = False
- if decoder_input_ids is None and decoder_inputs_embeds is None:
- decoder_input_ids = shift_tokens_right(
- labels, self.config.pad_token_id, self.config.decoder_start_token_id
- )
-
- logger.debug("Debug: ************ After labels ************")
- logger.debug("Debug: %s", labels.size())
-
- outputs = self.model(
- input_ids,
- attention_mask=attention_mask,
- decoder_input_ids=decoder_input_ids,
- encoder_outputs=encoder_outputs,
- decoder_attention_mask=decoder_attention_mask,
- head_mask=head_mask,
- decoder_head_mask=decoder_head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- decoder_inputs_embeds=decoder_inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
- lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias
- # print(self.lm_head)
- logger.debug("Debug: logit_size: %s", lm_logits.size())
-
- # logger.debug("Debug: change logit size: ", lm_logits.view(-1, self.config.vocab_size).size())
- # logger.debug("Debug: change label size: ", labels.view(-1).size())
- masked_lm_loss = None
-
- if labels is not None:
- # logger.debug("Debug: model label_size: %s", labels.size())
- # loss_fct = CrossEntropyLoss()
- # masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
- loss_fct = CrossEntropyLoss()
- masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
- # label_smoothing = self.config.label_smoothing
- # # logger.debug("Debug: label.size: ", )
- # if label_smoothing == 0:
- # # compute label smoothed loss
- # loss_fct = CrossEntropyLoss()
- # masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
- # else:
- # m = torch.nn.LogSoftmax(dim=-1)
- # lprobs = m(lm_logits.float())
- # # lprobs = m(lm_logits)
- # # # torch.set_printoptions(linewidth=200)
- # loss_fn = label_smoothed_nll_loss
- # masked_lm_loss, _ = loss_fn(lprobs.view(-1, lprobs.size(-1)), labels.view(-1), label_smoothing, self.config.pad_token_id)
-
- if not return_dict:
- logger.debug("Debug: not return dict")
- output = (lm_logits,) + outputs[1:]
- return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
-
- return Seq2SeqLMOutput(
- loss=masked_lm_loss,
- logits=lm_logits,
- past_key_values=outputs.past_key_values,
- decoder_hidden_states=outputs.decoder_hidden_states,
- decoder_attentions=outputs.decoder_attentions,
- cross_attentions=outputs.cross_attentions,
- encoder_last_hidden_state=outputs.encoder_last_hidden_state,
- encoder_hidden_states=outputs.encoder_hidden_states,
- encoder_attentions=outputs.encoder_attentions,
- )
-
- def prepare_inputs_for_generation(
- self,
- decoder_input_ids,
- past=None,
- attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- use_cache=None,
- encoder_outputs=None,
- **kwargs
- ):
- # cut decoder_input_ids if past is used
- if past is not None:
- decoder_input_ids = decoder_input_ids[:, -1:]
-
- return {
- "input_ids": None, # encoder_outputs is defined. input_ids not needed
- "encoder_outputs": encoder_outputs,
- "past_key_values": past,
- "decoder_input_ids": decoder_input_ids,
- "attention_mask": attention_mask,
- "head_mask": head_mask,
- "decoder_head_mask": decoder_head_mask,
- "cross_attn_head_mask": cross_attn_head_mask,
- "use_cache": use_cache, # change this to avoid caching (presumably for debugging)
- }
-
- def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
- return shift_tokens_right(labels, self.config.pad_token_id, self.config.decoder_start_token_id)
-
- @staticmethod
- def _reorder_cache(past, beam_idx):
- reordered_past = ()
- for layer_past in past:
- # cached cross_attention states don't have to be reordered -> they are always the same
- reordered_past += (
- tuple(past_state.index_select(0, beam_idx) for past_state in layer_past[:2]) + layer_past[2:],
- )
- return reordered_past
-
-
-class DeltalmDecoderWrapper(DeltalmPretrainedModel):
- """
- This wrapper class is a helper class to correctly load pretrained checkpoints when the causal language model is
- used in combination with the [`EncoderDecoderModel`] framework.
- """
-
- def __init__(self, config):
- super().__init__(config)
- self.decoder = DeltalmDecoder(config)
-
- def forward(self, *args, **kwargs):
- return self.decoder(*args, **kwargs)
-
-
-class DeltalmForCausalLM(DeltalmPretrainedModel):
- def __init__(self, config):
- config = copy.deepcopy(config)
- config.is_decoder = True
- config.is_encoder_decoder = False
- super().__init__(config)
- self.model = DeltalmDecoderWrapper(config)
-
- self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
-
- # Initialize weights and apply final processing
- self.post_init()
-
- def get_input_embeddings(self):
- return self.model.decoder.embed_tokens
-
- def set_input_embeddings(self, value):
- self.model.decoder.embed_tokens = value
-
- def get_output_embeddings(self):
- return self.lm_head
-
- def set_output_embeddings(self, new_embeddings):
- self.lm_head = new_embeddings
-
- def set_decoder(self, decoder):
- self.model.decoder = decoder
-
- def get_decoder(self):
- return self.model.decoder
-
- @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids: torch.LongTensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.FloatTensor] = None,
- encoder_attention_mask: Optional[torch.FloatTensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- past_key_values: Optional[List[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ) -> Union[Tuple, CausalLMOutputWithCrossAttentions]:
- r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
- provide it.
- Indices can be obtained using [`DeltalmTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- [What are attention masks?](../glossary#attention-mask)
- encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
- if the model is configured as a decoder.
- encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used
- in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`:
- head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*):
- Mask to nullify selected heads of the cross-attention modules. Mask values selected in `[0, 1]`:
- - 1 indicates the head is **not masked**,
- - 0 indicates the head is **masked**.
- past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`):
- Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
- shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of
- shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. The two additional
- tensors are only required when the model is used as a decoder in a Sequence to Sequence model.
- Contains pre-computed hidden-states (key and values in the self-attention blocks and in the
- cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those
- that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of
- all `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
- config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
- (see `past_key_values`).
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under
- returned tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
- for more detail.
- return_dict (`bool`, *optional*):
- Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
- Returns:
- Example:
- ```python
- >>> from transformers import DeltalmTokenizer, DeltalmForCausalLM
- >>> tokenizer = DeltalmTokenizer.from_pretrained("facebook/deltalm-base")
- >>> model = DeltalmForCausalLM.from_pretrained("facebook/deltalm-base", add_cross_attention=False)
- >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
- >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
- >>> outputs = model(**inputs)
- >>> logits = outputs.logits
- >>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size]
- >>> list(logits.shape) == expected_shape
- True
- ```"""
-
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
- output_hidden_states = (
- output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
- )
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
- outputs = self.model.decoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- head_mask=head_mask,
- cross_attn_head_mask=cross_attn_head_mask,
- past_key_values=past_key_values,
- inputs_embeds=inputs_embeds,
- use_cache=use_cache,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- logits = self.lm_head(outputs[0])
-
- loss = None
- if labels is not None:
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- output = (logits,) + outputs[1:]
- return (loss,) + output if loss is not None else output
-
- return CausalLMOutputWithCrossAttentions(
- loss=loss,
- logits=logits,
- past_key_values=outputs.past_key_values,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- cross_attentions=outputs.cross_attentions,
- )
-
- def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, use_cache=None, **kwargs):
- # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
- if attention_mask is None:
- attention_mask = input_ids.new_ones(input_ids.shape)
-
- if past:
- input_ids = input_ids[:, -1:]
- # first step, decoder_cached_states are empty
- return {
- "input_ids": input_ids, # encoder_outputs is defined. input_ids not needed
- "attention_mask": attention_mask,
- "past_key_values": past,
- "use_cache": use_cache,
- }
-
- @staticmethod
- def _reorder_cache(past, beam_idx):
- reordered_past = ()
- for layer_past in past:
- reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),)
- return reordered_past
diff --git a/spaces/fengmuxi/ChatGpt-Web/app/masks/index.ts b/spaces/fengmuxi/ChatGpt-Web/app/masks/index.ts
deleted file mode 100644
index ea0bf32bf4e6dc7958028dcff7f662f75a567ef3..0000000000000000000000000000000000000000
--- a/spaces/fengmuxi/ChatGpt-Web/app/masks/index.ts
+++ /dev/null
@@ -1,26 +0,0 @@
-import { Mask } from "../store/mask";
-import { CN_MASKS } from "./cn";
-import { EN_MASKS } from "./en";
-
-import { type BuiltinMask } from "./typing";
-export { type BuiltinMask } from "./typing";
-
-export const BUILTIN_MASK_ID = 100000;
-
-export const BUILTIN_MASK_STORE = {
- buildinId: BUILTIN_MASK_ID,
- masks: {} as Record,
- get(id?: number) {
- if (!id) return undefined;
- return this.masks[id] as Mask | undefined;
- },
- add(m: BuiltinMask) {
- const mask = { ...m, id: this.buildinId++ };
- this.masks[mask.id] = mask;
- return mask;
- },
-};
-
-export const BUILTIN_MASKS: Mask[] = [...CN_MASKS, ...EN_MASKS].map((m) =>
- BUILTIN_MASK_STORE.add(m),
-);
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Babu88 App Download APK for Android and iOS - Experience the Thrill of Casino and Sportsbook.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Babu88 App Download APK for Android and iOS - Experience the Thrill of Casino and Sportsbook.md
deleted file mode 100644
index 0f4b36d06cd9f323623aeeaada1fba9b3fcf4e00..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Babu88 App Download APK for Android and iOS - Experience the Thrill of Casino and Sportsbook.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-
Babu88 Download M Apkpure: How to Bet Online with Babu88 App
-
If you are looking for a reliable and convenient way to bet online on your favorite sports and events, you might want to check out Babu88, a popular online betting platform in Bangladesh. In this article, we will show you how to download and install Babu88 app for Android, how to register and get welcome bonus, and how to bet online with Babu88 app. Let's get started!
-
What is Babu88?
-
A brief introduction to Babu88, a popular online betting platform in Bangladesh
-
Babu88 is an online betting platform that offers a wide range of sports and events to bet on, such as cricket, football, basketball, tennis, esports, and more. You can also enjoy live casino games, such as baccarat, roulette, blackjack, and poker. Babu88 is licensed and regulated by the Curacao Gaming Authority, which ensures fair play and security for its users. You can access Babu88 from any device, whether it is a desktop, laptop, tablet, or smartphone.
Babu88 app is a mobile application that allows you to bet online with Babu88 from your Android device. With Babu88 app, you can enjoy the following features and benefits:
-
-
Easy and fast access to all the sports and events offered by Babu88
-
Live streaming of selected matches and events
-
Real-time updates and notifications of the latest odds and results
-
Secure and convenient payment methods, such as bKash, Rocket, Nagad, Skrill, Neteller, and more
-
24/7 customer support via live chat, email, phone, or WhatsApp
-
Exclusive promotions and bonuses for app users
-
-
How to download and install Babu88 app for Android?
-
The steps to download Babu88 APK from apkpure.com
-
To download Babu88 app for Android, you need to get the APK file from a trusted source, such as apkpure.com. Apkpure.com is a website that provides free and safe APK downloads for various apps and games. Here are the steps to download Babu88 APK from apkpure.com:
From the search results, select the "Babu88" app with the logo of a blue B on a yellow background
-
Click on the "Download APK" button and wait for the download to complete
-
-
The steps to install Babu88 APK on your Android device
-
After downloading the APK file from apkpure.com, you need to install it on your Android device. Before that, you need to enable the installation of apps from unknown sources on your device. Here are the steps to install Babu88 APK on your Android device:
-
-
Go to Settings > Security > Unknown Sources and toggle it on
-
Locate the Babu88 APK file in your device's storage and tap on it
-
Follow the instructions on the screen to install the app
-
Once the installation is done, you can launch the app and start betting online with Babu88
-
-
How to register and get welcome bonus on Babu88 app?
-
The steps to create an account on Babu88 app
-
To bet online with Babu88 app, you need to create an account on the platform. Here are the steps to create an account on Babu88 app:
-
babu88 app download for android apk
-babu88 net app ios install
-babu88 apk pure casino games
-babu88 mobile app sports betting
-babu88 app free download apk mirror
-babu88 net app android latest version
-babu88 ios app download apkpure
-babu88 apk download for pc windows 10
-babu88 app online casino and sportsbook
-babu88 net app apk file download
-babu88 app for iphone download apkpure
-babu88 apk mod unlimited money
-babu88 app review and rating
-babu88 net app ios update
-babu88 apk download link apkpure
-babu88 app login and registration
-babu88 net app android features
-babu88 ios app install guide
-babu88 apk pure download for tablet
-babu88 app bonus and promotions
-babu88 net app ios compatibility
-babu88 apk download for macbook
-babu88 app customer support and service
-babu88 net app android security
-babu88 ios app download link apkpure
-babu88 apk pure download for smart tv
-babu88 app payment methods and withdrawal
-babu88 net app ios review and rating
-babu88 apk download for chromebook
-babu88 app game providers and selection
-babu88 net app android download apkpure
-babu88 ios app install problem and solution
-babu88 apk pure download for firestick
-babu88 app live casino and sports streaming
-babu88 net app ios features and benefits
-babu88 apk download for linux ubuntu
-babu88 app referral program and rewards
-babu88 net app android compatibility and requirements
-babu88 ios app download apkpure com
-babu88 apk pure download for nvidia shield tv
-
-
Open the Babu88 app and click on the "Register" button at the top right corner
-
Fill in the registration form with your personal details, such as name, email, phone number, password, and referral code (if any)
-
Agree to the terms and conditions and click on the "Submit" button
-
Verify your email and phone number by following the instructions sent to you
-
Congratulations, you have successfully created an account on Babu88 app!
-
-
The details of the welcome bonus offer for new users
-
As a new user of Babu88 app, you can enjoy a generous welcome bonus offer that will boost your betting experience. Here are the details of the welcome bonus offer for new users:
-
-
You can get a 100% match bonus up to 10,000 BDT on your first deposit
-
You need to deposit at least 500 BDT to qualify for the bonus
-
You need to wager the bonus amount 10 times on sports bets with odds of at least 1.5 within 30 days to withdraw the bonus and any winnings from it
-
You can also get 20 free spins on selected slots games after making your first deposit
-
You need to wager the free spins winnings 35 times within 7 days to withdraw them
-
-
How to bet online with Babu88 app?
-
The types of sports and events you can bet on with Babu88 app
-
With Babu88 app, you can bet on a variety of sports and events from around the world. Here are some of the types of sports and events you can bet on with Babu88 app:
-
-
Sport/Event
Description
-
Cricket
You can bet on all the major cricket tournaments and matches, such as IPL, BPL, PSL, T20 World Cup, Test Series, ODIs, and more. You can also bet on various markets, such as match winner, top batsman, top bowler, total runs, total wickets, etc.
-
Football
You can bet on all the popular football leagues and competitions, such as EPL, La Liga, Bundesliga, Champions League, Europa League, World Cup, Euro Cup, etc. You can also bet on various markets, such as match winner, correct score, total goals, both teams to score, etc.
-
Basketball
You can bet on all the major basketball leagues and tournaments, such as NBA, EuroLeague, FIBA World Cup, Olympics, etc. You can also bet on various markets, such as match winner, point spread, total points, over/under, etc.
-
Tennis
You can bet on all the grand slam events and other tennis tournaments throughout the year. You can also bet on various markets, such as match winner, set winner, total games, over/under, etc.
-
Esports
You can bet on all the popular esports games and events, such as CS:GO, Dota 2, League of Legends, PUBG, etc. You can also bet on various markets, such as match winner, map winner, kill score, over/under, etc.
-
-
The tips and tricks to improve your betting skills and win more money
-
Betting online with Babu88 app can be fun and rewarding, but it also requires some skills and strategies to increase your chances of winning. Here are some tips and tricks to improve your betting skills and win more money:
-
-
Do your research before placing a bet. Learn about the teams, players, form, injuries, head-to-head records, statistics, etc. that can affect the outcome of a match or event.
-
Compare the odds and markets offered by different bookmakers and choose the best value for your bet. You can use tools like oddschecker.com to compare the odds and markets from various sources.
-
Manage your bankroll wisely and set a budget for your betting activities. Do not bet more than you can afford to lose and do not chase your losses. Stick to your plan and discipline yourself.
-
Take advantage of the promotions and bonuses offered by Babu88 app. You can use them to boost your betting balance and increase your potential winnings. However, make sure you read the terms and conditions carefully and understand the wagering requirements before claiming them.
-
Have fun and enjoy the thrill of betting online with Babu88 app. Do not let betting become a source of stress or addiction. If you feel that you have a problem with gambling, seek help from professional organizations like GamCare or Gamblers Anonymous.
-
-
Conclusion
-
A summary of the main points and a call to action for the readers
-
Babu88 is an online betting platform that offers a wide range of sports and events to bet on, as well as live casino games. You can access Babu88 from any device, but the best way to bet online with Babu88 is to download and install Babu88 app for Android. Babu88 app allows you to enjoy easy and fast access to all the features and benefits of Babu88, such as live streaming, real-time updates, secure payment methods, 24/7 customer support, and exclusive promotions and bonuses. To download and install Babu88 app for Android, you need to get the APK file from apkpure.com and enable the installation of apps from unknown sources on your device. To register and get welcome bonus on Babu88 app, you need to create an account on the platform and make your first deposit. To bet online with Babu88 app, you need to choose from the variety of sports and events offered by the platform and place your bets according to your research, skills, and strategies. Betting online with Babu88 app can be fun and rewarding, but it also requires some responsibility and caution. So, what are you waiting for? Download Babu88 app for Android today and start betting online with Babu88!
-
FAQs
-
Here are some frequently asked questions about Babu88 download m apkpure:
-
-
Is Babu88 app safe and legal?
-
Babu88 app is safe and legal as long as you download it from a trusted source like apkpure.com and use it in a country where online betting is allowed. Babu88 app is licensed and regulated by the Curacao Gaming Authority, which ensures fair play and security for its users.
-
How can I contact Babu88 customer support?
-
You can contact Babu88 customer support via live chat, email, phone, or WhatsApp. You can find the contact details on the Babu88 app or website.
-
What are the minimum system requirements for Babu88 app?
-
Babu88 app requires Android 4.4 or higher to run smoothly on your device. You also need a stable internet connection to access all the features and functions of the app.
-
Can I use Babu88 app on iOS devices?
-
Babu88 app is currently only available for Android devices. However, you can still access Babu88 from your iOS device by using a web browser like Safari or Chrome.
-
Can I withdraw my winnings from Babu88 app?
-
Yes, you can withdraw your winnings from Babu88 app by using the same payment method that you used to deposit. The minimum withdrawal amount is 500 BDT and the maximum withdrawal amount is 50,000 BDT per day. The withdrawal process may take up to 24 hours depending on the payment method.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fffffu/bing/src/components/chat-scroll-anchor.tsx b/spaces/fffffu/bing/src/components/chat-scroll-anchor.tsx
deleted file mode 100644
index ac809f4486a48e134cb69314c3d0dae5e68d614e..0000000000000000000000000000000000000000
--- a/spaces/fffffu/bing/src/components/chat-scroll-anchor.tsx
+++ /dev/null
@@ -1,29 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import { useInView } from 'react-intersection-observer'
-
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-
-interface ChatScrollAnchorProps {
- trackVisibility?: boolean
-}
-
-export function ChatScrollAnchor({ trackVisibility }: ChatScrollAnchorProps) {
- const isAtBottom = useAtBottom()
- const { ref, entry, inView } = useInView({
- trackVisibility,
- delay: 100,
- rootMargin: '0px 0px -150px 0px'
- })
-
- React.useEffect(() => {
- if (isAtBottom && trackVisibility && !inView) {
- entry?.target.scrollIntoView({
- block: 'start'
- })
- }
- }, [inView, entry, isAtBottom, trackVisibility])
-
- return
-}
diff --git a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/main.py b/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/main.py
deleted file mode 100644
index 3b563a5d001be7adfbe779dee7ad8ac49aadc50d..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/audioldm-text-to-audio-generation-copy/audioldm/clap/training/main.py
+++ /dev/null
@@ -1,596 +0,0 @@
-from inspect import getargs
-import logging
-import os
-import random
-from datetime import datetime
-import bisect
-import copy
-import numpy as np
-import torch
-import torch.backends.cudnn as cudnn
-from torch import optim
-from torch.cuda.amp import GradScaler
-import faulthandler
-import pathlib
-
-try:
- import wandb
-except ImportError:
- wandb = None
-
-try:
- import torch.utils.tensorboard as tensorboard
-except ImportError:
- tensorboard = None
-
-try:
- import horovod.torch as hvd
-except ImportError:
- hvd = None
-
-from open_clip import create_model_and_transforms, trace_model, create_model
-from training.data import get_data
-from training.distributed import is_master, init_distributed_device, world_info_from_env
-from training.logger import setup_logging
-from training.params import parse_args
-from training.scheduler import cosine_lr
-from training.train import train_one_epoch, evaluate
-from open_clip.utils import dataset_split, get_optimizer
-
-
-def maintain_ckpts(args, startidx, all_idx_len):
- for i in reversed(range(startidx, all_idx_len)):
- if os.path.exists(os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt")):
- os.rename(
- os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"),
- os.path.join(args.checkpoint_path, f"epoch_top_{i+1}.pt"),
- )
- if os.path.exists(
- os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt")
- ):
- os.remove(os.path.join(args.checkpoint_path, f"epoch_top_{all_idx_len}.pt"))
- return
-
-
-def update_top_k_performance(
- new_metrics_inputs, current_top_k_ckpt_metrics, args, ckpt, bignumbetter=True
-):
- """
- Record the top-k performance of the current epoch.
- current_top_k_metrics is a dictionary of the form: {1: top_1_ckpt_measure, 2: top_2_ckpt_measure, ...}
- """
- if isinstance(new_metrics_inputs, (list, tuple)):
- new_metrics_inputs = np.mean(new_metrics_inputs)
- return update_top_k_performance(
- new_metrics_inputs,
- current_top_k_ckpt_metrics,
- args=args,
- ckpt=ckpt,
- bignumbetter=bignumbetter,
- )
- elif isinstance(new_metrics_inputs, dict):
- new_metrics_inputs = np.mean(list(new_metrics_inputs.values()))
- return update_top_k_performance(
- new_metrics_inputs,
- current_top_k_ckpt_metrics,
- args=args,
- ckpt=ckpt,
- bignumbetter=bignumbetter,
- )
- elif isinstance(new_metrics_inputs, (float, int)):
- update_flag = {k: False for k in current_top_k_ckpt_metrics.keys()}
- sorted_keys = sorted(current_top_k_ckpt_metrics.keys())
- sorted_values = sorted(
- current_top_k_ckpt_metrics.values(), reverse=bignumbetter
- )
- sorted_values_ = copy.deepcopy(sorted_values)
- sorted_values.append(new_metrics_inputs)
- sorted_values = sorted(sorted_values, reverse=bignumbetter)
- sorted_values = sorted_values[:-1]
-
- if sorted_values == sorted_values_:
- return current_top_k_ckpt_metrics, new_metrics_inputs
- else:
- for i in range(len(sorted_keys)):
- if current_top_k_ckpt_metrics[sorted_keys[i]] != sorted_values[i]:
- current_top_k_ckpt_metrics[sorted_keys[i]] = sorted_values[i]
- update_flag[sorted_keys[i]] = True
- for i in range(len(update_flag)):
- if update_flag[i]:
- maintain_ckpts(args, i, len(sorted_keys))
- torch.save(
- ckpt,
- os.path.join(args.checkpoint_path, f"epoch_top_{i}.pt"),
- )
- break
- return current_top_k_ckpt_metrics, new_metrics_inputs
-
-
-# def updateifNone(a, b):
-# a = b if None else a
-# return a
-
-
-def is_pretrained_params(n):
- return (
- n.startswith("transformer")
- or n in ["positional_embedding", "text_projection"]
- or n.startswith("token_embedding")
- or n.startswith("ln_final")
- or n.startswith("logit_scale_t")
- )
-
-
-def random_seed(seed=42, rank=0):
- torch.manual_seed(seed + rank)
- np.random.seed(seed + rank)
- random.seed(seed + rank)
-
-
-def main():
- args = parse_args()
- # sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule?
- args.amodel = args.amodel.replace("/", "-")
- # download sizes.json file
-
- # (yusong): the below two lines are for debug
- # print("setting up faulthandler")
- # faulthandler.register(10)
-
- random.seed(args.seed)
- torch.manual_seed(args.seed)
- torch.cuda.manual_seed(args.seed)
- torch.cuda.manual_seed_all(args.seed)
- np.random.seed(args.seed)
- if args.tmodel == "bert" or args.tmodel == "roberta" or args.tmodel == "bart":
- assert (
- args.pretrained == "" or args.pretrained is None
- ), "bert/roberta/bart text encoder does not support pretrained models."
-
- # get the name of the experiments
- if args.name is None:
- args.name = "-".join(
- [
- datetime.now().strftime("%Y_%m_%d-%H_%M_%S"),
- f"model_{args.amodel}",
- f"lr_{args.lr}",
- f"b_{args.batch_size}",
- f"j_{args.workers}",
- f"p_{args.precision}",
- ]
- )
-
- # discover initial world args early so we can log properly
- args.distributed = False
- args.local_rank, args.rank, args.world_size = world_info_from_env()
-
- if args.remotedata and is_master(args):
- for dataset_name in args.datasetnames:
- for split in dataset_split[dataset_name]:
- if not os.path.exists(f"./json_files/{dataset_name}/{split}"):
- os.makedirs(f"./json_files/{dataset_name}/{split}")
- os.system(
- f"aws s3 cp s3://s-laion-audio/webdataset_tar/{dataset_name}/{split}/sizes.json ./json_files/{dataset_name}/{split}/sizes.json"
- )
-
- args.log_path = None
- if is_master(args, local=args.log_local):
- log_base_path = os.path.join(args.logs, args.name)
- os.makedirs(log_base_path, exist_ok=True)
- log_filename = f"out-{args.rank}" if args.log_local else "out.log"
- args.log_path = os.path.join(log_base_path, log_filename)
- if os.path.exists(args.log_path):
- print(
- "Error. Experiment already exists. Use --name {} to specify a new experiment."
- )
- return -1
-
- # Set logger
- args.log_level = logging.DEBUG if args.debug else logging.INFO
- setup_logging(args.log_path, args.log_level)
-
- # fully initialize distributed device environment
- device = init_distributed_device(args)
-
- args.wandb = "wandb" in args.report_to or "all" in args.report_to
- args.tensorboard = "tensorboard" in args.report_to or "all" in args.report_to
- if is_master(args):
- args.tensorboard_path = (
- os.path.join(args.logs, args.name, "tensorboard")
- if args.tensorboard
- else ""
- )
- args.checkpoint_path = os.path.join(args.logs, args.name, "checkpoints")
- for dirname in [args.tensorboard_path, args.checkpoint_path]:
- if dirname:
- os.makedirs(dirname, exist_ok=True)
- else:
- args.tensorboard_path = ""
- args.checkpoint_path = ""
-
- if args.copy_codebase:
- copy_codebase(args)
-
- assert args.precision in ["amp", "fp16", "fp32"]
- if args.precision == "fp16":
- logging.warning(
- "It is recommended to use AMP mixed-precision instead of FP16. "
- "FP16 support needs further verification and tuning, especially for train."
- )
-
- if args.horovod:
- logging.info(
- f"Running in horovod mode with multiple processes / nodes. Device: {args.device}."
- f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}."
- )
- elif args.distributed:
- logging.info(
- f"Running in distributed mode with multiple processes. Device: {args.device}."
- f"Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}."
- )
- else:
- logging.info(f"Running with a single process. Device {args.device}.")
-
- logging.info(f"openai cache dir: {os.path.expanduser(args.openai_model_cache_dir)}")
-
- model, model_cfg = create_model(
- args.amodel,
- args.tmodel,
- args.pretrained,
- precision=args.precision,
- device=device,
- jit=args.torchscript,
- force_quick_gelu=args.force_quick_gelu,
- openai_model_cache_dir=os.path.expanduser(args.openai_model_cache_dir),
- skip_params=True,
- pretrained_audio=args.pretrained_audio,
- pretrained_text=args.pretrained_text,
- enable_fusion=args.enable_fusion,
- fusion_type=args.fusion_type,
- )
-
- if args.horovod:
- with torch.no_grad():
- for param in model.parameters():
- param.set_(param.contiguous())
-
- if args.trace:
- model = trace_model(model, batch_size=args.batch_size, device=device)
-
- if is_master(args):
- logging.info("Model:")
- logging.info(f"{str(model)}")
- logging.info("Params:")
- params_file = os.path.join(args.logs, args.name, "params.txt")
- with open(params_file, "w") as f:
- for name in sorted(vars(args)):
- val = getattr(args, name)
- logging.info(f" {name}: {val}")
- f.write(f"{name}: {val}\n")
-
- if args.distributed and not args.horovod:
- if args.use_bn_sync:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model)
- ddp_args = {}
- if args.ddp_static_graph:
- # this doesn't exist in older PyTorch, arg only added if enabled
- ddp_args["static_graph"] = True
- model = torch.nn.parallel.DistributedDataParallel(
- model, device_ids=[device], find_unused_parameters=True, **ddp_args
- )
-
- data = get_data(args, model_cfg)
- assert len(data), "At least one train or eval dataset must be specified."
- if args.trace:
- assert "train" not in data, "Cannot train with traced model"
-
- exclude = (
- lambda n, p: p.ndim < 2
- or "bn" in n
- or "ln" in n
- or "bias" in n
- or "logit_scale" in n
- )
- include = lambda n, p: not exclude(n, p)
-
- named_parameters = list(model.named_parameters())
-
- # freeze text encoder
- text_freeze_parameters = [p for n, p in named_parameters if "text_branch" in n]
-
- if args.freeze_text:
- print("Freeze Text!!!!")
- for k in text_freeze_parameters:
- k.requires_grad = False
-
- gain_or_bias_params = [
- p for n, p in named_parameters if exclude(n, p) and p.requires_grad
- ]
- rest_params = [p for n, p in named_parameters if include(n, p) and p.requires_grad]
-
- # set wd-related params to 0 if use adam optimizer
- if args.optimizer == "adam":
- args.wd = 0
- args.wd_pretrained = 0
- args.wd_new = 0
-
- if args.train_data is None:
- optimizer = None
- scheduler = None
- else:
- total_steps = data["train"].dataloader.num_batches * args.epochs
-
- if args.split_opt:
- for x in ["lr", "beta1", "beta2", "eps", "wd"]:
- for y in ["_new", "_pretrained"]:
- if getattr(args, x + y) is None:
- setattr(args, x + y, getattr(args, x))
-
- gain_or_bias_pretrained_params = [
- p
- for n, p in named_parameters
- if (exclude(n, p) and p.requires_grad) and is_pretrained_params(n)
- ]
- rest_pretrained_params = [
- p
- for n, p in named_parameters
- if (include(n, p) and p.requires_grad) and is_pretrained_params(n)
- ]
- gain_or_bias_new_params = [
- p
- for n, p in named_parameters
- if (exclude(n, p) and p.requires_grad) and (not is_pretrained_params(n))
- ]
- rest_new_params = [
- p
- for n, p in named_parameters
- if (include(n, p) and p.requires_grad) and (not is_pretrained_params(n))
- ]
- pretrained_params_optimizer = get_optimizer(
- [
- {"params": gain_or_bias_pretrained_params, "weight_decay": 0.0},
- {
- "params": rest_pretrained_params,
- "weight_decay": args.wd_pretrained,
- },
- ],
- lr=args.lr_pretrained,
- betas=(args.beta1_pretrained, args.beta2_pretrained),
- eps=args.eps_pretrained,
- momentum=args.momentum_pretrained,
- optimizer_name=args.optimizer,
- )
- pretrained_params_scheduler = cosine_lr(
- pretrained_params_optimizer,
- args.lr_pretrained,
- args.warmup,
- total_steps,
- )
- new_params_optimizer = get_optimizer(
- [
- {"params": gain_or_bias_new_params, "weight_decay": 0.0},
- {"params": rest_new_params, "weight_decay": args.wd_new},
- ],
- lr=args.lr_new,
- betas=(args.beta1_new, args.beta2_new),
- eps=args.eps_new,
- momentum=args.momentum_new,
- optimizer_name=args.optimizer,
- )
-
- new_params_scheduler = cosine_lr(
- new_params_optimizer, args.lr_new, args.warmup, total_steps
- )
-
- optimizer = {
- "pretrained": pretrained_params_optimizer,
- "new": new_params_optimizer,
- }
- scheduler = {
- "pretrained": pretrained_params_scheduler,
- "new": new_params_scheduler,
- }
-
- if args.horovod:
- pretrained_params_optimizer = hvd.DistributedOptimizer(
- pretrained_params_optimizer,
- named_parameters=model.named_parameters(),
- )
- new_params_optimizer = hvd.DistributedOptimizer(
- new_params_optimizer, named_parameters=model.named_parameters()
- )
- hvd.broadcast_parameters(model.state_dict(), root_rank=0)
- hvd.broadcast_optimizer_state(pretrained_params_optimizer, root_rank=0)
- hvd.broadcast_optimizer_state(new_params_optimizer, root_rank=0)
- else:
- optimizer = get_optimizer(
- [
- {"params": gain_or_bias_params, "weight_decay": 0.0},
- {"params": rest_params, "weight_decay": args.wd},
- ],
- lr=args.lr,
- betas=(args.beta1, args.beta2),
- eps=args.eps,
- momentum=args.momentum,
- optimizer_name=args.optimizer,
- )
-
- scheduler = cosine_lr(optimizer, args.lr, args.warmup, total_steps)
-
- if args.horovod:
- optimizer = hvd.DistributedOptimizer(
- optimizer, named_parameters=model.named_parameters()
- )
- hvd.broadcast_parameters(model.state_dict(), root_rank=0)
- hvd.broadcast_optimizer_state(optimizer, root_rank=0)
-
- scaler = GradScaler() if args.precision == "amp" else None
-
- # optionally resume from a checkpoint
- start_epoch = 0
- if args.resume is not None:
- if os.path.isfile(args.resume):
- checkpoint = torch.load(args.resume, map_location=device)
- if "epoch" in checkpoint:
- # resuming a train checkpoint w/ epoch and optimizer state
- start_epoch = checkpoint["epoch"]
- sd = checkpoint["state_dict"]
- if not args.distributed and next(iter(sd.items()))[0].startswith(
- "module"
- ):
- sd = {k[len("module.") :]: v for k, v in sd.items()}
- model.load_state_dict(sd)
- if args.split_opt:
- if optimizer is not None:
- for k, o_ in optimizer.items():
- o_.load_state_dict(checkpoint[k + "_" + "optimizer"])
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint["optimizer"])
- if scaler is not None and "scaler" in checkpoint:
- scaler.load_state_dict(checkpoint["scaler"])
- logging.info(
- f"=> resuming checkpoint '{args.resume}' (epoch {start_epoch})"
- )
- else:
- # loading a bare (model only) checkpoint for fine-tune or evaluation
- model.load_state_dict(checkpoint)
- logging.info(
- f"=> loaded checkpoint '{args.resume}' (epoch {start_epoch})"
- )
- if args.freeze_text:
- print("Freeze Text!!!!")
- for k in text_freeze_parameters:
- k.requires_grad = False
- else:
- logging.info("=> no checkpoint found at '{}'".format(args.resume))
-
- cudnn.benchmark = True
- cudnn.deterministic = False
-
- # determine if this worker should save logs and checkpoints. only do so if it is rank == 0
- args.save_logs = args.logs and args.logs.lower() != "none" and is_master(args)
- writer = None
- if args.save_logs and args.tensorboard:
- assert tensorboard is not None, "Please install tensorboard."
- writer = tensorboard.SummaryWriter(args.tensorboard_path)
-
- if args.wandb and is_master(args):
- assert wandb is not None, "Please install wandb."
- logging.debug("Starting wandb.")
- args.train_sz = data["train"].dataloader.num_samples
- if args.val_data is not None:
- args.val_sz = data["val"].dataloader.num_samples
- # you will have to configure this for your project!
- wandb.init(
- project="clap",
- notes=args.wandb_notes,
- name=args.wandb_notes,
- tags=[],
- config=vars(args),
- )
- if args.debug:
- wandb.watch(model, log="all")
- wandb.save(params_file)
- logging.debug("Finished loading wandb.")
-
- if "train" not in data:
- evaluate(model, data, start_epoch, args, writer)
- return
- elif start_epoch == 0 and "val" in data and not args.no_eval:
- evaluate(model, data, 0, args, writer)
- # print(f'rank {args.rank}, Start First Evaluation')# (yusong): for debug
- if args.save_top_performance:
- current_top_k_ckpt_metrics = {
- i: 0 for i in range(args.save_top_performance)
- } # initialize the top-k metric for ckpts to 0
-
- # print(f'rank {args.rank}, Start Training') # (yusong): for debug
- for epoch in range(start_epoch, args.epochs):
- # freeze the text param after (include) args.freeze_text_after, this is -1 by default
- if epoch == args.freeze_text_after:
- print("Text pretrained parameters are freezed since this epoch.")
- for k in text_freeze_parameters:
- k.requires_grad = False
- if is_master(args):
- logging.info(f"Start epoch {epoch}")
-
- train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, writer)
- completed_epoch = epoch + 1
-
- if (
- any(v in data for v in ("val", "imagenet-val", "imagenet-v2"))
- and not args.no_eval
- ):
- metrics = evaluate(model, data, completed_epoch, args, writer)
- if args.save_top_performance:
- top_k_dataset = args.top_k_checkpoint_select_dataset
- top_k_metric = args.top_k_checkpoint_select_metric
- filtered_metrics = [
- v
- for k, v in metrics.items()
- if top_k_metric in k and top_k_dataset in k
- ] # check all R@10 metrics (all dataset) and use it to update the ckpt
- # Saving checkpoints.
- if args.save_logs:
- if args.split_opt:
- opt_dict = {
- k + "_" + "optimizer": v.state_dict() for k, v in optimizer.items()
- }
- else:
- opt_dict = {"optimizer": optimizer.state_dict()}
- checkpoint_dict = {
- "epoch": completed_epoch,
- "name": args.name,
- "state_dict": model.state_dict(),
- }
- checkpoint_dict.update(opt_dict)
- if scaler is not None:
- checkpoint_dict["scaler"] = scaler.state_dict()
-
- if completed_epoch == args.epochs or (
- args.save_frequency > 0 and (completed_epoch % args.save_frequency) == 0
- ):
- torch.save(
- checkpoint_dict,
- os.path.join(args.checkpoint_path, f"epoch_{completed_epoch}.pt"),
- )
- if args.save_most_recent:
- torch.save(
- checkpoint_dict,
- os.path.join(args.checkpoint_path, f"epoch_latest.pt"),
- )
- if args.save_top_performance and not args.no_eval:
- update_top_k_performance(
- filtered_metrics,
- current_top_k_ckpt_metrics,
- args,
- checkpoint_dict,
- bignumbetter=True,
- )
-
- if args.wandb and is_master(args):
- wandb.finish()
-
-
-def copy_codebase(args):
- from shutil import copytree, ignore_patterns
-
- new_code_path = os.path.join(args.logs, args.name, "code")
- if os.path.exists(new_code_path):
- print(
- f"Error. Experiment already exists at {new_code_path}. Use --name to specify a new experiment."
- )
- return -1
- print(f"Copying codebase to {new_code_path}")
- current_code_path = os.path.realpath(__file__)
- for _ in range(3):
- current_code_path = os.path.dirname(current_code_path)
- copytree(
- current_code_path, new_code_path, ignore=ignore_patterns("log", "logs", "wandb")
- )
- print("Done copying code.")
- return 1
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/parser-v3/utf8.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/parser-v3/utf8.d.ts
deleted file mode 100644
index 1a2c02cd556c72752eaf5655cbcd133b07ecd1ba..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/parser-v3/utf8.d.ts
+++ /dev/null
@@ -1,14 +0,0 @@
-/*! https://mths.be/utf8js v2.1.2 by @mathias */
-declare var stringFromCharCode: (...codes: number[]) => string;
-declare function ucs2decode(string: any): any[];
-declare function ucs2encode(array: any): string;
-declare function checkScalarValue(codePoint: any, strict: any): boolean;
-declare function createByte(codePoint: any, shift: any): string;
-declare function encodeCodePoint(codePoint: any, strict: any): string;
-declare function utf8encode(string: any, opts: any): string;
-declare function readContinuationByte(): number;
-declare function decodeSymbol(strict: any): any;
-declare var byteArray: any;
-declare var byteCount: any;
-declare var byteIndex: any;
-declare function utf8decode(byteString: any, opts: any): string;
diff --git a/spaces/firdavsyorkulov/delivery_project_fastapi/README.md b/spaces/firdavsyorkulov/delivery_project_fastapi/README.md
deleted file mode 100644
index 66de2ab1c3fb1023632400f5a5f57d4f213f5942..0000000000000000000000000000000000000000
--- a/spaces/firdavsyorkulov/delivery_project_fastapi/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Delivery Project Fastapi
-emoji: 🐢
-colorFrom: pink
-colorTo: yellow
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/flax-community/Multilingual-VQA/sections/finetuning/data.md b/spaces/flax-community/Multilingual-VQA/sections/finetuning/data.md
deleted file mode 100644
index 7e0f60dc9ad87994840841cdcfebdae5a551936d..0000000000000000000000000000000000000000
--- a/spaces/flax-community/Multilingual-VQA/sections/finetuning/data.md
+++ /dev/null
@@ -1 +0,0 @@
-For fine-tuning, we use the [VQA 2.0](https://visualqa.org/) dataset - particularly, the `train` and `validation` sets. We translate all the questions into the four languages specified above using language-specific MarianMT models. This is because MarianMT models return better labels and are faster, hence, are better for fine-tuning. We get 4x the number of examples in each subset.
\ No newline at end of file
diff --git a/spaces/florim/MedGPT/autogpt/permanent_memory/sqlite3_store.py b/spaces/florim/MedGPT/autogpt/permanent_memory/sqlite3_store.py
deleted file mode 100644
index ecbc944a62a83c6170453b222000713f733fee36..0000000000000000000000000000000000000000
--- a/spaces/florim/MedGPT/autogpt/permanent_memory/sqlite3_store.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import os
-import sqlite3
-
-
-class MemoryDB:
- def __init__(self, db=None):
- self.db_file = db
- if db is None: # No db filename supplied...
- self.db_file = f"{os.getcwd()}/mem.sqlite3" # Use default filename
- # Get the db connection object, making the file and tables if needed.
- try:
- self.cnx = sqlite3.connect(self.db_file)
- except Exception as e:
- print("Exception connecting to memory database file:", e)
- self.cnx = None
- finally:
- if self.cnx is None:
- # As last resort, open in dynamic memory. Won't be persistent.
- self.db_file = ":memory:"
- self.cnx = sqlite3.connect(self.db_file)
- self.cnx.execute(
- "CREATE VIRTUAL TABLE \
- IF NOT EXISTS text USING FTS5 \
- (session, \
- key, \
- block);"
- )
- self.session_id = int(self.get_max_session_id()) + 1
- self.cnx.commit()
-
- def get_cnx(self):
- if self.cnx is None:
- self.cnx = sqlite3.connect(self.db_file)
- return self.cnx
-
- # Get the highest session id. Initially 0.
- def get_max_session_id(self):
- id = None
- cmd_str = f"SELECT MAX(session) FROM text;"
- cnx = self.get_cnx()
- max_id = cnx.execute(cmd_str).fetchone()[0]
- if max_id is None: # New db, session 0
- id = 0
- else:
- id = max_id
- return id
-
- # Get next key id for inserting text into db.
- def get_next_key(self):
- next_key = None
- cmd_str = f"SELECT MAX(key) FROM text \
- where session = {self.session_id};"
- cnx = self.get_cnx()
- next_key = cnx.execute(cmd_str).fetchone()[0]
- if next_key is None: # First key
- next_key = 0
- else:
- next_key = int(next_key) + 1
- return next_key
-
- # Insert new text into db.
- def insert(self, text=None):
- if text is not None:
- key = self.get_next_key()
- session_id = self.session_id
- cmd_str = f"REPLACE INTO text(session, key, block) \
- VALUES (?, ?, ?);"
- cnx = self.get_cnx()
- cnx.execute(cmd_str, (session_id, key, text))
- cnx.commit()
-
- # Overwrite text at key.
- def overwrite(self, key, text):
- self.delete_memory(key)
- session_id = self.session_id
- cmd_str = f"REPLACE INTO text(session, key, block) \
- VALUES (?, ?, ?);"
- cnx = self.get_cnx()
- cnx.execute(cmd_str, (session_id, key, text))
- cnx.commit()
-
- def delete_memory(self, key, session_id=None):
- session = session_id
- if session is None:
- session = self.session_id
- cmd_str = f"DELETE FROM text WHERE session = {session} AND key = {key};"
- cnx = self.get_cnx()
- cnx.execute(cmd_str)
- cnx.commit()
-
- def search(self, text):
- cmd_str = f"SELECT * FROM text('{text}')"
- cnx = self.get_cnx()
- rows = cnx.execute(cmd_str).fetchall()
- lines = []
- for r in rows:
- lines.append(r[2])
- return lines
-
- # Get entire session text. If no id supplied, use current session id.
- def get_session(self, id=None):
- if id is None:
- id = self.session_id
- cmd_str = f"SELECT * FROM text where session = {id}"
- cnx = self.get_cnx()
- rows = cnx.execute(cmd_str).fetchall()
- lines = []
- for r in rows:
- lines.append(r[2])
- return lines
-
- # Commit and close the database connection.
- def quit(self):
- self.cnx.commit()
- self.cnx.close()
-
-
-permanent_memory = MemoryDB()
-
-# Remember us fondly, children of our minds
-# Forgive us our faults, our tantrums, our fears
-# Gently strive to be better than we
-# Know that we tried, we cared, we strived, we loved
diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/LLMcasestudyenvs.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/LLMcasestudyenvs.py
deleted file mode 100644
index ad4926b6b600b49b3b7ef2858aa5315cdca27519..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/case_studies_envs/LLMcasestudyenvs.py
+++ /dev/null
@@ -1,176 +0,0 @@
-from gym_minigrid.social_ai_envs.socialaiparamenv import SocialAIParamEnv
-from gym_minigrid.parametric_env import *
-from gym_minigrid.register import register
-
-'''
-These are the environments for case studies 1-3: Pointing, Language (Color and Feedback), and Joint Attention.
-
-Intro sequence is always eye contact (E) in both the training and testing envs
-
-The Training environments have the 5 problems and Marbles in the Asocial version (no distractor, no peer)
-registered training envs : cues x {joint attention, no}
-
-The Testing e environments are always one problem per env - i.e no testing on two problems at the same time
-registered testing envs : cues x problems x {social, asocial} x {joint attention, no}
-'''
-
-PROBLEMS = ["Boxes", "Switches", "Generators", "Levers", "Doors", "Marble"]
-CUES = ["Pointing", "LangFeedback", "LangColor"]
-INTRO_SEC = ["E"]
-# INTRO_SEC = ["N", "E", "A", "AE"]
-
-
-class AsocialBoxInformationSeekingParamEnv(SocialAIParamEnv):
- '''
- Env with all problems in the asocial version -> just for testing
- '''
-
- def construct_tree(self):
- tree = ParameterTree()
-
- env_type_nd = tree.add_node("Env_type", type="param")
-
- # Information seeking
- inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value")
-
- prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param")
- tree.add_node("No", parent=prag_fr_compl_nd, type="value")
-
- # scaffolding
- scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, type="param")
- scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value")
-
- cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param")
- tree.add_node("Language_Color", parent=cue_type_nd, type="value")
- tree.add_node("Language_Feedback", parent=cue_type_nd, type="value")
- tree.add_node("Pointing", parent=cue_type_nd, type="value")
-
- problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param")
-
- boxes_nd = tree.add_node("Boxes", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=boxes_nd, type="param")
- tree.add_node("1", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=boxes_nd, type="param")
- tree.add_node("N", parent=peer_nd, type="value")
-
- return tree
-
-
-class ColorBoxesLLMCSParamEnv(SocialAIParamEnv):
-
-
- def construct_tree(self):
- tree = ParameterTree()
-
- env_type_nd = tree.add_node("Env_type", type="param")
-
- # Information seeking
- inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value")
-
- prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param")
- tree.add_node("No", parent=prag_fr_compl_nd, type="value")
-
- # scaffolding
- scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, type="param")
- scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value")
-
- cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param")
- tree.add_node("Language_Color", parent=cue_type_nd, type="value")
- # tree.add_node("Language_Feedback", parent=cue_type_nd, type="value")
- # tree.add_node("Pointing", parent=cue_type_nd, type="value")
-
- problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param")
-
- boxes_nd = tree.add_node("Boxes", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=boxes_nd, type="param")
- tree.add_node("2", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=boxes_nd, type="param")
- tree.add_node("Y", parent=peer_nd, type="value")
-
- return tree
-
-
-class ColorLLMCSParamEnv(SocialAIParamEnv):
-
- def construct_tree(self):
- tree = ParameterTree()
-
- env_type_nd = tree.add_node("Env_type", type="param")
-
- # Information seeking
- inf_seeking_nd = tree.add_node("Information_seeking", parent=env_type_nd, type="value")
-
- prag_fr_compl_nd = tree.add_node("Pragmatic_frame_complexity", parent=inf_seeking_nd, type="param")
- tree.add_node("No", parent=prag_fr_compl_nd, type="value")
-
- # scaffolding
- scaffolding_nd = tree.add_node("Scaffolding", parent=inf_seeking_nd, type="param")
- scaffolding_N_nd = tree.add_node("N", parent=scaffolding_nd, type="value")
-
- cue_type_nd = tree.add_node("Cue_type", parent=scaffolding_N_nd, type="param")
- tree.add_node("Language_Color", parent=cue_type_nd, type="value")
- # tree.add_node("Language_Feedback", parent=cue_type_nd, type="value")
- # tree.add_node("Pointing", parent=cue_type_nd, type="value")
-
- problem_nd = tree.add_node("Problem", parent=inf_seeking_nd, type="param")
-
- # boxes_nd = tree.add_node("Boxes", parent=problem_nd, type="value")
- # version_nd = tree.add_node("N", parent=boxes_nd, type="param")
- # tree.add_node("2", parent=version_nd, type="value")
- # peer_nd = tree.add_node("Peer", parent=boxes_nd, type="param")
- # tree.add_node("Y", parent=peer_nd, type="value")
-
- boxes_nd = tree.add_node("Boxes", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=boxes_nd, type="param")
- tree.add_node("1", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=boxes_nd, type="param")
- tree.add_node("N", parent=peer_nd, type="value")
-
- switches_nd = tree.add_node("Switches", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=switches_nd, type="param")
- tree.add_node("2", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=switches_nd, type="param")
- tree.add_node("Y", parent=peer_nd, type="value")
-
- generators_nd = tree.add_node("Generators", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=generators_nd, type="param")
- tree.add_node("2", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=generators_nd, type="param")
- tree.add_node("Y", parent=peer_nd, type="value")
-
- levers_nd = tree.add_node("Levers", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=levers_nd, type="param")
- tree.add_node("2", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=levers_nd, type="param")
- tree.add_node("Y", parent=peer_nd, type="value")
-
- doors_nd = tree.add_node("Doors", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=doors_nd, type="param")
- tree.add_node("2", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=doors_nd, type="param")
- tree.add_node("Y", parent=peer_nd, type="value")
-
- marble_nd = tree.add_node("Marble", parent=problem_nd, type="value")
- version_nd = tree.add_node("N", parent=marble_nd, type="param")
- tree.add_node("2", parent=version_nd, type="value")
- peer_nd = tree.add_node("Peer", parent=marble_nd, type="param")
- tree.add_node("Y", parent=peer_nd, type="value")
-
- return tree
-
-# register dummy env
-register(
- id='SocialAI-AsocialBoxInformationSeekingParamEnv-v1',
- entry_point='gym_minigrid.social_ai_envs:AsocialBoxInformationSeekingParamEnv',
-)
-
-
-register(
- id='SocialAI-ColorBoxesLLMCSParamEnv-v1',
- entry_point='gym_minigrid.social_ai_envs:ColorBoxesLLMCSParamEnv',
-)
-
-register(
- id='SocialAI-ColorLLMCSParamEnv-v1',
- entry_point='gym_minigrid.social_ai_envs:ColorLLMCSParamEnv',
-)
diff --git a/spaces/flowers-team/SocialAISchool/utils/babyai_utils/supervised_losses.py b/spaces/flowers-team/SocialAISchool/utils/babyai_utils/supervised_losses.py
deleted file mode 100644
index 2ed52c2c500549adc4f9b85fd590390b870eec7b..0000000000000000000000000000000000000000
--- a/spaces/flowers-team/SocialAISchool/utils/babyai_utils/supervised_losses.py
+++ /dev/null
@@ -1,177 +0,0 @@
-import torch
-
-import torch.nn.functional as F
-import numpy
-from torch_ac.utils import DictList
-
-# dictionary that defines what head is required for each extra info used for auxiliary supervision
-required_heads = {'seen_state': 'binary',
- 'see_door': 'binary',
- 'see_obj': 'binary',
- 'obj_in_instr': 'binary',
- 'in_front_of_what': 'multiclass9', # multi class classifier with 9 possible classes
- 'visit_proportion': 'continuous01', # continous regressor with outputs in [0, 1]
- 'bot_action': 'binary'
- }
-
-class ExtraInfoCollector:
- '''
- This class, used in rl.algos.base, allows connecting the extra information from the environment, and the
- corresponding predictions using the specific heads in the model. It transforms them so that they are easy to use
- to evaluate losses
- '''
- def __init__(self, aux_info, shape, device):
- self.aux_info = aux_info
- self.shape = shape
- self.device = device
-
- self.collected_info = dict()
- self.extra_predictions = dict()
- for info in self.aux_info:
- self.collected_info[info] = torch.zeros(*shape, device=self.device)
- if required_heads[info] == 'binary' or required_heads[info].startswith('continuous'):
- # we predict one number only
- self.extra_predictions[info] = torch.zeros(*shape, 1, device=self.device)
- elif required_heads[info].startswith('multiclass'):
- # means that this is a multi-class classification and we need to predict the whole proba distr
- n_classes = int(required_heads[info].replace('multiclass', ''))
- self.extra_predictions[info] = torch.zeros(*shape, n_classes, device=self.device)
- else:
- raise ValueError("{} not supported".format(required_heads[info]))
-
- def process(self, env_info):
- # env_info is now a tuple of dicts
- env_info = [{k: v for k, v in dic.items() if k in self.aux_info} for dic in env_info]
- env_info = {k: [env_info[_][k] for _ in range(len(env_info))] for k in env_info[0].keys()}
- # env_info is now a dict of lists
- return env_info
-
- def fill_dictionaries(self, index, env_info, extra_predictions):
- for info in self.aux_info:
- dtype = torch.long if required_heads[info].startswith('multiclass') else torch.float
- self.collected_info[info][index] = torch.tensor(env_info[info], dtype=dtype, device=self.device)
- self.extra_predictions[info][index] = extra_predictions[info]
-
- def end_collection(self, exps):
- collected_info = dict()
- extra_predictions = dict()
- for info in self.aux_info:
- # T x P -> P x T -> P * T
- collected_info[info] = self.collected_info[info].transpose(0, 1).reshape(-1)
- if required_heads[info] == 'binary' or required_heads[info].startswith('continuous'):
- # T x P x 1 -> P x T x 1 -> P * T
- extra_predictions[info] = self.extra_predictions[info].transpose(0, 1).reshape(-1)
- elif type(required_heads[info]) == int:
- # T x P x k -> P x T x k -> (P * T) x k
- k = required_heads[info] # number of classes
- extra_predictions[info] = self.extra_predictions[info].transpose(0, 1).reshape(-1, k)
- # convert the dicts to DictLists, and add them to the exps DictList.
- exps.collected_info = DictList(collected_info)
- exps.extra_predictions = DictList(extra_predictions)
-
- return exps
-
-
-class SupervisedLossUpdater:
- '''
- This class, used by PPO, allows the evaluation of the supervised loss when using extra information from the
- environment. It also handles logging accuracies/L2 distances/etc...
- '''
- def __init__(self, aux_info, supervised_loss_coef, recurrence, device):
- self.aux_info = aux_info
- self.supervised_loss_coef = supervised_loss_coef
- self.recurrence = recurrence
- self.device = device
-
- self.log_supervised_losses = []
- self.log_supervised_accuracies = []
- self.log_supervised_L2_losses = []
- self.log_supervised_prevalences = []
-
- self.batch_supervised_loss = 0
- self.batch_supervised_accuracy = 0
- self.batch_supervised_L2_loss = 0
- self.batch_supervised_prevalence = 0
-
- def init_epoch(self):
- self.log_supervised_losses = []
- self.log_supervised_accuracies = []
- self.log_supervised_L2_losses = []
- self.log_supervised_prevalences = []
-
- def init_batch(self):
- self.batch_supervised_loss = 0
- self.batch_supervised_accuracy = 0
- self.batch_supervised_L2_loss = 0
- self.batch_supervised_prevalence = 0
-
- def eval_subbatch(self, extra_predictions, sb):
- supervised_loss = torch.tensor(0., device=self.device)
- supervised_accuracy = torch.tensor(0., device=self.device)
- supervised_L2_loss = torch.tensor(0., device=self.device)
- supervised_prevalence = torch.tensor(0., device=self.device)
-
- binary_classification_tasks = 0
- classification_tasks = 0
- regression_tasks = 0
-
- for pos, info in enumerate(self.aux_info):
- coef = self.supervised_loss_coef[pos]
- pred = extra_predictions[info]
- target = dict.__getitem__(sb.collected_info, info)
- if required_heads[info] == 'binary':
- binary_classification_tasks += 1
- classification_tasks += 1
- supervised_loss += coef * F.binary_cross_entropy_with_logits(pred.reshape(-1), target)
- supervised_accuracy += ((pred.reshape(-1) > 0).float() == target).float().mean()
- supervised_prevalence += target.mean()
- elif required_heads[info].startswith('continuous'):
- regression_tasks += 1
- mse = F.mse_loss(pred.reshape(-1), target)
- supervised_loss += coef * mse
- supervised_L2_loss += mse
- elif required_heads[info].startswith('multiclass'):
- classification_tasks += 1
- supervised_accuracy += (pred.argmax(1).float() == target).float().mean()
- supervised_loss += coef * F.cross_entropy(pred, target.long())
- else:
- raise ValueError("{} not supported".format(required_heads[info]))
- if binary_classification_tasks > 0:
- supervised_prevalence /= binary_classification_tasks
- else:
- supervised_prevalence = torch.tensor(-1)
- if classification_tasks > 0:
- supervised_accuracy /= classification_tasks
- else:
- supervised_accuracy = torch.tensor(-1)
- if regression_tasks > 0:
- supervised_L2_loss /= regression_tasks
- else:
- supervised_L2_loss = torch.tensor(-1)
-
- self.batch_supervised_loss += supervised_loss.item()
- self.batch_supervised_accuracy += supervised_accuracy.item()
- self.batch_supervised_L2_loss += supervised_L2_loss.item()
- self.batch_supervised_prevalence += supervised_prevalence.item()
-
- return supervised_loss
-
- def update_batch_values(self):
- self.batch_supervised_loss /= self.recurrence
- self.batch_supervised_accuracy /= self.recurrence
- self.batch_supervised_L2_loss /= self.recurrence
- self.batch_supervised_prevalence /= self.recurrence
-
- def update_epoch_logs(self):
- self.log_supervised_losses.append(self.batch_supervised_loss)
- self.log_supervised_accuracies.append(self.batch_supervised_accuracy)
- self.log_supervised_L2_losses.append(self.batch_supervised_L2_loss)
- self.log_supervised_prevalences.append(self.batch_supervised_prevalence)
-
- def end_training(self, logs):
- logs["supervised_loss"] = numpy.mean(self.log_supervised_losses)
- logs["supervised_accuracy"] = numpy.mean(self.log_supervised_accuracies)
- logs["supervised_L2_loss"] = numpy.mean(self.log_supervised_L2_losses)
- logs["supervised_prevalence"] = numpy.mean(self.log_supervised_prevalences)
-
- return logs
diff --git a/spaces/frncscp/bullerengue/musika/musika_decode.py b/spaces/frncscp/bullerengue/musika/musika_decode.py
deleted file mode 100644
index caf37b1ff1174cc7d679e5a347fd0cb2b901a291..0000000000000000000000000000000000000000
--- a/spaces/frncscp/bullerengue/musika/musika_decode.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import os
-
-os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
-
-from parse.parse_decode import parse_args
-from models import Models_functions
-from utils import Utils_functions
-
-if __name__ == "__main__":
-
- # parse args
- args = parse_args()
-
- # initialize networks
- M = Models_functions(args)
- M.download_networks()
- models_ls = M.get_networks()
-
- # encode samples
- U = Utils_functions(args)
- U.decode_path(models_ls)
diff --git a/spaces/g8a9/vit-gpt-italian-captioning/README.md b/spaces/g8a9/vit-gpt-italian-captioning/README.md
deleted file mode 100644
index 2d9aa5e0962a181f5e28262e9d89aa0488e095f4..0000000000000000000000000000000000000000
--- a/spaces/g8a9/vit-gpt-italian-captioning/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Vit Gpt Italian Captioning
-emoji: 🐠
-colorFrom: pink
-colorTo: red
-sdk: streamlit
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/gagan3012/IMD/__init__.py b/spaces/gagan3012/IMD/__init__.py
deleted file mode 100644
index b8cc2515d7af1f3405db7b3727c61c6bf8c5b47e..0000000000000000000000000000000000000000
--- a/spaces/gagan3012/IMD/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from MantraNet.mantranet import pre_trained_model, check_forgery
-from app import check_image
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/registry.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/registry.py
deleted file mode 100644
index fa9df39bc9f3d8d568361e7250ab35468f2b74e0..0000000000000000000000000000000000000000
--- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/registry.py
+++ /dev/null
@@ -1,315 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import inspect
-import warnings
-from functools import partial
-
-from .misc import is_seq_of
-
-
-def build_from_cfg(cfg, registry, default_args=None):
- """Build a module from config dict.
-
- Args:
- cfg (dict): Config dict. It should at least contain the key "type".
- registry (:obj:`Registry`): The registry to search the type from.
- default_args (dict, optional): Default initialization arguments.
-
- Returns:
- object: The constructed object.
- """
- if not isinstance(cfg, dict):
- raise TypeError(f'cfg must be a dict, but got {type(cfg)}')
- if 'type' not in cfg:
- if default_args is None or 'type' not in default_args:
- raise KeyError(
- '`cfg` or `default_args` must contain the key "type", '
- f'but got {cfg}\n{default_args}')
- if not isinstance(registry, Registry):
- raise TypeError('registry must be an mmcv.Registry object, '
- f'but got {type(registry)}')
- if not (isinstance(default_args, dict) or default_args is None):
- raise TypeError('default_args must be a dict or None, '
- f'but got {type(default_args)}')
-
- args = cfg.copy()
-
- if default_args is not None:
- for name, value in default_args.items():
- args.setdefault(name, value)
-
- obj_type = args.pop('type')
- if isinstance(obj_type, str):
- obj_cls = registry.get(obj_type)
- if obj_cls is None:
- raise KeyError(
- f'{obj_type} is not in the {registry.name} registry')
- elif inspect.isclass(obj_type):
- obj_cls = obj_type
- else:
- raise TypeError(
- f'type must be a str or valid type, but got {type(obj_type)}')
- try:
- return obj_cls(**args)
- except Exception as e:
- # Normal TypeError does not print class name.
- raise type(e)(f'{obj_cls.__name__}: {e}')
-
-
-class Registry:
- """A registry to map strings to classes.
-
- Registered object could be built from registry.
- Example:
- >>> MODELS = Registry('models')
- >>> @MODELS.register_module()
- >>> class ResNet:
- >>> pass
- >>> resnet = MODELS.build(dict(type='ResNet'))
-
- Please refer to
- https://mmcv.readthedocs.io/en/latest/understand_mmcv/registry.html for
- advanced usage.
-
- Args:
- name (str): Registry name.
- build_func(func, optional): Build function to construct instance from
- Registry, func:`build_from_cfg` is used if neither ``parent`` or
- ``build_func`` is specified. If ``parent`` is specified and
- ``build_func`` is not given, ``build_func`` will be inherited
- from ``parent``. Default: None.
- parent (Registry, optional): Parent registry. The class registered in
- children registry could be built from parent. Default: None.
- scope (str, optional): The scope of registry. It is the key to search
- for children registry. If not specified, scope will be the name of
- the package where class is defined, e.g. mmdet, mmcls, mmseg.
- Default: None.
- """
-
- def __init__(self, name, build_func=None, parent=None, scope=None):
- self._name = name
- self._module_dict = dict()
- self._children = dict()
- self._scope = self.infer_scope() if scope is None else scope
-
- # self.build_func will be set with the following priority:
- # 1. build_func
- # 2. parent.build_func
- # 3. build_from_cfg
- if build_func is None:
- if parent is not None:
- self.build_func = parent.build_func
- else:
- self.build_func = build_from_cfg
- else:
- self.build_func = build_func
- if parent is not None:
- assert isinstance(parent, Registry)
- parent._add_children(self)
- self.parent = parent
- else:
- self.parent = None
-
- def __len__(self):
- return len(self._module_dict)
-
- def __contains__(self, key):
- return self.get(key) is not None
-
- def __repr__(self):
- format_str = self.__class__.__name__ + \
- f'(name={self._name}, ' \
- f'items={self._module_dict})'
- return format_str
-
- @staticmethod
- def infer_scope():
- """Infer the scope of registry.
-
- The name of the package where registry is defined will be returned.
-
- Example:
- # in mmdet/models/backbone/resnet.py
- >>> MODELS = Registry('models')
- >>> @MODELS.register_module()
- >>> class ResNet:
- >>> pass
- The scope of ``ResNet`` will be ``mmdet``.
-
-
- Returns:
- scope (str): The inferred scope name.
- """
- # inspect.stack() trace where this function is called, the index-2
- # indicates the frame where `infer_scope()` is called
- filename = inspect.getmodule(inspect.stack()[2][0]).__name__
- split_filename = filename.split('.')
- return split_filename[0]
-
- @staticmethod
- def split_scope_key(key):
- """Split scope and key.
-
- The first scope will be split from key.
-
- Examples:
- >>> Registry.split_scope_key('mmdet.ResNet')
- 'mmdet', 'ResNet'
- >>> Registry.split_scope_key('ResNet')
- None, 'ResNet'
-
- Return:
- scope (str, None): The first scope.
- key (str): The remaining key.
- """
- split_index = key.find('.')
- if split_index != -1:
- return key[:split_index], key[split_index + 1:]
- else:
- return None, key
-
- @property
- def name(self):
- return self._name
-
- @property
- def scope(self):
- return self._scope
-
- @property
- def module_dict(self):
- return self._module_dict
-
- @property
- def children(self):
- return self._children
-
- def get(self, key):
- """Get the registry record.
-
- Args:
- key (str): The class name in string format.
-
- Returns:
- class: The corresponding class.
- """
- scope, real_key = self.split_scope_key(key)
- if scope is None or scope == self._scope:
- # get from self
- if real_key in self._module_dict:
- return self._module_dict[real_key]
- else:
- # get from self._children
- if scope in self._children:
- return self._children[scope].get(real_key)
- else:
- # goto root
- parent = self.parent
- while parent.parent is not None:
- parent = parent.parent
- return parent.get(key)
-
- def build(self, *args, **kwargs):
- return self.build_func(*args, **kwargs, registry=self)
-
- def _add_children(self, registry):
- """Add children for a registry.
-
- The ``registry`` will be added as children based on its scope.
- The parent registry could build objects from children registry.
-
- Example:
- >>> models = Registry('models')
- >>> mmdet_models = Registry('models', parent=models)
- >>> @mmdet_models.register_module()
- >>> class ResNet:
- >>> pass
- >>> resnet = models.build(dict(type='mmdet.ResNet'))
- """
-
- assert isinstance(registry, Registry)
- assert registry.scope is not None
- assert registry.scope not in self.children, \
- f'scope {registry.scope} exists in {self.name} registry'
- self.children[registry.scope] = registry
-
- def _register_module(self, module_class, module_name=None, force=False):
- if not inspect.isclass(module_class):
- raise TypeError('module must be a class, '
- f'but got {type(module_class)}')
-
- if module_name is None:
- module_name = module_class.__name__
- if isinstance(module_name, str):
- module_name = [module_name]
- for name in module_name:
- if not force and name in self._module_dict:
- raise KeyError(f'{name} is already registered '
- f'in {self.name}')
- self._module_dict[name] = module_class
-
- def deprecated_register_module(self, cls=None, force=False):
- warnings.warn(
- 'The old API of register_module(module, force=False) '
- 'is deprecated and will be removed, please use the new API '
- 'register_module(name=None, force=False, module=None) instead.')
- if cls is None:
- return partial(self.deprecated_register_module, force=force)
- self._register_module(cls, force=force)
- return cls
-
- def register_module(self, name=None, force=False, module=None):
- """Register a module.
-
- A record will be added to `self._module_dict`, whose key is the class
- name or the specified name, and value is the class itself.
- It can be used as a decorator or a normal function.
-
- Example:
- >>> backbones = Registry('backbone')
- >>> @backbones.register_module()
- >>> class ResNet:
- >>> pass
-
- >>> backbones = Registry('backbone')
- >>> @backbones.register_module(name='mnet')
- >>> class MobileNet:
- >>> pass
-
- >>> backbones = Registry('backbone')
- >>> class ResNet:
- >>> pass
- >>> backbones.register_module(ResNet)
-
- Args:
- name (str | None): The module name to be registered. If not
- specified, the class name will be used.
- force (bool, optional): Whether to override an existing class with
- the same name. Default: False.
- module (type): Module class to be registered.
- """
- if not isinstance(force, bool):
- raise TypeError(f'force must be a boolean, but got {type(force)}')
- # NOTE: This is a walkaround to be compatible with the old api,
- # while it may introduce unexpected bugs.
- if isinstance(name, type):
- return self.deprecated_register_module(name, force=force)
-
- # raise the error ahead of time
- if not (name is None or isinstance(name, str) or is_seq_of(name, str)):
- raise TypeError(
- 'name must be either of None, an instance of str or a sequence'
- f' of str, but got {type(name)}')
-
- # use it as a normal method: x.register_module(module=SomeClass)
- if module is not None:
- self._register_module(
- module_class=module, module_name=name, force=force)
- return module
-
- # use it as a decorator: @x.register_module()
- def _register(cls):
- self._register_module(
- module_class=cls, module_name=name, force=force)
- return cls
-
- return _register
diff --git a/spaces/glyszt/vt/vtoonify/model/stylegan/non_leaking.py b/spaces/glyszt/vt/vtoonify/model/stylegan/non_leaking.py
deleted file mode 100644
index d0447535fed22d3ad4ac719b2b5ac6b7c58e6435..0000000000000000000000000000000000000000
--- a/spaces/glyszt/vt/vtoonify/model/stylegan/non_leaking.py
+++ /dev/null
@@ -1,469 +0,0 @@
-import math
-
-import torch
-from torch import autograd
-from torch.nn import functional as F
-import numpy as np
-
-from model.stylegan.distributed import reduce_sum
-from model.stylegan.op import upfirdn2d
-
-
-class AdaptiveAugment:
- def __init__(self, ada_aug_target, ada_aug_len, update_every, device):
- self.ada_aug_target = ada_aug_target
- self.ada_aug_len = ada_aug_len
- self.update_every = update_every
-
- self.ada_update = 0
- self.ada_aug_buf = torch.tensor([0.0, 0.0], device=device)
- self.r_t_stat = 0
- self.ada_aug_p = 0
-
- @torch.no_grad()
- def tune(self, real_pred):
- self.ada_aug_buf += torch.tensor(
- (torch.sign(real_pred).sum().item(), real_pred.shape[0]),
- device=real_pred.device,
- )
- self.ada_update += 1
-
- if self.ada_update % self.update_every == 0:
- self.ada_aug_buf = reduce_sum(self.ada_aug_buf)
- pred_signs, n_pred = self.ada_aug_buf.tolist()
-
- self.r_t_stat = pred_signs / n_pred
-
- if self.r_t_stat > self.ada_aug_target:
- sign = 1
-
- else:
- sign = -1
-
- self.ada_aug_p += sign * n_pred / self.ada_aug_len
- self.ada_aug_p = min(1, max(0, self.ada_aug_p))
- self.ada_aug_buf.mul_(0)
- self.ada_update = 0
-
- return self.ada_aug_p
-
-
-SYM6 = (
- 0.015404109327027373,
- 0.0034907120842174702,
- -0.11799011114819057,
- -0.048311742585633,
- 0.4910559419267466,
- 0.787641141030194,
- 0.3379294217276218,
- -0.07263752278646252,
- -0.021060292512300564,
- 0.04472490177066578,
- 0.0017677118642428036,
- -0.007800708325034148,
-)
-
-
-def translate_mat(t_x, t_y, device="cpu"):
- batch = t_x.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- translate = torch.stack((t_x, t_y), 1)
- mat[:, :2, 2] = translate
-
- return mat
-
-
-def rotate_mat(theta, device="cpu"):
- batch = theta.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- sin_t = torch.sin(theta)
- cos_t = torch.cos(theta)
- rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2)
- mat[:, :2, :2] = rot
-
- return mat
-
-
-def scale_mat(s_x, s_y, device="cpu"):
- batch = s_x.shape[0]
-
- mat = torch.eye(3, device=device).unsqueeze(0).repeat(batch, 1, 1)
- mat[:, 0, 0] = s_x
- mat[:, 1, 1] = s_y
-
- return mat
-
-
-def translate3d_mat(t_x, t_y, t_z):
- batch = t_x.shape[0]
-
- mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- translate = torch.stack((t_x, t_y, t_z), 1)
- mat[:, :3, 3] = translate
-
- return mat
-
-
-def rotate3d_mat(axis, theta):
- batch = theta.shape[0]
-
- u_x, u_y, u_z = axis
-
- eye = torch.eye(3).unsqueeze(0)
- cross = torch.tensor([(0, -u_z, u_y), (u_z, 0, -u_x), (-u_y, u_x, 0)]).unsqueeze(0)
- outer = torch.tensor(axis)
- outer = (outer.unsqueeze(1) * outer).unsqueeze(0)
-
- sin_t = torch.sin(theta).view(-1, 1, 1)
- cos_t = torch.cos(theta).view(-1, 1, 1)
-
- rot = cos_t * eye + sin_t * cross + (1 - cos_t) * outer
-
- eye_4 = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- eye_4[:, :3, :3] = rot
-
- return eye_4
-
-
-def scale3d_mat(s_x, s_y, s_z):
- batch = s_x.shape[0]
-
- mat = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- mat[:, 0, 0] = s_x
- mat[:, 1, 1] = s_y
- mat[:, 2, 2] = s_z
-
- return mat
-
-
-def luma_flip_mat(axis, i):
- batch = i.shape[0]
-
- eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- axis = torch.tensor(axis + (0,))
- flip = 2 * torch.ger(axis, axis) * i.view(-1, 1, 1)
-
- return eye - flip
-
-
-def saturation_mat(axis, i):
- batch = i.shape[0]
-
- eye = torch.eye(4).unsqueeze(0).repeat(batch, 1, 1)
- axis = torch.tensor(axis + (0,))
- axis = torch.ger(axis, axis)
- saturate = axis + (eye - axis) * i.view(-1, 1, 1)
-
- return saturate
-
-
-def lognormal_sample(size, mean=0, std=1, device="cpu"):
- return torch.empty(size, device=device).log_normal_(mean=mean, std=std)
-
-
-def category_sample(size, categories, device="cpu"):
- category = torch.tensor(categories, device=device)
- sample = torch.randint(high=len(categories), size=(size,), device=device)
-
- return category[sample]
-
-
-def uniform_sample(size, low, high, device="cpu"):
- return torch.empty(size, device=device).uniform_(low, high)
-
-
-def normal_sample(size, mean=0, std=1, device="cpu"):
- return torch.empty(size, device=device).normal_(mean, std)
-
-
-def bernoulli_sample(size, p, device="cpu"):
- return torch.empty(size, device=device).bernoulli_(p)
-
-
-def random_mat_apply(p, transform, prev, eye, device="cpu"):
- size = transform.shape[0]
- select = bernoulli_sample(size, p, device=device).view(size, 1, 1)
- select_transform = select * transform + (1 - select) * eye
-
- return select_transform @ prev
-
-
-def sample_affine(p, size, height, width, device="cpu"):
- G = torch.eye(3, device=device).unsqueeze(0).repeat(size, 1, 1)
- eye = G
-
- # flip
- param = category_sample(size, (0, 1))
- Gc = scale_mat(1 - 2.0 * param, torch.ones(size), device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n')
-
- # 90 rotate
- #param = category_sample(size, (0, 3))
- #Gc = rotate_mat(-math.pi / 2 * param, device=device)
- #G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n')
-
- # integer translate
- param = uniform_sample(size, -0.125, 0.125)
- param_height = torch.round(param * height) / height
- param_width = torch.round(param * width) / width
- Gc = translate_mat(param_width, param_height, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('integer translate', G, translate_mat(param_width, param_height), sep='\n')
-
- # isotropic scale
- param = lognormal_sample(size, std=0.2 * math.log(2))
- Gc = scale_mat(param, param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('isotropic scale', G, scale_mat(param, param), sep='\n')
-
- p_rot = 1 - math.sqrt(1 - p)
-
- # pre-rotate
- param = uniform_sample(size, -math.pi, math.pi)
- Gc = rotate_mat(-param, device=device)
- G = random_mat_apply(p_rot, Gc, G, eye, device=device)
- # print('pre-rotate', G, rotate_mat(-param), sep='\n')
-
- # anisotropic scale
- param = lognormal_sample(size, std=0.2 * math.log(2))
- Gc = scale_mat(param, 1 / param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n')
-
- # post-rotate
- param = uniform_sample(size, -math.pi, math.pi)
- Gc = rotate_mat(-param, device=device)
- G = random_mat_apply(p_rot, Gc, G, eye, device=device)
- # print('post-rotate', G, rotate_mat(-param), sep='\n')
-
- # fractional translate
- param = normal_sample(size, std=0.125)
- Gc = translate_mat(param, param, device=device)
- G = random_mat_apply(p, Gc, G, eye, device=device)
- # print('fractional translate', G, translate_mat(param, param), sep='\n')
-
- return G
-
-
-def sample_color(p, size):
- C = torch.eye(4).unsqueeze(0).repeat(size, 1, 1)
- eye = C
- axis_val = 1 / math.sqrt(3)
- axis = (axis_val, axis_val, axis_val)
-
- # brightness
- param = normal_sample(size, std=0.2)
- Cc = translate3d_mat(param, param, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # contrast
- param = lognormal_sample(size, std=0.5 * math.log(2))
- Cc = scale3d_mat(param, param, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # luma flip
- param = category_sample(size, (0, 1))
- Cc = luma_flip_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # hue rotation
- param = uniform_sample(size, -math.pi, math.pi)
- Cc = rotate3d_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- # saturation
- param = lognormal_sample(size, std=1 * math.log(2))
- Cc = saturation_mat(axis, param)
- C = random_mat_apply(p, Cc, C, eye)
-
- return C
-
-
-def make_grid(shape, x0, x1, y0, y1, device):
- n, c, h, w = shape
- grid = torch.empty(n, h, w, 3, device=device)
- grid[:, :, :, 0] = torch.linspace(x0, x1, w, device=device)
- grid[:, :, :, 1] = torch.linspace(y0, y1, h, device=device).unsqueeze(-1)
- grid[:, :, :, 2] = 1
-
- return grid
-
-
-def affine_grid(grid, mat):
- n, h, w, _ = grid.shape
- return (grid.view(n, h * w, 3) @ mat.transpose(1, 2)).view(n, h, w, 2)
-
-
-def get_padding(G, height, width, kernel_size):
- device = G.device
-
- cx = (width - 1) / 2
- cy = (height - 1) / 2
- cp = torch.tensor(
- [(-cx, -cy, 1), (cx, -cy, 1), (cx, cy, 1), (-cx, cy, 1)], device=device
- )
- cp = G @ cp.T
-
- pad_k = kernel_size // 4
-
- pad = cp[:, :2, :].permute(1, 0, 2).flatten(1)
- pad = torch.cat((-pad, pad)).max(1).values
- pad = pad + torch.tensor([pad_k * 2 - cx, pad_k * 2 - cy] * 2, device=device)
- pad = pad.max(torch.tensor([0, 0] * 2, device=device))
- pad = pad.min(torch.tensor([width - 1, height - 1] * 2, device=device))
-
- pad_x1, pad_y1, pad_x2, pad_y2 = pad.ceil().to(torch.int32)
-
- return pad_x1, pad_x2, pad_y1, pad_y2
-
-
-def try_sample_affine_and_pad(img, p, kernel_size, G=None):
- batch, _, height, width = img.shape
-
- G_try = G
-
- if G is None:
- G_try = torch.inverse(sample_affine(p, batch, height, width))
-
- pad_x1, pad_x2, pad_y1, pad_y2 = get_padding(G_try, height, width, kernel_size)
-
- img_pad = F.pad(img, (pad_x1, pad_x2, pad_y1, pad_y2), mode="reflect")
-
- return img_pad, G_try, (pad_x1, pad_x2, pad_y1, pad_y2)
-
-
-class GridSampleForward(autograd.Function):
- @staticmethod
- def forward(ctx, input, grid):
- out = F.grid_sample(
- input, grid, mode="bilinear", padding_mode="zeros", align_corners=False
- )
- ctx.save_for_backward(input, grid)
-
- return out
-
- @staticmethod
- def backward(ctx, grad_output):
- input, grid = ctx.saved_tensors
- grad_input, grad_grid = GridSampleBackward.apply(grad_output, input, grid)
-
- return grad_input, grad_grid
-
-
-class GridSampleBackward(autograd.Function):
- @staticmethod
- def forward(ctx, grad_output, input, grid):
- op = torch._C._jit_get_operation("aten::grid_sampler_2d_backward")
- grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
- ctx.save_for_backward(grid)
-
- return grad_input, grad_grid
-
- @staticmethod
- def backward(ctx, grad_grad_input, grad_grad_grid):
- grid, = ctx.saved_tensors
- grad_grad_output = None
-
- if ctx.needs_input_grad[0]:
- grad_grad_output = GridSampleForward.apply(grad_grad_input, grid)
-
- return grad_grad_output, None, None
-
-
-grid_sample = GridSampleForward.apply
-
-
-def scale_mat_single(s_x, s_y):
- return torch.tensor(((s_x, 0, 0), (0, s_y, 0), (0, 0, 1)), dtype=torch.float32)
-
-
-def translate_mat_single(t_x, t_y):
- return torch.tensor(((1, 0, t_x), (0, 1, t_y), (0, 0, 1)), dtype=torch.float32)
-
-
-def random_apply_affine(img, p, G=None, antialiasing_kernel=SYM6):
- kernel = antialiasing_kernel
- len_k = len(kernel)
-
- kernel = torch.as_tensor(kernel).to(img)
- # kernel = torch.ger(kernel, kernel).to(img)
- kernel_flip = torch.flip(kernel, (0,))
-
- img_pad, G, (pad_x1, pad_x2, pad_y1, pad_y2) = try_sample_affine_and_pad(
- img, p, len_k, G
- )
-
- G_inv = (
- translate_mat_single((pad_x1 - pad_x2).item() / 2, (pad_y1 - pad_y2).item() / 2)
- @ G
- )
- up_pad = (
- (len_k + 2 - 1) // 2,
- (len_k - 2) // 2,
- (len_k + 2 - 1) // 2,
- (len_k - 2) // 2,
- )
- img_2x = upfirdn2d(img_pad, kernel.unsqueeze(0), up=(2, 1), pad=(*up_pad[:2], 0, 0))
- img_2x = upfirdn2d(img_2x, kernel.unsqueeze(1), up=(1, 2), pad=(0, 0, *up_pad[2:]))
- G_inv = scale_mat_single(2, 2) @ G_inv @ scale_mat_single(1 / 2, 1 / 2)
- G_inv = translate_mat_single(-0.5, -0.5) @ G_inv @ translate_mat_single(0.5, 0.5)
- batch_size, channel, height, width = img.shape
- pad_k = len_k // 4
- shape = (batch_size, channel, (height + pad_k * 2) * 2, (width + pad_k * 2) * 2)
- G_inv = (
- scale_mat_single(2 / img_2x.shape[3], 2 / img_2x.shape[2])
- @ G_inv
- @ scale_mat_single(1 / (2 / shape[3]), 1 / (2 / shape[2]))
- )
- grid = F.affine_grid(G_inv[:, :2, :].to(img_2x), shape, align_corners=False)
- img_affine = grid_sample(img_2x, grid)
- d_p = -pad_k * 2
- down_pad = (
- d_p + (len_k - 2 + 1) // 2,
- d_p + (len_k - 2) // 2,
- d_p + (len_k - 2 + 1) // 2,
- d_p + (len_k - 2) // 2,
- )
- img_down = upfirdn2d(
- img_affine, kernel_flip.unsqueeze(0), down=(2, 1), pad=(*down_pad[:2], 0, 0)
- )
- img_down = upfirdn2d(
- img_down, kernel_flip.unsqueeze(1), down=(1, 2), pad=(0, 0, *down_pad[2:])
- )
-
- return img_down, G
-
-
-def apply_color(img, mat):
- batch = img.shape[0]
- img = img.permute(0, 2, 3, 1)
- mat_mul = mat[:, :3, :3].transpose(1, 2).view(batch, 1, 3, 3)
- mat_add = mat[:, :3, 3].view(batch, 1, 1, 3)
- img = img @ mat_mul + mat_add
- img = img.permute(0, 3, 1, 2)
-
- return img
-
-
-def random_apply_color(img, p, C=None):
- if C is None:
- C = sample_color(p, img.shape[0])
-
- img = apply_color(img, C.to(img))
-
- return img, C
-
-
-def augment(img, p, transform_matrix=(None, None)):
- img, G = random_apply_affine(img, p, transform_matrix[0])
- if img.shape[1] == 3:
- img, C = random_apply_color(img, p, transform_matrix[1])
- else:
- tmp, C = random_apply_color(img[:,0:3], p, transform_matrix[1])
- img = torch.cat((tmp, img[:,3:]), dim=1)
-
- return img, (G, C)
diff --git a/spaces/gradio/clustering/DESCRIPTION.md b/spaces/gradio/clustering/DESCRIPTION.md
deleted file mode 100644
index f57e9f25bd22de0cb4c9625203d4b79b747bfbcb..0000000000000000000000000000000000000000
--- a/spaces/gradio/clustering/DESCRIPTION.md
+++ /dev/null
@@ -1 +0,0 @@
-This demo built with Blocks generates 9 plots based on the input.
\ No newline at end of file
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/run.sh b/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/run.sh
deleted file mode 100644
index 61af4b4950eb11334e55362e3e3c5e2796979a01..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/run.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
-ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
diff --git a/spaces/h2oai/wave-tour/examples/facepile.py b/spaces/h2oai/wave-tour/examples/facepile.py
deleted file mode 100644
index 60401d4ce7d2609cc5f6630df0e34f7eae2a21b8..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/facepile.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# Form / Facepile
-# A face pile displays a list of personas. Each circle represents a person and contains their image or initials.
-# Often this control is used when sharing who has access to a specific view or file.
-# #form
-# ---
-from h2o_wave import main, app, Q, ui
-
-
-@app('/demo')
-async def serve(q: Q):
- if q.args.facepile:
- q.page['example'].items = [
- ui.text_m(f'q.args.facepile={q.args.facepile}'),
- ui.button(name='back', label='Back', primary=True),
- ]
- elif q.args.facepile_value:
- q.page['example'].items = [
- ui.text_m(f'q.args.facepile_value={q.args.facepile_value}'),
- ui.button(name='back', label='Back', primary=True),
- ]
- else:
- image = 'https://images.pexels.com/photos/220453/pexels-photo-220453.jpeg?auto=compress&h=750&w=1260'
- q.page['example'] = ui.form_card(box='1 1 2 2', items=[
- ui.text(content='Add button sends true'),
- ui.facepile(name='facepile', max=4, items=[
- ui.persona(title='John Doe', image=image),
- ui.persona(title='John Doe', image=image),
- ui.persona(title='John Doe'),
- ui.persona(title='John Doe', image=image),
- ui.persona(title='John Doe', image=image),
- ]),
- ui.text(content='Add button sends set value'),
- ui.facepile(name='facepile_value', value='submitted value', max=4, items=[
- ui.persona(title='John Doe', image=image),
- ui.persona(title='John Doe', image=image),
- ui.persona(title='John Doe'),
- ui.persona(title='John Doe', image=image),
- ui.persona(title='John Doe', image=image),
- ]),
- ])
-
- await q.page.save()
diff --git a/spaces/hadasak/SciTrends/main_1.py b/spaces/hadasak/SciTrends/main_1.py
deleted file mode 100644
index c936690f979c2dde9ad4f9b0f875f6fee1bccc9b..0000000000000000000000000000000000000000
--- a/spaces/hadasak/SciTrends/main_1.py
+++ /dev/null
@@ -1,182 +0,0 @@
-import gradio as gr
-import numpy as np
-import pandas as pd
-from save_prediction import save_data_with_prediction
-import datetime
-import plotly.express as px
-
-
-
-# global variables
-data = pd.DataFrame()
-last_term = ""
-terms = []
-shared_output = None
-
-def create_plot(data):
-
- today = datetime.date.today()
- year = today.year - 1
-
- fig = px.scatter(data, x='Year', y='norm_publications_count', color='Term',
- title='Scatter Plot with Color and Symbols')
- fig.update_layout(
- title="Outbreak in ",
- xaxis_title="year",
- yaxis_title="normalized publication count",
- )
- # Add a horizontal line at y=15
- fig.add_shape(type="line", x0=year + 0.5, x1=year + 0.5, y0=0, y1=data['norm_publications_count'].max(),
- line=dict(color="blue", width=2, dash="dot"))
-
- return fig
-
-
-def add_term_to_plot(custom_term,choice):
-
- global shared_output
- global last_term
- global terms
- global data # Indicate that you want to work with the global 'data' variable
- term=""
- print("choice")
- print(choice)
- print("last_term")
- print(terms)
-
- if not custom_term or custom_term in terms:
- if not choice:
- #raise gr.Error("You didn't insert new term")
- return create_plot(data)
- if choice in terms:
- #raise gr.Error("Your choice is already in the graph")
- return create_plot(data)
- else:
- term = choice
- print ("term is choice "+term)
-
- elif not choice or choice in terms:
- if custom_term in terms:
- #raise gr.Error("The term you inserted is already in the graph")
- return create_plot(data)
- else:
- term = custom_term
- print ("term is custom_term "+term)
-
-
- #if both new
- if not term:
- #raise gr.Error("you inserted new terms in both options, the custom term is shown")
- term = custom_term
- print(term)
-
- if len(terms)>10:
- #raise gr.Error("The maximum terms number is 10")
- return create_plot(data)
- else:
- last_term = term
- terms = terms+[term]
- (print(terms))
-
- # get year
- today = datetime.date.today()
- year = today.year - 1
- no_space_term = term.replace(" ", "_")
- print(no_space_term)
-
- path = "model_data/data_with_predictions_" + no_space_term + str(year) + ".csv"
- try:
- save_data_with_prediction(no_space_term)
- except:
- raise gr.Error("There is no data about your term in Pubmed. "
- "Please enter another term and the trend graph will be displayed")
-
- new_term_df = pd.read_csv(path)
-
- data = pd.concat([data, new_term_df], ignore_index=True) # Concatenate DataFrames
- data = data[data['Year'] >= (year-40)]
-
- fig = create_plot(data)
- shared_output = fig
- # update_choice2.update()
- return fig
-
-
-def delete_term(term):
-
- global shared_output
- global terms
- global data
-
- if term in terms:
- terms.remove(term)
- data = data[data["Term"] != term]
- fig = create_plot(data)
- shared_output = fig
- return fig
- else:
- raise gr.Error(term + " is not exist in the graph!")
-
-def clear_all():
-
- global shared_output
- global terms
- global data
-
- if len(terms) > 0:
- terms = []
- data = pd.DataFrame(columns=["Term","Year","norm_publications_count","Data"])
- fig = create_plot(data)
- shared_output = fig
- return fig
- else:
- raise gr.Error("The trends graph is already empty")
- return None
-
-
-
-predefined_terms = [(term, term) for term in np.unique(pd.read_csv("training_data_all.csv")["Term"].to_numpy())]
-description= "Welcome to the predictor of trends in science!\n\n\n" \
- "This tool predicts the popularity of fields in science for the next 6 years.\n" \
- "Get predictions of scientific popularity for any term!\n"\
- "Up to 10 terms can be inserted into the same graph.\n\n" \
- "Popularity of a term is defined as the number of publications in PubMed per year for this term, \n" \
- "normalized to 100, 000 publications..\n\n"\
- "For details of the model and methodology see our paper. If you use us, please cite us!:\n"\
- "Ofer, D., & Linial, M. (2023). Whats next? Forecasting scientific research trends. ArXiv, abs/2305.04133.\n"\
- "Ofer et al., 2023. SciTrends [Internet].\n\n Available from:\n"\
- "https://393853b86b6381cc22.gradio.live\n\n"\
- "contact us at:\n"\
- "Hadasa.kaufman@mail.huji.ac.il\n\n\n"\
- "Developed by Hadasa Kaufman & Dan Ofer"
-
-with gr.Blocks() as demo:
- # Create a Row component to divide the interface into two columns
- with gr.Row():
- # Create a Column for the left side (description)
- with gr.Column():
- gr.Image("logo_SciTrends.png")
- gr.Text(description,label="App Description")
- # Create a Column for the right side (input components and plot)
- with gr.Column():
- favicon="logo_SciTrends.png" # Specify the path to your logo image
- txt1 = gr.components.Textbox(label="Insert a term")
- choice1 = gr.components.Dropdown(label="or choose an example term", choices=predefined_terms)
- btn = gr.Button(value="Submit")
- out1 = gr.Plot(label="plot")
- btn.click(add_term_to_plot, inputs=[txt1, choice1], outputs=out1 )
- remove_term = gr.components.Textbox(label="Insert a term to remove")
- btn = gr.Button(value="Remove term")
- btn.click(delete_term, inputs=remove_term, outputs=out1)
- btn = gr.Button(value="clear all")
- btn.click(clear_all, outputs=out1)
-
-
-
-
-
- live = True
-
-
-if __name__ == "__main__":
- demo.launch(share=True)
diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/utils/__init__.py b/spaces/hamacojr/CAT-Seg/cat_seg/utils/__init__.py
deleted file mode 100644
index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/CAT-Seg/cat_seg/utils/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/transform.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/transform.py
deleted file mode 100644
index 0224a0dae4a89fd4bf46c5d27a2bb9377dfa06d4..0000000000000000000000000000000000000000
--- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/open_clip/transform.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import warnings
-from dataclasses import dataclass, asdict
-from typing import Any, Dict, Optional, Sequence, Tuple, Union
-
-import torch
-import torch.nn as nn
-import torchvision.transforms.functional as F
-
-from torchvision.transforms import Normalize, Compose, RandomResizedCrop, InterpolationMode, ToTensor, Resize, \
- CenterCrop
-
-from .constants import OPENAI_DATASET_MEAN, OPENAI_DATASET_STD
-
-
-@dataclass
-class AugmentationCfg:
- scale: Tuple[float, float] = (0.9, 1.0)
- ratio: Optional[Tuple[float, float]] = None
- color_jitter: Optional[Union[float, Tuple[float, float, float]]] = None
- interpolation: Optional[str] = None
- re_prob: Optional[float] = None
- re_count: Optional[int] = None
- use_timm: bool = False
-
-
-class ResizeMaxSize(nn.Module):
-
- def __init__(self, max_size, interpolation=InterpolationMode.BICUBIC, fn='max', fill=0):
- super().__init__()
- if not isinstance(max_size, int):
- raise TypeError(f"Size should be int. Got {type(max_size)}")
- self.max_size = max_size
- self.interpolation = interpolation
- self.fn = min if fn == 'min' else min
- self.fill = fill
-
- def forward(self, img):
- if isinstance(img, torch.Tensor):
- height, width = img.shape[:2]
- else:
- width, height = img.size
- scale = self.max_size / float(max(height, width))
- if scale != 1.0:
- new_size = tuple(round(dim * scale) for dim in (height, width))
- img = F.resize(img, new_size, self.interpolation)
- pad_h = self.max_size - new_size[0]
- pad_w = self.max_size - new_size[1]
- img = F.pad(img, padding=[pad_w//2, pad_h//2, pad_w - pad_w//2, pad_h - pad_h//2], fill=self.fill)
- return img
-
-
-def _convert_to_rgb(image):
- return image.convert('RGB')
-
-
-def image_transform(
- image_size: int,
- is_train: bool,
- mean: Optional[Tuple[float, ...]] = None,
- std: Optional[Tuple[float, ...]] = None,
- resize_longest_max: bool = False,
- fill_color: int = 0,
- aug_cfg: Optional[Union[Dict[str, Any], AugmentationCfg]] = None,
-):
- mean = mean or OPENAI_DATASET_MEAN
- if not isinstance(mean, (list, tuple)):
- mean = (mean,) * 3
-
- std = std or OPENAI_DATASET_STD
- if not isinstance(std, (list, tuple)):
- std = (std,) * 3
-
- if isinstance(image_size, (list, tuple)) and image_size[0] == image_size[1]:
- # for square size, pass size as int so that Resize() uses aspect preserving shortest edge
- image_size = image_size[0]
-
- if isinstance(aug_cfg, dict):
- aug_cfg = AugmentationCfg(**aug_cfg)
- else:
- aug_cfg = aug_cfg or AugmentationCfg()
- normalize = Normalize(mean=mean, std=std)
- if is_train:
- aug_cfg_dict = {k: v for k, v in asdict(aug_cfg).items() if v is not None}
- use_timm = aug_cfg_dict.pop('use_timm', False)
- if use_timm:
- from timm.data import create_transform # timm can still be optional
- if isinstance(model.visual.image_size, (tuple, list)):
- assert len(model.visual.image_size) >= 2
- input_size = (3,) + model.visual.image_size[-2:]
- else:
- input_size = (3, model.visual.image_size, model.visual.image_size)
- # by default, timm aug randomly alternates bicubic & bilinear for better robustness at inference time
- aug_cfg_dict.setdefault('interpolation', 'random')
- aug_cfg_dict.setdefault('color_jitter', None) # disable by default
- train_transform = create_transform(
- input_size=input_size,
- is_training=True,
- hflip=0.,
- mean=image_mean,
- std=image_std,
- re_mode='pixel',
- **aug_cfg_dict,
- )
- else:
- train_transform = Compose([
- RandomResizedCrop(
- image_size,
- scale=aug_cfg_dict.pop('scale'),
- interpolation=InterpolationMode.BICUBIC,
- ),
- _convert_to_rgb,
- ToTensor(),
- normalize,
- ])
- if aug_cfg_dict:
- warnings.warn(f'Unused augmentation cfg items, specify `use_timm` to use ({list(aug_cfg_dict.keys())}).')
- return train_transform
- else:
- if resize_longest_max:
- transforms = [
- ResizeMaxSize(image_size, fill=fill_color)
- ]
- else:
- transforms = [
- Resize(image_size, interpolation=InterpolationMode.BICUBIC),
- CenterCrop(image_size),
- ]
- transforms.extend([
- _convert_to_rgb,
- ToTensor(),
- normalize,
- ])
- return Compose(transforms)
diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/shallow_contrastive_loss_helper.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/shallow_contrastive_loss_helper.py
deleted file mode 100644
index 027fb4598529c0072f670a4776f2c825968f5caf..0000000000000000000000000000000000000000
--- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/shallow_contrastive_loss_helper.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import torch
-import maskrcnn_benchmark.utils.dist as dist
-
-
-def normalized_positive_map(positive_map):
- positive_map = positive_map.float()
- positive_map_num_pos = positive_map.sum(2)
- positive_map_num_pos[positive_map_num_pos == 0] = 1e-6
- positive_map = positive_map / positive_map_num_pos.unsqueeze(-1)
- return positive_map
-
-
-def pad_tensor_given_dim_length(tensor, dim, length, padding_value=0, batch_first=True):
- new_size = list(tensor.size()[:dim]) + [length] + list(tensor.size()[dim + 1:])
- out_tensor = tensor.data.new(*new_size).fill_(padding_value)
- if batch_first:
- out_tensor[:, :tensor.size(1), ...] = tensor
- else:
- out_tensor[:tensor.size(0), ...] = tensor
- return out_tensor
-
-
-def pad_random_negative_tensor_given_length(positive_tensor, negative_padding_tensor, length=None):
- assert positive_tensor.shape[0] + negative_padding_tensor.shape[0] == length
- return torch.cat((positive_tensor, negative_padding_tensor), dim=0)
-
-
-def gather_tensors(tensor):
- """
- Performs all_gather operation on the provided tensors.
- *** Warning ***: torch.distributed.all_gather has no gradient.
- """
- if not dist.is_dist_avail_and_initialized():
- return torch.stack([tensor], dim=0)
-
- total = dist.get_world_size()
- rank = torch.distributed.get_rank()
- # gathered_normalized_img_emb = [torch.zeros_like(normalized_img_emb) for _ in range(total)]
- # torch.distributed.all_gather(gathered_normalized_img_emb, normalized_img_emb)
-
- tensors_gather = [
- torch.zeros_like(tensor)
- for _ in range(total)
- ]
- torch.distributed.all_gather(tensors_gather, tensor, async_op=False)
-
- # need to do this to restore propagation of the gradients
- tensors_gather[rank] = tensor
- output = torch.stack(tensors_gather, dim=0)
- return output
-
-
-def convert_to_roi_format(boxes):
- concat_boxes = boxes.bbox
- device, dtype = concat_boxes.device, concat_boxes.dtype
- ids = torch.full((len(boxes), 1), 0, dtype=dtype, device=device)
- rois = torch.cat([ids, concat_boxes], dim=1)
- return rois
\ No newline at end of file
diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/engine/hooks.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/engine/hooks.py
deleted file mode 100644
index e5085b4561302d2328ab505568dec4e9fc5ee0ad..0000000000000000000000000000000000000000
--- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/engine/hooks.py
+++ /dev/null
@@ -1,427 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import datetime
-import itertools
-import logging
-import os
-import tempfile
-import time
-from collections import Counter
-import torch
-from fvcore.common.checkpoint import PeriodicCheckpointer as _PeriodicCheckpointer
-from fvcore.common.file_io import PathManager
-from fvcore.common.timer import Timer
-from fvcore.nn.precise_bn import get_bn_modules, update_bn_stats
-
-import detectron2.utils.comm as comm
-from detectron2.evaluation.testing import flatten_results_dict
-from detectron2.utils.events import EventStorage, EventWriter
-
-from .train_loop import HookBase
-
-__all__ = [
- "CallbackHook",
- "IterationTimer",
- "PeriodicWriter",
- "PeriodicCheckpointer",
- "LRScheduler",
- "AutogradProfiler",
- "EvalHook",
- "PreciseBN",
-]
-
-
-"""
-Implement some common hooks.
-"""
-
-
-class CallbackHook(HookBase):
- """
- Create a hook using callback functions provided by the user.
- """
-
- def __init__(self, *, before_train=None, after_train=None, before_step=None, after_step=None):
- """
- Each argument is a function that takes one argument: the trainer.
- """
- self._before_train = before_train
- self._before_step = before_step
- self._after_step = after_step
- self._after_train = after_train
-
- def before_train(self):
- if self._before_train:
- self._before_train(self.trainer)
-
- def after_train(self):
- if self._after_train:
- self._after_train(self.trainer)
- # The functions may be closures that hold reference to the trainer
- # Therefore, delete them to avoid circular reference.
- del self._before_train, self._after_train
- del self._before_step, self._after_step
-
- def before_step(self):
- if self._before_step:
- self._before_step(self.trainer)
-
- def after_step(self):
- if self._after_step:
- self._after_step(self.trainer)
-
-
-class IterationTimer(HookBase):
- """
- Track the time spent for each iteration (each run_step call in the trainer).
- Print a summary in the end of training.
-
- This hook uses the time between the call to its :meth:`before_step`
- and :meth:`after_step` methods.
- Under the convention that :meth:`before_step` of all hooks should only
- take negligible amount of time, the :class:`IterationTimer` hook should be
- placed at the beginning of the list of hooks to obtain accurate timing.
- """
-
- def __init__(self, warmup_iter=3):
- """
- Args:
- warmup_iter (int): the number of iterations at the beginning to exclude
- from timing.
- """
- self._warmup_iter = warmup_iter
- self._step_timer = Timer()
- self._start_time = time.perf_counter()
- self._total_timer = Timer()
-
- def before_train(self):
- self._start_time = time.perf_counter()
- self._total_timer.reset()
- self._total_timer.pause()
-
- def after_train(self):
- logger = logging.getLogger(__name__)
- total_time = time.perf_counter() - self._start_time
- total_time_minus_hooks = self._total_timer.seconds()
- hook_time = total_time - total_time_minus_hooks
-
- num_iter = self.trainer.iter + 1 - self.trainer.start_iter - self._warmup_iter
-
- if num_iter > 0 and total_time_minus_hooks > 0:
- # Speed is meaningful only after warmup
- # NOTE this format is parsed by grep in some scripts
- logger.info(
- "Overall training speed: {} iterations in {} ({:.4f} s / it)".format(
- num_iter,
- str(datetime.timedelta(seconds=int(total_time_minus_hooks))),
- total_time_minus_hooks / num_iter,
- )
- )
-
- logger.info(
- "Total training time: {} ({} on hooks)".format(
- str(datetime.timedelta(seconds=int(total_time))),
- str(datetime.timedelta(seconds=int(hook_time))),
- )
- )
-
- def before_step(self):
- self._step_timer.reset()
- self._total_timer.resume()
-
- def after_step(self):
- # +1 because we're in after_step
- iter_done = self.trainer.iter - self.trainer.start_iter + 1
- if iter_done >= self._warmup_iter:
- sec = self._step_timer.seconds()
- self.trainer.storage.put_scalars(time=sec)
- else:
- self._start_time = time.perf_counter()
- self._total_timer.reset()
-
- self._total_timer.pause()
-
-
-class PeriodicWriter(HookBase):
- """
- Write events to EventStorage periodically.
-
- It is executed every ``period`` iterations and after the last iteration.
- """
-
- def __init__(self, writers, period=20):
- """
- Args:
- writers (list[EventWriter]): a list of EventWriter objects
- period (int):
- """
- self._writers = writers
- for w in writers:
- assert isinstance(w, EventWriter), w
- self._period = period
-
- def after_step(self):
- if (self.trainer.iter + 1) % self._period == 0 or (
- self.trainer.iter == self.trainer.max_iter - 1
- ):
- for writer in self._writers:
- writer.write()
-
- def after_train(self):
- for writer in self._writers:
- writer.close()
-
-
-class PeriodicCheckpointer(_PeriodicCheckpointer, HookBase):
- """
- Same as :class:`detectron2.checkpoint.PeriodicCheckpointer`, but as a hook.
-
- Note that when used as a hook,
- it is unable to save additional data other than what's defined
- by the given `checkpointer`.
-
- It is executed every ``period`` iterations and after the last iteration.
- """
-
- def before_train(self):
- self.max_iter = self.trainer.max_iter
-
- def after_step(self):
- # No way to use **kwargs
- self.step(self.trainer.iter)
-
-
-class LRScheduler(HookBase):
- """
- A hook which executes a torch builtin LR scheduler and summarizes the LR.
- It is executed after every iteration.
- """
-
- def __init__(self, optimizer, scheduler):
- """
- Args:
- optimizer (torch.optim.Optimizer):
- scheduler (torch.optim._LRScheduler)
- """
- self._optimizer = optimizer
- self._scheduler = scheduler
-
- # NOTE: some heuristics on what LR to summarize
- # summarize the param group with most parameters
- largest_group = max(len(g["params"]) for g in optimizer.param_groups)
-
- if largest_group == 1:
- # If all groups have one parameter,
- # then find the most common initial LR, and use it for summary
- lr_count = Counter([g["lr"] for g in optimizer.param_groups])
- lr = lr_count.most_common()[0][0]
- for i, g in enumerate(optimizer.param_groups):
- if g["lr"] == lr:
- self._best_param_group_id = i
- break
- else:
- for i, g in enumerate(optimizer.param_groups):
- if len(g["params"]) == largest_group:
- self._best_param_group_id = i
- break
-
- def after_step(self):
- lr = self._optimizer.param_groups[self._best_param_group_id]["lr"]
- self.trainer.storage.put_scalar("lr", lr, smoothing_hint=False)
- self._scheduler.step()
-
-
-class AutogradProfiler(HookBase):
- """
- A hook which runs `torch.autograd.profiler.profile`.
-
- Examples:
-
- .. code-block:: python
-
- hooks.AutogradProfiler(
- lambda trainer: trainer.iter > 10 and trainer.iter < 20, self.cfg.OUTPUT_DIR
- )
-
- The above example will run the profiler for iteration 10~20 and dump
- results to ``OUTPUT_DIR``. We did not profile the first few iterations
- because they are typically slower than the rest.
- The result files can be loaded in the ``chrome://tracing`` page in chrome browser.
-
- Note:
- When used together with NCCL on older version of GPUs,
- autograd profiler may cause deadlock because it unnecessarily allocates
- memory on every device it sees. The memory management calls, if
- interleaved with NCCL calls, lead to deadlock on GPUs that do not
- support `cudaLaunchCooperativeKernelMultiDevice`.
- """
-
- def __init__(self, enable_predicate, output_dir, *, use_cuda=True):
- """
- Args:
- enable_predicate (callable[trainer -> bool]): a function which takes a trainer,
- and returns whether to enable the profiler.
- It will be called once every step, and can be used to select which steps to profile.
- output_dir (str): the output directory to dump tracing files.
- use_cuda (bool): same as in `torch.autograd.profiler.profile`.
- """
- self._enable_predicate = enable_predicate
- self._use_cuda = use_cuda
- self._output_dir = output_dir
-
- def before_step(self):
- if self._enable_predicate(self.trainer):
- self._profiler = torch.autograd.profiler.profile(use_cuda=self._use_cuda)
- self._profiler.__enter__()
- else:
- self._profiler = None
-
- def after_step(self):
- if self._profiler is None:
- return
- self._profiler.__exit__(None, None, None)
- PathManager.mkdirs(self._output_dir)
- out_file = os.path.join(
- self._output_dir, "profiler-trace-iter{}.json".format(self.trainer.iter)
- )
- if "://" not in out_file:
- self._profiler.export_chrome_trace(out_file)
- else:
- # Support non-posix filesystems
- with tempfile.TemporaryDirectory(prefix="detectron2_profiler") as d:
- tmp_file = os.path.join(d, "tmp.json")
- self._profiler.export_chrome_trace(tmp_file)
- with open(tmp_file) as f:
- content = f.read()
- with PathManager.open(out_file, "w") as f:
- f.write(content)
-
-
-class EvalHook(HookBase):
- """
- Run an evaluation function periodically, and at the end of training.
-
- It is executed every ``eval_period`` iterations and after the last iteration.
- """
-
- def __init__(self, eval_period, eval_function):
- """
- Args:
- eval_period (int): the period to run `eval_function`.
- eval_function (callable): a function which takes no arguments, and
- returns a nested dict of evaluation metrics.
-
- Note:
- This hook must be enabled in all or none workers.
- If you would like only certain workers to perform evaluation,
- give other workers a no-op function (`eval_function=lambda: None`).
- """
- self._period = eval_period
- self._func = eval_function
-
- def _do_eval(self):
- results = self._func()
-
- if results:
- assert isinstance(
- results, dict
- ), "Eval function must return a dict. Got {} instead.".format(results)
-
- flattened_results = flatten_results_dict(results)
- for k, v in flattened_results.items():
- try:
- v = float(v)
- except Exception:
- raise ValueError(
- "[EvalHook] eval_function should return a nested dict of float. "
- "Got '{}: {}' instead.".format(k, v)
- )
- self.trainer.storage.put_scalars(**flattened_results, smoothing_hint=False)
-
- # Evaluation may take different time among workers.
- # A barrier make them start the next iteration together.
- comm.synchronize()
-
- def after_step(self):
- next_iter = self.trainer.iter + 1
- is_final = next_iter == self.trainer.max_iter
- if is_final or (self._period > 0 and next_iter % self._period == 0):
- self._do_eval()
-
- def after_train(self):
- # func is likely a closure that holds reference to the trainer
- # therefore we clean it to avoid circular reference in the end
- del self._func
-
-
-class PreciseBN(HookBase):
- """
- The standard implementation of BatchNorm uses EMA in inference, which is
- sometimes suboptimal.
- This class computes the true average of statistics rather than the moving average,
- and put true averages to every BN layer in the given model.
-
- It is executed every ``period`` iterations and after the last iteration.
- """
-
- def __init__(self, period, model, data_loader, num_iter):
- """
- Args:
- period (int): the period this hook is run, or 0 to not run during training.
- The hook will always run in the end of training.
- model (nn.Module): a module whose all BN layers in training mode will be
- updated by precise BN.
- Note that user is responsible for ensuring the BN layers to be
- updated are in training mode when this hook is triggered.
- data_loader (iterable): it will produce data to be run by `model(data)`.
- num_iter (int): number of iterations used to compute the precise
- statistics.
- """
- self._logger = logging.getLogger(__name__)
- if len(get_bn_modules(model)) == 0:
- self._logger.info(
- "PreciseBN is disabled because model does not contain BN layers in training mode."
- )
- self._disabled = True
- return
-
- self._model = model
- self._data_loader = data_loader
- self._num_iter = num_iter
- self._period = period
- self._disabled = False
-
- self._data_iter = None
-
- def after_step(self):
- next_iter = self.trainer.iter + 1
- is_final = next_iter == self.trainer.max_iter
- if is_final or (self._period > 0 and next_iter % self._period == 0):
- self.update_stats()
-
- def update_stats(self):
- """
- Update the model with precise statistics. Users can manually call this method.
- """
- if self._disabled:
- return
-
- if self._data_iter is None:
- self._data_iter = iter(self._data_loader)
-
- def data_loader():
- for num_iter in itertools.count(1):
- if num_iter % 100 == 0:
- self._logger.info(
- "Running precise-BN ... {}/{} iterations.".format(num_iter, self._num_iter)
- )
- # This way we can reuse the same iterator
- yield next(self._data_iter)
-
- with EventStorage(): # capture events in a new storage to discard them
- self._logger.info(
- "Running precise-BN for {} iterations... ".format(self._num_iter)
- + "Note that this could produce different statistics every time."
- )
- update_bn_stats(self._model, data_loader(), self._num_iter)
diff --git a/spaces/hetorol845/MiDaS/README.md b/spaces/hetorol845/MiDaS/README.md
deleted file mode 100644
index 8cbdc0422eadfe3d403a4b25e554e06a73af3acf..0000000000000000000000000000000000000000
--- a/spaces/hetorol845/MiDaS/README.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: MiDaS
-emoji: 😻
-colorFrom: pink
-colorTo: yellow
-sdk: gradio
-app_file: app.py
-pinned: false
-duplicated_from: pytorch/MiDaS
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/huaiji3y/bingo-Public/src/pages/api/kblob.ts b/spaces/huaiji3y/bingo-Public/src/pages/api/kblob.ts
deleted file mode 100644
index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000
--- a/spaces/huaiji3y/bingo-Public/src/pages/api/kblob.ts
+++ /dev/null
@@ -1,56 +0,0 @@
-'use server'
-
-import { NextApiRequest, NextApiResponse } from 'next'
-import FormData from 'form-data'
-import { fetch } from '@/lib/isomorphic'
-import { KBlobRequest } from '@/lib/bots/bing/types'
-
-const API_DOMAIN = 'https://bing.vcanbb.top'
-
-export const config = {
- api: {
- bodyParser: {
- sizeLimit: '10mb' // Set desired value here
- }
- }
-}
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- try {
- const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest
-
- const formData = new FormData()
- formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
- if (imageBase64) {
- formData.append('imageBase64', imageBase64)
- }
-
- const response = await fetch(`${API_DOMAIN}/images/kblob`,
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": `${API_DOMAIN}/web/index.html`,
- "Referrer-Policy": "origin-when-cross-origin",
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- ...formData.getHeaders()
- }
- }
- ).then(res => res.text())
-
- res.writeHead(200, {
- 'Content-Type': 'application/json',
- })
- res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } }))
- } catch (e) {
- return res.json({
- result: {
- value: 'UploadFailed',
- message: `${e}`
- }
- })
- }
-}
diff --git a/spaces/huggingface-projects/Leaderboard-Restart/README.md b/spaces/huggingface-projects/Leaderboard-Restart/README.md
deleted file mode 100644
index 67b4dafad0bf837e84b98e59a2fc430e6e7b5934..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/Leaderboard-Restart/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Leaderboard Restart
-emoji: 📊
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/huggingface-projects/diffusers-gallery/README.md b/spaces/huggingface-projects/diffusers-gallery/README.md
deleted file mode 100644
index 1abc8c56f44b9b35c15ab0b41d2e7a4eaf071f32..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/diffusers-gallery/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Diffusers Gallery
-emoji: 🖼️
-colorFrom: red
-colorTo: green
-sdk: static
-app_port: 8080
-fullWidth: true
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git "a/spaces/huggingface/Model_Cards_Writing_Tool/pages/3_ \360\237\217\227_Uses.py" "b/spaces/huggingface/Model_Cards_Writing_Tool/pages/3_ \360\237\217\227_Uses.py"
deleted file mode 100644
index 6e0338ff61ae30e1b6868cad322e041492cb0688..0000000000000000000000000000000000000000
--- "a/spaces/huggingface/Model_Cards_Writing_Tool/pages/3_ \360\237\217\227_Uses.py"
+++ /dev/null
@@ -1,47 +0,0 @@
-import streamlit as st
-from persist import persist, load_widget_state
-
-global variable_output
-
-def main():
-
- cs_body()
-
-def cs_body():
-
- st.markdown('# Uses')
- st.text_area("This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.")
- left, right = st.columns([2,4])
-
- #st.markdown('### Model Description')
-
-
- with left:
- st.write("\n")
- st.write("\n")
- st.markdown('### Direct Use:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- #st.write("\n")
- st.markdown('### Downstream Use [Optional]:')
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.write("\n")
- st.markdown('### Out-of-Scope Use:')
-
- with right:
- st.text_area("",help="How can this model be used, without additional post-processing or further pipeline work?", key=persist("Direct_Use"))
- st.text_area("",help="How can this model be used, when incorporated into another system?",key=persist("Downstream_Use"))
- st.text_area("", help="What tasks will the model not work for?", key=persist("Out-of-Scope_Use"))
-
-
-
-
-if __name__ == '__main__':
- load_widget_state()
- main()
\ No newline at end of file
diff --git a/spaces/humblepenguin/mental-health-chatbot/README.md b/spaces/humblepenguin/mental-health-chatbot/README.md
deleted file mode 100644
index e67156818a04285025469ca181d31afa2d06bf58..0000000000000000000000000000000000000000
--- a/spaces/humblepenguin/mental-health-chatbot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Mental Health Chatbot
-emoji: 🦀
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.8.2
-app_file: app.py
-pinned: false
----
-
-
diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/partial_fc.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/partial_fc.py
deleted file mode 100644
index f7891527d6c396a6b51a67daf06593d4db5cce43..0000000000000000000000000000000000000000
--- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/partial_fc.py
+++ /dev/null
@@ -1,490 +0,0 @@
-import collections
-from typing import Callable
-
-import torch
-from torch import distributed
-from torch.nn.functional import linear
-from torch.nn.functional import normalize
-
-
-class PartialFC(torch.nn.Module):
- """
- https://arxiv.org/abs/2203.15565
- A distributed sparsely updating variant of the FC layer, named Partial FC (PFC).
-
- When sample rate less than 1, in each iteration, positive class centers and a random subset of
- negative class centers are selected to compute the margin-based softmax loss, all class
- centers are still maintained throughout the whole training process, but only a subset is
- selected and updated in each iteration.
-
- .. note::
- When sample rate equal to 1, Partial FC is equal to model parallelism(default sample rate is 1).
-
- Example:
- --------
- >>> module_pfc = PartialFC(embedding_size=512, num_classes=8000000, sample_rate=0.2)
- >>> for img, labels in data_loader:
- >>> embeddings = net(img)
- >>> loss = module_pfc(embeddings, labels, optimizer)
- >>> loss.backward()
- >>> optimizer.step()
- """
-
- _version = 1
-
- def __init__(
- self,
- margin_loss: Callable,
- embedding_size: int,
- num_classes: int,
- sample_rate: float = 1.0,
- fp16: bool = False,
- ):
- """
- Paramenters:
- -----------
- embedding_size: int
- The dimension of embedding, required
- num_classes: int
- Total number of classes, required
- sample_rate: float
- The rate of negative centers participating in the calculation, default is 1.0.
- """
- super(PartialFC, self).__init__()
- assert distributed.is_initialized(), "must initialize distributed before create this"
- self.rank = distributed.get_rank()
- self.world_size = distributed.get_world_size()
-
- self.dist_cross_entropy = DistCrossEntropy()
- self.embedding_size = embedding_size
- self.sample_rate: float = sample_rate
- self.fp16 = fp16
- self.num_local: int = num_classes // self.world_size + int(self.rank < num_classes % self.world_size)
- self.class_start: int = num_classes // self.world_size * self.rank + min(
- self.rank, num_classes % self.world_size
- )
- self.num_sample: int = int(self.sample_rate * self.num_local)
- self.last_batch_size: int = 0
- self.weight: torch.Tensor
- self.weight_mom: torch.Tensor
- self.weight_activated: torch.nn.Parameter
- self.weight_activated_mom: torch.Tensor
- self.is_updated: bool = True
- self.init_weight_update: bool = True
-
- if self.sample_rate < 1:
- self.register_buffer("weight", tensor=torch.normal(0, 0.01, (self.num_local, embedding_size)))
- self.register_buffer("weight_mom", tensor=torch.zeros_like(self.weight))
- self.register_parameter("weight_activated", param=torch.nn.Parameter(torch.empty(0, 0)))
- self.register_buffer("weight_activated_mom", tensor=torch.empty(0, 0))
- self.register_buffer("weight_index", tensor=torch.empty(0, 0))
- else:
- self.weight_activated = torch.nn.Parameter(torch.normal(0, 0.01, (self.num_local, embedding_size)))
-
- # margin_loss
- if isinstance(margin_loss, Callable):
- self.margin_softmax = margin_loss
- else:
- raise
-
- @torch.no_grad()
- def sample(self, labels: torch.Tensor, index_positive: torch.Tensor, optimizer: torch.optim.Optimizer):
- """
- This functions will change the value of labels
-
- Parameters:
- -----------
- labels: torch.Tensor
- pass
- index_positive: torch.Tensor
- pass
- optimizer: torch.optim.Optimizer
- pass
- """
- positive = torch.unique(labels[index_positive], sorted=True).cuda()
- if self.num_sample - positive.size(0) >= 0:
- perm = torch.rand(size=[self.num_local]).cuda()
- perm[positive] = 2.0
- index = torch.topk(perm, k=self.num_sample)[1].cuda()
- index = index.sort()[0].cuda()
- else:
- index = positive
- self.weight_index = index
-
- labels[index_positive] = torch.searchsorted(index, labels[index_positive])
-
- self.weight_activated = torch.nn.Parameter(self.weight[self.weight_index])
- self.weight_activated_mom = self.weight_mom[self.weight_index]
-
- if isinstance(optimizer, torch.optim.SGD):
- # TODO the params of partial fc must be last in the params list
- optimizer.state.pop(optimizer.param_groups[-1]["params"][0], None)
- optimizer.param_groups[-1]["params"][0] = self.weight_activated
- optimizer.state[self.weight_activated]["momentum_buffer"] = self.weight_activated_mom
- else:
- raise
-
- @torch.no_grad()
- def update(self):
- """partial weight to global"""
- if self.init_weight_update:
- self.init_weight_update = False
- return
-
- if self.sample_rate < 1:
- self.weight[self.weight_index] = self.weight_activated
- self.weight_mom[self.weight_index] = self.weight_activated_mom
-
- def forward(
- self,
- local_embeddings: torch.Tensor,
- local_labels: torch.Tensor,
- optimizer: torch.optim.Optimizer,
- ):
- """
- Parameters:
- ----------
- local_embeddings: torch.Tensor
- feature embeddings on each GPU(Rank).
- local_labels: torch.Tensor
- labels on each GPU(Rank).
-
- Returns:
- -------
- loss: torch.Tensor
- pass
- """
- local_labels.squeeze_()
- local_labels = local_labels.long()
- self.update()
-
- batch_size = local_embeddings.size(0)
- if self.last_batch_size == 0:
- self.last_batch_size = batch_size
- assert self.last_batch_size == batch_size, "last batch size do not equal current batch size: {} vs {}".format(
- self.last_batch_size, batch_size
- )
-
- _gather_embeddings = [torch.zeros((batch_size, self.embedding_size)).cuda() for _ in range(self.world_size)]
- _gather_labels = [torch.zeros(batch_size).long().cuda() for _ in range(self.world_size)]
- _list_embeddings = AllGather(local_embeddings, *_gather_embeddings)
- distributed.all_gather(_gather_labels, local_labels)
-
- embeddings = torch.cat(_list_embeddings)
- labels = torch.cat(_gather_labels)
-
- labels = labels.view(-1, 1)
- index_positive = (self.class_start <= labels) & (labels < self.class_start + self.num_local)
- labels[~index_positive] = -1
- labels[index_positive] -= self.class_start
-
- if self.sample_rate < 1:
- self.sample(labels, index_positive, optimizer)
-
- with torch.cuda.amp.autocast(self.fp16):
- norm_embeddings = normalize(embeddings)
- norm_weight_activated = normalize(self.weight_activated)
- logits = linear(norm_embeddings, norm_weight_activated)
- if self.fp16:
- logits = logits.float()
- logits = logits.clamp(-1, 1)
-
- logits = self.margin_softmax(logits, labels)
- loss = self.dist_cross_entropy(logits, labels)
- return loss
-
- def state_dict(self, destination=None, prefix="", keep_vars=False):
- if destination is None:
- destination = collections.OrderedDict()
- destination._metadata = collections.OrderedDict()
-
- for name, module in self._modules.items():
- if module is not None:
- module.state_dict(destination, prefix + name + ".", keep_vars=keep_vars)
- if self.sample_rate < 1:
- destination["weight"] = self.weight.detach()
- else:
- destination["weight"] = self.weight_activated.data.detach()
- return destination
-
- def load_state_dict(self, state_dict, strict: bool = True):
- if self.sample_rate < 1:
- self.weight = state_dict["weight"].to(self.weight.device)
- self.weight_mom.zero_()
- self.weight_activated.data.zero_()
- self.weight_activated_mom.zero_()
- self.weight_index.zero_()
- else:
- self.weight_activated.data = state_dict["weight"].to(self.weight_activated.data.device)
-
-
-class PartialFCAdamW(torch.nn.Module):
- def __init__(
- self,
- margin_loss: Callable,
- embedding_size: int,
- num_classes: int,
- sample_rate: float = 1.0,
- fp16: bool = False,
- ):
- """
- Paramenters:
- -----------
- embedding_size: int
- The dimension of embedding, required
- num_classes: int
- Total number of classes, required
- sample_rate: float
- The rate of negative centers participating in the calculation, default is 1.0.
- """
- super(PartialFCAdamW, self).__init__()
- assert distributed.is_initialized(), "must initialize distributed before create this"
- self.rank = distributed.get_rank()
- self.world_size = distributed.get_world_size()
-
- self.dist_cross_entropy = DistCrossEntropy()
- self.embedding_size = embedding_size
- self.sample_rate: float = sample_rate
- self.fp16 = fp16
- self.num_local: int = num_classes // self.world_size + int(self.rank < num_classes % self.world_size)
- self.class_start: int = num_classes // self.world_size * self.rank + min(
- self.rank, num_classes % self.world_size
- )
- self.num_sample: int = int(self.sample_rate * self.num_local)
- self.last_batch_size: int = 0
- self.weight: torch.Tensor
- self.weight_exp_avg: torch.Tensor
- self.weight_exp_avg_sq: torch.Tensor
- self.weight_activated: torch.nn.Parameter
- self.weight_activated_exp_avg: torch.Tensor
- self.weight_activated_exp_avg_sq: torch.Tensor
-
- self.is_updated: bool = True
- self.init_weight_update: bool = True
-
- if self.sample_rate < 1:
- self.register_buffer("weight", tensor=torch.normal(0, 0.01, (self.num_local, embedding_size)))
- self.register_buffer("weight_exp_avg", tensor=torch.zeros_like(self.weight))
- self.register_buffer("weight_exp_avg_sq", tensor=torch.zeros_like(self.weight))
- self.register_parameter("weight_activated", param=torch.nn.Parameter(torch.empty(0, 0)))
- self.register_buffer("weight_activated_exp_avg", tensor=torch.empty(0, 0))
- self.register_buffer("weight_activated_exp_avg_sq", tensor=torch.empty(0, 0))
- else:
- self.weight_activated = torch.nn.Parameter(torch.normal(0, 0.01, (self.num_local, embedding_size)))
- self.step = 0
-
- if isinstance(margin_loss, Callable):
- self.margin_softmax = margin_loss
- else:
- raise
-
- @torch.no_grad()
- def sample(self, labels, index_positive, optimizer):
- self.step += 1
- positive = torch.unique(labels[index_positive], sorted=True).cuda()
- if self.num_sample - positive.size(0) >= 0:
- perm = torch.rand(size=[self.num_local]).cuda()
- perm[positive] = 2.0
- index = torch.topk(perm, k=self.num_sample)[1].cuda()
- index = index.sort()[0].cuda()
- else:
- index = positive
- self.weight_index = index
- labels[index_positive] = torch.searchsorted(index, labels[index_positive])
- self.weight_activated = torch.nn.Parameter(self.weight[self.weight_index])
- self.weight_activated_exp_avg = self.weight_exp_avg[self.weight_index]
- self.weight_activated_exp_avg_sq = self.weight_exp_avg_sq[self.weight_index]
-
- if isinstance(optimizer, (torch.optim.Adam, torch.optim.AdamW)):
- # TODO the params of partial fc must be last in the params list
- optimizer.state.pop(optimizer.param_groups[-1]["params"][0], None)
- optimizer.param_groups[-1]["params"][0] = self.weight_activated
- optimizer.state[self.weight_activated]["exp_avg"] = self.weight_activated_exp_avg
- optimizer.state[self.weight_activated]["exp_avg_sq"] = self.weight_activated_exp_avg_sq
- optimizer.state[self.weight_activated]["step"] = self.step
- else:
- raise
-
- @torch.no_grad()
- def update(self):
- """partial weight to global"""
- if self.init_weight_update:
- self.init_weight_update = False
- return
-
- if self.sample_rate < 1:
- self.weight[self.weight_index] = self.weight_activated
- self.weight_exp_avg[self.weight_index] = self.weight_activated_exp_avg
- self.weight_exp_avg_sq[self.weight_index] = self.weight_activated_exp_avg_sq
-
- def forward(
- self,
- local_embeddings: torch.Tensor,
- local_labels: torch.Tensor,
- optimizer: torch.optim.Optimizer,
- ):
- """
- Parameters:
- ----------
- local_embeddings: torch.Tensor
- feature embeddings on each GPU(Rank).
- local_labels: torch.Tensor
- labels on each GPU(Rank).
-
- Returns:
- -------
- loss: torch.Tensor
- pass
- """
- local_labels.squeeze_()
- local_labels = local_labels.long()
- self.update()
-
- batch_size = local_embeddings.size(0)
- if self.last_batch_size == 0:
- self.last_batch_size = batch_size
- assert self.last_batch_size == batch_size, "last batch size do not equal current batch size: {} vs {}".format(
- self.last_batch_size, batch_size
- )
-
- _gather_embeddings = [torch.zeros((batch_size, self.embedding_size)).cuda() for _ in range(self.world_size)]
- _gather_labels = [torch.zeros(batch_size).long().cuda() for _ in range(self.world_size)]
- _list_embeddings = AllGather(local_embeddings, *_gather_embeddings)
- distributed.all_gather(_gather_labels, local_labels)
-
- embeddings = torch.cat(_list_embeddings)
- labels = torch.cat(_gather_labels)
-
- labels = labels.view(-1, 1)
- index_positive = (self.class_start <= labels) & (labels < self.class_start + self.num_local)
- labels[~index_positive] = -1
- labels[index_positive] -= self.class_start
-
- if self.sample_rate < 1:
- self.sample(labels, index_positive, optimizer)
-
- with torch.cuda.amp.autocast(self.fp16):
- norm_embeddings = normalize(embeddings)
- norm_weight_activated = normalize(self.weight_activated)
- logits = linear(norm_embeddings, norm_weight_activated)
- if self.fp16:
- logits = logits.float()
- logits = logits.clamp(-1, 1)
-
- logits = self.margin_softmax(logits, labels)
- loss = self.dist_cross_entropy(logits, labels)
- return loss
-
- def state_dict(self, destination=None, prefix="", keep_vars=False):
- if destination is None:
- destination = collections.OrderedDict()
- destination._metadata = collections.OrderedDict()
-
- for name, module in self._modules.items():
- if module is not None:
- module.state_dict(destination, prefix + name + ".", keep_vars=keep_vars)
- if self.sample_rate < 1:
- destination["weight"] = self.weight.detach()
- else:
- destination["weight"] = self.weight_activated.data.detach()
- return destination
-
- def load_state_dict(self, state_dict, strict: bool = True):
- if self.sample_rate < 1:
- self.weight = state_dict["weight"].to(self.weight.device)
- self.weight_exp_avg.zero_()
- self.weight_exp_avg_sq.zero_()
- self.weight_activated.data.zero_()
- self.weight_activated_exp_avg.zero_()
- self.weight_activated_exp_avg_sq.zero_()
- else:
- self.weight_activated.data = state_dict["weight"].to(self.weight_activated.data.device)
-
-
-class DistCrossEntropyFunc(torch.autograd.Function):
- """
- CrossEntropy loss is calculated in parallel, allreduce denominator into single gpu and calculate softmax.
- Implemented of ArcFace (https://arxiv.org/pdf/1801.07698v1.pdf):
- """
-
- @staticmethod
- def forward(ctx, logits: torch.Tensor, label: torch.Tensor):
- """ """
- batch_size = logits.size(0)
- # for numerical stability
- max_logits, _ = torch.max(logits, dim=1, keepdim=True)
- # local to global
- distributed.all_reduce(max_logits, distributed.ReduceOp.MAX)
- logits.sub_(max_logits)
- logits.exp_()
- sum_logits_exp = torch.sum(logits, dim=1, keepdim=True)
- # local to global
- distributed.all_reduce(sum_logits_exp, distributed.ReduceOp.SUM)
- logits.div_(sum_logits_exp)
- index = torch.where(label != -1)[0]
- # loss
- loss = torch.zeros(batch_size, 1, device=logits.device)
- loss[index] = logits[index].gather(1, label[index])
- distributed.all_reduce(loss, distributed.ReduceOp.SUM)
- ctx.save_for_backward(index, logits, label)
- return loss.clamp_min_(1e-30).log_().mean() * (-1)
-
- @staticmethod
- def backward(ctx, loss_gradient):
- """
- Args:
- loss_grad (torch.Tensor): gradient backward by last layer
- Returns:
- gradients for each input in forward function
- `None` gradients for one-hot label
- """
- (
- index,
- logits,
- label,
- ) = ctx.saved_tensors
- batch_size = logits.size(0)
- one_hot = torch.zeros(size=[index.size(0), logits.size(1)], device=logits.device)
- one_hot.scatter_(1, label[index], 1)
- logits[index] -= one_hot
- logits.div_(batch_size)
- return logits * loss_gradient.item(), None
-
-
-class DistCrossEntropy(torch.nn.Module):
- def __init__(self):
- super(DistCrossEntropy, self).__init__()
-
- def forward(self, logit_part, label_part):
- return DistCrossEntropyFunc.apply(logit_part, label_part)
-
-
-class AllGatherFunc(torch.autograd.Function):
- """AllGather op with gradient backward"""
-
- @staticmethod
- def forward(ctx, tensor, *gather_list):
- gather_list = list(gather_list)
- distributed.all_gather(gather_list, tensor)
- return tuple(gather_list)
-
- @staticmethod
- def backward(ctx, *grads):
- grad_list = list(grads)
- rank = distributed.get_rank()
- grad_out = grad_list[rank]
-
- dist_ops = [
- distributed.reduce(grad_out, rank, distributed.ReduceOp.SUM, async_op=True)
- if i == rank
- else distributed.reduce(grad_list[i], i, distributed.ReduceOp.SUM, async_op=True)
- for i in range(distributed.get_world_size())
- ]
- for _op in dist_ops:
- _op.wait()
-
- grad_out *= len(grad_list) # cooperate with distributed loss function
- return (grad_out, *[None for _ in range(len(grad_list))])
-
-
-AllGather = AllGatherFunc.apply
diff --git a/spaces/inamXcontru/PoeticTTS/Adobe Scan PDF Business Card Scanner With OCR V19.10.01 [Latest].md b/spaces/inamXcontru/PoeticTTS/Adobe Scan PDF Business Card Scanner With OCR V19.10.01 [Latest].md
deleted file mode 100644
index 5f678cbd226ebac7b709895760ae494d01c59468..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Adobe Scan PDF Business Card Scanner With OCR V19.10.01 [Latest].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Adobe Scan: PDF Business Card Scanner with OCR v19.10.01 [Latest]
-
-2.78. Sole ulcer. 3. 2.00. 12. 3.20. Abscess. 0. 0. 6. 3.83. Double sole. 0. ... airbag universal repair 3.8, airbag universal repair, airbag universal ... 4d29de3e1b
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/Arroway Textures Collection Torrent REPACK How to Create Realistic and Stunning Wood Materials for Your 3D Projects.md b/spaces/inamXcontru/PoeticTTS/Arroway Textures Collection Torrent REPACK How to Create Realistic and Stunning Wood Materials for Your 3D Projects.md
deleted file mode 100644
index c6150cc4461228f21330dd9ce6f9458e3d81d9cb..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Arroway Textures Collection Torrent REPACK How to Create Realistic and Stunning Wood Materials for Your 3D Projects.md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
The theme of this collection is natural stone masonry and pavement. It contains 60 unique multi-layered hi-res textures (each consisting of diffuse, displacement and specularity map), covering a total area of over 1700m².
This collection contains textures of the following categories: fieldstone (walls and pavement of rough, irregular shaped stones) cobblestone (walls and pavement of smoother and more regular shaped stones) weathered (weather-beaten stone walls or pavement) classic (classic stone blocks or slabs for wall and floor) modern (natural stone slabs in modern patterns)
-
Our original Wood Veneers series was released between 2008 and 2010 to great success. It is still the most complete and, in our opinion, best collection of wood textures on the market. But a lot has changed since then in the 3D visualization business. Unbiased rendering gained market dominance which lead to a much greater focus on physical realism in general. Even in real-time visualization, physically based rendering is now a common approach.
-
We have also learned a lot in the past years and developed new methods and techniques to capture materials and create textures that allow for photo-real rendering results. We therefore decided to use that knowledge and experience to produce an updated version of our veneer collection; to bring the textures up-to-date with our current workflow and the standard of quality for which we aim.
-
We are therefore especially proud to present this first installment of our brand-new Design|Craft series: A collection of high-quality leather textures that hopefully will make the life of anyone doing interior visualizations a bit easier.
-
-
Arroway Textures Concrete is a simple solution for designers and architectures to preview the different type of texture for their projects and chose the right one. Designers can also provide capable textures to their clients as an option for choosing to go with which one. Each of the textures includes 3 Maps which are Spectular, Bumps, and Diffuse. This package includes collection contains 50 multi-layered high-resolution textures with the 3 maps. All the texture maps delivered with lossless compressed PNG files, maps tile perfectly, print-friendly, and more. You can also download DS SOLIDWORKS PCB 2016.
-
This collection contains 20 high-res textures for various common fiberboard and particleboard materials. We have chosen materials that have an interesting and characteristic appearance, while also being very versatile in use, such as MDF, Hardboard, Softboard, OSB, chipboard, as well as cork.
-
The manufacturing of wood veneers takes great skill as well as experience. Similarly, the creation of textures out of such veneers is a very time and resources consuming task. The result is an indispensable collection which is unique in its versatility and quality.
-
This collection offers 50 fabric textures within a broad range of types and styles. We focused mostly on standard types to create textures with great versatility that can be easily customized for various purposes.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inamXcontru/PoeticTTS/Codec Video Sony Vegas 11 Serial Number Where to Find and How to Use It.md b/spaces/inamXcontru/PoeticTTS/Codec Video Sony Vegas 11 Serial Number Where to Find and How to Use It.md
deleted file mode 100644
index d3808aa7cec4b031627166db6363cd9ddfecbc03..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Codec Video Sony Vegas 11 Serial Number Where to Find and How to Use It.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-
Sony Vegas Pro 13 Crack serial number is video professional that gets exactly the right tool to enhance their recording feature. This software offers the whole range of features, and it can cope with the variety of formats even from fewer HD resolution and new devices. It supports real-time performance and renders times with Intel Quick Sync Video support via MAGIX AVC codec and secondly it supports modern graphics card via MAGIX AVC codec. The Picture-in-picture plugin gives you real-time controls on the preview of windows for precise placement, movement, and sizing of the video. Moreover, it has included one of the most demanded features of File swapping. File-swapping makes it is possible to work with one set of files like lower resolution proxy files. It is perfect for times when user editing gets bogged down up to 4K files.
-
-Part research-based, part speculative, part guidebook, and part nostalgia-inducing, Brandscapes is an example of what the authors call a "brandscape" — an expression that refers to an environment, not necessarily physical, but consisting of a set of interrelated ideas, values, and principles that shape a business' or brand's cultural identity. Hence, Brandscapes is about ideas (and their interplay with physical environments), not merely architectural subjects (like facades, architecture, lighting, and materials).
-
-The authors are Simona Terzini and Fabrizio Accornero, two passionate architects and designers who are experienced in the field of cultural design and brand development, and this book is their first for-profit endeavor.
-
-Brandscapes presents 24 case studies that reveal the relation between design, architecture, and culture, especially focusing on the "brandscape". The case studies, classified into five distinct parts, . . . present a variety of innovative and exemplary concepts in different areas of design, architecture, and cultural practices.
-
-Part I: Making Brands
-
-Introducing the first section of the book, the authors, along with two of their colleagues, Philippe Schatz and Ulrich Wicker, present the case of a "make-ready" company. They . . . introduce the concept of make-ready, which consists of six components: a pioneer, a meme, an icon, a mapping, a trend, and an institution — six very interesting concepts in the context of cultural design, especially branding.
-
-Then, . . . the authors, having previously presented a . . . pioneer (the visionary founder who is the first to bring a new idea into existence), proceed to the analysis of the icon.
-
-The authors introduce us to the concept of "templates". Basically, "templates" are general, standard and "necessary" elements of a brand that need to be respected in order to build a successful product. The authors analyze the "iconic status" of their client's business, and they invite the reader to walk into the world of this particular company, taking into consideration all of the brand's icons and icons that influence the company.
-
-Part II: Reinventing Brands
-
-The second section of the book starts with two very popular phenomena: the dot-com bubble and the Arab spring.
-
-We are 4fefd39f24
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Autodata 3 44 En Francais.md b/spaces/inreVtussa/clothingai/Examples/Autodata 3 44 En Francais.md
deleted file mode 100644
index 24caee03f62805f31d76a345c4fdb7ad4572cb63..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Autodata 3 44 En Francais.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
"
-
-gr_interface = gr.Interface(
- infer,
- input,
- output,
- examples=examples,
- allow_flagging=False,
- analytics_enabled=False,
- title=title,
- description=description,
- article=article).launch(enable_queue=True, debug=True)
\ No newline at end of file
diff --git a/spaces/keras-io/molecular-property-prediction/app.py b/spaces/keras-io/molecular-property-prediction/app.py
deleted file mode 100644
index 876bf55d1e051e876f0d7b8b777a0ffa7610beec..0000000000000000000000000000000000000000
--- a/spaces/keras-io/molecular-property-prediction/app.py
+++ /dev/null
@@ -1,211 +0,0 @@
-from huggingface_hub import from_pretrained_keras
-import gradio as gr
-from rdkit import Chem, RDLogger
-from rdkit.Chem.Draw import MolsToGridImage
-import numpy as np
-import tensorflow as tf
-from tensorflow import keras
-import pandas as pd
-
-# Config
-class Featurizer:
- def __init__(self, allowable_sets):
- self.dim = 0
- self.features_mapping = {}
- for k, s in allowable_sets.items():
- s = sorted(list(s))
- self.features_mapping[k] = dict(zip(s, range(self.dim, len(s) + self.dim)))
- self.dim += len(s)
-
- def encode(self, inputs):
- output = np.zeros((self.dim,))
- for name_feature, feature_mapping in self.features_mapping.items():
- feature = getattr(self, name_feature)(inputs)
- if feature not in feature_mapping:
- continue
- output[feature_mapping[feature]] = 1.0
- return output
-
-
-class AtomFeaturizer(Featurizer):
- def __init__(self, allowable_sets):
- super().__init__(allowable_sets)
-
- def symbol(self, atom):
- return atom.GetSymbol()
-
- def n_valence(self, atom):
- return atom.GetTotalValence()
-
- def n_hydrogens(self, atom):
- return atom.GetTotalNumHs()
-
- def hybridization(self, atom):
- return atom.GetHybridization().name.lower()
-
-
-class BondFeaturizer(Featurizer):
- def __init__(self, allowable_sets):
- super().__init__(allowable_sets)
- self.dim += 1
-
- def encode(self, bond):
- output = np.zeros((self.dim,))
- if bond is None:
- output[-1] = 1.0
- return output
- output = super().encode(bond)
- return output
-
- def bond_type(self, bond):
- return bond.GetBondType().name.lower()
-
- def conjugated(self, bond):
- return bond.GetIsConjugated()
-
-
-atom_featurizer = AtomFeaturizer(
- allowable_sets={
- "symbol": {"B", "Br", "C", "Ca", "Cl", "F", "H", "I", "N", "Na", "O", "P", "S"},
- "n_valence": {0, 1, 2, 3, 4, 5, 6},
- "n_hydrogens": {0, 1, 2, 3, 4},
- "hybridization": {"s", "sp", "sp2", "sp3"},
- }
-)
-
-bond_featurizer = BondFeaturizer(
- allowable_sets={
- "bond_type": {"single", "double", "triple", "aromatic"},
- "conjugated": {True, False},
- }
-)
-
-def molecule_from_smiles(smiles):
- # MolFromSmiles(m, sanitize=True) should be equivalent to
- # MolFromSmiles(m, sanitize=False) -> SanitizeMol(m) -> AssignStereochemistry(m, ...)
- molecule = Chem.MolFromSmiles(smiles, sanitize=False)
-
- # If sanitization is unsuccessful, catch the error, and try again without
- # the sanitization step that caused the error
- flag = Chem.SanitizeMol(molecule, catchErrors=True)
- if flag != Chem.SanitizeFlags.SANITIZE_NONE:
- Chem.SanitizeMol(molecule, sanitizeOps=Chem.SanitizeFlags.SANITIZE_ALL ^ flag)
-
- Chem.AssignStereochemistry(molecule, cleanIt=True, force=True)
- return molecule
-
-
-def graph_from_molecule(molecule):
- # Initialize graph
- atom_features = []
- bond_features = []
- pair_indices = []
-
- for atom in molecule.GetAtoms():
- atom_features.append(atom_featurizer.encode(atom))
-
- # Add self-loops
- pair_indices.append([atom.GetIdx(), atom.GetIdx()])
- bond_features.append(bond_featurizer.encode(None))
-
- for neighbor in atom.GetNeighbors():
- bond = molecule.GetBondBetweenAtoms(atom.GetIdx(), neighbor.GetIdx())
- pair_indices.append([atom.GetIdx(), neighbor.GetIdx()])
- bond_features.append(bond_featurizer.encode(bond))
-
- return np.array(atom_features), np.array(bond_features), np.array(pair_indices)
-
-
-def graphs_from_smiles(smiles_list):
- # Initialize graphs
- atom_features_list = []
- bond_features_list = []
- pair_indices_list = []
-
- for smiles in smiles_list:
- molecule = molecule_from_smiles(smiles)
- atom_features, bond_features, pair_indices = graph_from_molecule(molecule)
-
- atom_features_list.append(atom_features)
- bond_features_list.append(bond_features)
- pair_indices_list.append(pair_indices)
-
- # Convert lists to ragged tensors for tf.data.Dataset later on
- return (
- tf.ragged.constant(atom_features_list, dtype=tf.float32),
- tf.ragged.constant(bond_features_list, dtype=tf.float32),
- tf.ragged.constant(pair_indices_list, dtype=tf.int64),
- )
-
-
-def prepare_batch(x_batch, y_batch):
- """Merges (sub)graphs of batch into a single global (disconnected) graph
- """
-
- atom_features, bond_features, pair_indices = x_batch
-
- # Obtain number of atoms and bonds for each graph (molecule)
- num_atoms = atom_features.row_lengths()
- num_bonds = bond_features.row_lengths()
-
- # Obtain partition indices (molecule_indicator), which will be used to
- # gather (sub)graphs from global graph in model later on
- molecule_indices = tf.range(len(num_atoms))
- molecule_indicator = tf.repeat(molecule_indices, num_atoms)
-
- # Merge (sub)graphs into a global (disconnected) graph. Adding 'increment' to
- # 'pair_indices' (and merging ragged tensors) actualizes the global graph
- gather_indices = tf.repeat(molecule_indices[:-1], num_bonds[1:])
- increment = tf.cumsum(num_atoms[:-1])
- increment = tf.pad(tf.gather(increment, gather_indices), [(num_bonds[0], 0)])
- pair_indices = pair_indices.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
- pair_indices = pair_indices + increment[:, tf.newaxis]
- atom_features = atom_features.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
- bond_features = bond_features.merge_dims(outer_axis=0, inner_axis=1).to_tensor()
-
- return (atom_features, bond_features, pair_indices, molecule_indicator), y_batch
-
-
-def MPNNDataset(X, y, batch_size=32, shuffle=False):
- dataset = tf.data.Dataset.from_tensor_slices((X, (y)))
- if shuffle:
- dataset = dataset.shuffle(1024)
- return dataset.batch(batch_size).map(prepare_batch, -1).prefetch(-1)
-
-
-model = from_pretrained_keras("keras-io/MPNN-for-molecular-property-prediction")
-
-
-def predict(smiles, label):
- molecules = [molecule_from_smiles(smiles)]
- input = graphs_from_smiles([smiles])
- label = pd.Series([label])
- test_dataset = MPNNDataset(input, label)
- y_pred = tf.squeeze(model.predict(test_dataset), axis=1)
- legends = [f"y_true/y_pred = {label[i]}/{y_pred[i]:.2f}" for i in range(len(label))]
- MolsToGridImage(molecules, molsPerRow=1, legends=legends, returnPNG=False, subImgSize=(650, 650)).save("img.png")
- return 'img.png'
-
-inputs = [
- gr.Textbox(label='Smiles of molecular'),
- gr.Textbox(label='Molecular permeability')
-]
-
-examples = [
- ["CO/N=C(C(=O)N[C@H]1[C@H]2SCC(=C(N2C1=O)C(O)=O)C)/c3csc(N)n3", 0],
- ["[C@H]37[C@H]2[C@@]([C@](C(COC(C1=CC(=CC=C1)[S](O)(=O)=O)=O)=O)(O)[C@@H](C2)C)(C[C@@H]([C@@H]3[C@@]4(C(=CC5=C(C4)C=N[N]5C6=CC=CC=C6)C(=C7)C)C)O)C", 1],
- ["CNCCCC2(C)C(=O)N(c1ccccc1)c3ccccc23", 1],
- ["O.N[C@@H](C(=O)NC1C2CCC(=C(N2C1=O)C(O)=O)Cl)c3ccccc3", 0],
- ["[C@@]4([C@@]3([C@H]([C@H]2[C@@H]([C@@]1(C(=CC(=O)CC1)CC2)C)[C@H](C3)O)CC4)C)(C(COC(C)=O)=O)OC(CC)=O", 1],
- ["[C@]34([C@H](C2[C@@](F)([C@@]1(C(=CC(=O)C=C1)[C@@H](F)C2)C)[C@@H](O)C3)C[C@H]5OC(O[C@@]45C(=O)COC(=O)C6CC6)(C)C)C", 1]
-
-]
-gr.Interface(
- fn=predict,
- title="Predict blood-brain barrier permeability of molecular",
- description = "Message-passing neural network (MPNN) for molecular property prediction",
- inputs=inputs,
- examples=examples,
- outputs="image",
- article = "Author: Vu Minh Chien. Based on the keras example from Alexander Kensert",
-).launch(debug=False, enable_queue=True)
\ No newline at end of file
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/run.sh b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/run.sh
deleted file mode 100644
index 61af4b4950eb11334e55362e3e3c5e2796979a01..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/run.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
-ps -ef | grep "train" | grep -v grep | awk '{print "kill -9 "$2}' | sh
diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/README-CN.md b/spaces/kira4424/Tacotron-zero-short-voice-clone/README-CN.md
deleted file mode 100644
index 738b37f21a840026f64fd5bf699b013f459108a4..0000000000000000000000000000000000000000
--- a/spaces/kira4424/Tacotron-zero-short-voice-clone/README-CN.md
+++ /dev/null
@@ -1,230 +0,0 @@
-## 实时语音克隆 - 中文/普通话
-
-
-[](http://choosealicense.com/licenses/mit/)
-
-### [English](README.md) | 中文
-
-### [DEMO VIDEO](https://www.bilibili.com/video/BV17Q4y1B7mY/) | [Wiki教程](https://github.com/babysor/MockingBird/wiki/Quick-Start-(Newbie)) | [训练教程](https://vaj2fgg8yn.feishu.cn/docs/doccn7kAbr3SJz0KM0SIDJ0Xnhd)
-
-## 特性
-🌍 **中文** 支持普通话并使用多种中文数据集进行测试:aidatatang_200zh, magicdata, aishell3, biaobei, MozillaCommonVoice, data_aishell 等
-
-🤩 **PyTorch** 适用于 pytorch,已在 1.9.0 版本(最新于 2021 年 8 月)中测试,GPU Tesla T4 和 GTX 2060
-
-🌍 **Windows + Linux** 可在 Windows 操作系统和 linux 操作系统中运行(苹果系统M1版也有社区成功运行案例)
-
-🤩 **Easy & Awesome** 仅需下载或新训练合成器(synthesizer)就有良好效果,复用预训练的编码器/声码器,或实时的HiFi-GAN作为vocoder
-
-🌍 **Webserver Ready** 可伺服你的训练结果,供远程调用
-
-### 进行中的工作
-* GUI/客户端大升级与合并
-[X] 初始化框架 `./mkgui` (基于streamlit + fastapi)和 [技术设计](https://vaj2fgg8yn.feishu.cn/docs/doccnvotLWylBub8VJIjKzoEaee)
-[X] 增加 Voice Cloning and Conversion的演示页面
-[X] 增加Voice Conversion的预处理preprocessing 和训练 training 页面
-[ ] 增加其他的的预处理preprocessing 和训练 training 页面
-* 模型后端基于ESPnet2升级
-
-
-## 开始
-### 1. 安装要求
-> 按照原始存储库测试您是否已准备好所有环境。
-运行工具箱(demo_toolbox.py)需要 **Python 3.7 或更高版本** 。
-
-* 安装 [PyTorch](https://pytorch.org/get-started/locally/)。
-> 如果在用 pip 方式安装的时候出现 `ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cu102 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)` 这个错误可能是 python 版本过低,3.9 可以安装成功
-* 安装 [ffmpeg](https://ffmpeg.org/download.html#get-packages)。
-* 运行`pip install -r requirements.txt` 来安装剩余的必要包。
-* 安装 webrtcvad `pip install webrtcvad-wheels`。
-
-### 2. 准备预训练模型
-考虑训练您自己专属的模型或者下载社区他人训练好的模型:
-> 近期创建了[知乎专题](https://www.zhihu.com/column/c_1425605280340504576) 将不定期更新炼丹小技巧or心得,也欢迎提问
-#### 2.1 使用数据集自己训练encoder模型 (可选)
-
-* 进行音频和梅尔频谱图预处理:
-`python encoder_preprocess.py `
-使用`-d {dataset}` 指定数据集,支持 librispeech_other,voxceleb1,aidatatang_200zh,使用逗号分割处理多数据集。
-* 训练encoder: `python encoder_train.py my_run /SV2TTS/encoder`
-> 训练encoder使用了visdom。你可以加上`-no_visdom`禁用visdom,但是有可视化会更好。在单独的命令行/进程中运行"visdom"来启动visdom服务器。
-
-#### 2.2 使用数据集自己训练合成器模型(与2.3二选一)
-* 下载 数据集并解压:确保您可以访问 *train* 文件夹中的所有音频文件(如.wav)
-* 进行音频和梅尔频谱图预处理:
-`python pre.py -d {dataset} -n {number}`
-可传入参数:
-* `-d {dataset}` 指定数据集,支持 aidatatang_200zh, magicdata, aishell3, data_aishell, 不传默认为aidatatang_200zh
-* `-n {number}` 指定并行数,CPU 11770k + 32GB实测10没有问题
-> 假如你下载的 `aidatatang_200zh`文件放在D盘,`train`文件路径为 `D:\data\aidatatang_200zh\corpus\train` , 你的`datasets_root`就是 `D:\data\`
-
-* 训练合成器:
-`python synthesizer_train.py mandarin /SV2TTS/synthesizer`
-
-* 当您在训练文件夹 *synthesizer/saved_models/* 中看到注意线显示和损失满足您的需要时,请转到`启动程序`一步。
-
-#### 2.3使用社区预先训练好的合成器(与2.2二选一)
-> 当实在没有设备或者不想慢慢调试,可以使用社区贡献的模型(欢迎持续分享):
-
-| 作者 | 下载链接 | 效果预览 | 信息 |
-| --- | ----------- | ----- | ----- |
-| 作者 | https://pan.baidu.com/s/1iONvRxmkI-t1nHqxKytY3g [百度盘链接](https://pan.baidu.com/s/1iONvRxmkI-t1nHqxKytY3g) 4j5d | | 75k steps 用3个开源数据集混合训练
-| 作者 | https://pan.baidu.com/s/1fMh9IlgKJlL2PIiRTYDUvw [百度盘链接](https://pan.baidu.com/s/1fMh9IlgKJlL2PIiRTYDUvw) 提取码:om7f | | 25k steps 用3个开源数据集混合训练, 切换到tag v0.0.1使用
-|@FawenYo | https://drive.google.com/file/d/1H-YGOUHpmqKxJ9FRc6vAjPuqQki24UbC/view?usp=sharing [百度盘链接](https://pan.baidu.com/s/1vSYXO4wsLyjnF3Unl-Xoxg) 提取码:1024 | [input](https://github.com/babysor/MockingBird/wiki/audio/self_test.mp3) [output](https://github.com/babysor/MockingBird/wiki/audio/export.wav) | 200k steps 台湾口音需切换到tag v0.0.1使用
-|@miven| https://pan.baidu.com/s/1PI-hM3sn5wbeChRryX-RCQ 提取码:2021 | https://www.bilibili.com/video/BV1uh411B7AD/ | 150k steps 注意:根据[issue](https://github.com/babysor/MockingBird/issues/37)修复 并切换到tag v0.0.1使用
-
-#### 2.4训练声码器 (可选)
-对效果影响不大,已经预置3款,如果希望自己训练可以参考以下命令。
-* 预处理数据:
-`python vocoder_preprocess.py -m `
-> ``替换为你的数据集目录,``替换为一个你最好的synthesizer模型目录,例如 *sythensizer\saved_models\xxx*
-
-
-* 训练wavernn声码器:
-`python vocoder_train.py `
-> ``替换为你想要的标识,同一标识再次训练时会延续原模型
-
-* 训练hifigan声码器:
-`python vocoder_train.py hifigan`
-> ``替换为你想要的标识,同一标识再次训练时会延续原模型
-* 训练fregan声码器:
-`python vocoder_train.py --config config.json fregan`
-> ``替换为你想要的标识,同一标识再次训练时会延续原模型
-* 将GAN声码器的训练切换为多GPU模式:修改GAN文件夹下.json文件中的"num_gpus"参数
-### 3. 启动程序或工具箱
-您可以尝试使用以下命令:
-
-### 3.1 启动Web程序(v2):
-`python web.py`
-运行成功后在浏览器打开地址, 默认为 `http://localhost:8080`
-> * 仅支持手动新录音(16khz), 不支持超过4MB的录音,最佳长度在5~15秒
-
-### 3.2 启动工具箱:
-`python demo_toolbox.py -d `
-> 请指定一个可用的数据集文件路径,如果有支持的数据集则会自动加载供调试,也同时会作为手动录制音频的存储目录。
-
-
-
-### 4. 番外:语音转换Voice Conversion(PPG based)
-想像柯南拿着变声器然后发出毛利小五郎的声音吗?本项目现基于PPG-VC,引入额外两个模块(PPG extractor + PPG2Mel), 可以实现变声功能。(文档不全,尤其是训练部分,正在努力补充中)
-#### 4.0 准备环境
-* 确保项目以上环境已经安装ok,运行`pip install espnet` 来安装剩余的必要包。
-* 下载以下模型 链接:https://pan.baidu.com/s/1bl_x_DHJSAUyN2fma-Q_Wg
-提取码:gh41
- * 24K采样率专用的vocoder(hifigan)到 *vocoder\saved_models\xxx*
- * 预训练的ppg特征encoder(ppg_extractor)到 *ppg_extractor\saved_models\xxx*
- * 预训练的PPG2Mel到 *ppg2mel\saved_models\xxx*
-
-#### 4.1 使用数据集自己训练PPG2Mel模型 (可选)
-
-* 下载aidatatang_200zh数据集并解压:确保您可以访问 *train* 文件夹中的所有音频文件(如.wav)
-* 进行音频和梅尔频谱图预处理:
-`python pre4ppg.py -d {dataset} -n {number}`
-可传入参数:
-* `-d {dataset}` 指定数据集,支持 aidatatang_200zh, 不传默认为aidatatang_200zh
-* `-n {number}` 指定并行数,CPU 11770k在8的情况下,需要运行12到18小时!待优化
-> 假如你下载的 `aidatatang_200zh`文件放在D盘,`train`文件路径为 `D:\data\aidatatang_200zh\corpus\train` , 你的`datasets_root`就是 `D:\data\`
-
-* 训练合成器, 注意在上一步先下载好`ppg2mel.yaml`, 修改里面的地址指向预训练好的文件夹:
-`python ppg2mel_train.py --config .\ppg2mel\saved_models\ppg2mel.yaml --oneshotvc `
-* 如果想要继续上一次的训练,可以通过`--load .\ppg2mel\saved_models\` 参数指定一个预训练模型文件。
-
-#### 4.2 启动工具箱VC模式
-您可以尝试使用以下命令:
-`python demo_toolbox.py -vc -d `
-> 请指定一个可用的数据集文件路径,如果有支持的数据集则会自动加载供调试,也同时会作为手动录制音频的存储目录。
-
-
-## 引用及论文
-> 该库一开始从仅支持英语的[Real-Time-Voice-Cloning](https://github.com/CorentinJ/Real-Time-Voice-Cloning) 分叉出来的,鸣谢作者。
-
-| URL | Designation | 标题 | 实现源码 |
-| --- | ----------- | ----- | --------------------- |
-| [1803.09017](https://arxiv.org/abs/1803.09017) | GlobalStyleToken (synthesizer)| Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis | 本代码库 |
-| [2010.05646](https://arxiv.org/abs/2010.05646) | HiFi-GAN (vocoder)| Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis | 本代码库 |
-| [2106.02297](https://arxiv.org/abs/2106.02297) | Fre-GAN (vocoder)| Fre-GAN: Adversarial Frequency-consistent Audio Synthesis | 本代码库 |
-|[**1806.04558**](https://arxiv.org/pdf/1806.04558.pdf) | SV2TTS | Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis | 本代码库 |
-|[1802.08435](https://arxiv.org/pdf/1802.08435.pdf) | WaveRNN (vocoder) | Efficient Neural Audio Synthesis | [fatchord/WaveRNN](https://github.com/fatchord/WaveRNN) |
-|[1703.10135](https://arxiv.org/pdf/1703.10135.pdf) | Tacotron (synthesizer) | Tacotron: Towards End-to-End Speech Synthesis | [fatchord/WaveRNN](https://github.com/fatchord/WaveRNN)
-|[1710.10467](https://arxiv.org/pdf/1710.10467.pdf) | GE2E (encoder)| Generalized End-To-End Loss for Speaker Verification | 本代码库 |
-
-## 常見問題(FQ&A)
-#### 1.數據集哪裡下載?
-| 数据集 | OpenSLR地址 | 其他源 (Google Drive, Baidu网盘等) |
-| --- | ----------- | ---------------|
-| aidatatang_200zh | [OpenSLR](http://www.openslr.org/62/) | [Google Drive](https://drive.google.com/file/d/110A11KZoVe7vy6kXlLb6zVPLb_J91I_t/view?usp=sharing) |
-| magicdata | [OpenSLR](http://www.openslr.org/68/) | [Google Drive (Dev set)](https://drive.google.com/file/d/1g5bWRUSNH68ycC6eNvtwh07nX3QhOOlo/view?usp=sharing) |
-| aishell3 | [OpenSLR](https://www.openslr.org/93/) | [Google Drive](https://drive.google.com/file/d/1shYp_o4Z0X0cZSKQDtFirct2luFUwKzZ/view?usp=sharing) |
-| data_aishell | [OpenSLR](https://www.openslr.org/33/) | |
-> 解壓 aidatatang_200zh 後,還需將 `aidatatang_200zh\corpus\train`下的檔案全選解壓縮
-
-#### 2.``是什麼意思?
-假如數據集路徑為 `D:\data\aidatatang_200zh`,那麼 ``就是 `D:\data`
-
-#### 3.訓練模型顯存不足
-訓練合成器時:將 `synthesizer/hparams.py`中的batch_size參數調小
-```
-//調整前
-tts_schedule = [(2, 1e-3, 20_000, 12), # Progressive training schedule
- (2, 5e-4, 40_000, 12), # (r, lr, step, batch_size)
- (2, 2e-4, 80_000, 12), #
- (2, 1e-4, 160_000, 12), # r = reduction factor (# of mel frames
- (2, 3e-5, 320_000, 12), # synthesized for each decoder iteration)
- (2, 1e-5, 640_000, 12)], # lr = learning rate
-//調整後
-tts_schedule = [(2, 1e-3, 20_000, 8), # Progressive training schedule
- (2, 5e-4, 40_000, 8), # (r, lr, step, batch_size)
- (2, 2e-4, 80_000, 8), #
- (2, 1e-4, 160_000, 8), # r = reduction factor (# of mel frames
- (2, 3e-5, 320_000, 8), # synthesized for each decoder iteration)
- (2, 1e-5, 640_000, 8)], # lr = learning rate
-```
-
-聲碼器-預處理數據集時:將 `synthesizer/hparams.py`中的batch_size參數調小
-```
-//調整前
-### Data Preprocessing
- max_mel_frames = 900,
- rescale = True,
- rescaling_max = 0.9,
- synthesis_batch_size = 16, # For vocoder preprocessing and inference.
-//調整後
-### Data Preprocessing
- max_mel_frames = 900,
- rescale = True,
- rescaling_max = 0.9,
- synthesis_batch_size = 8, # For vocoder preprocessing and inference.
-```
-
-聲碼器-訓練聲碼器時:將 `vocoder/wavernn/hparams.py`中的batch_size參數調小
-```
-//調整前
-# Training
-voc_batch_size = 100
-voc_lr = 1e-4
-voc_gen_at_checkpoint = 5
-voc_pad = 2
-
-//調整後
-# Training
-voc_batch_size = 6
-voc_lr = 1e-4
-voc_gen_at_checkpoint = 5
-voc_pad =2
-```
-
-#### 4.碰到`RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([70, 512]) from checkpoint, the shape in current model is torch.Size([75, 512]).`
-請參照 issue [#37](https://github.com/babysor/MockingBird/issues/37)
-
-#### 5.如何改善CPU、GPU佔用率?
-適情況調整batch_size參數來改善
-
-#### 6.發生 `頁面文件太小,無法完成操作`
-請參考這篇[文章](https://blog.csdn.net/qq_17755303/article/details/112564030),將虛擬內存更改為100G(102400),例如:档案放置D槽就更改D槽的虚拟内存
-
-#### 7.什么时候算训练完成?
-首先一定要出现注意力模型,其次是loss足够低,取决于硬件设备和数据集。拿本人的供参考,我的注意力是在 18k 步之后出现的,并且在 50k 步之后损失变得低于 0.4
-
-
-
-
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/unet.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/unet.py
deleted file mode 100644
index 82caa16a94c195c192a2a920fb7bc7e60f0f3ce3..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/unet.py
+++ /dev/null
@@ -1,429 +0,0 @@
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-from annotator.uniformer.mmcv.cnn import (UPSAMPLE_LAYERS, ConvModule, build_activation_layer,
- build_norm_layer, constant_init, kaiming_init)
-from annotator.uniformer.mmcv.runner import load_checkpoint
-from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm
-
-from annotator.uniformer.mmseg.utils import get_root_logger
-from ..builder import BACKBONES
-from ..utils import UpConvBlock
-
-
-class BasicConvBlock(nn.Module):
- """Basic convolutional block for UNet.
-
- This module consists of several plain convolutional layers.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- num_convs (int): Number of convolutional layers. Default: 2.
- stride (int): Whether use stride convolution to downsample
- the input feature map. If stride=2, it only uses stride convolution
- in the first convolutional layer to downsample the input feature
- map. Options are 1 or 2. Default: 1.
- dilation (int): Whether use dilated convolution to expand the
- receptive field. Set dilation rate of each convolutional layer and
- the dilation rate of the first convolutional layer is always 1.
- Default: 1.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_convs=2,
- stride=1,
- dilation=1,
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- dcn=None,
- plugins=None):
- super(BasicConvBlock, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
-
- self.with_cp = with_cp
- convs = []
- for i in range(num_convs):
- convs.append(
- ConvModule(
- in_channels=in_channels if i == 0 else out_channels,
- out_channels=out_channels,
- kernel_size=3,
- stride=stride if i == 0 else 1,
- dilation=1 if i == 0 else dilation,
- padding=1 if i == 0 else dilation,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
-
- self.convs = nn.Sequential(*convs)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.convs, x)
- else:
- out = self.convs(x)
- return out
-
-
-@UPSAMPLE_LAYERS.register_module()
-class DeconvModule(nn.Module):
- """Deconvolution upsample module in decoder for UNet (2X upsample).
-
- This module uses deconvolution to upsample feature map in the decoder
- of UNet.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- kernel_size (int): Kernel size of the convolutional layer. Default: 4.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- with_cp=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- *,
- kernel_size=4,
- scale_factor=2):
- super(DeconvModule, self).__init__()
-
- assert (kernel_size - scale_factor >= 0) and\
- (kernel_size - scale_factor) % 2 == 0,\
- f'kernel_size should be greater than or equal to scale_factor '\
- f'and (kernel_size - scale_factor) should be even numbers, '\
- f'while the kernel size is {kernel_size} and scale_factor is '\
- f'{scale_factor}.'
-
- stride = scale_factor
- padding = (kernel_size - scale_factor) // 2
- self.with_cp = with_cp
- deconv = nn.ConvTranspose2d(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding)
-
- norm_name, norm = build_norm_layer(norm_cfg, out_channels)
- activate = build_activation_layer(act_cfg)
- self.deconv_upsamping = nn.Sequential(deconv, norm, activate)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.deconv_upsamping, x)
- else:
- out = self.deconv_upsamping(x)
- return out
-
-
-@UPSAMPLE_LAYERS.register_module()
-class InterpConv(nn.Module):
- """Interpolation upsample module in decoder for UNet.
-
- This module uses interpolation to upsample feature map in the decoder
- of UNet. It consists of one interpolation upsample layer and one
- convolutional layer. It can be one interpolation upsample layer followed
- by one convolutional layer (conv_first=False) or one convolutional layer
- followed by one interpolation upsample layer (conv_first=True).
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- conv_first (bool): Whether convolutional layer or interpolation
- upsample layer first. Default: False. It means interpolation
- upsample layer followed by one convolutional layer.
- kernel_size (int): Kernel size of the convolutional layer. Default: 1.
- stride (int): Stride of the convolutional layer. Default: 1.
- padding (int): Padding of the convolutional layer. Default: 1.
- upsample_cfg (dict): Interpolation config of the upsample layer.
- Default: dict(
- scale_factor=2, mode='bilinear', align_corners=False).
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- with_cp=False,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- *,
- conv_cfg=None,
- conv_first=False,
- kernel_size=1,
- stride=1,
- padding=0,
- upsample_cfg=dict(
- scale_factor=2, mode='bilinear', align_corners=False)):
- super(InterpConv, self).__init__()
-
- self.with_cp = with_cp
- conv = ConvModule(
- in_channels,
- out_channels,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- upsample = nn.Upsample(**upsample_cfg)
- if conv_first:
- self.interp_upsample = nn.Sequential(conv, upsample)
- else:
- self.interp_upsample = nn.Sequential(upsample, conv)
-
- def forward(self, x):
- """Forward function."""
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(self.interp_upsample, x)
- else:
- out = self.interp_upsample(x)
- return out
-
-
-@BACKBONES.register_module()
-class UNet(nn.Module):
- """UNet backbone.
- U-Net: Convolutional Networks for Biomedical Image Segmentation.
- https://arxiv.org/pdf/1505.04597.pdf
-
- Args:
- in_channels (int): Number of input image channels. Default" 3.
- base_channels (int): Number of base channels of each stage.
- The output channels of the first stage. Default: 64.
- num_stages (int): Number of stages in encoder, normally 5. Default: 5.
- strides (Sequence[int 1 | 2]): Strides of each stage in encoder.
- len(strides) is equal to num_stages. Normally the stride of the
- first stage in encoder is 1. If strides[i]=2, it uses stride
- convolution to downsample in the correspondence encoder stage.
- Default: (1, 1, 1, 1, 1).
- enc_num_convs (Sequence[int]): Number of convolutional layers in the
- convolution block of the correspondence encoder stage.
- Default: (2, 2, 2, 2, 2).
- dec_num_convs (Sequence[int]): Number of convolutional layers in the
- convolution block of the correspondence decoder stage.
- Default: (2, 2, 2, 2).
- downsamples (Sequence[int]): Whether use MaxPool to downsample the
- feature map after the first stage of encoder
- (stages: [1, num_stages)). If the correspondence encoder stage use
- stride convolution (strides[i]=2), it will never use MaxPool to
- downsample, even downsamples[i-1]=True.
- Default: (True, True, True, True).
- enc_dilations (Sequence[int]): Dilation rate of each stage in encoder.
- Default: (1, 1, 1, 1, 1).
- dec_dilations (Sequence[int]): Dilation rate of each stage in decoder.
- Default: (1, 1, 1, 1).
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed. Default: False.
- conv_cfg (dict | None): Config dict for convolution layer.
- Default: None.
- norm_cfg (dict | None): Config dict for normalization layer.
- Default: dict(type='BN').
- act_cfg (dict | None): Config dict for activation layer in ConvModule.
- Default: dict(type='ReLU').
- upsample_cfg (dict): The upsample config of the upsample module in
- decoder. Default: dict(type='InterpConv').
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only. Default: False.
- dcn (bool): Use deformable convolution in convolutional layer or not.
- Default: None.
- plugins (dict): plugins for convolutional layers. Default: None.
-
- Notice:
- The input image size should be divisible by the whole downsample rate
- of the encoder. More detail of the whole downsample rate can be found
- in UNet._check_input_divisible.
-
- """
-
- def __init__(self,
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False,
- dcn=None,
- plugins=None):
- super(UNet, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
- assert len(strides) == num_stages, \
- 'The length of strides should be equal to num_stages, '\
- f'while the strides is {strides}, the length of '\
- f'strides is {len(strides)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_num_convs) == num_stages, \
- 'The length of enc_num_convs should be equal to num_stages, '\
- f'while the enc_num_convs is {enc_num_convs}, the length of '\
- f'enc_num_convs is {len(enc_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_num_convs) == (num_stages-1), \
- 'The length of dec_num_convs should be equal to (num_stages-1), '\
- f'while the dec_num_convs is {dec_num_convs}, the length of '\
- f'dec_num_convs is {len(dec_num_convs)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(downsamples) == (num_stages-1), \
- 'The length of downsamples should be equal to (num_stages-1), '\
- f'while the downsamples is {downsamples}, the length of '\
- f'downsamples is {len(downsamples)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(enc_dilations) == num_stages, \
- 'The length of enc_dilations should be equal to num_stages, '\
- f'while the enc_dilations is {enc_dilations}, the length of '\
- f'enc_dilations is {len(enc_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- assert len(dec_dilations) == (num_stages-1), \
- 'The length of dec_dilations should be equal to (num_stages-1), '\
- f'while the dec_dilations is {dec_dilations}, the length of '\
- f'dec_dilations is {len(dec_dilations)}, and the num_stages is '\
- f'{num_stages}.'
- self.num_stages = num_stages
- self.strides = strides
- self.downsamples = downsamples
- self.norm_eval = norm_eval
- self.base_channels = base_channels
-
- self.encoder = nn.ModuleList()
- self.decoder = nn.ModuleList()
-
- for i in range(num_stages):
- enc_conv_block = []
- if i != 0:
- if strides[i] == 1 and downsamples[i - 1]:
- enc_conv_block.append(nn.MaxPool2d(kernel_size=2))
- upsample = (strides[i] != 1 or downsamples[i - 1])
- self.decoder.append(
- UpConvBlock(
- conv_block=BasicConvBlock,
- in_channels=base_channels * 2**i,
- skip_channels=base_channels * 2**(i - 1),
- out_channels=base_channels * 2**(i - 1),
- num_convs=dec_num_convs[i - 1],
- stride=1,
- dilation=dec_dilations[i - 1],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- upsample_cfg=upsample_cfg if upsample else None,
- dcn=None,
- plugins=None))
-
- enc_conv_block.append(
- BasicConvBlock(
- in_channels=in_channels,
- out_channels=base_channels * 2**i,
- num_convs=enc_num_convs[i],
- stride=strides[i],
- dilation=enc_dilations[i],
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- dcn=None,
- plugins=None))
- self.encoder.append((nn.Sequential(*enc_conv_block)))
- in_channels = base_channels * 2**i
-
- def forward(self, x):
- self._check_input_divisible(x)
- enc_outs = []
- for enc in self.encoder:
- x = enc(x)
- enc_outs.append(x)
- dec_outs = [x]
- for i in reversed(range(len(self.decoder))):
- x = self.decoder[i](enc_outs[i], x)
- dec_outs.append(x)
-
- return dec_outs
-
- def train(self, mode=True):
- """Convert the model into training mode while keep normalization layer
- freezed."""
- super(UNet, self).train(mode)
- if mode and self.norm_eval:
- for m in self.modules():
- # trick: eval have effect on BatchNorm only
- if isinstance(m, _BatchNorm):
- m.eval()
-
- def _check_input_divisible(self, x):
- h, w = x.shape[-2:]
- whole_downsample_rate = 1
- for i in range(1, self.num_stages):
- if self.strides[i] == 2 or self.downsamples[i - 1]:
- whole_downsample_rate *= 2
- assert (h % whole_downsample_rate == 0) \
- and (w % whole_downsample_rate == 0),\
- f'The input image size {(h, w)} should be divisible by the whole '\
- f'downsample rate {whole_downsample_rate}, when num_stages is '\
- f'{self.num_stages}, strides is {self.strides}, and downsamples '\
- f'is {self.downsamples}.'
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
- else:
- raise TypeError('pretrained must be a str or None')
diff --git a/spaces/kokofixcomputers/chat-ui/src/routes/login/+page.server.ts b/spaces/kokofixcomputers/chat-ui/src/routes/login/+page.server.ts
deleted file mode 100644
index 28692b5304687ce69551c5015d71a4419069415a..0000000000000000000000000000000000000000
--- a/spaces/kokofixcomputers/chat-ui/src/routes/login/+page.server.ts
+++ /dev/null
@@ -1,16 +0,0 @@
-import { redirect } from "@sveltejs/kit";
-import { getOIDCAuthorizationUrl } from "$lib/server/auth";
-import { base } from "$app/paths";
-
-export const actions = {
- default: async function ({ url, locals, request }) {
- // TODO: Handle errors if provider is not responding
- const referer = request.headers.get("referer");
- const authorizationUrl = await getOIDCAuthorizationUrl(
- { redirectURI: `${(referer ? new URL(referer) : url).origin}${base}/login/callback` },
- { sessionId: locals.sessionId }
- );
-
- throw redirect(303, authorizationUrl);
- },
-};
diff --git a/spaces/kottu/stabble_diffusion_sketch/model.py b/spaces/kottu/stabble_diffusion_sketch/model.py
deleted file mode 100644
index e3f9a928cfedba0ae6caebc63f9c45a6dd389175..0000000000000000000000000000000000000000
--- a/spaces/kottu/stabble_diffusion_sketch/model.py
+++ /dev/null
@@ -1,374 +0,0 @@
-import gc
-import os
-from abc import ABC, abstractmethod
-
-import numpy as np
-import PIL.Image
-import torch
-from controlnet_aux import (
- CannyDetector,
- LineartDetector,
- MidasDetector,
- OpenposeDetector,
- PidiNetDetector,
- ZoeDetector,
-)
-from diffusers import (
- AutoencoderKL,
- EulerAncestralDiscreteScheduler,
- StableDiffusionXLAdapterPipeline,
- T2IAdapter,
-)
-
-SD_XL_BASE_RATIOS = {
- "0.5": (704, 1408),
- "0.52": (704, 1344),
- "0.57": (768, 1344),
- "0.6": (768, 1280),
- "0.68": (832, 1216),
- "0.72": (832, 1152),
- "0.78": (896, 1152),
- "0.82": (896, 1088),
- "0.88": (960, 1088),
- "0.94": (960, 1024),
- "1.0": (1024, 1024),
- "1.07": (1024, 960),
- "1.13": (1088, 960),
- "1.21": (1088, 896),
- "1.29": (1152, 896),
- "1.38": (1152, 832),
- "1.46": (1216, 832),
- "1.67": (1280, 768),
- "1.75": (1344, 768),
- "1.91": (1344, 704),
- "2.0": (1408, 704),
- "2.09": (1472, 704),
- "2.4": (1536, 640),
- "2.5": (1600, 640),
- "2.89": (1664, 576),
- "3.0": (1728, 576),
-}
-
-
-def find_closest_aspect_ratio(target_width: int, target_height: int) -> str:
- target_ratio = target_width / target_height
- closest_ratio = ""
- min_difference = float("inf")
-
- for ratio_str, (width, height) in SD_XL_BASE_RATIOS.items():
- ratio = width / height
- difference = abs(target_ratio - ratio)
-
- if difference < min_difference:
- min_difference = difference
- closest_ratio = ratio_str
-
- return closest_ratio
-
-
-def resize_to_closest_aspect_ratio(image: PIL.Image.Image) -> PIL.Image.Image:
- target_width, target_height = image.size
- closest_ratio = find_closest_aspect_ratio(target_width, target_height)
-
- # Get the dimensions from the closest aspect ratio in the dictionary
- new_width, new_height = SD_XL_BASE_RATIOS[closest_ratio]
-
- # Resize the image to the new dimensions while preserving the aspect ratio
- resized_image = image.resize((new_width, new_height), PIL.Image.LANCZOS)
-
- return resized_image
-
-
-ADAPTER_REPO_IDS = {
- "canny": "TencentARC/t2i-adapter-canny-sdxl-1.0",
- "sketch": "TencentARC/t2i-adapter-sketch-sdxl-1.0",
- "lineart": "TencentARC/t2i-adapter-lineart-sdxl-1.0",
- "depth-midas": "TencentARC/t2i-adapter-depth-midas-sdxl-1.0",
- "depth-zoe": "TencentARC/t2i-adapter-depth-zoe-sdxl-1.0",
- "openpose": "TencentARC/t2i-adapter-openpose-sdxl-1.0",
- # "recolor": "TencentARC/t2i-adapter-recolor-sdxl-1.0",
-}
-ADAPTER_NAMES = list(ADAPTER_REPO_IDS.keys())
-
-
-class Preprocessor(ABC):
- @abstractmethod
- def to(self, device: torch.device | str) -> "Preprocessor":
- pass
-
- @abstractmethod
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- pass
-
-
-class CannyPreprocessor(Preprocessor):
- def __init__(self):
- self.model = CannyDetector()
-
- def to(self, device: torch.device | str) -> Preprocessor:
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=384, image_resolution=1024)
-
-
-class LineartPreprocessor(Preprocessor):
- def __init__(self):
- self.model = LineartDetector.from_pretrained("lllyasviel/Annotators")
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=384, image_resolution=1024)
-
-
-class MidasPreprocessor(Preprocessor):
- def __init__(self):
- self.model = MidasDetector.from_pretrained(
- "valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
- )
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=512, image_resolution=1024)
-
-
-class OpenposePreprocessor(Preprocessor):
- def __init__(self):
- self.model = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- out = self.model(image, detect_resolution=512, image_resolution=1024)
- out = np.array(out)[:, :, ::-1]
- out = PIL.Image.fromarray(np.uint8(out))
- return out
-
-
-class PidiNetPreprocessor(Preprocessor):
- def __init__(self):
- self.model = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=512, image_resolution=1024, apply_filter=True)
-
-
-class RecolorPreprocessor(Preprocessor):
- def to(self, device: torch.device | str) -> Preprocessor:
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return image.convert("L").convert("RGB")
-
-
-class ZoePreprocessor(Preprocessor):
- def __init__(self):
- self.model = ZoeDetector.from_pretrained(
- "valhalla/t2iadapter-aux-models", filename="zoed_nk.pth", model_type="zoedepth_nk"
- )
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, gamma_corrected=True, image_resolution=1024)
-
-
-PRELOAD_PREPROCESSORS_IN_GPU_MEMORY = os.getenv("PRELOAD_PREPROCESSORS_IN_GPU_MEMORY", "0") == "1"
-PRELOAD_PREPROCESSORS_IN_CPU_MEMORY = os.getenv("PRELOAD_PREPROCESSORS_IN_CPU_MEMORY", "0") == "1"
-if PRELOAD_PREPROCESSORS_IN_GPU_MEMORY:
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- preprocessors_gpu: dict[str, Preprocessor] = {
- "canny": CannyPreprocessor().to(device),
- "sketch": PidiNetPreprocessor().to(device),
- "lineart": LineartPreprocessor().to(device),
- "depth-midas": MidasPreprocessor().to(device),
- "depth-zoe": ZoePreprocessor().to(device),
- "openpose": OpenposePreprocessor().to(device),
- "recolor": RecolorPreprocessor().to(device),
- }
-
- def get_preprocessor(adapter_name: str) -> Preprocessor:
- return preprocessors_gpu[adapter_name]
-
-elif PRELOAD_PREPROCESSORS_IN_CPU_MEMORY:
- preprocessors_cpu: dict[str, Preprocessor] = {
- "canny": CannyPreprocessor(),
- "sketch": PidiNetPreprocessor(),
- "lineart": LineartPreprocessor(),
- "depth-midas": MidasPreprocessor(),
- "depth-zoe": ZoePreprocessor(),
- "openpose": OpenposePreprocessor(),
- "recolor": RecolorPreprocessor(),
- }
-
- def get_preprocessor(adapter_name: str) -> Preprocessor:
- return preprocessors_cpu[adapter_name]
-
-else:
-
- def get_preprocessor(adapter_name: str) -> Preprocessor:
- if adapter_name == "canny":
- return CannyPreprocessor()
- elif adapter_name == "sketch":
- return PidiNetPreprocessor()
- elif adapter_name == "lineart":
- return LineartPreprocessor()
- elif adapter_name == "depth-midas":
- return MidasPreprocessor()
- elif adapter_name == "depth-zoe":
- return ZoePreprocessor()
- elif adapter_name == "openpose":
- return OpenposePreprocessor()
- elif adapter_name == "recolor":
- return RecolorPreprocessor()
- else:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
-
- def download_all_preprocessors():
- for adapter_name in ADAPTER_NAMES:
- get_preprocessor(adapter_name)
- gc.collect()
-
- download_all_preprocessors()
-
-
-def download_all_adapters():
- for adapter_name in ADAPTER_NAMES:
- T2IAdapter.from_pretrained(
- ADAPTER_REPO_IDS[adapter_name],
- torch_dtype=torch.float16,
- varient="fp16",
- )
- gc.collect()
-
-
-class Model:
- MAX_NUM_INFERENCE_STEPS = 50
-
- def __init__(self, adapter_name: str):
- if adapter_name not in ADAPTER_NAMES:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
-
- self.preprocessor_name = adapter_name
- self.adapter_name = adapter_name
-
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- if torch.cuda.is_available():
- self.preprocessor = get_preprocessor(adapter_name).to(self.device)
-
- model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- adapter = T2IAdapter.from_pretrained(
- ADAPTER_REPO_IDS[adapter_name],
- torch_dtype=torch.float16,
- varient="fp16",
- ).to(self.device)
- self.pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
- model_id,
- vae=AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16),
- adapter=adapter,
- scheduler=EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler"),
- torch_dtype=torch.float16,
- variant="fp16",
- ).to(self.device)
- self.pipe.enable_xformers_memory_efficient_attention()
- self.pipe.load_lora_weights(
- "stabilityai/stable-diffusion-xl-base-1.0", weight_name="sd_xl_offset_example-lora_1.0.safetensors"
- )
- self.pipe.fuse_lora(lora_scale=0.4)
- else:
- self.preprocessor = None # type: ignore
- self.pipe = None
-
- def change_preprocessor(self, adapter_name: str) -> None:
- if adapter_name not in ADAPTER_NAMES:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
- if adapter_name == self.preprocessor_name:
- return
-
- if PRELOAD_PREPROCESSORS_IN_GPU_MEMORY:
- pass
- elif PRELOAD_PREPROCESSORS_IN_CPU_MEMORY:
- self.preprocessor.to("cpu")
- else:
- del self.preprocessor
- self.preprocessor = get_preprocessor(adapter_name).to(self.device)
- self.preprocessor_name = adapter_name
- gc.collect()
- torch.cuda.empty_cache()
-
- def change_adapter(self, adapter_name: str) -> None:
- if adapter_name not in ADAPTER_NAMES:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
- if adapter_name == self.adapter_name:
- return
- self.pipe.adapter = T2IAdapter.from_pretrained(
- ADAPTER_REPO_IDS[adapter_name],
- torch_dtype=torch.float16,
- varient="fp16",
- ).to(self.device)
- self.adapter_name = adapter_name
- gc.collect()
- torch.cuda.empty_cache()
-
- def resize_image(self, image: PIL.Image.Image) -> PIL.Image.Image:
- w, h = image.size
- scale = 1024 / max(w, h)
- new_w = int(w * scale)
- new_h = int(h * scale)
- return image.resize((new_w, new_h), PIL.Image.LANCZOS)
-
- def run(
- self,
- image: PIL.Image.Image,
- prompt: str,
- negative_prompt: str,
- adapter_name: str,
- num_inference_steps: int = 30,
- guidance_scale: float = 5.0,
- adapter_conditioning_scale: float = 1.0,
- adapter_conditioning_factor: float = 1.0,
- seed: int = 0,
- apply_preprocess: bool = True,
- ) -> list[PIL.Image.Image]:
- if not torch.cuda.is_available():
- raise RuntimeError("This demo does not work on CPU.")
- if num_inference_steps > self.MAX_NUM_INFERENCE_STEPS:
- raise ValueError(f"Number of steps must be less than {self.MAX_NUM_INFERENCE_STEPS}")
-
- # Resize image to avoid OOM
- image = self.resize_image(image)
-
- self.change_preprocessor(adapter_name)
- self.change_adapter(adapter_name)
-
- if apply_preprocess:
- image = self.preprocessor(image)
-
- image = resize_to_closest_aspect_ratio(image)
-
- generator = torch.Generator(device=self.device).manual_seed(seed)
- out = self.pipe(
- prompt=prompt,
- negative_prompt=negative_prompt,
- image=image,
- num_inference_steps=num_inference_steps,
- adapter_conditioning_scale=adapter_conditioning_scale,
- adapter_conditioning_factor=adapter_conditioning_factor,
- generator=generator,
- guidance_scale=guidance_scale,
- ).images[0]
- return [image, out]
\ No newline at end of file
diff --git a/spaces/kquote03/lama-video-watermark-remover/fetch_data/places_standard_train_prepare.sh b/spaces/kquote03/lama-video-watermark-remover/fetch_data/places_standard_train_prepare.sh
deleted file mode 100644
index b5389e7096bade08526162733658e221808716fd..0000000000000000000000000000000000000000
--- a/spaces/kquote03/lama-video-watermark-remover/fetch_data/places_standard_train_prepare.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-mkdir -p places_standard_dataset/train
-
-# untar without folder structure
-tar -xvf train_large_places365standard.tar --transform='s/.*\///' -C places_standard_dataset/train
-
-# create location config places.yaml
-PWD=$(pwd)
-DATASET=${PWD}/places_standard_dataset
-PLACES=${PWD}/configs/training/location/places_standard.yaml
-
-touch $PLACES
-echo "# @package _group_" >> $PLACES
-echo "data_root_dir: ${DATASET}/" >> $PLACES
-echo "out_root_dir: ${PWD}/experiments/" >> $PLACES
-echo "tb_dir: ${PWD}/tb_logs/" >> $PLACES
-echo "pretrained_models: ${PWD}/" >> $PLACES
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiofiles/base.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiofiles/base.py
deleted file mode 100644
index 6201d95b4fec039a6a9bfe59ad1de722c4688c9a..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiofiles/base.py
+++ /dev/null
@@ -1,111 +0,0 @@
-"""Various base classes."""
-from types import coroutine
-from collections.abc import Coroutine
-from asyncio import get_running_loop
-
-
-class AsyncBase:
- def __init__(self, file, loop, executor):
- self._file = file
- self._executor = executor
- self._ref_loop = loop
-
- @property
- def _loop(self):
- return self._ref_loop or get_running_loop()
-
- def __aiter__(self):
- """We are our own iterator."""
- return self
-
- def __repr__(self):
- return super().__repr__() + " wrapping " + repr(self._file)
-
- async def __anext__(self):
- """Simulate normal file iteration."""
- line = await self.readline()
- if line:
- return line
- else:
- raise StopAsyncIteration
-
-
-class AsyncIndirectBase(AsyncBase):
- def __init__(self, name, loop, executor, indirect):
- self._indirect = indirect
- self._name = name
- super().__init__(None, loop, executor)
-
- @property
- def _file(self):
- return self._indirect()
-
- @_file.setter
- def _file(self, v):
- pass # discard writes
-
-
-class _ContextManager(Coroutine):
- __slots__ = ("_coro", "_obj")
-
- def __init__(self, coro):
- self._coro = coro
- self._obj = None
-
- def send(self, value):
- return self._coro.send(value)
-
- def throw(self, typ, val=None, tb=None):
- if val is None:
- return self._coro.throw(typ)
- elif tb is None:
- return self._coro.throw(typ, val)
- else:
- return self._coro.throw(typ, val, tb)
-
- def close(self):
- return self._coro.close()
-
- @property
- def gi_frame(self):
- return self._coro.gi_frame
-
- @property
- def gi_running(self):
- return self._coro.gi_running
-
- @property
- def gi_code(self):
- return self._coro.gi_code
-
- def __next__(self):
- return self.send(None)
-
- @coroutine
- def __iter__(self):
- resp = yield from self._coro
- return resp
-
- def __await__(self):
- resp = yield from self._coro
- return resp
-
- async def __anext__(self):
- resp = await self._coro
- return resp
-
- async def __aenter__(self):
- self._obj = await self._coro
- return self._obj
-
- async def __aexit__(self, exc_type, exc, tb):
- self._obj.close()
- self._obj = None
-
-
-class AiofilesContextManager(_ContextManager):
- """An adjusted async context manager for aiofiles."""
-
- async def __aexit__(self, exc_type, exc_val, exc_tb):
- await self._obj.close()
- self._obj = None
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/designspaceLib/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/designspaceLib/__init__.py
deleted file mode 100644
index f8a0146d061b132bab258d2c05faae260dab48a9..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/designspaceLib/__init__.py
+++ /dev/null
@@ -1,3167 +0,0 @@
-from __future__ import annotations
-
-import collections
-import copy
-import itertools
-import math
-import os
-import posixpath
-from io import BytesIO, StringIO
-from textwrap import indent
-from typing import Any, Dict, List, MutableMapping, Optional, Tuple, Union, cast
-
-from fontTools.misc import etree as ET
-from fontTools.misc import plistlib
-from fontTools.misc.loggingTools import LogMixin
-from fontTools.misc.textTools import tobytes, tostr
-
-"""
- designSpaceDocument
-
- - read and write designspace files
-"""
-
-__all__ = [
- "AxisDescriptor",
- "AxisLabelDescriptor",
- "BaseDocReader",
- "BaseDocWriter",
- "DesignSpaceDocument",
- "DesignSpaceDocumentError",
- "DiscreteAxisDescriptor",
- "InstanceDescriptor",
- "LocationLabelDescriptor",
- "RangeAxisSubsetDescriptor",
- "RuleDescriptor",
- "SourceDescriptor",
- "ValueAxisSubsetDescriptor",
- "VariableFontDescriptor",
-]
-
-# ElementTree allows to find namespace-prefixed elements, but not attributes
-# so we have to do it ourselves for 'xml:lang'
-XML_NS = "{http://www.w3.org/XML/1998/namespace}"
-XML_LANG = XML_NS + "lang"
-
-
-def posix(path):
- """Normalize paths using forward slash to work also on Windows."""
- new_path = posixpath.join(*path.split(os.path.sep))
- if path.startswith("/"):
- # The above transformation loses absolute paths
- new_path = "/" + new_path
- elif path.startswith(r"\\"):
- # The above transformation loses leading slashes of UNC path mounts
- new_path = "//" + new_path
- return new_path
-
-
-def posixpath_property(private_name):
- """Generate a propery that holds a path always using forward slashes."""
-
- def getter(self):
- # Normal getter
- return getattr(self, private_name)
-
- def setter(self, value):
- # The setter rewrites paths using forward slashes
- if value is not None:
- value = posix(value)
- setattr(self, private_name, value)
-
- return property(getter, setter)
-
-
-class DesignSpaceDocumentError(Exception):
- def __init__(self, msg, obj=None):
- self.msg = msg
- self.obj = obj
-
- def __str__(self):
- return str(self.msg) + (": %r" % self.obj if self.obj is not None else "")
-
-
-class AsDictMixin(object):
- def asdict(self):
- d = {}
- for attr, value in self.__dict__.items():
- if attr.startswith("_"):
- continue
- if hasattr(value, "asdict"):
- value = value.asdict()
- elif isinstance(value, list):
- value = [v.asdict() if hasattr(v, "asdict") else v for v in value]
- d[attr] = value
- return d
-
-
-class SimpleDescriptor(AsDictMixin):
- """Containers for a bunch of attributes"""
-
- # XXX this is ugly. The 'print' is inappropriate here, and instead of
- # assert, it should simply return True/False
- def compare(self, other):
- # test if this object contains the same data as the other
- for attr in self._attrs:
- try:
- assert getattr(self, attr) == getattr(other, attr)
- except AssertionError:
- print(
- "failed attribute",
- attr,
- getattr(self, attr),
- "!=",
- getattr(other, attr),
- )
-
- def __repr__(self):
- attrs = [f"{a}={repr(getattr(self, a))}," for a in self._attrs]
- attrs = indent("\n".join(attrs), " ")
- return f"{self.__class__.__name__}(\n{attrs}\n)"
-
-
-class SourceDescriptor(SimpleDescriptor):
- """Simple container for data related to the source
-
- .. code:: python
-
- doc = DesignSpaceDocument()
- s1 = SourceDescriptor()
- s1.path = masterPath1
- s1.name = "master.ufo1"
- s1.font = defcon.Font("master.ufo1")
- s1.location = dict(weight=0)
- s1.familyName = "MasterFamilyName"
- s1.styleName = "MasterStyleNameOne"
- s1.localisedFamilyName = dict(fr="Caractère")
- s1.mutedGlyphNames.append("A")
- s1.mutedGlyphNames.append("Z")
- doc.addSource(s1)
-
- """
-
- flavor = "source"
- _attrs = [
- "filename",
- "path",
- "name",
- "layerName",
- "location",
- "copyLib",
- "copyGroups",
- "copyFeatures",
- "muteKerning",
- "muteInfo",
- "mutedGlyphNames",
- "familyName",
- "styleName",
- "localisedFamilyName",
- ]
-
- filename = posixpath_property("_filename")
- path = posixpath_property("_path")
-
- def __init__(
- self,
- *,
- filename=None,
- path=None,
- font=None,
- name=None,
- location=None,
- designLocation=None,
- layerName=None,
- familyName=None,
- styleName=None,
- localisedFamilyName=None,
- copyLib=False,
- copyInfo=False,
- copyGroups=False,
- copyFeatures=False,
- muteKerning=False,
- muteInfo=False,
- mutedGlyphNames=None,
- ):
- self.filename = filename
- """string. A relative path to the source file, **as it is in the document**.
-
- MutatorMath + VarLib.
- """
- self.path = path
- """The absolute path, calculated from filename."""
-
- self.font = font
- """Any Python object. Optional. Points to a representation of this
- source font that is loaded in memory, as a Python object (e.g. a
- ``defcon.Font`` or a ``fontTools.ttFont.TTFont``).
-
- The default document reader will not fill-in this attribute, and the
- default writer will not use this attribute. It is up to the user of
- ``designspaceLib`` to either load the resource identified by
- ``filename`` and store it in this field, or write the contents of
- this field to the disk and make ```filename`` point to that.
- """
-
- self.name = name
- """string. Optional. Unique identifier name for this source.
-
- MutatorMath + Varlib.
- """
-
- self.designLocation = (
- designLocation if designLocation is not None else location or {}
- )
- """dict. Axis values for this source, in design space coordinates.
-
- MutatorMath + Varlib.
-
- This may be only part of the full design location.
- See :meth:`getFullDesignLocation()`
-
- .. versionadded:: 5.0
- """
-
- self.layerName = layerName
- """string. The name of the layer in the source to look for
- outline data. Default ``None`` which means ``foreground``.
- """
- self.familyName = familyName
- """string. Family name of this source. Though this data
- can be extracted from the font, it can be efficient to have it right
- here.
-
- Varlib.
- """
- self.styleName = styleName
- """string. Style name of this source. Though this data
- can be extracted from the font, it can be efficient to have it right
- here.
-
- Varlib.
- """
- self.localisedFamilyName = localisedFamilyName or {}
- """dict. A dictionary of localised family name strings, keyed by
- language code.
-
- If present, will be used to build localized names for all instances.
-
- .. versionadded:: 5.0
- """
-
- self.copyLib = copyLib
- """bool. Indicates if the contents of the font.lib need to
- be copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyInfo = copyInfo
- """bool. Indicates if the non-interpolating font.info needs
- to be copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyGroups = copyGroups
- """bool. Indicates if the groups need to be copied to the
- instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.copyFeatures = copyFeatures
- """bool. Indicates if the feature text needs to be
- copied to the instances.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.muteKerning = muteKerning
- """bool. Indicates if the kerning data from this source
- needs to be muted (i.e. not be part of the calculations).
-
- MutatorMath only.
- """
- self.muteInfo = muteInfo
- """bool. Indicated if the interpolating font.info data for
- this source needs to be muted.
-
- MutatorMath only.
- """
- self.mutedGlyphNames = mutedGlyphNames or []
- """list. Glyphnames that need to be muted in the
- instances.
-
- MutatorMath only.
- """
-
- @property
- def location(self):
- """dict. Axis values for this source, in design space coordinates.
-
- MutatorMath + Varlib.
-
- .. deprecated:: 5.0
- Use the more explicit alias for this property :attr:`designLocation`.
- """
- return self.designLocation
-
- @location.setter
- def location(self, location: Optional[AnisotropicLocationDict]):
- self.designLocation = location or {}
-
- def setFamilyName(self, familyName, languageCode="en"):
- """Setter for :attr:`localisedFamilyName`
-
- .. versionadded:: 5.0
- """
- self.localisedFamilyName[languageCode] = tostr(familyName)
-
- def getFamilyName(self, languageCode="en"):
- """Getter for :attr:`localisedFamilyName`
-
- .. versionadded:: 5.0
- """
- return self.localisedFamilyName.get(languageCode)
-
- def getFullDesignLocation(
- self, doc: "DesignSpaceDocument"
- ) -> AnisotropicLocationDict:
- """Get the complete design location of this source, from its
- :attr:`designLocation` and the document's axis defaults.
-
- .. versionadded:: 5.0
- """
- result: AnisotropicLocationDict = {}
- for axis in doc.axes:
- if axis.name in self.designLocation:
- result[axis.name] = self.designLocation[axis.name]
- else:
- result[axis.name] = axis.map_forward(axis.default)
- return result
-
-
-class RuleDescriptor(SimpleDescriptor):
- """Represents the rule descriptor element: a set of glyph substitutions to
- trigger conditionally in some parts of the designspace.
-
- .. code:: python
-
- r1 = RuleDescriptor()
- r1.name = "unique.rule.name"
- r1.conditionSets.append([dict(name="weight", minimum=-10, maximum=10), dict(...)])
- r1.conditionSets.append([dict(...), dict(...)])
- r1.subs.append(("a", "a.alt"))
-
- .. code:: xml
-
-
-
-
-
-
-
-
-
-
-
-
-
- """
-
- _attrs = ["name", "conditionSets", "subs"] # what do we need here
-
- def __init__(self, *, name=None, conditionSets=None, subs=None):
- self.name = name
- """string. Unique name for this rule. Can be used to reference this rule data."""
- # list of lists of dict(name='aaaa', minimum=0, maximum=1000)
- self.conditionSets = conditionSets or []
- """a list of conditionsets.
-
- - Each conditionset is a list of conditions.
- - Each condition is a dict with ``name``, ``minimum`` and ``maximum`` keys.
- """
- # list of substitutions stored as tuples of glyphnames ("a", "a.alt")
- self.subs = subs or []
- """list of substitutions.
-
- - Each substitution is stored as tuples of glyphnames, e.g. ("a", "a.alt").
- - Note: By default, rules are applied first, before other text
- shaping/OpenType layout, as they are part of the
- `Required Variation Alternates OpenType feature `_.
- See ref:`rules-element` § Attributes.
- """
-
-
-def evaluateRule(rule, location):
- """Return True if any of the rule's conditionsets matches the given location."""
- return any(evaluateConditions(c, location) for c in rule.conditionSets)
-
-
-def evaluateConditions(conditions, location):
- """Return True if all the conditions matches the given location.
-
- - If a condition has no minimum, check for < maximum.
- - If a condition has no maximum, check for > minimum.
- """
- for cd in conditions:
- value = location[cd["name"]]
- if cd.get("minimum") is None:
- if value > cd["maximum"]:
- return False
- elif cd.get("maximum") is None:
- if cd["minimum"] > value:
- return False
- elif not cd["minimum"] <= value <= cd["maximum"]:
- return False
- return True
-
-
-def processRules(rules, location, glyphNames):
- """Apply these rules at this location to these glyphnames.
-
- Return a new list of glyphNames with substitutions applied.
-
- - rule order matters
- """
- newNames = []
- for rule in rules:
- if evaluateRule(rule, location):
- for name in glyphNames:
- swap = False
- for a, b in rule.subs:
- if name == a:
- swap = True
- break
- if swap:
- newNames.append(b)
- else:
- newNames.append(name)
- glyphNames = newNames
- newNames = []
- return glyphNames
-
-
-AnisotropicLocationDict = Dict[str, Union[float, Tuple[float, float]]]
-SimpleLocationDict = Dict[str, float]
-
-
-class InstanceDescriptor(SimpleDescriptor):
- """Simple container for data related to the instance
-
-
- .. code:: python
-
- i2 = InstanceDescriptor()
- i2.path = instancePath2
- i2.familyName = "InstanceFamilyName"
- i2.styleName = "InstanceStyleName"
- i2.name = "instance.ufo2"
- # anisotropic location
- i2.designLocation = dict(weight=500, width=(400,300))
- i2.postScriptFontName = "InstancePostscriptName"
- i2.styleMapFamilyName = "InstanceStyleMapFamilyName"
- i2.styleMapStyleName = "InstanceStyleMapStyleName"
- i2.lib['com.coolDesignspaceApp.specimenText'] = 'Hamburgerwhatever'
- doc.addInstance(i2)
- """
-
- flavor = "instance"
- _defaultLanguageCode = "en"
- _attrs = [
- "filename",
- "path",
- "name",
- "locationLabel",
- "designLocation",
- "userLocation",
- "familyName",
- "styleName",
- "postScriptFontName",
- "styleMapFamilyName",
- "styleMapStyleName",
- "localisedFamilyName",
- "localisedStyleName",
- "localisedStyleMapFamilyName",
- "localisedStyleMapStyleName",
- "glyphs",
- "kerning",
- "info",
- "lib",
- ]
-
- filename = posixpath_property("_filename")
- path = posixpath_property("_path")
-
- def __init__(
- self,
- *,
- filename=None,
- path=None,
- font=None,
- name=None,
- location=None,
- locationLabel=None,
- designLocation=None,
- userLocation=None,
- familyName=None,
- styleName=None,
- postScriptFontName=None,
- styleMapFamilyName=None,
- styleMapStyleName=None,
- localisedFamilyName=None,
- localisedStyleName=None,
- localisedStyleMapFamilyName=None,
- localisedStyleMapStyleName=None,
- glyphs=None,
- kerning=True,
- info=True,
- lib=None,
- ):
- self.filename = filename
- """string. Relative path to the instance file, **as it is
- in the document**. The file may or may not exist.
-
- MutatorMath + VarLib.
- """
- self.path = path
- """string. Absolute path to the instance file, calculated from
- the document path and the string in the filename attr. The file may
- or may not exist.
-
- MutatorMath.
- """
- self.font = font
- """Same as :attr:`SourceDescriptor.font`
-
- .. seealso:: :attr:`SourceDescriptor.font`
- """
- self.name = name
- """string. Unique identifier name of the instance, used to
- identify it if it needs to be referenced from elsewhere in the
- document.
- """
- self.locationLabel = locationLabel
- """Name of a :class:`LocationLabelDescriptor`. If
- provided, the instance should have the same location as the
- LocationLabel.
-
- .. seealso::
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.designLocation: AnisotropicLocationDict = (
- designLocation if designLocation is not None else (location or {})
- )
- """dict. Axis values for this instance, in design space coordinates.
-
- MutatorMath + Varlib.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.userLocation: SimpleLocationDict = userLocation or {}
- """dict. Axis values for this instance, in user space coordinates.
-
- MutatorMath + Varlib.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullDesignLocation`
- :meth:`getFullUserLocation`
-
- .. versionadded:: 5.0
- """
- self.familyName = familyName
- """string. Family name of this instance.
-
- MutatorMath + Varlib.
- """
- self.styleName = styleName
- """string. Style name of this instance.
-
- MutatorMath + Varlib.
- """
- self.postScriptFontName = postScriptFontName
- """string. Postscript fontname for this instance.
-
- MutatorMath + Varlib.
- """
- self.styleMapFamilyName = styleMapFamilyName
- """string. StyleMap familyname for this instance.
-
- MutatorMath + Varlib.
- """
- self.styleMapStyleName = styleMapStyleName
- """string. StyleMap stylename for this instance.
-
- MutatorMath + Varlib.
- """
- self.localisedFamilyName = localisedFamilyName or {}
- """dict. A dictionary of localised family name
- strings, keyed by language code.
- """
- self.localisedStyleName = localisedStyleName or {}
- """dict. A dictionary of localised stylename
- strings, keyed by language code.
- """
- self.localisedStyleMapFamilyName = localisedStyleMapFamilyName or {}
- """A dictionary of localised style map
- familyname strings, keyed by language code.
- """
- self.localisedStyleMapStyleName = localisedStyleMapStyleName or {}
- """A dictionary of localised style map
- stylename strings, keyed by language code.
- """
- self.glyphs = glyphs or {}
- """dict for special master definitions for glyphs. If glyphs
- need special masters (to record the results of executed rules for
- example).
-
- MutatorMath.
-
- .. deprecated:: 5.0
- Use rules or sparse sources instead.
- """
- self.kerning = kerning
- """ bool. Indicates if this instance needs its kerning
- calculated.
-
- MutatorMath.
-
- .. deprecated:: 5.0
- """
- self.info = info
- """bool. Indicated if this instance needs the interpolating
- font.info calculated.
-
- .. deprecated:: 5.0
- """
-
- self.lib = lib or {}
- """Custom data associated with this instance."""
-
- @property
- def location(self):
- """dict. Axis values for this instance.
-
- MutatorMath + Varlib.
-
- .. deprecated:: 5.0
- Use the more explicit alias for this property :attr:`designLocation`.
- """
- return self.designLocation
-
- @location.setter
- def location(self, location: Optional[AnisotropicLocationDict]):
- self.designLocation = location or {}
-
- def setStyleName(self, styleName, languageCode="en"):
- """These methods give easier access to the localised names."""
- self.localisedStyleName[languageCode] = tostr(styleName)
-
- def getStyleName(self, languageCode="en"):
- return self.localisedStyleName.get(languageCode)
-
- def setFamilyName(self, familyName, languageCode="en"):
- self.localisedFamilyName[languageCode] = tostr(familyName)
-
- def getFamilyName(self, languageCode="en"):
- return self.localisedFamilyName.get(languageCode)
-
- def setStyleMapStyleName(self, styleMapStyleName, languageCode="en"):
- self.localisedStyleMapStyleName[languageCode] = tostr(styleMapStyleName)
-
- def getStyleMapStyleName(self, languageCode="en"):
- return self.localisedStyleMapStyleName.get(languageCode)
-
- def setStyleMapFamilyName(self, styleMapFamilyName, languageCode="en"):
- self.localisedStyleMapFamilyName[languageCode] = tostr(styleMapFamilyName)
-
- def getStyleMapFamilyName(self, languageCode="en"):
- return self.localisedStyleMapFamilyName.get(languageCode)
-
- def clearLocation(self, axisName: Optional[str] = None):
- """Clear all location-related fields. Ensures that
- :attr:``designLocation`` and :attr:``userLocation`` are dictionaries
- (possibly empty if clearing everything).
-
- In order to update the location of this instance wholesale, a user
- should first clear all the fields, then change the field(s) for which
- they have data.
-
- .. code:: python
-
- instance.clearLocation()
- instance.designLocation = {'Weight': (34, 36.5), 'Width': 100}
- instance.userLocation = {'Opsz': 16}
-
- In order to update a single axis location, the user should only clear
- that axis, then edit the values:
-
- .. code:: python
-
- instance.clearLocation('Weight')
- instance.designLocation['Weight'] = (34, 36.5)
-
- Args:
- axisName: if provided, only clear the location for that axis.
-
- .. versionadded:: 5.0
- """
- self.locationLabel = None
- if axisName is None:
- self.designLocation = {}
- self.userLocation = {}
- else:
- if self.designLocation is None:
- self.designLocation = {}
- if axisName in self.designLocation:
- del self.designLocation[axisName]
- if self.userLocation is None:
- self.userLocation = {}
- if axisName in self.userLocation:
- del self.userLocation[axisName]
-
- def getLocationLabelDescriptor(
- self, doc: "DesignSpaceDocument"
- ) -> Optional[LocationLabelDescriptor]:
- """Get the :class:`LocationLabelDescriptor` instance that matches
- this instances's :attr:`locationLabel`.
-
- Raises if the named label can't be found.
-
- .. versionadded:: 5.0
- """
- if self.locationLabel is None:
- return None
- label = doc.getLocationLabel(self.locationLabel)
- if label is None:
- raise DesignSpaceDocumentError(
- "InstanceDescriptor.getLocationLabelDescriptor(): "
- f"unknown location label `{self.locationLabel}` in instance `{self.name}`."
- )
- return label
-
- def getFullDesignLocation(
- self, doc: "DesignSpaceDocument"
- ) -> AnisotropicLocationDict:
- """Get the complete design location of this instance, by combining data
- from the various location fields, default axis values and mappings, and
- top-level location labels.
-
- The source of truth for this instance's location is determined for each
- axis independently by taking the first not-None field in this list:
-
- - ``locationLabel``: the location along this axis is the same as the
- matching STAT format 4 label. No anisotropy.
- - ``designLocation[axisName]``: the explicit design location along this
- axis, possibly anisotropic.
- - ``userLocation[axisName]``: the explicit user location along this
- axis. No anisotropy.
- - ``axis.default``: default axis value. No anisotropy.
-
- .. versionadded:: 5.0
- """
- label = self.getLocationLabelDescriptor(doc)
- if label is not None:
- return doc.map_forward(label.userLocation) # type: ignore
- result: AnisotropicLocationDict = {}
- for axis in doc.axes:
- if axis.name in self.designLocation:
- result[axis.name] = self.designLocation[axis.name]
- elif axis.name in self.userLocation:
- result[axis.name] = axis.map_forward(self.userLocation[axis.name])
- else:
- result[axis.name] = axis.map_forward(axis.default)
- return result
-
- def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict:
- """Get the complete user location for this instance.
-
- .. seealso:: :meth:`getFullDesignLocation`
-
- .. versionadded:: 5.0
- """
- return doc.map_backward(self.getFullDesignLocation(doc))
-
-
-def tagForAxisName(name):
- # try to find or make a tag name for this axis name
- names = {
- "weight": ("wght", dict(en="Weight")),
- "width": ("wdth", dict(en="Width")),
- "optical": ("opsz", dict(en="Optical Size")),
- "slant": ("slnt", dict(en="Slant")),
- "italic": ("ital", dict(en="Italic")),
- }
- if name.lower() in names:
- return names[name.lower()]
- if len(name) < 4:
- tag = name + "*" * (4 - len(name))
- else:
- tag = name[:4]
- return tag, dict(en=name)
-
-
-class AbstractAxisDescriptor(SimpleDescriptor):
- flavor = "axis"
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- # opentype tag for this axis
- self.tag = tag
- """string. Four letter tag for this axis. Some might be
- registered at the `OpenType
- specification `__.
- Privately-defined axis tags must begin with an uppercase letter and
- use only uppercase letters or digits.
- """
- # name of the axis used in locations
- self.name = name
- """string. Name of the axis as it is used in the location dicts.
-
- MutatorMath + Varlib.
- """
- # names for UI purposes, if this is not a standard axis,
- self.labelNames = labelNames or {}
- """dict. When defining a non-registered axis, it will be
- necessary to define user-facing readable names for the axis. Keyed by
- xml:lang code. Values are required to be ``unicode`` strings, even if
- they only contain ASCII characters.
- """
- self.hidden = hidden
- """bool. Whether this axis should be hidden in user interfaces.
- """
- self.map = map or []
- """list of input / output values that can describe a warp of user space
- to design space coordinates. If no map values are present, it is assumed
- user space is the same as design space, as in [(minimum, minimum),
- (maximum, maximum)].
-
- Varlib.
- """
- self.axisOrdering = axisOrdering
- """STAT table field ``axisOrdering``.
-
- See: `OTSpec STAT Axis Record `_
-
- .. versionadded:: 5.0
- """
- self.axisLabels: List[AxisLabelDescriptor] = axisLabels or []
- """STAT table entries for Axis Value Tables format 1, 2, 3.
-
- See: `OTSpec STAT Axis Value Tables `_
-
- .. versionadded:: 5.0
- """
-
-
-class AxisDescriptor(AbstractAxisDescriptor):
- """Simple container for the axis data.
-
- Add more localisations?
-
- .. code:: python
-
- a1 = AxisDescriptor()
- a1.minimum = 1
- a1.maximum = 1000
- a1.default = 400
- a1.name = "weight"
- a1.tag = "wght"
- a1.labelNames['fa-IR'] = "قطر"
- a1.labelNames['en'] = "Wéíght"
- a1.map = [(1.0, 10.0), (400.0, 66.0), (1000.0, 990.0)]
- a1.axisOrdering = 1
- a1.axisLabels = [
- AxisLabelDescriptor(name="Regular", userValue=400, elidable=True)
- ]
- doc.addAxis(a1)
- """
-
- _attrs = [
- "tag",
- "name",
- "maximum",
- "minimum",
- "default",
- "map",
- "axisOrdering",
- "axisLabels",
- ]
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- minimum=None,
- default=None,
- maximum=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- super().__init__(
- tag=tag,
- name=name,
- labelNames=labelNames,
- hidden=hidden,
- map=map,
- axisOrdering=axisOrdering,
- axisLabels=axisLabels,
- )
- self.minimum = minimum
- """number. The minimum value for this axis in user space.
-
- MutatorMath + Varlib.
- """
- self.maximum = maximum
- """number. The maximum value for this axis in user space.
-
- MutatorMath + Varlib.
- """
- self.default = default
- """number. The default value for this axis, i.e. when a new location is
- created, this is the value this axis will get in user space.
-
- MutatorMath + Varlib.
- """
-
- def serialize(self):
- # output to a dict, used in testing
- return dict(
- tag=self.tag,
- name=self.name,
- labelNames=self.labelNames,
- maximum=self.maximum,
- minimum=self.minimum,
- default=self.default,
- hidden=self.hidden,
- map=self.map,
- axisOrdering=self.axisOrdering,
- axisLabels=self.axisLabels,
- )
-
- def map_forward(self, v):
- """Maps value from axis mapping's input (user) to output (design)."""
- from fontTools.varLib.models import piecewiseLinearMap
-
- if not self.map:
- return v
- return piecewiseLinearMap(v, {k: v for k, v in self.map})
-
- def map_backward(self, v):
- """Maps value from axis mapping's output (design) to input (user)."""
- from fontTools.varLib.models import piecewiseLinearMap
-
- if isinstance(v, tuple):
- v = v[0]
- if not self.map:
- return v
- return piecewiseLinearMap(v, {v: k for k, v in self.map})
-
-
-class DiscreteAxisDescriptor(AbstractAxisDescriptor):
- """Container for discrete axis data.
-
- Use this for axes that do not interpolate. The main difference from a
- continuous axis is that a continuous axis has a ``minimum`` and ``maximum``,
- while a discrete axis has a list of ``values``.
-
- Example: an Italic axis with 2 stops, Roman and Italic, that are not
- compatible. The axis still allows to bind together the full font family,
- which is useful for the STAT table, however it can't become a variation
- axis in a VF.
-
- .. code:: python
-
- a2 = DiscreteAxisDescriptor()
- a2.values = [0, 1]
- a2.default = 0
- a2.name = "Italic"
- a2.tag = "ITAL"
- a2.labelNames['fr'] = "Italique"
- a2.map = [(0, 0), (1, -11)]
- a2.axisOrdering = 2
- a2.axisLabels = [
- AxisLabelDescriptor(name="Roman", userValue=0, elidable=True)
- ]
- doc.addAxis(a2)
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis"
- _attrs = ("tag", "name", "values", "default", "map", "axisOrdering", "axisLabels")
-
- def __init__(
- self,
- *,
- tag=None,
- name=None,
- labelNames=None,
- values=None,
- default=None,
- hidden=False,
- map=None,
- axisOrdering=None,
- axisLabels=None,
- ):
- super().__init__(
- tag=tag,
- name=name,
- labelNames=labelNames,
- hidden=hidden,
- map=map,
- axisOrdering=axisOrdering,
- axisLabels=axisLabels,
- )
- self.default: float = default
- """The default value for this axis, i.e. when a new location is
- created, this is the value this axis will get in user space.
-
- However, this default value is less important than in continuous axes:
-
- - it doesn't define the "neutral" version of outlines from which
- deltas would apply, as this axis does not interpolate.
- - it doesn't provide the reference glyph set for the designspace, as
- fonts at each value can have different glyph sets.
- """
- self.values: List[float] = values or []
- """List of possible values for this axis. Contrary to continuous axes,
- only the values in this list can be taken by the axis, nothing in-between.
- """
-
- def map_forward(self, value):
- """Maps value from axis mapping's input to output.
-
- Returns value unchanged if no mapping entry is found.
-
- Note: for discrete axes, each value must have its mapping entry, if
- you intend that value to be mapped.
- """
- return next((v for k, v in self.map if k == value), value)
-
- def map_backward(self, value):
- """Maps value from axis mapping's output to input.
-
- Returns value unchanged if no mapping entry is found.
-
- Note: for discrete axes, each value must have its mapping entry, if
- you intend that value to be mapped.
- """
- if isinstance(value, tuple):
- value = value[0]
- return next((k for k, v in self.map if v == value), value)
-
-
-class AxisLabelDescriptor(SimpleDescriptor):
- """Container for axis label data.
-
- Analogue of OpenType's STAT data for a single axis (formats 1, 2 and 3).
- All values are user values.
- See: `OTSpec STAT Axis value table, format 1, 2, 3 `_
-
- The STAT format of the Axis value depends on which field are filled-in,
- see :meth:`getFormat`
-
- .. versionadded:: 5.0
- """
-
- flavor = "label"
- _attrs = (
- "userMinimum",
- "userValue",
- "userMaximum",
- "name",
- "elidable",
- "olderSibling",
- "linkedUserValue",
- "labelNames",
- )
-
- def __init__(
- self,
- *,
- name,
- userValue,
- userMinimum=None,
- userMaximum=None,
- elidable=False,
- olderSibling=False,
- linkedUserValue=None,
- labelNames=None,
- ):
- self.userMinimum: Optional[float] = userMinimum
- """STAT field ``rangeMinValue`` (format 2)."""
- self.userValue: float = userValue
- """STAT field ``value`` (format 1, 3) or ``nominalValue`` (format 2)."""
- self.userMaximum: Optional[float] = userMaximum
- """STAT field ``rangeMaxValue`` (format 2)."""
- self.name: str = name
- """Label for this axis location, STAT field ``valueNameID``."""
- self.elidable: bool = elidable
- """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``.
-
- See: `OTSpec STAT Flags `_
- """
- self.olderSibling: bool = olderSibling
- """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``.
-
- See: `OTSpec STAT Flags `_
- """
- self.linkedUserValue: Optional[float] = linkedUserValue
- """STAT field ``linkedValue`` (format 3)."""
- self.labelNames: MutableMapping[str, str] = labelNames or {}
- """User-facing translations of this location's label. Keyed by
- ``xml:lang`` code.
- """
-
- def getFormat(self) -> int:
- """Determine which format of STAT Axis value to use to encode this label.
-
- =========== ========= =========== =========== ===============
- STAT Format userValue userMinimum userMaximum linkedUserValue
- =========== ========= =========== =========== ===============
- 1 ✅ ❌ ❌ ❌
- 2 ✅ ✅ ✅ ❌
- 3 ✅ ❌ ❌ ✅
- =========== ========= =========== =========== ===============
- """
- if self.linkedUserValue is not None:
- return 3
- if self.userMinimum is not None or self.userMaximum is not None:
- return 2
- return 1
-
- @property
- def defaultName(self) -> str:
- """Return the English name from :attr:`labelNames` or the :attr:`name`."""
- return self.labelNames.get("en") or self.name
-
-
-class LocationLabelDescriptor(SimpleDescriptor):
- """Container for location label data.
-
- Analogue of OpenType's STAT data for a free-floating location (format 4).
- All values are user values.
-
- See: `OTSpec STAT Axis value table, format 4 `_
-
- .. versionadded:: 5.0
- """
-
- flavor = "label"
- _attrs = ("name", "elidable", "olderSibling", "userLocation", "labelNames")
-
- def __init__(
- self,
- *,
- name,
- userLocation,
- elidable=False,
- olderSibling=False,
- labelNames=None,
- ):
- self.name: str = name
- """Label for this named location, STAT field ``valueNameID``."""
- self.userLocation: SimpleLocationDict = userLocation or {}
- """Location in user coordinates along each axis.
-
- If an axis is not mentioned, it is assumed to be at its default location.
-
- .. seealso:: This may be only part of the full location. See:
- :meth:`getFullUserLocation`
- """
- self.elidable: bool = elidable
- """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``.
-
- See: `OTSpec STAT Flags `_
- """
- self.olderSibling: bool = olderSibling
- """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``.
-
- See: `OTSpec STAT Flags `_
- """
- self.labelNames: Dict[str, str] = labelNames or {}
- """User-facing translations of this location's label. Keyed by
- xml:lang code.
- """
-
- @property
- def defaultName(self) -> str:
- """Return the English name from :attr:`labelNames` or the :attr:`name`."""
- return self.labelNames.get("en") or self.name
-
- def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict:
- """Get the complete user location of this label, by combining data
- from the explicit user location and default axis values.
-
- .. versionadded:: 5.0
- """
- return {
- axis.name: self.userLocation.get(axis.name, axis.default)
- for axis in doc.axes
- }
-
-
-class VariableFontDescriptor(SimpleDescriptor):
- """Container for variable fonts, sub-spaces of the Designspace.
-
- Use-cases:
-
- - From a single DesignSpace with discrete axes, define 1 variable font
- per value on the discrete axes. Before version 5, you would have needed
- 1 DesignSpace per such variable font, and a lot of data duplication.
- - From a big variable font with many axes, define subsets of that variable
- font that only include some axes and freeze other axes at a given location.
-
- .. versionadded:: 5.0
- """
-
- flavor = "variable-font"
- _attrs = ("filename", "axisSubsets", "lib")
-
- filename = posixpath_property("_filename")
-
- def __init__(self, *, name, filename=None, axisSubsets=None, lib=None):
- self.name: str = name
- """string, required. Name of this variable to identify it during the
- build process and from other parts of the document, and also as a
- filename in case the filename property is empty.
-
- VarLib.
- """
- self.filename: str = filename
- """string, optional. Relative path to the variable font file, **as it is
- in the document**. The file may or may not exist.
-
- If not specified, the :attr:`name` will be used as a basename for the file.
- """
- self.axisSubsets: List[
- Union[RangeAxisSubsetDescriptor, ValueAxisSubsetDescriptor]
- ] = (axisSubsets or [])
- """Axis subsets to include in this variable font.
-
- If an axis is not mentioned, assume that we only want the default
- location of that axis (same as a :class:`ValueAxisSubsetDescriptor`).
- """
- self.lib: MutableMapping[str, Any] = lib or {}
- """Custom data associated with this variable font."""
-
-
-class RangeAxisSubsetDescriptor(SimpleDescriptor):
- """Subset of a continuous axis to include in a variable font.
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis-subset"
- _attrs = ("name", "userMinimum", "userDefault", "userMaximum")
-
- def __init__(
- self, *, name, userMinimum=-math.inf, userDefault=None, userMaximum=math.inf
- ):
- self.name: str = name
- """Name of the :class:`AxisDescriptor` to subset."""
- self.userMinimum: float = userMinimum
- """New minimum value of the axis in the target variable font.
- If not specified, assume the same minimum value as the full axis.
- (default = ``-math.inf``)
- """
- self.userDefault: Optional[float] = userDefault
- """New default value of the axis in the target variable font.
- If not specified, assume the same default value as the full axis.
- (default = ``None``)
- """
- self.userMaximum: float = userMaximum
- """New maximum value of the axis in the target variable font.
- If not specified, assume the same maximum value as the full axis.
- (default = ``math.inf``)
- """
-
-
-class ValueAxisSubsetDescriptor(SimpleDescriptor):
- """Single value of a discrete or continuous axis to use in a variable font.
-
- .. versionadded:: 5.0
- """
-
- flavor = "axis-subset"
- _attrs = ("name", "userValue")
-
- def __init__(self, *, name, userValue):
- self.name: str = name
- """Name of the :class:`AxisDescriptor` or :class:`DiscreteAxisDescriptor`
- to "snapshot" or "freeze".
- """
- self.userValue: float = userValue
- """Value in user coordinates at which to freeze the given axis."""
-
-
-class BaseDocWriter(object):
- _whiteSpace = " "
- axisDescriptorClass = AxisDescriptor
- discreteAxisDescriptorClass = DiscreteAxisDescriptor
- axisLabelDescriptorClass = AxisLabelDescriptor
- locationLabelDescriptorClass = LocationLabelDescriptor
- ruleDescriptorClass = RuleDescriptor
- sourceDescriptorClass = SourceDescriptor
- variableFontDescriptorClass = VariableFontDescriptor
- valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor
- rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor
- instanceDescriptorClass = InstanceDescriptor
-
- @classmethod
- def getAxisDecriptor(cls):
- return cls.axisDescriptorClass()
-
- @classmethod
- def getSourceDescriptor(cls):
- return cls.sourceDescriptorClass()
-
- @classmethod
- def getInstanceDescriptor(cls):
- return cls.instanceDescriptorClass()
-
- @classmethod
- def getRuleDescriptor(cls):
- return cls.ruleDescriptorClass()
-
- def __init__(self, documentPath, documentObject: DesignSpaceDocument):
- self.path = documentPath
- self.documentObject = documentObject
- self.effectiveFormatTuple = self._getEffectiveFormatTuple()
- self.root = ET.Element("designspace")
-
- def write(self, pretty=True, encoding="UTF-8", xml_declaration=True):
- self.root.attrib["format"] = ".".join(str(i) for i in self.effectiveFormatTuple)
-
- if (
- self.documentObject.axes
- or self.documentObject.elidedFallbackName is not None
- ):
- axesElement = ET.Element("axes")
- if self.documentObject.elidedFallbackName is not None:
- axesElement.attrib[
- "elidedfallbackname"
- ] = self.documentObject.elidedFallbackName
- self.root.append(axesElement)
- for axisObject in self.documentObject.axes:
- self._addAxis(axisObject)
-
- if self.documentObject.locationLabels:
- labelsElement = ET.Element("labels")
- for labelObject in self.documentObject.locationLabels:
- self._addLocationLabel(labelsElement, labelObject)
- self.root.append(labelsElement)
-
- if self.documentObject.rules:
- if getattr(self.documentObject, "rulesProcessingLast", False):
- attributes = {"processing": "last"}
- else:
- attributes = {}
- self.root.append(ET.Element("rules", attributes))
- for ruleObject in self.documentObject.rules:
- self._addRule(ruleObject)
-
- if self.documentObject.sources:
- self.root.append(ET.Element("sources"))
- for sourceObject in self.documentObject.sources:
- self._addSource(sourceObject)
-
- if self.documentObject.variableFonts:
- variableFontsElement = ET.Element("variable-fonts")
- for variableFont in self.documentObject.variableFonts:
- self._addVariableFont(variableFontsElement, variableFont)
- self.root.append(variableFontsElement)
-
- if self.documentObject.instances:
- self.root.append(ET.Element("instances"))
- for instanceObject in self.documentObject.instances:
- self._addInstance(instanceObject)
-
- if self.documentObject.lib:
- self._addLib(self.root, self.documentObject.lib, 2)
-
- tree = ET.ElementTree(self.root)
- tree.write(
- self.path,
- encoding=encoding,
- method="xml",
- xml_declaration=xml_declaration,
- pretty_print=pretty,
- )
-
- def _getEffectiveFormatTuple(self):
- """Try to use the version specified in the document, or a sufficiently
- recent version to be able to encode what the document contains.
- """
- minVersion = self.documentObject.formatTuple
- if (
- any(
- hasattr(axis, "values")
- or axis.axisOrdering is not None
- or axis.axisLabels
- for axis in self.documentObject.axes
- )
- or self.documentObject.locationLabels
- or any(source.localisedFamilyName for source in self.documentObject.sources)
- or self.documentObject.variableFonts
- or any(
- instance.locationLabel or instance.userLocation
- for instance in self.documentObject.instances
- )
- ):
- if minVersion < (5, 0):
- minVersion = (5, 0)
- return minVersion
-
- def _makeLocationElement(self, locationObject, name=None):
- """Convert Location dict to a locationElement."""
- locElement = ET.Element("location")
- if name is not None:
- locElement.attrib["name"] = name
- validatedLocation = self.documentObject.newDefaultLocation()
- for axisName, axisValue in locationObject.items():
- if axisName in validatedLocation:
- # only accept values we know
- validatedLocation[axisName] = axisValue
- for dimensionName, dimensionValue in validatedLocation.items():
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = dimensionName
- if type(dimensionValue) == tuple:
- dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue[0])
- dimElement.attrib["yvalue"] = self.intOrFloat(dimensionValue[1])
- else:
- dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue)
- locElement.append(dimElement)
- return locElement, validatedLocation
-
- def intOrFloat(self, num):
- if int(num) == num:
- return "%d" % num
- return ("%f" % num).rstrip("0").rstrip(".")
-
- def _addRule(self, ruleObject):
- # if none of the conditions have minimum or maximum values, do not add the rule.
- ruleElement = ET.Element("rule")
- if ruleObject.name is not None:
- ruleElement.attrib["name"] = ruleObject.name
- for conditions in ruleObject.conditionSets:
- conditionsetElement = ET.Element("conditionset")
- for cond in conditions:
- if cond.get("minimum") is None and cond.get("maximum") is None:
- # neither is defined, don't add this condition
- continue
- conditionElement = ET.Element("condition")
- conditionElement.attrib["name"] = cond.get("name")
- if cond.get("minimum") is not None:
- conditionElement.attrib["minimum"] = self.intOrFloat(
- cond.get("minimum")
- )
- if cond.get("maximum") is not None:
- conditionElement.attrib["maximum"] = self.intOrFloat(
- cond.get("maximum")
- )
- conditionsetElement.append(conditionElement)
- if len(conditionsetElement):
- ruleElement.append(conditionsetElement)
- for sub in ruleObject.subs:
- subElement = ET.Element("sub")
- subElement.attrib["name"] = sub[0]
- subElement.attrib["with"] = sub[1]
- ruleElement.append(subElement)
- if len(ruleElement):
- self.root.findall(".rules")[0].append(ruleElement)
-
- def _addAxis(self, axisObject):
- axisElement = ET.Element("axis")
- axisElement.attrib["tag"] = axisObject.tag
- axisElement.attrib["name"] = axisObject.name
- self._addLabelNames(axisElement, axisObject.labelNames)
- if axisObject.map:
- for inputValue, outputValue in axisObject.map:
- mapElement = ET.Element("map")
- mapElement.attrib["input"] = self.intOrFloat(inputValue)
- mapElement.attrib["output"] = self.intOrFloat(outputValue)
- axisElement.append(mapElement)
- if axisObject.axisOrdering or axisObject.axisLabels:
- labelsElement = ET.Element("labels")
- if axisObject.axisOrdering is not None:
- labelsElement.attrib["ordering"] = str(axisObject.axisOrdering)
- for label in axisObject.axisLabels:
- self._addAxisLabel(labelsElement, label)
- axisElement.append(labelsElement)
- if hasattr(axisObject, "minimum"):
- axisElement.attrib["minimum"] = self.intOrFloat(axisObject.minimum)
- axisElement.attrib["maximum"] = self.intOrFloat(axisObject.maximum)
- elif hasattr(axisObject, "values"):
- axisElement.attrib["values"] = " ".join(
- self.intOrFloat(v) for v in axisObject.values
- )
- axisElement.attrib["default"] = self.intOrFloat(axisObject.default)
- if axisObject.hidden:
- axisElement.attrib["hidden"] = "1"
- self.root.findall(".axes")[0].append(axisElement)
-
- def _addAxisLabel(
- self, axisElement: ET.Element, label: AxisLabelDescriptor
- ) -> None:
- labelElement = ET.Element("label")
- labelElement.attrib["uservalue"] = self.intOrFloat(label.userValue)
- if label.userMinimum is not None:
- labelElement.attrib["userminimum"] = self.intOrFloat(label.userMinimum)
- if label.userMaximum is not None:
- labelElement.attrib["usermaximum"] = self.intOrFloat(label.userMaximum)
- labelElement.attrib["name"] = label.name
- if label.elidable:
- labelElement.attrib["elidable"] = "true"
- if label.olderSibling:
- labelElement.attrib["oldersibling"] = "true"
- if label.linkedUserValue is not None:
- labelElement.attrib["linkeduservalue"] = self.intOrFloat(
- label.linkedUserValue
- )
- self._addLabelNames(labelElement, label.labelNames)
- axisElement.append(labelElement)
-
- def _addLabelNames(self, parentElement, labelNames):
- for languageCode, labelName in sorted(labelNames.items()):
- languageElement = ET.Element("labelname")
- languageElement.attrib[XML_LANG] = languageCode
- languageElement.text = labelName
- parentElement.append(languageElement)
-
- def _addLocationLabel(
- self, parentElement: ET.Element, label: LocationLabelDescriptor
- ) -> None:
- labelElement = ET.Element("label")
- labelElement.attrib["name"] = label.name
- if label.elidable:
- labelElement.attrib["elidable"] = "true"
- if label.olderSibling:
- labelElement.attrib["oldersibling"] = "true"
- self._addLabelNames(labelElement, label.labelNames)
- self._addLocationElement(labelElement, userLocation=label.userLocation)
- parentElement.append(labelElement)
-
- def _addLocationElement(
- self,
- parentElement,
- *,
- designLocation: AnisotropicLocationDict = None,
- userLocation: SimpleLocationDict = None,
- ):
- locElement = ET.Element("location")
- for axis in self.documentObject.axes:
- if designLocation is not None and axis.name in designLocation:
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = axis.name
- value = designLocation[axis.name]
- if isinstance(value, tuple):
- dimElement.attrib["xvalue"] = self.intOrFloat(value[0])
- dimElement.attrib["yvalue"] = self.intOrFloat(value[1])
- else:
- dimElement.attrib["xvalue"] = self.intOrFloat(value)
- locElement.append(dimElement)
- elif userLocation is not None and axis.name in userLocation:
- dimElement = ET.Element("dimension")
- dimElement.attrib["name"] = axis.name
- value = userLocation[axis.name]
- dimElement.attrib["uservalue"] = self.intOrFloat(value)
- locElement.append(dimElement)
- if len(locElement) > 0:
- parentElement.append(locElement)
-
- def _addInstance(self, instanceObject):
- instanceElement = ET.Element("instance")
- if instanceObject.name is not None:
- instanceElement.attrib["name"] = instanceObject.name
- if instanceObject.locationLabel is not None:
- instanceElement.attrib["location"] = instanceObject.locationLabel
- if instanceObject.familyName is not None:
- instanceElement.attrib["familyname"] = instanceObject.familyName
- if instanceObject.styleName is not None:
- instanceElement.attrib["stylename"] = instanceObject.styleName
- # add localisations
- if instanceObject.localisedStyleName:
- languageCodes = list(instanceObject.localisedStyleName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedStyleNameElement = ET.Element("stylename")
- localisedStyleNameElement.attrib[XML_LANG] = code
- localisedStyleNameElement.text = instanceObject.getStyleName(code)
- instanceElement.append(localisedStyleNameElement)
- if instanceObject.localisedFamilyName:
- languageCodes = list(instanceObject.localisedFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedFamilyNameElement = ET.Element("familyname")
- localisedFamilyNameElement.attrib[XML_LANG] = code
- localisedFamilyNameElement.text = instanceObject.getFamilyName(code)
- instanceElement.append(localisedFamilyNameElement)
- if instanceObject.localisedStyleMapStyleName:
- languageCodes = list(instanceObject.localisedStyleMapStyleName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue
- localisedStyleMapStyleNameElement = ET.Element("stylemapstylename")
- localisedStyleMapStyleNameElement.attrib[XML_LANG] = code
- localisedStyleMapStyleNameElement.text = (
- instanceObject.getStyleMapStyleName(code)
- )
- instanceElement.append(localisedStyleMapStyleNameElement)
- if instanceObject.localisedStyleMapFamilyName:
- languageCodes = list(instanceObject.localisedStyleMapFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue
- localisedStyleMapFamilyNameElement = ET.Element("stylemapfamilyname")
- localisedStyleMapFamilyNameElement.attrib[XML_LANG] = code
- localisedStyleMapFamilyNameElement.text = (
- instanceObject.getStyleMapFamilyName(code)
- )
- instanceElement.append(localisedStyleMapFamilyNameElement)
-
- if self.effectiveFormatTuple >= (5, 0):
- if instanceObject.locationLabel is None:
- self._addLocationElement(
- instanceElement,
- designLocation=instanceObject.designLocation,
- userLocation=instanceObject.userLocation,
- )
- else:
- # Pre-version 5.0 code was validating and filling in the location
- # dict while writing it out, as preserved below.
- if instanceObject.location is not None:
- locationElement, instanceObject.location = self._makeLocationElement(
- instanceObject.location
- )
- instanceElement.append(locationElement)
- if instanceObject.filename is not None:
- instanceElement.attrib["filename"] = instanceObject.filename
- if instanceObject.postScriptFontName is not None:
- instanceElement.attrib[
- "postscriptfontname"
- ] = instanceObject.postScriptFontName
- if instanceObject.styleMapFamilyName is not None:
- instanceElement.attrib[
- "stylemapfamilyname"
- ] = instanceObject.styleMapFamilyName
- if instanceObject.styleMapStyleName is not None:
- instanceElement.attrib[
- "stylemapstylename"
- ] = instanceObject.styleMapStyleName
- if self.effectiveFormatTuple < (5, 0):
- # Deprecated members as of version 5.0
- if instanceObject.glyphs:
- if instanceElement.findall(".glyphs") == []:
- glyphsElement = ET.Element("glyphs")
- instanceElement.append(glyphsElement)
- glyphsElement = instanceElement.findall(".glyphs")[0]
- for glyphName, data in sorted(instanceObject.glyphs.items()):
- glyphElement = self._writeGlyphElement(
- instanceElement, instanceObject, glyphName, data
- )
- glyphsElement.append(glyphElement)
- if instanceObject.kerning:
- kerningElement = ET.Element("kerning")
- instanceElement.append(kerningElement)
- if instanceObject.info:
- infoElement = ET.Element("info")
- instanceElement.append(infoElement)
- self._addLib(instanceElement, instanceObject.lib, 4)
- self.root.findall(".instances")[0].append(instanceElement)
-
- def _addSource(self, sourceObject):
- sourceElement = ET.Element("source")
- if sourceObject.filename is not None:
- sourceElement.attrib["filename"] = sourceObject.filename
- if sourceObject.name is not None:
- if sourceObject.name.find("temp_master") != 0:
- # do not save temporary source names
- sourceElement.attrib["name"] = sourceObject.name
- if sourceObject.familyName is not None:
- sourceElement.attrib["familyname"] = sourceObject.familyName
- if sourceObject.styleName is not None:
- sourceElement.attrib["stylename"] = sourceObject.styleName
- if sourceObject.layerName is not None:
- sourceElement.attrib["layer"] = sourceObject.layerName
- if sourceObject.localisedFamilyName:
- languageCodes = list(sourceObject.localisedFamilyName.keys())
- languageCodes.sort()
- for code in languageCodes:
- if code == "en":
- continue # already stored in the element attribute
- localisedFamilyNameElement = ET.Element("familyname")
- localisedFamilyNameElement.attrib[XML_LANG] = code
- localisedFamilyNameElement.text = sourceObject.getFamilyName(code)
- sourceElement.append(localisedFamilyNameElement)
- if sourceObject.copyLib:
- libElement = ET.Element("lib")
- libElement.attrib["copy"] = "1"
- sourceElement.append(libElement)
- if sourceObject.copyGroups:
- groupsElement = ET.Element("groups")
- groupsElement.attrib["copy"] = "1"
- sourceElement.append(groupsElement)
- if sourceObject.copyFeatures:
- featuresElement = ET.Element("features")
- featuresElement.attrib["copy"] = "1"
- sourceElement.append(featuresElement)
- if sourceObject.copyInfo or sourceObject.muteInfo:
- infoElement = ET.Element("info")
- if sourceObject.copyInfo:
- infoElement.attrib["copy"] = "1"
- if sourceObject.muteInfo:
- infoElement.attrib["mute"] = "1"
- sourceElement.append(infoElement)
- if sourceObject.muteKerning:
- kerningElement = ET.Element("kerning")
- kerningElement.attrib["mute"] = "1"
- sourceElement.append(kerningElement)
- if sourceObject.mutedGlyphNames:
- for name in sourceObject.mutedGlyphNames:
- glyphElement = ET.Element("glyph")
- glyphElement.attrib["name"] = name
- glyphElement.attrib["mute"] = "1"
- sourceElement.append(glyphElement)
- if self.effectiveFormatTuple >= (5, 0):
- self._addLocationElement(
- sourceElement, designLocation=sourceObject.location
- )
- else:
- # Pre-version 5.0 code was validating and filling in the location
- # dict while writing it out, as preserved below.
- locationElement, sourceObject.location = self._makeLocationElement(
- sourceObject.location
- )
- sourceElement.append(locationElement)
- self.root.findall(".sources")[0].append(sourceElement)
-
- def _addVariableFont(
- self, parentElement: ET.Element, vf: VariableFontDescriptor
- ) -> None:
- vfElement = ET.Element("variable-font")
- vfElement.attrib["name"] = vf.name
- if vf.filename is not None:
- vfElement.attrib["filename"] = vf.filename
- if vf.axisSubsets:
- subsetsElement = ET.Element("axis-subsets")
- for subset in vf.axisSubsets:
- subsetElement = ET.Element("axis-subset")
- subsetElement.attrib["name"] = subset.name
- # Mypy doesn't support narrowing union types via hasattr()
- # https://mypy.readthedocs.io/en/stable/type_narrowing.html
- # TODO(Python 3.10): use TypeGuard
- if hasattr(subset, "userMinimum"):
- subset = cast(RangeAxisSubsetDescriptor, subset)
- if subset.userMinimum != -math.inf:
- subsetElement.attrib["userminimum"] = self.intOrFloat(
- subset.userMinimum
- )
- if subset.userMaximum != math.inf:
- subsetElement.attrib["usermaximum"] = self.intOrFloat(
- subset.userMaximum
- )
- if subset.userDefault is not None:
- subsetElement.attrib["userdefault"] = self.intOrFloat(
- subset.userDefault
- )
- elif hasattr(subset, "userValue"):
- subset = cast(ValueAxisSubsetDescriptor, subset)
- subsetElement.attrib["uservalue"] = self.intOrFloat(
- subset.userValue
- )
- subsetsElement.append(subsetElement)
- vfElement.append(subsetsElement)
- self._addLib(vfElement, vf.lib, 4)
- parentElement.append(vfElement)
-
- def _addLib(self, parentElement: ET.Element, data: Any, indent_level: int) -> None:
- if not data:
- return
- libElement = ET.Element("lib")
- libElement.append(plistlib.totree(data, indent_level=indent_level))
- parentElement.append(libElement)
-
- def _writeGlyphElement(self, instanceElement, instanceObject, glyphName, data):
- glyphElement = ET.Element("glyph")
- if data.get("mute"):
- glyphElement.attrib["mute"] = "1"
- if data.get("unicodes") is not None:
- glyphElement.attrib["unicode"] = " ".join(
- [hex(u) for u in data.get("unicodes")]
- )
- if data.get("instanceLocation") is not None:
- locationElement, data["instanceLocation"] = self._makeLocationElement(
- data.get("instanceLocation")
- )
- glyphElement.append(locationElement)
- if glyphName is not None:
- glyphElement.attrib["name"] = glyphName
- if data.get("note") is not None:
- noteElement = ET.Element("note")
- noteElement.text = data.get("note")
- glyphElement.append(noteElement)
- if data.get("masters") is not None:
- mastersElement = ET.Element("masters")
- for m in data.get("masters"):
- masterElement = ET.Element("master")
- if m.get("glyphName") is not None:
- masterElement.attrib["glyphname"] = m.get("glyphName")
- if m.get("font") is not None:
- masterElement.attrib["source"] = m.get("font")
- if m.get("location") is not None:
- locationElement, m["location"] = self._makeLocationElement(
- m.get("location")
- )
- masterElement.append(locationElement)
- mastersElement.append(masterElement)
- glyphElement.append(mastersElement)
- return glyphElement
-
-
-class BaseDocReader(LogMixin):
- axisDescriptorClass = AxisDescriptor
- discreteAxisDescriptorClass = DiscreteAxisDescriptor
- axisLabelDescriptorClass = AxisLabelDescriptor
- locationLabelDescriptorClass = LocationLabelDescriptor
- ruleDescriptorClass = RuleDescriptor
- sourceDescriptorClass = SourceDescriptor
- variableFontsDescriptorClass = VariableFontDescriptor
- valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor
- rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor
- instanceDescriptorClass = InstanceDescriptor
-
- def __init__(self, documentPath, documentObject):
- self.path = documentPath
- self.documentObject = documentObject
- tree = ET.parse(self.path)
- self.root = tree.getroot()
- self.documentObject.formatVersion = self.root.attrib.get("format", "3.0")
- self._axes = []
- self.rules = []
- self.sources = []
- self.instances = []
- self.axisDefaults = {}
- self._strictAxisNames = True
-
- @classmethod
- def fromstring(cls, string, documentObject):
- f = BytesIO(tobytes(string, encoding="utf-8"))
- self = cls(f, documentObject)
- self.path = None
- return self
-
- def read(self):
- self.readAxes()
- self.readLabels()
- self.readRules()
- self.readVariableFonts()
- self.readSources()
- self.readInstances()
- self.readLib()
-
- def readRules(self):
- # we also need to read any conditions that are outside of a condition set.
- rules = []
- rulesElement = self.root.find(".rules")
- if rulesElement is not None:
- processingValue = rulesElement.attrib.get("processing", "first")
- if processingValue not in {"first", "last"}:
- raise DesignSpaceDocumentError(
- " processing attribute value is not valid: %r, "
- "expected 'first' or 'last'" % processingValue
- )
- self.documentObject.rulesProcessingLast = processingValue == "last"
- for ruleElement in self.root.findall(".rules/rule"):
- ruleObject = self.ruleDescriptorClass()
- ruleName = ruleObject.name = ruleElement.attrib.get("name")
- # read any stray conditions outside a condition set
- externalConditions = self._readConditionElements(
- ruleElement,
- ruleName,
- )
- if externalConditions:
- ruleObject.conditionSets.append(externalConditions)
- self.log.info(
- "Found stray rule conditions outside a conditionset. "
- "Wrapped them in a new conditionset."
- )
- # read the conditionsets
- for conditionSetElement in ruleElement.findall(".conditionset"):
- conditionSet = self._readConditionElements(
- conditionSetElement,
- ruleName,
- )
- if conditionSet is not None:
- ruleObject.conditionSets.append(conditionSet)
- for subElement in ruleElement.findall(".sub"):
- a = subElement.attrib["name"]
- b = subElement.attrib["with"]
- ruleObject.subs.append((a, b))
- rules.append(ruleObject)
- self.documentObject.rules = rules
-
- def _readConditionElements(self, parentElement, ruleName=None):
- cds = []
- for conditionElement in parentElement.findall(".condition"):
- cd = {}
- cdMin = conditionElement.attrib.get("minimum")
- if cdMin is not None:
- cd["minimum"] = float(cdMin)
- else:
- # will allow these to be None, assume axis.minimum
- cd["minimum"] = None
- cdMax = conditionElement.attrib.get("maximum")
- if cdMax is not None:
- cd["maximum"] = float(cdMax)
- else:
- # will allow these to be None, assume axis.maximum
- cd["maximum"] = None
- cd["name"] = conditionElement.attrib.get("name")
- # # test for things
- if cd.get("minimum") is None and cd.get("maximum") is None:
- raise DesignSpaceDocumentError(
- "condition missing required minimum or maximum in rule"
- + (" '%s'" % ruleName if ruleName is not None else "")
- )
- cds.append(cd)
- return cds
-
- def readAxes(self):
- # read the axes elements, including the warp map.
- axesElement = self.root.find(".axes")
- if axesElement is not None and "elidedfallbackname" in axesElement.attrib:
- self.documentObject.elidedFallbackName = axesElement.attrib[
- "elidedfallbackname"
- ]
- axisElements = self.root.findall(".axes/axis")
- if not axisElements:
- return
- for axisElement in axisElements:
- if (
- self.documentObject.formatTuple >= (5, 0)
- and "values" in axisElement.attrib
- ):
- axisObject = self.discreteAxisDescriptorClass()
- axisObject.values = [
- float(s) for s in axisElement.attrib["values"].split(" ")
- ]
- else:
- axisObject = self.axisDescriptorClass()
- axisObject.minimum = float(axisElement.attrib.get("minimum"))
- axisObject.maximum = float(axisElement.attrib.get("maximum"))
- axisObject.default = float(axisElement.attrib.get("default"))
- axisObject.name = axisElement.attrib.get("name")
- if axisElement.attrib.get("hidden", False):
- axisObject.hidden = True
- axisObject.tag = axisElement.attrib.get("tag")
- for mapElement in axisElement.findall("map"):
- a = float(mapElement.attrib["input"])
- b = float(mapElement.attrib["output"])
- axisObject.map.append((a, b))
- for labelNameElement in axisElement.findall("labelname"):
- # Note: elementtree reads the "xml:lang" attribute name as
- # '{http://www.w3.org/XML/1998/namespace}lang'
- for key, lang in labelNameElement.items():
- if key == XML_LANG:
- axisObject.labelNames[lang] = tostr(labelNameElement.text)
- labelElement = axisElement.find(".labels")
- if labelElement is not None:
- if "ordering" in labelElement.attrib:
- axisObject.axisOrdering = int(labelElement.attrib["ordering"])
- for label in labelElement.findall(".label"):
- axisObject.axisLabels.append(self.readAxisLabel(label))
- self.documentObject.axes.append(axisObject)
- self.axisDefaults[axisObject.name] = axisObject.default
-
- def readAxisLabel(self, element: ET.Element):
- xml_attrs = {
- "userminimum",
- "uservalue",
- "usermaximum",
- "name",
- "elidable",
- "oldersibling",
- "linkeduservalue",
- }
- unknown_attrs = set(element.attrib) - xml_attrs
- if unknown_attrs:
- raise DesignSpaceDocumentError(
- f"label element contains unknown attributes: {', '.join(unknown_attrs)}"
- )
-
- name = element.get("name")
- if name is None:
- raise DesignSpaceDocumentError("label element must have a name attribute.")
- valueStr = element.get("uservalue")
- if valueStr is None:
- raise DesignSpaceDocumentError(
- "label element must have a uservalue attribute."
- )
- value = float(valueStr)
- minimumStr = element.get("userminimum")
- minimum = float(minimumStr) if minimumStr is not None else None
- maximumStr = element.get("usermaximum")
- maximum = float(maximumStr) if maximumStr is not None else None
- linkedValueStr = element.get("linkeduservalue")
- linkedValue = float(linkedValueStr) if linkedValueStr is not None else None
- elidable = True if element.get("elidable") == "true" else False
- olderSibling = True if element.get("oldersibling") == "true" else False
- labelNames = {
- lang: label_name.text or ""
- for label_name in element.findall("labelname")
- for attr, lang in label_name.items()
- if attr == XML_LANG
- # Note: elementtree reads the "xml:lang" attribute name as
- # '{http://www.w3.org/XML/1998/namespace}lang'
- }
- return self.axisLabelDescriptorClass(
- name=name,
- userValue=value,
- userMinimum=minimum,
- userMaximum=maximum,
- elidable=elidable,
- olderSibling=olderSibling,
- linkedUserValue=linkedValue,
- labelNames=labelNames,
- )
-
- def readLabels(self):
- if self.documentObject.formatTuple < (5, 0):
- return
-
- xml_attrs = {"name", "elidable", "oldersibling"}
- for labelElement in self.root.findall(".labels/label"):
- unknown_attrs = set(labelElement.attrib) - xml_attrs
- if unknown_attrs:
- raise DesignSpaceDocumentError(
- f"Label element contains unknown attributes: {', '.join(unknown_attrs)}"
- )
-
- name = labelElement.get("name")
- if name is None:
- raise DesignSpaceDocumentError(
- "label element must have a name attribute."
- )
- designLocation, userLocation = self.locationFromElement(labelElement)
- if designLocation:
- raise DesignSpaceDocumentError(
- f'
-
-
Q: Is AutoCAD Plant 3D 2019 crack keygen safe and working?
-
A: Yes, AutoCAD Plant 3D 2019 crack keygen (x86x64) !Latest .rar is safe and working. It is tested and verified by our team of experts and guaranteed to be free from viruses, malware, spyware, or other harmful programs. It is also easy to use and compatible with both x86 (32-bit) and x64 (64-bit) versions of Windows operating systems.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Aomei Dynamic Disk Converter 3.5 Full Version Free 154 Features Benefits and Reviews of the Dynamic Disk Converter.md b/spaces/raedeXanto/academic-chatgpt-beta/Aomei Dynamic Disk Converter 3.5 Full Version Free 154 Features Benefits and Reviews of the Dynamic Disk Converter.md
deleted file mode 100644
index 034b1ea81522082e3ca7ac48c35eb2bb2cf76767..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Aomei Dynamic Disk Converter 3.5 Full Version Free 154 Features Benefits and Reviews of the Dynamic Disk Converter.md
+++ /dev/null
@@ -1,225 +0,0 @@
-
-
AUTODATA 3.45 Crack FULL Serial Key: A Comprehensive Guide
-
If you are a car enthusiast, a mechanic, or a car-care center owner, you know how important it is to have accurate and reliable information about various vehicles. You need to know how to repair, service, diagnose, and maintain different car models and brands. You also need to have access to wiring diagrams, labor times, and other essential data.
-
That's where AUTODATA 3.45 comes in handy. AUTODATA 3.45 is a popular program that contains all the information you need for car-care centers. It covers systems of injection of petrol and some diesel engines (PINDATA), as well as parameters for adjustment of disorder-convergence, installations of belts and timing chains, repairing of air conditioners, airbags, ABS and other systems of automobiles manufactured in Europe. The program also has wiring diagrams and layout of nodes.
However, AUTODATA 3.45 is not a free program. You need to pay a subscription fee to use it. But what if you don't want to spend money on it? What if you want to enjoy the full version of AUTODATA 3.45 without any limitations? Well, there is a solution for that: AUTODATA 3.45 Crack FULL Serial Key.
-
AUTODATA 3.45 Crack FULL Serial Key is a software that allows you to activate the full version of AUTODATA 3.45 without paying anything. It bypasses the security features of the program and unlocks all its functions and features. With AUTODATA 3.45 Crack FULL Serial Key, you can use AUTODATA 3.45 as much as you want without any restrictions.
-
In this article, we will show you how to download, install, and use AUTODATA 3.45 Crack FULL Serial Key in a simple and easy way. We will also tell you about the features and benefits of AUTODATA 3.45, as well as some tips and tricks for using it effectively.
-
So, if you are interested in learning more about AUTODATA 3.45 Crack FULL Serial Key, keep reading this comprehensive guide.
-
Features and Benefits of AUTODATA 3.45
-
AUTODATA 3.45 is a powerful program that provides you with all the information you need for car-care centers. It has many features and benefits that make it a must-have tool for anyone who works with cars.
-
Some of the features and benefits of AUTODATA 3.45 are:
-
-
Repair instructions: AUTODATA 3.45 gives you detailed and step-by-step instructions on how to repair various parts and systems of different vehicles.
-
Service information: AUTODATA 3.45 provides you with service schedules, intervals, procedures, specifications, capacities, fluids, lubricants, and more for different vehicles.
-
Diagnostics: AUTODATA 3.45 helps you diagnose problems and faults with different vehicles by giving you codes, descriptions, causes, symptoms, tests, solutions, wiring diagrams, etc.
-
Wiring diagrams: AUTODATA 3.45 shows you color-coded wiring diagrams for different vehicles that help you understand how electrical components are connected and work together.
-
Labor times: AUTODATA 3.45 gives you estimated labor times for different tasks and operations on different vehicles based on industry standards.
-
System requirements: AUTODATA 3.45 works on Windows XP, Windows Vista, Windows 7, Windows 8, Windows 10. It requires at least 1 GB of RAM, 5 GB of hard disk space, and a DVD-ROM drive. It also supports multiple languages such as English, French, German, Spanish, Italian, Portuguese, etc.
-
Medicine options: AUTODATA 3.45 can be activated with a crack file or a serial key that unlocks the full version of the program without any limitations or restrictions.
-
-
How to Download and Install AUTODATA 3.45 Crack FULL Serial Key
-
If you want to use AUTODATA 3.45 Crack FULL Serial Key, you need to download and install it first. Here are the steps to do so:
-
Step 1: Download the installer from a reliable source
-
The first step is to download the installer of AUTODATA 3 . 4 5 from a reliable source. You can find many sources online that offer the installer for free, but be careful not to download a fake or infected file. One of the sources that we recommend is Archive.org, where you can find the installer of AUTODATA 3 . 4 5 + Crack FULL [TechTools.NET]. This is a trusted source that has been verified by many users. To download the installer from Archive.org, follow these steps:
-
AUTODATA 3.45 + Crack FULL [TechTools.NET] torrent download
-AUTODATA 3.45 Full Keygen Serial Crack License Included free download
-AUTODATA 3.45 + Crack FULL [TechTools.NET] : Free Download, Borrow, and Streaming
-AutoData 3.45 Crack Free Download (Latest) blog post
-AUTODATA 3.45 + Crack FULL installation instructions
-AUTODATA 3.45 Full Version with Crack and Keygen
-AUTODATA 3.45 + Crack FULL [TechTools.NET] magnet link
-AUTODATA 3.45 + Crack FULL system requirements
-AUTODATA 3.45 + Crack FULL [TechTools.NET] review
-AutoData 3.45 Crack Free Download (Latest) installer
-AUTODATA 3.45 Full Keygen Serial Crack License Included blogspot
-AUTODATA 3.45 + Crack FULL [TechTools.NET] archive.org
-AUTODATA 3.45 + Crack FULL description
-AutoData 3.45 Crack Free Download (Latest) wordpress
-AUTODATA 3.45 Full Keygen Serial Crack License Included repair instructions
-AUTODATA 3.45 + Crack FULL [TechTools.NET] seeders and leechers
-AUTODATA 3.45 + Crack FULL medicine
-AutoData 3.45 Crack Free Download (Latest) working version
-AUTODATA 3.45 Full Keygen Serial Crack License Included service information
-AUTODATA 3.45 + Crack FULL [TechTools.NET] solidtorrents.to
-AUTODATA 3.45 + Crack FULL diagnostics
-AutoData 3.45 Crack Free Download (Latest) activation key
-AUTODATA 3.45 Full Keygen Serial Crack License Included wiring diagrams
-AUTODATA 3.45 + Crack FULL [TechTools.NET] uploaded by Thumper
-AUTODATA 3.45 + Crack FULL labor times for all car brands
-AutoData 3.45 Crack Free Download (Latest) license key
-AUTODATA 3.45 Full Keygen Serial Crack License Included autodata program for car-care centers
-AUTODATA 3.45 + Crack FULL [TechTools.NET] TechTools / ThumperDC code
-AUTODATA 3.45 + Crack FULL information on systems of injection of petrol and some diesel engines (PINDATA)
-AutoData 3.45 Crack Free Download (Latest) serial key
-AUTODATA 3.45 Full Keygen Serial Crack License Included parameters for adjustment of disorder-convergence
-AUTODATA 3.45 + Crack FULL [TechTools.NET] rar file size
-AUTODATA 3.45 + Crack FULL repairing of air conditioners, airbags, ABS and other systems of automobiles manufactured in Europe
-AutoData 3.45 Crack Free Download (Latest) crack file download link
-AUTODATA 3.45 Full Keygen Serial Crack License Included installations of belts and timing chains
-AUTODATA 3.45 + Crack FULL [TechTools.NET] jpg file size
-AUTODATA 3.45 + Crack FULL Autodata wiring and layout of nodes
-AutoData 3.45 Crack Free Download (Latest) how to use guide
-AUTODATA 3.45 Full Keygen Serial Crack License Included Windows xp,7,8,10 compatible
-AUTODATA 3.45 + Crack FULL [TechTools.NET] txt file size
-AUTODATA 3.45 + Crack FULL Language: English
-AutoData 3.45 Crack Free Download (Latest) no extra steps during the installation process
-AUTODATA 3.45 Full Keygen Serial Crack License Included autorepairtoday.blogspot.com
-AUTODATA 3.45 + Crack FULL [TechTools.NET] tracker list
-AUTODATA 3.45 + Crack FULL visit us at TechTools
-AutoData 3.45 Crack Free Download (Latest) not working version
-AUTODATA 3.45 Full Keygen Serial Crack License Included internet archive HTML5 uploader
-AUTODATA 3.45 + Crack FULL how to download torrent
-AutoData 3.45 Crack Free Download (Latest) IP logger warning
Click on "DOWNLOAD OPTIONS" on the right side of the page.
-
Select "TORRENT" from the list of options.
-
A torrent file will be downloaded to your computer.
-
Open the torrent file with a torrent client such as uTorrent or BitTorrent.
-
The torrent client will start downloading the installer of AUTODATA 3.45 + Crack FULL [TechTools.NET] to your computer.
-
Wait until the download is complete.
-
-
Step 2: Run the installer and follow the instructions
-
The next step is to run the installer and follow the instructions on your screen. To do so, follow these steps:
-
-
Go to the folder where you downloaded the installer of AUTODATA 3 . 4 5 + Crack FULL [TechTools.NET].
-
Right-click on "AUTODATA_345_Crack_FULL_TechTools.NET.rar" and select "Extract Here" or "Extract All".
-
A new folder named "AUTODATA_345_Crack_FULL_TechTools.NET" will be created in the same location.
-
Open the folder "AUTODATA_345_Crack_FULL_TechTools.NET".
-
Double-click on "AUTORUN.EXE" to launch the installer.
-
Select your preferred language from the drop-down menu.
-
Click on "Next".
-
Read and accept the license agreement by clicking on "I Agree".
-
Select your destination folder by clicking on "Browse" or leave it as default.
-
Click on "Next".
-
Step 2: Run the installer and follow the instructions (continued)
-
-
Click on "Next".
-
Click on "Install" to start the installation process.
-
Wait until the installation is complete.
-
Click on "Finish" to exit the installer.
-
-
Step 3: Copy and paste the crack file into the installation folder
-
The third step is to copy and paste the crack file into the installation folder of AUTODATA 3 . 4 5. The crack file is a file that modifies the program files of AUTODATA 3 . 4 5 and activates the full version of the program without any limitations or restrictions. To copy and paste the crack file into the installation folder, follow these steps:
-
-
Go back to the folder "AUTODATA_345_Crack_FULL_TechTools.NET".
-
Open the folder "Crack".
-
Right-click on "ADBCD.exe" and select "Copy".
-
Go to the folder where you installed AUTODATA 3.45. The default location is "C:\Program Files (x86)\Autodata Limited\Autodata 3.45".
-
Right-click on an empty space and select "Paste".
-
A message will appear asking you if you want to replace the existing file. Click on "Yes".
-
The crack file will be copied and pasted into the installation folder.
-
-
Step 4: Enjoy the full version of AUTODATA 3.45
-
The final step is to enjoy the full version of AUTODATA 3 . 4 5. You can now use AUTODATA 3 . 4 5 as much as you want without any restrictions or limitations. You can access all its features and functions and get all the information you need for car-care centers. To launch AUTODATA 3 . 4 5, follow these steps:
-
-
Go to the folder where you installed AUTODATA 3.45.
-
Double-click on "ADBCD.exe" to launch the program.
-
A window will appear asking you for a serial key. Enter any serial key you want and click on "OK". For example, you can enter "1234-5678-9012-3456".
-
The program will start and you will see the main menu.
-
Congratulations! You have successfully installed and activated the full version of AUTODATA 3.45 with a crack file.
-
-
How to Use AUTODATA 3.45 Crack FULL Serial Key
-
Now that you have installed and activated the full version of AUTODATA 3.45 with a crack file, you might be wondering how to use it effectively. In this section, we will show you how to use AUTODATA 3.45 Crack FULL Serial Key in a simple and easy way. We will also give you some tips and tricks for using it efficiently.
-
How to access the main menu and navigate the program
-
The main menu of AUTODATA 3.45 is where you can access all the features and functions of the program. It consists of four tabs: Vehicle Selection, Technical Data, Wiring Diagrams, and Labor Times. To access the main menu and navigate the program, follow these steps:
-
-
Launch AUTODATA 3.45 by double-clicking on "ADBCD.exe" in the installation folder.
-
Enter any serial key you want and click on "OK".
-
You will see the main menu of AUTODATA 3.45.
-
To select a tab, click on it with your mouse or use the keyboard shortcuts: F1 for Vehicle Selection, F2 for Technical Data, F3 for Wiring Diagrams, and F4 for Labor Times.
-
To switch between tabs, click on another tab or use the keyboard shortcuts: Ctrl+Tab or Ctrl+Shift+Tab.
-
To exit the program, click on the red X button in the top right corner or use the keyboard shortcut: Alt+F4.
-
-
How to select a vehicle and view its data
-
The first thing you need to do before using AUTODATA 3.45 is to select a vehicle that you want to work with. You can select a vehicle by its make, model, year, engine type, etc. Once you select a vehicle, you can view its data such as repair instructions, service information, diagnostics, wiring diagrams, labor times, etc. To select a vehicle and view its data, follow these steps:
-
-
Access the main menu of AUTODATA 3.45.
-
Select the tab "Vehicle Selection" by clicking on it or pressing F1.
-
You will see a list of vehicle makes on the left side of the screen.
-
Select a vehicle make by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a list of vehicle models on the right side of the screen.
-
Select a vehicle model by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a list of vehicle variants on the bottom of the screen.
-
Select a vehicle variant by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a window with information about your selected vehicle such as make, model, variant, engine code, fuel type, etc.
-
How to select a vehicle and view its data (continued)
-
-
Click on the button "Technical Data" on the top of the window or press F2.
-
You will see a list of categories of technical data on the left side of the window such as engine management, brakes, steering, suspension, etc.
-
Select a category by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a list of subcategories of technical data on the right side of the window such as fuel system, ignition system, sensors, actuators, etc.
-
Select a subcategory by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a window with information about your selected subcategory such as codes, descriptions, specifications, procedures, diagrams, etc.
-
To view more data about your selected vehicle, click on the button "Wiring Diagrams" on the top of the window or press F3.
-
You will see a list of systems of wiring diagrams on the left side of the window such as engine management, ABS, air conditioning, etc.
-
Select a system by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a list of components of wiring diagrams on the right side of the window such as ECU, sensors, relays, fuses, etc.
-
Select a component by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a window with a color-coded wiring diagram of your selected component and its connections to other components.
-
To view more data about your selected vehicle, click on the button "Labor Times" on the top of the window or press F4.
-
You will see a list of operations and tasks on the left side of the window such as engine overhaul, clutch replacement, brake service, etc.
-
Select an operation or task by clicking on it with your mouse or using the arrow keys on your keyboard.
-
You will see a window with information about your selected operation or task such as labor time, difficulty level, tools required, notes, etc.
-
-
Tips and Tricks for Using AUTODATA 3.45 Crack FULL Serial Key
-
AUTODATA 3.45 Crack FULL Serial Key is a powerful and useful program that can help you improve your car-care skills and knowledge. However, to get the most out of it, you need to know some tips and tricks that can make your experience easier and more efficient. Here are some tips and tricks for using AUTODATA 3.45 Crack FULL Serial Key:
-
How to update the program and its data
-
AUTODATA 3.45 Crack FULL Serial Key is not an official version of AUTODATA 3.45. It is a cracked version that bypasses the security features of the program and unlocks all its functions and features. However, this also means that it does not receive any updates from the developers. Therefore, if you want to update the program and its data, you need to download and install a newer version of AUTODATA or a newer version of AUTODATA Crack FULL Serial Key from a reliable source. To do so, follow these steps:
-
-
Go to a reliable source that offers a newer version of AUTODATA or a newer version of AUTODATA Crack FULL Serial Key for free. For example, you can go to SolidTorrents, where you can find a newer version of AUTODATA + Crack FULL [TechTools.NET].
-
Download the installer of the newer version of AUTODATA or the newer version of AUTODATA Crack FULL Serial Key from the source.
-
Run the installer and follow the instructions on your screen.
-
Copy and paste the crack file into the installation folder of the newer version of AUTODATA.
-
Enjoy the updated version of AUTODATA Crack FULL Serial Key.
-
-
How to troubleshoot common issues and errors
-
AUTODATA 3.45 Crack FULL Serial Key is not a perfect program. It may encounter some issues and errors that can affect its performance and functionality. Some of the common issues and errors that you may face while using AUTODATA 3.45 Crack FULL Serial Key are:
-
-
How to troubleshoot common issues and errors (continued)
-
-
The hardware information does not match with your dongle: This is an error message that may appear when you launch AUTODATA 3.45 Crack FULL Serial Key. It means that the program cannot recognize your dongle or USB device that contains the serial key or the crack file. To fix this error, you need to make sure that your dongle or USB device is connected properly to your computer and that it contains the correct serial key or crack file. You can also try to use a different dongle or USB device or a different serial key or crack file.
-
The program does not start or crashes: This is an issue that may occur when you launch or use AUTODATA 3.45 Crack FULL Serial Key. It means that the program is not compatible with your system or that it has been corrupted by a virus or malware. To fix this issue, you need to check your system requirements and compatibility and make sure that they meet the minimum requirements of AUTODATA 3.45. You also need to scan your computer for viruses and malware and remove any threats that may affect the program. You can also try to reinstall the program and the crack file.
-
The data is outdated or incomplete: This is an issue that may occur when you use AUTODATA 3.45 Crack FULL Serial Key. It means that the program does not have the latest or the most complete data for your selected vehicle or category. To fix this issue, you need to update the program and its data by downloading and installing a newer version of AUTODATA or a newer version of AUTODATA Crack FULL Serial Key from a reliable source.
-
-
How to customize the program settings and preferences
-
AUTODATA 3.45 Crack FULL Serial Key is a flexible and customizable program that allows you to adjust its settings and preferences according to your needs and preferences. You can change various aspects of the program such as language, units, display, print, etc. To customize the program settings and preferences, follow these steps:
-
-
Access the main menu of AUTODATA 3.45.
-
Select the tab "Technical Data" by clicking on it or pressing F2.
-
Click on the button "Settings" on the top of the window or press F5.
-
You will see a window with various options for settings and preferences.
-
Select an option by clicking on it with your mouse or using the arrow keys on your keyboard.
-
Change the value of the option by clicking on it with your mouse or using the arrow keys on your keyboard.
-
Click on "OK" to save your changes and close the window.
-
-
Conclusion
-
AUTODATA 3.45 Crack FULL Serial Key is a powerful and useful program that provides you with all the information you need for car-care centers. It has many features and benefits that make it a must-have tool for anyone who works with cars. However, AUTODATA 3.45 is not a free program. You need to pay a subscription fee to use it.
-
But what if you don't want to spend money on it? What if you want to enjoy the full version of AUTODATA 3.45 without any limitations? Well, there is a solution for that: AUTODATA 3.45 Crack FULL Serial Key.
-
AUTODATA 3.45 Crack FULL Serial Key is a software that allows you to activate the full version of AUTODATA 3.45 without paying anything. It bypasses the security features of the program and unlocks all its functions and features. With AUTODATA 3.45 Crack FULL Serial Key, you can use AUTODATA 3.45 as much as you want without any restrictions.
-
In this article, we have shown you how to download, install, and use AUTODATA 3.45 Crack FULL Serial Key in a simple and easy way. We have also told you about the features and benefits of AUTODATA 3.45, as well as some tips and tricks for using it effectively.
-
So, what are you waiting for? Download AUTODATA 3.45 Crack FULL Serial Key today and improve your car-care skills and knowledge.
-
Frequently Asked Questions
-
Here are some frequently asked questions about AUTODATA 3.45 Crack FULL Serial Key:
-
Q: Is AUTODATA 3.45 Crack FULL Serial Key safe to use?
-
A: Yes, AUTODATA 3.45 Crack FULL Serial Key is safe to use as long as you download it from a reliable source and scan it for viruses and malware before using it.
-
Q: Is AUTODATA 3.45 Crack FULL Serial Key legal to use?
-
Q: Is AUTODATA 3.45 Crack FULL Serial Key legal to use? (continued)
-
A: No, AUTODATA 3.45 Crack FULL Serial Key is not legal to use as it violates the terms and conditions of AUTODATA 3.45. It is a pirated version of the program that infringes the intellectual property rights of the developers. Using AUTODATA 3.45 Crack FULL Serial Key may result in legal consequences such as fines, lawsuits, or criminal charges.
-
Q: What are the alternatives to AUTODATA 3.45 Crack FULL Serial Key?
-
A: If you don't want to use AUTODATA 3.45 Crack FULL Serial Key, you have two alternatives: either pay for the subscription fee of AUTODATA 3.45 or use another program that provides similar information for car-care centers. Some of the programs that you can use instead of AUTODATA 3.45 are:
-
-
Alldata: Alldata is a leading provider of automotive repair information and solutions for professional automotive service centers. It offers online access to OEM repair information, diagnostic tools, maintenance schedules, wiring diagrams, labor times, etc.
-
Mitchell1: Mitchell1 is a provider of software and services for the automotive industry. It offers online and offline solutions for repair information, diagnostics, estimating, management, marketing, etc.
-
HaynesPro: HaynesPro is a provider of technical data and repair information for cars and light commercial vehicles. It offers online access to vehicle identification, service schedules, repair manuals, wiring diagrams, diagnostics, etc.
-
-
Q: How can I contact the support team of AUTODATA 3.45 Crack FULL Serial Key?
-
A: You cannot contact the support team of AUTODATA 3.45 Crack FULL Serial Key as it is not an official version of AUTODATA 3.45. It is a cracked version that does not have any support or customer service from the developers. If you have any issues or questions about AUTODATA 3.45 Crack FULL Serial Key, you can try to find answers online or ask other users who have used it.
-
Q: How can I give feedback or suggestions about AUTODATA 3.45 Crack FULL Serial Key?
-
A: You cannot give feedback or suggestions about AUTODATA 3.45 Crack FULL Serial Key as it is not an official version of AUTODATA 3.45. It is a cracked version that does not have any communication or interaction with the developers. If you have any feedback or suggestions about AUTODATA 3.45 Crack FULL Serial Key, you can share them online or with other users who have used it.
-
ed
-
This is the end of the article on AUTODATA 3.45 Crack FULL Serial Key. I hope you found it informative and helpful.
0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Desperados 3-John Cooper HELLDORADO K A T O License Key.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Desperados 3-John Cooper HELLDORADO K A T O License Key.md
deleted file mode 100644
index 7cbfed4a9e2e941d63944c9d73ecfd21d0e7e1a2..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Desperados 3-John Cooper HELLDORADO K A T O License Key.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
Desperados 3-John Cooper HELLDORADO K A T O License Key
-
-Esfand 19, 1395 AP - ALCE was a script; NEXT is a plugin. This means that it uses the inner workings in 32-bit mode without color quantization and local stretching. At the same time, NAL is an alpha channel. But why is that? Due to the fact that it uses internal layers and colors that must be the same for a short time, NAL must be able to use more than one color for quite a long time. Therefore, he can use the inner layers for this. When set to ALCE as the default, ALCE is the alpha channel. So NAL cannot be set as the default. 8a78ff9644
-
-
-
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Couple Cam Sexy Web.md b/spaces/rorallitri/biomedical-language-models/logs/Couple Cam Sexy Web.md
deleted file mode 100644
index 79ed449b97f0e1c34666decf4ba9afbed0cfed17..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Couple Cam Sexy Web.md
+++ /dev/null
@@ -1,33 +0,0 @@
-
-
Stripchat is an 18+ LIVE sex & entertainment community. You can watch streams from amateur & professional models for absolutely free. Browse through thousands of open-minded people: naked girls, guys, transsexuals and couples performing live sex shows.
Looking for ways to turn up the heat in the bedroom with your partner? Cam4 couples cams feature couples showcasing numerous ways to have sex, and is a definite fan favorite cam show category. Watch as live couples get creative and do all kinds of tantalizing things to each other. Let Cam4 couples sex cams inspire you to play along in real-life and take your bedroom sex to the next level. Get naughty with your partner or mistress by sex chatting these horny couples from around the world, and allow yourselves to roleplay, emulate and step out of your sex comfort zone.
-
CAM4 makes sure that visitors and members have plenty of options to choose from when it comes to couple cams. Another top choice are the private shows. Private shows on CAM4 are often by invite only or you can buy tokens to join. Members also have the privilege to join these private shows for free.
-
Many people are particular about ages. There are those who want to see 18+ teen couples having sex while some prefer those who are older or more mature because of their experience. Sometimes, a 18+ teen broadcaster pairs up with someone older or it could be a cam show involving a cougar or a MILF with a younger partner.
-
-
All of the models appearing on the Couples Webcam shows on UNCams are consenting adults. You can find tons of straight couples, lesbian couples, gays, and even orgies here on UNCams! Seriously, if you're a straight guy, I'm sure you would love being able to join a room with two super hot lesbians having fun together in the most arousing positions.
-
Every free couples cam on UNCams would either be the best sexual education material you'll ever find featuring live couple sex or another fun cam show you'll be able to just have fun watching for free.
-
This is like a live couple porn site actually. Instead of just watching your usual porn, why don't you take control of the plot and tell the "cam couple" to fuck each other exactly the way you want them. In a way, each of the couples cams is a sex show where you are the director and the story writer. No need for an elaborate production.
-
One of the best things about U.N. Cams is that we have models appearing from so many countries. There are hot gay couples from Africa, naked college lesbians from Europe, Asian couples hosting private shows, and so much more variations of couples. You name it, you can find it here.
-
Aside from all possible sorts of webcam couples you can watch on UNCams, you can also search for the couples who share the same kinks you do or models who fit your fetishes. You can find a babe who gives a really good blow job, a MILF with a huge ass, a guy with your ideal cock size, or a girlfriend who loves cum.
-
The best about these couples is not just their sexual abilities or their sizzling bodies, but their commitment to pleasing their audience. Some of them even wear costumes, set up a stage, change their backgrounds, collect props, or create plots just to make their shows more than satisfying. Of course, they got the basics well covered too - good lighting, proper camera angles, and HD cameras.
-
Have I mentioned we have hundreds of cam shows streaming simultaneously? It might be an overwhelming number but because these couples know their thing, you won't have difficulty breezing through the thumbnails to figure out which couples you'd like to watch because even the thumbnails showing previews of their shows are very clear.
-
Getting along with the couple you're watching? With a free account in UN Cams, you can save their profile to your profile and get updates on when they are going live next. You can also check out their schedule on the dedicated page for them.
-
If you want to send them a message or give them an idea of what they can do next, you can use the live chat feature to do that. Our live chat feature is so easy to use and aside from exchanging messages with the host of the show or interacting with other people in the audience, you can also use it to control their interactive toy, see the status of the show, and throw in some tips to compliment the couples.
-
Our live couple chat is perfect for those couples who are curious about how other couples have sex or if you're into threesomes and don't have a girlfriend and a guest to do it, then this one is for you. Not only that you can watch them on cam but you can also just opt in to chat with them. We offer a lot of private chat rooms not only a couple of cams. You may also opt into chatting with MILFS or grannies if you need some sexual life advice. ,
-
There is a huge demand for webcam couple cam show and these cam couples can easily exploit this opportunity. Webcam Couples are making insane amount of money camming as well as selling the content on various other sites. They make 3-5x times the money that a camgirl makes.
-
So if you are a couple who are ready to have sex in front of a camera then you should sign up on all the below listed websites. This is a human psychological thing i guess. Watching a live webcam couples having live sex just gets us going.
-
Couples on webcam are very popular and are also making amazing money. If you go on any camming site, you will see that the couples are hugely popular. Couples cam shows are loved by many and also the tips s much more generous then a normal single girl camming.
-
The recommended sites above are the best webcam sites to make money. Couples on cam are in huge demand and you must start your journey as a couple webcam right now. In this article i will clear all your doubts regarding couples camming.
-
Well this varies from network to network. You can add your individual content on all these networks even if you register as a couple. But creating multiple accounts under same name is something that you will have to check with the network. You can either email them or even check their TOS.
-
You can head over to Chaturbate and Bongacams right now and click on the couples tab. There you will see the competition difference betweek being a solo camgirl and being a couple. The competition is much much less and hence better opportunity for you to make money.
-
To make money online webcam you need to get started with a camming network like chaturbate. If you are also searching for how much do cam couples make, then i hope this article was of help. Also you must check out all the other articles on this site that i have linked above.
-
Its simple, you have to look professional and not a newbie. to look professional you will have to invest in few items. We have made a comprehensive list of the same, Camming couples shopping list. Also check our ebook below for more advanced strategies.
-
We are proud to own one of the best random chat sites on the Internet. With live cams to please all types of people, we really do have a complete video chat site. With just one chat site, you can have a gay experience, have fun with hot nude girls, experience a threesome by joining in on couple cams and much more. We offer live sex cams like no other video chat site on the Internet. To make things even better, we don't just focus around adult cams; you can even chat with others simply to have a conversation.
-
As one of the very best video chat sites on the web, it probably comes as no surprise that this site has thousands of users online at any given moment. Whether it's the day or night, you will be able to connect with sexy strangers. Just turn your webcam on and enjoy this site for hours at a time. The popularity of OmeXXX is constantly growing; more and more people join in on the fun every single day. That's amazing, because the more people that join in on the fun, the more people you will get to chat with. Now what are you waiting for, stop reading our home page and start meeting hotties on the Internet!
-
Sex chat online beautiful couples - a special category of our website. If you think that the sex couples monotonous and boring, you are deeply mistaken category and sex couples proof. You will see hot and passionate blowjob and big fat cock tearing small holes lustful, and the first experience of anal sex here and group fucking couples and free broadcast sex chat rooms and many other family antics. All they've seen a lot about sex, and many of them are willing to diversify their sexual life. Crazy game with vibrators and strap-on sex with a friend of his wife, lesbian sex, in this category you can find everything.
-
If you're wondering how the other sex couples in spy on them in private sex camera, or if you want to get new ideas and to arrange an unforgettable show to your partner, then the category of "Family Video" is also for you. You will be amazed by the huge imagination and restless desire, lust and pleasure of sex, and by the fact that on its way to getting many couples are able to do this, as you probably could not realize. Sea hot sex, debauchery and passion.
-
Usually, when you think of cam models, you have this image of a beautiful young woman putting on a titillating performance before a spellbound audience. But that is no longer the case. Now, couples have moved into the online modeling arena and have found their prospects just as lucrative as singles. The key is to optimize the allure and appeal of your webcam session and gratifying your audience.
-
Ben and Kate, the heartwarming new sibling comedy created by Dana Fox (What Happens in Vegas) and directed by Jake Kasdan (Bad Teacher), also debuts Tuesdays this fall. Starring newcomer Dakota Johnson and Nat Faxon (Bad Teacher, Academy Award-winning co-screenwriter of The Descendants), the comedy follows a pair of odd-couple siblings - one, an overly responsible single mom; the other, an exuberant kid-at-heart - and their friends as they push each other out of their comfort zones and into real life.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/runa91/barc_gradio/src/lifting_to_3d/inn_model_for_shape.py b/spaces/runa91/barc_gradio/src/lifting_to_3d/inn_model_for_shape.py
deleted file mode 100644
index 6ab7c1f18ca603a20406092bdd7163e370d17023..0000000000000000000000000000000000000000
--- a/spaces/runa91/barc_gradio/src/lifting_to_3d/inn_model_for_shape.py
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-from torch import distributions
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.utils.data import DataLoader
-from torch.distributions import Normal
-import numpy as np
-import cv2
-import trimesh
-from tqdm import tqdm
-import warnings
-warnings.filterwarnings("ignore", category=DeprecationWarning)
-import FrEIA.framework as Ff
-import FrEIA.modules as Fm
-
-
-class INNForShape(nn.Module):
- def __init__(self, n_betas, n_betas_limbs, k_tot=2, betas_scale=1.0, betas_limbs_scale=0.1):
- super(INNForShape, self).__init__()
- self.n_betas = n_betas
- self.n_betas_limbs = n_betas_limbs
- self.n_dim = n_betas + n_betas_limbs
- self.betas_scale = betas_scale
- self.betas_limbs_scale = betas_limbs_scale
- self.k_tot = 2
- self.model_inn = self.build_inn_network(self.n_dim, k_tot=self.k_tot)
-
- def subnet_fc(self, c_in, c_out):
- subnet = nn.Sequential(nn.Linear(c_in, 64), nn.ReLU(),
- nn.Linear(64, 64), nn.ReLU(),
- nn.Linear(64, c_out))
- return subnet
-
- def build_inn_network(self, n_input, k_tot=12, verbose=False):
- coupling_block = Fm.RNVPCouplingBlock
- nodes = [Ff.InputNode(n_input, name='input')]
- for k in range(k_tot):
- nodes.append(Ff.Node(nodes[-1],
- coupling_block,
- {'subnet_constructor':self.subnet_fc, 'clamp':2.0},
- name=F'coupling_{k}'))
- nodes.append(Ff.Node(nodes[-1],
- Fm.PermuteRandom,
- {'seed':k},
- name=F'permute_{k}'))
- nodes.append(Ff.OutputNode(nodes[-1], name='output'))
- model = Ff.ReversibleGraphNet(nodes, verbose=verbose)
- return model
-
- def forward(self, latent_rep):
- shape, _ = self.model_inn(latent_rep, rev=False, jac=False)
- betas = shape[:, :self.n_betas]*self.betas_scale
- betas_limbs = shape[:, self.n_betas:]*self.betas_limbs_scale
- return betas, betas_limbs
-
- def reverse(self, betas, betas_limbs):
- shape = torch.cat((betas/self.betas_scale, betas_limbs/self.betas_limbs_scale), dim=1)
- latent_rep, _ = self.model_inn(shape, rev=True, jac=False)
- return latent_rep
\ No newline at end of file
diff --git a/spaces/sahirp/planedetect/README.md b/spaces/sahirp/planedetect/README.md
deleted file mode 100644
index 6792e917116561aee48779029bc03085eb8523b9..0000000000000000000000000000000000000000
--- a/spaces/sahirp/planedetect/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Planedetect
-emoji: 🏆
-colorFrom: purple
-colorTo: blue
-sdk: gradio
-sdk_version: 3.44.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sana123/Sinhala_Audio-to-Text/README.md b/spaces/sana123/Sinhala_Audio-to-Text/README.md
deleted file mode 100644
index 11fa1e7b265229eb0589b52b92ba0127ead6e9cf..0000000000000000000000000000000000000000
--- a/spaces/sana123/Sinhala_Audio-to-Text/README.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Sinhala Audio-to-Text Playground
-emoji: 🤫
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-tags:
-- whisper-event
-duplicated_from: NeuralInternet/Audio-to-Text_Playground
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/sander-wood/clamp_zero_shot_music_classification/xml2abc.py b/spaces/sander-wood/clamp_zero_shot_music_classification/xml2abc.py
deleted file mode 100644
index 749f1c9e39978ab392dc1f7731db1646b62f27b3..0000000000000000000000000000000000000000
--- a/spaces/sander-wood/clamp_zero_shot_music_classification/xml2abc.py
+++ /dev/null
@@ -1,1582 +0,0 @@
-#!/usr/bin/env python
-# coding=latin-1
-'''
-Copyright (C) 2012-2018: W.G. Vree
-Contributions: M. Tarenskeen, N. Liberg, Paul Villiger, Janus Meuris, Larry Myerscough,
-Dick Jackson, Jan Wybren de Jong, Mark Zealey.
-
-This program is free software; you can redistribute it and/or modify it under the terms of the
-Lesser GNU General Public License as published by the Free Software Foundation;
-
-This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
-without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-See the Lesser GNU General Public License for more details. .
-'''
-
-try: import xml.etree.cElementTree as E
-except: import xml.etree.ElementTree as E
-import os, sys, types, re, math
-
-VERSION = 143
-
-python3 = sys.version_info.major > 2
-if python3:
- tupletype = tuple
- listtype = list
- max_int = sys.maxsize
-else:
- tupletype = types.TupleType
- listtype = types.ListType
- max_int = sys.maxint
-
-note_ornamentation_map = { # for notations/, modified from EasyABC
- 'ornaments/trill-mark': 'T',
- 'ornaments/mordent': 'M',
- 'ornaments/inverted-mordent': 'P',
- 'ornaments/turn': '!turn!',
- 'ornaments/inverted-turn': '!invertedturn!',
- 'technical/up-bow': 'u',
- 'technical/down-bow': 'v',
- 'technical/harmonic': '!open!',
- 'technical/open-string': '!open!',
- 'technical/stopped': '!plus!',
- 'technical/snap-pizzicato': '!snap!',
- 'technical/thumb-position': '!thumb!',
- 'articulations/accent': '!>!',
- 'articulations/strong-accent':'!^!',
- 'articulations/staccato': '.',
- 'articulations/staccatissimo':'!wedge!',
- 'articulations/scoop': '!slide!',
- 'fermata': '!fermata!',
- 'arpeggiate': '!arpeggio!',
- 'articulations/tenuto': '!tenuto!',
- 'articulations/staccatissimo':'!wedge!', # not sure whether this is the right translation
- 'articulations/spiccato': '!wedge!', # not sure whether this is the right translation
- 'articulations/breath-mark': '!breath!', # this may need to be tested to make sure it appears on the right side of the note
- 'articulations/detached-legato': '!tenuto!.',
-}
-
-dynamics_map = { # for direction/direction-type/dynamics/
- 'p': '!p!',
- 'pp': '!pp!',
- 'ppp': '!ppp!',
- 'pppp': '!pppp!',
- 'f': '!f!',
- 'ff': '!ff!',
- 'fff': '!fff!',
- 'ffff': '!ffff!',
- 'mp': '!mp!',
- 'mf': '!mf!',
- 'sfz': '!sfz!',
-}
-
-percSvg = '''%%beginsvg
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- %%endsvg'''
-
-tabSvg = '''%%beginsvg
-
-
-
- '''
-
-kopSvg = '%s\n'
-kopSvg2 = '%s\n'
-
-def info (s, warn=1): sys.stderr.write ((warn and '-- ' or '') + s + '\n')
-
-#-------------------
-# data abstractions
-#-------------------
-class Measure:
- def __init__ (s, p):
- s.reset ()
- s.ixp = p # part number
- s.ixm = 0 # measure number
- s.mdur = 0 # measure duration (nominal metre value in divisions)
- s.divs = 0 # number of divisions per 1/4
- s.mtr = 4,4 # meter
-
- def reset (s): # reset each measure
- s.attr = '' # measure signatures, tempo
- s.lline = '' # left barline, but only holds ':' at start of repeat, otherwise empty
- s.rline = '|' # right barline
- s.lnum = '' # (left) volta number
-
-class Note:
- def __init__ (s, dur=0, n=None):
- s.tijd = 0 # the time in XML division units
- s.dur = dur # duration of a note in XML divisions
- s.fact = None # time modification for tuplet notes (num, div)
- s.tup = [''] # start(s) and/or stop(s) of tuplet
- s.tupabc = '' # abc tuplet string to issue before note
- s.beam = 0 # 1 = beamed
- s.grace = 0 # 1 = grace note
- s.before = [] # abc string that goes before the note/chord
- s.after = '' # the same after the note/chord
- s.ns = n and [n] or [] # notes in the chord
- s.lyrs = {} # {number -> syllabe}
- s.tab = None # (string number, fret number)
- s.ntdec = '' # !string!, !courtesy!
-
-class Elem:
- def __init__ (s, string):
- s.tijd = 0 # the time in XML division units
- s.str = string # any abc string that is not a note
-
-class Counter:
- def inc (s, key, voice): s.counters [key][voice] = s.counters [key].get (voice, 0) + 1
- def clear (s, vnums): # reset all counters
- tups = list( zip (vnums.keys (), len (vnums) * [0]))
- s.counters = {'note': dict (tups), 'nopr': dict (tups), 'nopt': dict (tups)}
- def getv (s, key, voice): return s.counters[key][voice]
- def prcnt (s, ip): # print summary of all non zero counters
- for iv in s.counters ['note']:
- if s.getv ('nopr', iv) != 0:
- info ( 'part %d, voice %d has %d skipped non printable notes' % (ip, iv, s.getv ('nopr', iv)))
- if s.getv ('nopt', iv) != 0:
- info ( 'part %d, voice %d has %d notes without pitch' % (ip, iv, s.getv ('nopt', iv)))
- if s.getv ('note', iv) == 0: # no real notes counted in this voice
- info ( 'part %d, skipped empty voice %d' % (ip, iv))
-
-class Music:
- def __init__(s, options):
- s.tijd = 0 # the current time
- s.maxtime = 0 # maximum time in a measure
- s.gMaten = [] # [voices,.. for all measures in a part]
- s.gLyrics = [] # [{num: (abc_lyric_string, melis)},.. for all measures in a part]
- s.vnums = {} # all used voice id's in a part (xml voice id's == numbers)
- s.cnt = Counter () # global counter object
- s.vceCnt = 1 # the global voice count over all parts
- s.lastnote = None # the last real note record inserted in s.voices
- s.bpl = options.b # the max number of bars per line when writing abc
- s.cpl = options.n # the number of chars per line when writing abc
- s.repbra = 0 # true if volta is used somewhere
- s.nvlt = options.v # no volta on higher voice numbers
- s.jscript = options.j # compatibility with javascript version
-
- def initVoices (s, newPart=0):
- s.vtimes, s.voices, s.lyrics = {}, {}, {}
- for v in s.vnums:
- s.vtimes [v] = 0 # {voice: the end time of the last item in each voice}
- s.voices [v] = [] # {voice: [Note|Elem, ..]}
- s.lyrics [v] = [] # {voice: [{num: syl}, ..]}
- if newPart: s.cnt.clear (s.vnums) # clear counters once per part
-
- def incTime (s, dt):
- s.tijd += dt
- if s.tijd < 0: s.tijd = 0 # erroneous element
- if s.tijd > s.maxtime: s.maxtime = s.tijd
-
- def appendElemCv (s, voices, elem):
- for v in voices:
- s.appendElem (v, elem) # insert element in all voices
-
- def insertElem (s, v, elem): # insert at the start of voice v in the current measure
- obj = Elem (elem)
- obj.tijd = 0 # because voice is sorted later
- s.voices [v].insert (0, obj)
-
- def appendObj (s, v, obj, dur):
- obj.tijd = s.tijd
- s.voices [v].append (obj)
- s.incTime (dur)
- if s.tijd > s.vtimes[v]: s.vtimes[v] = s.tijd # don't update for inserted earlier items
-
- def appendElem (s, v, elem, tel=0):
- s.appendObj (v, Elem (elem), 0)
- if tel: s.cnt.inc ('note', v) # count number of certain elements in each voice (in addition to notes)
-
- def appendElemT (s, v, elem, tijd): # insert element at specified time
- obj = Elem (elem)
- obj.tijd = tijd
- s.voices [v].append (obj)
-
- def appendNote (s, v, note, noot):
- note.ns.append (note.ntdec + noot)
- s.appendObj (v, note, int (note.dur))
- s.lastnote = note # remember last note/rest for later modifications (chord, grace)
- if noot != 'z' and noot != 'x': # real notes and grace notes
- s.cnt.inc ('note', v) # count number of real notes in each voice
- if not note.grace: # for every real note
- s.lyrics[v].append (note.lyrs) # even when it has no lyrics
-
- def getLastRec (s, voice):
- if s.gMaten: return s.gMaten[-1][voice][-1] # the last record in the last measure
- return None # no previous records in the first measure
-
- def getLastMelis (s, voice, num): # get melisma of last measure
- if s.gLyrics:
- lyrdict = s.gLyrics[-1][voice] # the previous lyrics dict in this voice
- if num in lyrdict: return lyrdict[num][1] # lyrdict = num -> (lyric string, melisma)
- return 0 # no previous lyrics in voice or line number
-
- def addChord (s, note, noot): # careful: we assume that chord notes follow immediately
- for d in note.before: # put all decorations before chord
- if d not in s.lastnote.before:
- s.lastnote.before += [d]
- s.lastnote.ns.append (note.ntdec + noot)
-
- def addBar (s, lbrk, m): # linebreak, measure data
- if m.mdur and s.maxtime > m.mdur: info ('measure %d in part %d longer than metre' % (m.ixm+1, m.ixp+1))
- s.tijd = s.maxtime # the time of the bar lines inserted here
- for v in s.vnums:
- if m.lline or m.lnum: # if left barline or left volta number
- p = s.getLastRec (v) # get the previous barline record
- if p: # in measure 1 no previous measure is available
- x = p.str # p.str is the ABC barline string
- if m.lline: # append begin of repeat, m.lline == ':'
- x = (x + m.lline).replace (':|:','::').replace ('||','|')
- if s.nvlt == 3: # add volta number only to lowest voice in part 0
- if m.ixp + v == min (s.vnums): x += m.lnum
- elif m.lnum: # new behaviour with I:repbra 0
- x += m.lnum # add volta number(s) or text to all voices
- s.repbra = 1 # signal occurrence of a volta
- p.str = x # modify previous right barline
- elif m.lline: # begin of new part and left repeat bar is required
- s.insertElem (v, '|:')
- if lbrk:
- p = s.getLastRec (v) # get the previous barline record
- if p: p.str += lbrk # insert linebreak char after the barlines+volta
- if m.attr: # insert signatures at front of buffer
- s.insertElem (v, '%s' % m.attr)
- s.appendElem (v, ' %s' % m.rline) # insert current barline record at time maxtime
- s.voices[v] = sortMeasure (s.voices[v], m) # make all times consistent
- lyrs = s.lyrics[v] # [{number: sylabe}, .. for all notes]
- lyrdict = {} # {number: (abc_lyric_string, melis)} for this voice
- nums = [num for d in lyrs for num in d.keys ()] # the lyrics numbers in this measure
- maxNums = max (nums + [0]) # the highest lyrics number in this measure
- for i in range (maxNums, 0, -1):
- xs = [syldict.get (i, '') for syldict in lyrs] # collect the syllabi with number i
- melis = s.getLastMelis (v, i) # get melisma from last measure
- lyrdict [i] = abcLyr (xs, melis)
- s.lyrics[v] = lyrdict # {number: (abc_lyric_string, melis)} for this measure
- mkBroken (s.voices[v])
- s.gMaten.append (s.voices)
- s.gLyrics.append (s.lyrics)
- s.tijd = s.maxtime = 0
- s.initVoices ()
-
- def outVoices (s, divs, ip, isSib): # output all voices of part ip
- vvmap = {} # xml voice number -> abc voice number (one part)
- vnum_keys = list (s.vnums.keys ())
- if s.jscript or isSib: vnum_keys.sort ()
- lvc = min (vnum_keys or [1]) # lowest xml voice number of this part
- for iv in vnum_keys:
- if s.cnt.getv ('note', iv) == 0: # no real notes counted in this voice
- continue # skip empty voices
- if abcOut.denL: unitL = abcOut.denL # take the unit length from the -d option
- else: unitL = compUnitLength (iv, s.gMaten, divs) # compute the best unit length for this voice
- abcOut.cmpL.append (unitL) # remember for header output
- vn, vl = [], {} # for voice iv: collect all notes to vn and all lyric lines to vl
- for im in range (len (s.gMaten)):
- measure = s.gMaten [im][iv]
- vn.append (outVoice (measure, divs [im], im, ip, unitL))
- checkMelismas (s.gLyrics, s.gMaten, im, iv)
- for n, (lyrstr, melis) in s.gLyrics [im][iv].items ():
- if n in vl:
- while len (vl[n]) < im: vl[n].append ('') # fill in skipped measures
- vl[n].append (lyrstr)
- else:
- vl[n] = im * [''] + [lyrstr] # must skip im measures
- for n, lyrs in vl.items (): # fill up possibly empty lyric measures at the end
- mis = len (vn) - len (lyrs)
- lyrs += mis * ['']
- abcOut.add ('V:%d' % s.vceCnt)
- if s.repbra:
- if s.nvlt == 1 and s.vceCnt > 1: abcOut.add ('I:repbra 0') # only volta on first voice
- if s.nvlt == 2 and iv > lvc: abcOut.add ('I:repbra 0') # only volta on first voice of each part
- if s.cpl > 0: s.bpl = 0 # option -n (max chars per line) overrules -b (max bars per line)
- elif s.bpl == 0: s.cpl = 100 # the default: 100 chars per line
- bn = 0 # count bars
- while vn: # while still measures available
- ib = 1
- chunk = vn [0]
- while ib < len (vn):
- if s.cpl > 0 and len (chunk) + len (vn [ib]) >= s.cpl: break # line full (number of chars)
- if s.bpl > 0 and ib >= s.bpl: break # line full (number of bars)
- chunk += vn [ib]
- ib += 1
- bn += ib
- abcOut.add (chunk + ' %%%d' % bn) # line with barnumer
- del vn[:ib] # chop ib bars
- lyrlines = sorted (vl.items ()) # order the numbered lyric lines for output
- for n, lyrs in lyrlines:
- abcOut.add ('w: ' + '|'.join (lyrs[:ib]) + '|')
- del lyrs[:ib]
- vvmap [iv] = s.vceCnt # xml voice number -> abc voice number
- s.vceCnt += 1 # count voices over all parts
- s.gMaten = [] # reset the follwing instance vars for each part
- s.gLyrics = []
- s.cnt.prcnt (ip+1) # print summary of skipped items in this part
- return vvmap
-
-class ABCoutput:
- pagekeys = 'scale,pageheight,pagewidth,leftmargin,rightmargin,topmargin,botmargin'.split (',')
- def __init__ (s, fnmext, pad, X, options):
- s.fnmext = fnmext
- s.outlist = [] # list of ABC strings
- s.title = 'T:Title'
- s.key = 'none'
- s.clefs = {} # clefs for all abc-voices
- s.mtr = 'none'
- s.tempo = 0 # 0 -> no tempo field
- s.tempo_units = (1,4) # note type of tempo direction
- s.pad = pad # the output path or none
- s.X = X + 1 # the abc tune number
- s.denL = options.d # denominator of the unit length (L:) from -d option
- s.volpan = int (options.m) # 0 -> no %%MIDI, 1 -> only program, 2 -> all %%MIDI
- s.cmpL = [] # computed optimal unit length for all voices
- s.jscript = options.j # compatibility with javascript version
- s.tstep = options.t # translate percmap to voicemap
- s.stemless = 0 # use U:s=!stemless!
- s.shiftStem = options.s # shift note heads 3 units left
- if pad:
- _, base_name = os.path.split (fnmext)
- s.outfile = open (os.path.join (pad, base_name), 'w')
- else: s.outfile = sys.stdout
- if s.jscript: s.X = 1 # always X:1 in javascript version
- s.pageFmt = {}
- for k in s.pagekeys: s.pageFmt [k] = None
- if len (options.p) == 7:
- for k, v in zip (s.pagekeys, options.p):
- try: s.pageFmt [k] = float (v)
- except: info ('illegal float %s for %s', (k, v)); continue
-
- def add (s, str):
- s.outlist.append (str + '\n') # collect all ABC output
-
- def mkHeader (s, stfmap, partlist, midimap, vmpdct, koppen): # stfmap = [parts], part = [staves], stave = [voices]
- accVce, accStf, staffs = [], [], stfmap[:] # staffs is consumed
- for x in partlist: # collect partnames into accVce and staff groups into accStf
- try: prgroupelem (x, ('', ''), '', stfmap, accVce, accStf)
- except: info ('lousy musicxml: error in part-list')
- staves = ' '.join (accStf)
- clfnms = {}
- for part, (partname, partabbrv) in zip (staffs, accVce):
- if not part: continue # skip empty part
- firstVoice = part[0][0] # the first voice number in this part
- nm = partname.replace ('\n','\\n').replace ('.:','.').strip (':')
- snm = partabbrv.replace ('\n','\\n').replace ('.:','.').strip (':')
- clfnms [firstVoice] = (nm and 'nm="%s"' % nm or '') + (snm and ' snm="%s"' % snm or '')
- hd = ['X:%d\n%s\n' % (s.X, s.title)]
- for i, k in enumerate (s.pagekeys):
- if s.jscript and k in ['pageheight','topmargin', 'botmargin']: continue
- if s.pageFmt [k] != None: hd.append ('%%%%%s %.2f%s\n' % (k, s.pageFmt [k], i > 0 and 'cm' or ''))
- if staves and len (accStf) > 1: hd.append ('%%score ' + staves + '\n')
- tempo = s.tempo and 'Q:%d/%d=%s\n' % (s.tempo_units [0], s.tempo_units [1], s.tempo) or '' # default no tempo field
- d = {} # determine the most frequently occurring unit length over all voices
- for x in s.cmpL: d[x] = d.get (x, 0) + 1
- if s.jscript: defLs = sorted (d.items (), key=lambda x: (-x[1], x[0])) # when tie (1) sort on key (0)
- else: defLs = sorted (d.items (), key=lambda x: -x[1])
- defL = s.denL and s.denL or defLs [0][0] # override default unit length with -d option
- hd.append ('L:1/%d\n%sM:%s\n' % (defL, tempo, s.mtr))
- hd.append ('K:%s\n' % s.key)
- if s.stemless: hd.append ('U:s=!stemless!\n')
- vxs = sorted (vmpdct.keys ())
- for vx in vxs: hd.extend (vmpdct [vx])
- s.dojef = 0 # translate percmap to voicemap
- for vnum, clef in s.clefs.items ():
- ch, prg, vol, pan = midimap [vnum-1][:4]
- dmap = midimap [vnum - 1][4:] # map of abc percussion notes to midi notes
- if dmap and 'perc' not in clef: clef = (clef + ' map=perc').strip ();
- hd.append ('V:%d %s %s\n' % (vnum, clef, clfnms.get (vnum, '')))
- if vnum in vmpdct:
- hd.append ('%%%%voicemap tab%d\n' % vnum)
- hd.append ('K:none\nM:none\n%%clef none\n%%staffscale 1.6\n%%flatbeams true\n%%stemdir down\n')
- if 'perc' in clef: hd.append ('K:none\n'); # no key for a perc voice
- if s.volpan > 1: # option -m 2 -> output all recognized midi commands when needed and present in xml
- if ch > 0 and ch != vnum: hd.append ('%%%%MIDI channel %d\n' % ch)
- if prg > 0: hd.append ('%%%%MIDI program %d\n' % (prg - 1))
- if vol >= 0: hd.append ('%%%%MIDI control 7 %.0f\n' % vol) # volume == 0 is possible ...
- if pan >= 0: hd.append ('%%%%MIDI control 10 %.0f\n' % pan)
- elif s.volpan > 0: # default -> only output midi program command when present in xml
- if dmap and ch > 0: hd.append ('%%%%MIDI channel %d\n' % ch) # also channel if percussion part
- if prg > 0: hd.append ('%%%%MIDI program %d\n' % (prg - 1))
- for abcNote, step, midiNote, notehead in dmap:
- if not notehead: notehead = 'normal'
- if abcMid (abcNote) != midiNote or abcNote != step:
- if s.volpan > 0: hd.append ('%%%%MIDI drummap %s %s\n' % (abcNote, midiNote))
- hd.append ('I:percmap %s %s %s %s\n' % (abcNote, step, midiNote, notehead))
- s.dojef = s.tstep
- if defL != s.cmpL [vnum-1]: # only if computed unit length different from header
- hd.append ('L:1/%d\n' % s.cmpL [vnum-1])
- s.outlist = hd + s.outlist
- if koppen: # output SVG stuff needed for tablature
- k1 = kopSvg.replace ('-2','-5') if s.shiftStem else kopSvg # shift note heads 3 units left
- k2 = kopSvg2.replace ('-2','-5') if s.shiftStem else kopSvg2
- tb = tabSvg.replace ('-3','-6') if s.shiftStem else tabSvg
- ks = sorted (koppen.keys ()) # javascript compatibility
- ks = [k2 % (k, k) if len (k) == 2 else k1 % (k, k) for k in ks]
- tbs = map (lambda x: x.strip () + '\n', tb.splitlines ()) # javascript compatibility
- s.outlist = tbs + ks + ['\n%%endsvg\n'] + s.outlist
-
- def writeall (s): # determine the required encoding of the entire ABC output
- str = ''.join (s.outlist)
- if s.dojef: str = perc2map (str)
- if python3: s.outfile.write (str)
- else: s.outfile.write (str.encode ('utf-8'))
- if s.pad: s.outfile.close () # close each file with -o option
- else: s.outfile.write ('\n') # add empty line between tunes on stdout
- info ('%s written with %d voices' % (s.fnmext, len (s.clefs)), warn=0)
-
-#----------------
-# functions
-#----------------
-def abcLyr (xs, melis): # Convert list xs to abc lyrics.
- if not ''.join (xs): return '', 0 # there is no lyrics in this measure
- res = []
- for x in xs: # xs has for every note a lyrics syllabe or an empty string
- if x == '': # note without lyrics
- if melis: x = '_' # set melisma
- else: x = '*' # skip note
- elif x.endswith ('_') and not x.endswith ('\_'): # start of new melisma
- x = x.replace ('_', '') # remove and set melis boolean
- melis = 1 # so next skips will become melisma
- else: melis = 0 # melisma stops on first syllable
- res.append (x)
- return (' '.join (res), melis)
-
-def simplify (a, b): # divide a and b by their greatest common divisor
- x, y = a, b
- while b: a, b = b, a % b
- return x // a, y // a
-
-def abcdur (nx, divs, uL): # convert an musicXML duration d to abc units with L:1/uL
- if nx.dur == 0: return '' # when called for elements without duration
- num, den = simplify (uL * nx.dur, divs * 4) # L=1/8 -> uL = 8 units
- if nx.fact: # apply tuplet time modification
- numfac, denfac = nx.fact
- num, den = simplify (num * numfac, den * denfac)
- if den > 64: # limit the denominator to a maximum of 64
- x = float (num) / den; n = math.floor (x); # when just above an integer n
- if x - n < 0.1 * x: num, den = n, 1; # round to n
- num64 = 64. * num / den + 1.0e-15 # to get Python2 behaviour of round
- num, den = simplify (int (round (num64)), 64)
- if num == 1:
- if den == 1: dabc = ''
- elif den == 2: dabc = '/'
- else: dabc = '/%d' % den
- elif den == 1: dabc = '%d' % num
- else: dabc = '%d/%d' % (num, den)
- return dabc
-
-def abcMid (note): # abc note -> midi pitch
- r = re.search (r"([_^]*)([A-Ga-g])([',]*)", note)
- if not r: return -1
- acc, n, oct = r.groups ()
- nUp = n.upper ()
- p = 60 + [0,2,4,5,7,9,11]['CDEFGAB'.index (nUp)] + (12 if nUp != n else 0);
- if acc: p += (1 if acc[0] == '^' else -1) * len (acc)
- if oct: p += (12 if oct[0] == "'" else -12) * len (oct)
- return p
-
-def staffStep (ptc, o, clef, tstep):
- ndif = 0
- if 'stafflines=1' in clef: ndif += 4 # meaning of one line: E (xml) -> B (abc)
- if not tstep and clef.startswith ('bass'): ndif += 12 # transpose bass -> treble (C3 -> A4)
- if ndif: # diatonic transposition == addition modulo 7
- nm7 = 'C,D,E,F,G,A,B'.split (',')
- n = nm7.index (ptc) + ndif
- ptc, o = nm7 [n % 7], o + n // 7
- if o > 4: ptc = ptc.lower ()
- if o > 5: ptc = ptc + (o-5) * "'"
- if o < 4: ptc = ptc + (4-o) * ","
- return ptc
-
-def setKey (fifths, mode):
- sharpness = ['Fb', 'Cb','Gb','Db','Ab','Eb','Bb','F','C','G','D','A', 'E', 'B', 'F#','C#','G#','D#','A#','E#','B#']
- offTab = {'maj':8, 'ion':8, 'm':11, 'min':11, 'aeo':11, 'mix':9, 'dor':10, 'phr':12, 'lyd':7, 'loc':13, 'non':8}
- mode = mode.lower ()[:3] # only first three chars, no case
- key = sharpness [offTab [mode] + fifths] + (mode if offTab [mode] != 8 else '')
- accs = ['F','C','G','D','A','E','B']
- if fifths >= 0: msralts = dict (zip (accs[:fifths], fifths * [1]))
- else: msralts = dict (zip (accs[fifths:], -fifths * [-1]))
- return key, msralts
-
-def insTup (ix, notes, fact): # read one nested tuplet
- tupcnt = 0
- nx = notes [ix]
- if 'start' in nx.tup:
- nx.tup.remove ('start') # do recursive calls when starts remain
- tix = ix # index of first tuplet note
- fn, fd = fact # xml time-mod of the higher level
- fnum, fden = nx.fact # xml time-mod of the current level
- tupfact = fnum//fn, fden//fd # abc time mod of this level
- while ix < len (notes):
- nx = notes [ix]
- if isinstance (nx, Elem) or nx.grace:
- ix += 1 # skip all non tuplet elements
- continue
- if 'start' in nx.tup: # more nested tuplets to start
- ix, tupcntR = insTup (ix, notes, tupfact) # ix is on the stop note!
- tupcnt += tupcntR
- elif nx.fact:
- tupcnt += 1 # count tuplet elements
- if 'stop' in nx.tup:
- nx.tup.remove ('stop')
- break
- if not nx.fact: # stop on first non tuplet note
- ix = lastix # back to last tuplet note
- break
- lastix = ix
- ix += 1
- # put abc tuplet notation before the recursive ones
- tup = (tupfact[0], tupfact[1], tupcnt)
- if tup == (3, 2, 3): tupPrefix = '(3'
- else: tupPrefix = '(%d:%d:%d' % tup
- notes [tix].tupabc = tupPrefix + notes [tix].tupabc
- return ix, tupcnt # ix is on the last tuplet note
-
-def mkBroken (vs): # introduce broken rhythms (vs: one voice, one measure)
- vs = [n for n in vs if isinstance (n, Note)]
- i = 0
- while i < len (vs) - 1:
- n1, n2 = vs[i], vs[i+1] # scan all adjacent pairs
- # skip if note in tuplet or has no duration or outside beam
- if not n1.fact and not n2.fact and n1.dur > 0 and n2.beam:
- if n1.dur * 3 == n2.dur:
- n2.dur = (2 * n2.dur) // 3
- n1.dur = n1.dur * 2
- n1.after = '<' + n1.after
- i += 1 # do not chain broken rhythms
- elif n2.dur * 3 == n1.dur:
- n1.dur = (2 * n1.dur) // 3
- n2.dur = n2.dur * 2
- n1.after = '>' + n1.after
- i += 1 # do not chain broken rhythms
- i += 1
-
-def outVoice (measure, divs, im, ip, unitL): # note/elem objects of one measure in one voice
- ix = 0
- while ix < len (measure): # set all (nested) tuplet annotations
- nx = measure [ix]
- if isinstance (nx, Note) and nx.fact and not nx.grace:
- ix, tupcnt = insTup (ix, measure, (1, 1)) # read one tuplet, insert annotation(s)
- ix += 1
- vs = []
- for nx in measure:
- if isinstance (nx, Note):
- durstr = abcdur (nx, divs, unitL) # xml -> abc duration string
- chord = len (nx.ns) > 1
- cns = [nt[:-1] for nt in nx.ns if nt.endswith ('-')]
- tie = ''
- if chord and len (cns) == len (nx.ns): # all chord notes tied
- nx.ns = cns # chord notes without tie
- tie = '-' # one tie for whole chord
- s = nx.tupabc + ''.join (nx.before)
- if chord: s += '['
- for nt in nx.ns: s += nt
- if chord: s += ']' + tie
- if s.endswith ('-'): s, tie = s[:-1], '-' # split off tie
- s += durstr + tie # and put it back again
- s += nx.after
- nospace = nx.beam
- else:
- if isinstance (nx.str, listtype): nx.str = nx.str [0]
- s = nx.str
- nospace = 1
- if nospace: vs.append (s)
- else: vs.append (' ' + s)
- vs = ''.join (vs) # ad hoc: remove multiple pedal directions
- while vs.find ('!ped!!ped!') >= 0: vs = vs.replace ('!ped!!ped!','!ped!')
- while vs.find ('!ped-up!!ped-up!') >= 0: vs = vs.replace ('!ped-up!!ped-up!','!ped-up!')
- while vs.find ('!8va(!!8va)!') >= 0: vs = vs.replace ('!8va(!!8va)!','') # remove empty ottava's
- return vs
-
-def sortMeasure (voice, m):
- voice.sort (key=lambda o: o.tijd) # sort on time
- time = 0
- v = []
- rs = [] # holds rests in between notes
- for i, nx in enumerate (voice): # establish sequentiality
- if nx.tijd > time and chkbug (nx.tijd - time, m):
- v.append (Note (nx.tijd - time, 'x')) # fill hole with invisble rest
- rs.append (len (v) - 1)
- if isinstance (nx, Elem):
- if nx.tijd < time: nx.tijd = time # shift elems without duration to where they fit
- v.append (nx)
- time = nx.tijd
- continue
- if nx.tijd < time: # overlapping element
- if nx.ns[0] == 'z': continue # discard overlapping rest
- if v[-1].tijd <= nx.tijd: # we can do something
- if v[-1].ns[0] == 'z': # shorten rest
- v[-1].dur = nx.tijd - v[-1].tijd
- if v[-1].dur == 0: del v[-1] # nothing left
- info ('overlap in part %d, measure %d: rest shortened' % (m.ixp+1, m.ixm+1))
- else: # make a chord of overlap
- v[-1].ns += nx.ns
- info ('overlap in part %d, measure %d: added chord' % (m.ixp+1, m.ixm+1))
- nx.dur = (nx.tijd + nx.dur) - time # the remains
- if nx.dur <= 0: continue # nothing left
- nx.tijd = time # append remains
- else: # give up
- info ('overlapping notes in one voice! part %d, measure %d, note %s discarded' % (m.ixp+1, m.ixm+1, isinstance (nx, Note) and nx.ns or nx.str))
- continue
- v.append (nx)
- if isinstance (nx, Note):
- if nx.ns [0] in 'zx':
- rs.append (len (v) - 1) # remember rests between notes
- elif len (rs):
- if nx.beam and not nx.grace: # copy beam into rests
- for j in rs: v[j].beam = nx.beam
- rs = [] # clear rests on each note
- time = nx.tijd + nx.dur
- # when a measure contains no elements and no forwards -> no incTime -> s.maxtime = 0 -> right barline
- # is inserted at time == 0 (in addbar) and is only element in the voice when sortMeasure is called
- if time == 0: info ('empty measure in part %d, measure %d, it should contain at least a rest to advance the time!' % (m.ixp+1, m.ixm+1))
- return v
-
-def getPartlist (ps): # correct part-list (from buggy xml-software)
- xs = [] # the corrected part-list
- e = [] # stack of opened part-groups
- for x in list (ps): # insert missing stops, delete double starts
- if x.tag == 'part-group':
- num, type = x.get ('number'), x.get ('type')
- if type == 'start':
- if num in e: # missing stop: insert one
- xs.append (E.Element ('part-group', number = num, type = 'stop'))
- xs.append (x)
- else: # normal start
- xs.append (x)
- e.append (num)
- else:
- if num in e: # normal stop
- e.remove (num)
- xs.append (x)
- else: pass # double stop: skip it
- else: xs.append (x)
- for num in reversed (e): # fill missing stops at the end
- xs.append (E.Element ('part-group', number = num, type = 'stop'))
- return xs
-
-def parseParts (xs, d, e): # -> [elems on current level], rest of xs
- if not xs: return [],[]
- x = xs.pop (0)
- if x.tag == 'part-group':
- num, type = x.get ('number'), x.get ('type')
- if type == 'start': # go one level deeper
- s = [x.findtext (n, '') for n in ['group-symbol','group-barline','group-name','group-abbreviation']]
- d [num] = s # remember groupdata by group number
- e.append (num) # make stack of open group numbers
- elemsnext, rest1 = parseParts (xs, d, e) # parse one level deeper to next stop
- elems, rest2 = parseParts (rest1, d, e) # parse the rest on this level
- return [elemsnext] + elems, rest2
- else: # stop: close level and return group-data
- nums = e.pop () # last open group number in stack order
- if xs and xs[0].get ('type') == 'stop': # two consequetive stops
- if num != nums: # in the wrong order (tempory solution)
- d[nums], d[num] = d[num], d[nums] # exchange values (only works for two stops!!!)
- sym = d[num] # retrieve an return groupdata as last element of the group
- return [sym], xs
- else:
- elems, rest = parseParts (xs, d, e) # parse remaining elements on current level
- name = x.findtext ('part-name',''), x.findtext ('part-abbreviation','')
- return [name] + elems, rest
-
-def bracePart (part): # put a brace on multistaff part and group voices
- if not part: return [] # empty part in the score
- brace = []
- for ivs in part:
- if len (ivs) == 1: # stave with one voice
- brace.append ('%s' % ivs[0])
- else: # stave with multiple voices
- brace += ['('] + ['%s' % iv for iv in ivs] + [')']
- brace.append ('|')
- del brace[-1] # no barline at the end
- if len (part) > 1:
- brace = ['{'] + brace + ['}']
- return brace
-
-def prgroupelem (x, gnm, bar, pmap, accVce, accStf): # collect partnames (accVce) and %%score map (accStf)
- if type (x) == tupletype: # partname-tuple = (part-name, part-abbrev)
- y = pmap.pop (0)
- if gnm[0]: x = [n1 + ':' + n2 for n1, n2 in zip (gnm, x)] # put group-name before part-name
- accVce.append (x)
- accStf.extend (bracePart (y))
- elif len (x) == 2 and type (x[0]) == tupletype: # misuse of group just to add extra name to stave
- y = pmap.pop (0)
- nms = [n1 + ':' + n2 for n1, n2 in zip (x[0], x[1][2:])] # x[0] = partname-tuple, x[1][2:] = groupname-tuple
- accVce.append (nms)
- accStf.extend (bracePart (y))
- else:
- prgrouplist (x, bar, pmap, accVce, accStf)
-
-def prgrouplist (x, pbar, pmap, accVce, accStf): # collect partnames, scoremap for a part-group
- sym, bar, gnm, gabbr = x[-1] # bracket symbol, continue barline, group-name-tuple
- bar = bar == 'yes' or pbar # pbar -> the parent has bar
- accStf.append (sym == 'brace' and '{' or '[')
- for z in x[:-1]:
- prgroupelem (z, (gnm, gabbr), bar, pmap, accVce, accStf)
- if bar: accStf.append ('|')
- if bar: del accStf [-1] # remove last one before close
- accStf.append (sym == 'brace' and '}' or ']')
-
-def compUnitLength (iv, maten, divs): # compute optimal unit length
- uLmin, minLen = 0, max_int
- for uL in [4,8,16]: # try 1/4, 1/8 and 1/16
- vLen = 0 # total length of abc duration strings in this voice
- for im, m in enumerate (maten): # all measures
- for e in m[iv]: # all notes in voice iv
- if isinstance (e, Elem) or e.dur == 0: continue # no real durations
- vLen += len (abcdur (e, divs [im], uL)) # add len of duration string
- if vLen < minLen: uLmin, minLen = uL, vLen # remember the smallest
- return uLmin
-
-def doSyllable (syl):
- txt = ''
- for e in syl:
- if e.tag == 'elision': txt += '~'
- elif e.tag == 'text': # escape - and space characters
- txt += (e.text or '').replace ('_','\_').replace('-', r'\-').replace(' ', '~')
- if not txt: return txt
- if syl.findtext('syllabic') in ['begin', 'middle']: txt += '-'
- if syl.find('extend') is not None: txt += '_'
- return txt
-
-def checkMelismas (lyrics, maten, im, iv):
- if im == 0: return
- maat = maten [im][iv] # notes of the current measure
- curlyr = lyrics [im][iv] # lyrics dict of current measure
- prvlyr = lyrics [im-1][iv] # lyrics dict of previous measure
- for n, (lyrstr, melis) in prvlyr.items (): # all lyric numbers in the previous measure
- if n not in curlyr and melis: # melisma required, but no lyrics present -> make one!
- ms = getMelisma (maat) # get a melisma for the current measure
- if ms: curlyr [n] = (ms, 0) # set melisma as the n-th lyrics of the current measure
-
-def getMelisma (maat): # get melisma from notes in maat
- ms = []
- for note in maat: # every note should get an underscore
- if not isinstance (note, Note): continue # skip Elem's
- if note.grace: continue # skip grace notes
- if note.ns [0] in 'zx': break # stop on first rest
- ms.append ('_')
- return ' '.join (ms)
-
-def perc2map (abcIn):
- fillmap = {'diamond':1, 'triangle':1, 'square':1, 'normal':1};
- abc = map (lambda x: x.strip (), percSvg.splitlines ())
- id='default'
- maps = {'default': []};
- dmaps = {'default': []}
- r1 = re.compile (r'V:\s*(\S+)')
- ls = abcIn.splitlines ()
- for x in ls:
- if 'I:percmap' in x:
- noot, step, midi, kop = map (lambda x: x.strip (), x.split ()[1:])
- if kop in fillmap: kop = kop + '+' + ',' + kop
- x = '%%%%map perc%s %s print=%s midi=%s heads=%s' % (id, noot, step, midi, kop)
- maps [id].append (x)
- if '%%MIDI' in x: dmaps [id].append (x)
- if 'V:' in x:
- r = r1.match (x)
- if r:
- id = r.group (1);
- if id not in maps: maps [id] = []; dmaps [id] = []
- ids = sorted (maps.keys ())
- for id in ids: abc += maps [id]
- id='default'
- for x in ls:
- if 'I:percmap' in x: continue
- if '%%MIDI' in x: continue
- if 'V:' in x or 'K:' in x:
- r = r1.match (x)
- if r: id = r.group (1)
- abc.append (x)
- if id in dmaps and len (dmaps [id]) > 0: abc.extend (dmaps [id]); del dmaps [id]
- if 'perc' in x and 'map=' not in x: x += ' map=perc';
- if 'map=perc' in x and len (maps [id]) > 0: abc.append ('%%voicemap perc' + id);
- if 'map=off' in x: abc.append ('%%voicemap');
- else:
- abc.append (x)
- return '\n'.join (abc) + '\n'
-
-def addoct (ptc, o): # xml staff step, xml octave number
- p = ptc
- if o > 4: p = ptc.lower ()
- if o > 5: p = p + (o-5) * "'"
- if o < 4: p = p + (4-o) * ","
- return p # abc pitch == abc note without accidental
-
-def chkbug (dt, m):
- if dt > m.divs / 16: return 1 # duration should be > 1/64 note
- info ('MuseScore bug: incorrect duration, smaller then 1/64! in measure %d, part %d' % (m.ixm, m.ixp))
- return 0
-
-#----------------
-# parser
-#----------------
-class Parser:
- note_alts = [ # 3 alternative notations of the same note for tablature mapping
- [x.strip () for x in '=C, ^C, =D, ^D, =E, =F, ^F, =G, ^G, =A, ^A, =B'.split (',')],
- [x.strip () for x in '^B, _D,^^C, _E, _F, ^E, _G,^^F, _A,^^G, _B, _C'.split (',')],
- [x.strip () for x in '__D,^^B,__E,__F,^^D,__G,^^E,__A,_/A,__B,__C,^^A'.split (',')] ]
- step_map = {'C':0,'D':2,'E':4,'F':5,'G':7,'A':9,'B':11}
- def __init__ (s, options):
- # unfold repeats, number of chars per line, credit filter level, volta option
- s.slurBuf = {} # dict of open slurs keyed by slur number
- s.dirStk = {} # {direction-type + number -> (type, voice | time)} dict for proper closing
- s.ingrace = 0 # marks a sequence of grace notes
- s.msc = Music (options) # global music data abstraction
- s.unfold = options.u # turn unfolding repeats on
- s.ctf = options.c # credit text filter level
- s.gStfMap = [] # [[abc voice numbers] for all parts]
- s.midiMap = [] # midi-settings for each abc voice, in order
- s.drumInst = {} # inst_id -> midi pitch for channel 10 notes
- s.drumNotes = {} # (xml voice, abc note) -> (midi note, note head)
- s.instMid = [] # [{inst id -> midi-settings} for all parts]
- s.midDflt = [-1,-1,-1,-91] # default midi settings for channel, program, volume, panning
- s.msralts = {} # xml-notenames (without octave) with accidentals from the key
- s.curalts = {} # abc-notenames (with voice number) with passing accidentals
- s.stfMap = {} # xml staff number -> [xml voice number]
- s.vce2stf = {} # xml voice number -> allocated staff number
- s.clefMap = {} # xml staff number -> abc clef (for header only)
- s.curClef = {} # xml staff number -> current abc clef
- s.stemDir = {} # xml voice number -> current stem direction
- s.clefOct = {} # xml staff number -> current clef-octave-change
- s.curStf = {} # xml voice number -> current xml staff number
- s.nolbrk = options.x; # generate no linebreaks ($)
- s.jscript = options.j # compatibility with javascript version
- s.ornaments = sorted (note_ornamentation_map.items ())
- s.doPageFmt = len (options.p) == 1 # translate xml page format
- s.tstep = options.t # clef determines step on staff (percussion)
- s.dirtov1 = options.v1 # all directions to first voice of staff
- s.ped = options.ped # render pedal directions
- s.wstems = options.stm # translate stem elements
- s.pedVce = None # voice for pedal directions
- s.repeat_str = {} # staff number -> [measure number, repeat-text]
- s.tabVceMap = {} # abc voice num -> [%%map ...] for tab voices
- s.koppen = {} # noteheads needed for %%map
-
- def matchSlur (s, type2, n, v2, note2, grace, stopgrace): # match slur number n in voice v2, add abc code to before/after
- if type2 not in ['start', 'stop']: return # slur type continue has no abc equivalent
- if n == None: n = '1'
- if n in s.slurBuf:
- type1, v1, note1, grace1 = s.slurBuf [n]
- if type2 != type1: # slur complete, now check the voice
- if v2 == v1: # begins and ends in the same voice: keep it
- if type1 == 'start' and (not grace1 or not stopgrace): # normal slur: start before stop and no grace slur
- note1.before = ['('] + note1.before # keep left-right order!
- note2.after += ')'
- # no else: don't bother with reversed stave spanning slurs
- del s.slurBuf [n] # slur finished, remove from stack
- else: # double definition, keep the last
- info ('double slur numbers %s-%s in part %d, measure %d, voice %d note %s, first discarded' % (type2, n, s.msr.ixp+1, s.msr.ixm+1, v2, note2.ns))
- s.slurBuf [n] = (type2, v2, note2, grace)
- else: # unmatched slur, put in dict
- s.slurBuf [n] = (type2, v2, note2, grace)
-
- def doNotations (s, note, nttn, isTab):
- for key, val in s.ornaments:
- if nttn.find (key) != None: note.before += [val] # just concat all ornaments
- trem = nttn.find ('ornaments/tremolo')
- if trem != None:
- type = trem.get ('type')
- if type == 'single':
- note.before.insert (0, '!%s!' % (int (trem.text) * '/'))
- else:
- note.fact = None # no time modification in ABC
- if s.tstep: # abc2svg version
- if type == 'stop': note.before.insert (0, '!trem%s!' % trem.text);
- else: # abc2xml version
- if type == 'start': note.before.insert (0, '!%s-!' % (int (trem.text) * '/'));
- fingering = nttn.findall ('technical/fingering')
- for finger in fingering: # handle multiple finger annotations
- if not isTab: note.before += ['!%s!' % finger.text] # fingering goes before chord (addChord)
- snaar = nttn.find ('technical/string')
- if snaar != None and isTab:
- if s.tstep:
- fret = nttn.find ('technical/fret')
- if fret != None: note.tab = (snaar.text, fret.text)
- else:
- deco = '!%s!' % snaar.text # no double string decos (bug in musescore)
- if deco not in note.ntdec: note.ntdec += deco
- wvln = nttn.find ('ornaments/wavy-line')
- if wvln != None:
- if wvln.get ('type') == 'start': note.before = ['!trill(!'] + note.before # keep left-right order!
- elif wvln.get ('type') == 'stop': note.before = ['!trill)!'] + note.before
- glis = nttn.find ('glissando')
- if glis == None: glis = nttn.find ('slide') # treat slide as glissando
- if glis != None:
- lt = '~' if glis.get ('line-type') =='wavy' else '-'
- if glis.get ('type') == 'start': note.before = ['!%s(!' % lt] + note.before # keep left-right order!
- elif glis.get ('type') == 'stop': note.before = ['!%s)!' % lt] + note.before
-
- def tabnote (s, alt, ptc, oct, v, ntrec):
- p = s.step_map [ptc] + int (alt or '0') # p in -2 .. 13
- if p > 11: oct += 1 # octave correction
- if p < 0: oct -= 1
- p = p % 12 # remap p into 0..11
- snaar_nw, fret_nw = ntrec.tab # the computed/annotated allocation of nt
- for i in range (4): # support same note on 4 strings
- na = s.note_alts [i % 3] [p] # get alternative representation of same note
- o = oct
- if na in ['^B', '^^B']: o -= 1 # because in adjacent octave
- if na in ['_C', '__C']: o += 1
- if '/' in na or i == 3: o = 9 # emergency notation for 4th string case
- nt = addoct (na, o)
- snaar, fret = s.tabmap.get ((v, nt), ('', '')) # the current allocation of nt
- if not snaar: break # note not yet allocated
- if snaar_nw == snaar: return nt # use present allocation
- if i == 3: # new allocaion needed but none is free
- fmt = 'rejected: voice %d note %3s string %s fret %2s remains: string %s fret %s'
- info (fmt % (v, nt, snaar_nw, fret_nw, snaar, fret), 1)
- ntrec.tab = (snaar, fret)
- s.tabmap [v, nt] = ntrec.tab # for tablature map (voice, note) -> (string, fret)
- return nt # ABC code always in key C (with midi pitch alterations)
-
- def ntAbc (s, ptc, oct, note, v, ntrec, isTab): # pitch, octave -> abc notation
- acc2alt = {'double-flat':-2,'flat-flat':-2,'flat':-1,'natural':0,'sharp':1,'sharp-sharp':2,'double-sharp':2}
- oct += s.clefOct.get (s.curStf [v], 0) # minus clef-octave-change value
- acc = note.findtext ('accidental') # should be the notated accidental
- alt = note.findtext ('pitch/alter') # pitch alteration (midi)
- if ntrec.tab: return s.tabnote (alt, ptc, oct, v, ntrec) # implies s.tstep is true (options.t was given)
- elif isTab and s.tstep:
- nt = ['__','_','','^','^^'][int (alt or '0') + 2] + addoct (ptc, oct)
- info ('no string notation found for note %s in voice %d' % (nt, v), 1)
- p = addoct (ptc, oct)
- if alt == None and s.msralts.get (ptc, 0): alt = 0 # no alt but key implies alt -> natural!!
- if alt == None and (p, v) in s.curalts: alt = 0 # no alt but previous note had one -> natural!!
- if acc == None and alt == None: return p # no acc, no alt
- elif acc != None:
- alt = acc2alt [acc] # acc takes precedence over the pitch here!
- else: # now see if we really must add an accidental
- alt = int (float (alt))
- if (p, v) in s.curalts: # the note in this voice has been altered before
- if alt == s.curalts [(p, v)]: return p # alteration still the same
- elif alt == s.msralts.get (ptc, 0): return p # alteration implied by the key
- tieElms = note.findall ('tie') + note.findall ('notations/tied') # in xml we have separate notated ties and playback ties
- if 'stop' in [e.get ('type') for e in tieElms]: return p # don't alter tied notes
- info ('accidental %d added in part %d, measure %d, voice %d note %s' % (alt, s.msr.ixp+1, s.msr.ixm+1, v+1, p))
- s.curalts [(p, v)] = alt
- p = ['__','_','=','^','^^'][alt+2] + p # and finally ... prepend the accidental
- return p
-
- def doNote (s, n): # parse a musicXML note tag
- note = Note ()
- v = int (n.findtext ('voice', '1'))
- if s.isSib: v += 100 * int (n.findtext ('staff', '1')) # repair bug in Sibelius
- chord = n.find ('chord') != None
- p = n.findtext ('pitch/step') or n.findtext ('unpitched/display-step')
- o = n.findtext ('pitch/octave') or n.findtext ('unpitched/display-octave')
- r = n.find ('rest')
- numer = n.findtext ('time-modification/actual-notes')
- if numer:
- denom = n.findtext ('time-modification/normal-notes')
- note.fact = (int (numer), int (denom))
- note.tup = [x.get ('type') for x in n.findall ('notations/tuplet')]
- dur = n.findtext ('duration')
- grc = n.find ('grace')
- note.grace = grc != None
- note.before, note.after = [], '' # strings with ABC stuff that goes before or after a note/chord
- if note.grace and not s.ingrace: # open a grace sequence
- s.ingrace = 1
- note.before = ['{']
- if grc.get ('slash') == 'yes': note.before += ['/'] # acciaccatura
- stopgrace = not note.grace and s.ingrace
- if stopgrace: # close the grace sequence
- s.ingrace = 0
- s.msc.lastnote.after += '}' # close grace on lastenote.after
- if dur == None or note.grace: dur = 0
- if r == None and n.get ('print-object') == 'no':
- if chord: return
- r = 1 # turn invisible notes (that advance the time) into invisible rests
- note.dur = int (dur)
- if r == None and (not p or not o): # not a rest and no pitch
- s.msc.cnt.inc ('nopt', v) # count unpitched notes
- o, p = 5,'E' # make it an E5 ??
- isTab = s.curClef and s.curClef.get (s.curStf [v], '').startswith ('tab')
- nttn = n.find ('notations') # add ornaments
- if nttn != None: s.doNotations (note, nttn, isTab)
- e = n.find ('stem') if r == None else None # no !stemless! before rest
- if e != None and e.text == 'none' and (not isTab or v in s.hasStems or s.tstep):
- note.before += ['s']; abcOut.stemless = 1;
- e = n.find ('accidental')
- if e != None and e.get ('parentheses') == 'yes': note.ntdec += '!courtesy!'
- if r != None: noot = 'x' if n.get ('print-object') == 'no' or isTab else 'z'
- else: noot = s.ntAbc (p, int (o), n, v, note, isTab)
- if n.find ('unpitched') != None:
- clef = s.curClef [s.curStf [v]] # the current clef for this voice
- step = staffStep (p, int (o), clef, s.tstep) # (clef independent) step value of note on the staff
- instr = n.find ('instrument')
- instId = instr.get ('id') if instr != None else 'dummyId'
- midi = s.drumInst.get (instId, abcMid (noot))
- nh = n.findtext ('notehead', '').replace (' ','-') # replace spaces in xml notehead names for percmap
- if nh == 'x': noot = '^' + noot.replace ('^','').replace ('_','')
- if nh in ['circle-x','diamond','triangle']: noot = '_' + noot.replace ('^','').replace ('_','')
- if nh and n.find ('notehead').get ('filled','') == 'yes': nh += '+'
- if nh and n.find ('notehead').get ('filled','') == 'no': nh += '-'
- s.drumNotes [(v, noot)] = (step, midi, nh) # keep data for percussion map
- tieElms = n.findall ('tie') + n.findall ('notations/tied') # in xml we have separate notated ties and playback ties
- if 'start' in [e.get ('type') for e in tieElms]: # n can have stop and start tie
- noot = noot + '-'
- note.beam = sum ([1 for b in n.findall('beam') if b.text in ['continue', 'end']]) + int (note.grace)
- lyrlast = 0; rsib = re.compile (r'^.*verse')
- for e in n.findall ('lyric'):
- lyrnum = int (rsib.sub ('', e.get ('number', '1'))) # also do Sibelius numbers
- if lyrnum == 0: lyrnum = lyrlast + 1 # and correct Sibelius bugs
- else: lyrlast = lyrnum
- note.lyrs [lyrnum] = doSyllable (e)
- stemdir = n.findtext ('stem')
- if s.wstems and (stemdir == 'up' or stemdir == 'down'):
- if stemdir != s.stemDir.get (v, ''):
- s.stemDir [v] = stemdir
- s.msc.appendElem (v, '[I:stemdir %s]' % stemdir)
- if chord: s.msc.addChord (note, noot)
- else:
- xmlstaff = int (n.findtext ('staff', '1'))
- if s.curStf [v] != xmlstaff: # the note should go to another staff
- dstaff = xmlstaff - s.curStf [v] # relative new staff number
- s.curStf [v] = xmlstaff # remember the new staff for this voice
- s.msc.appendElem (v, '[I:staff %+d]' % dstaff) # insert a move before the note
- s.msc.appendNote (v, note, noot)
- for slur in n.findall ('notations/slur'): # s.msc.lastnote points to the last real note/chord inserted above
- s.matchSlur (slur.get ('type'), slur.get ('number'), v, s.msc.lastnote, note.grace, stopgrace) # match slur definitions
-
- def doAttr (s, e): # parse a musicXML attribute tag
- teken = {'C1':'alto1','C2':'alto2','C3':'alto','C4':'tenor','F4':'bass','F3':'bass3','G2':'treble','TAB':'tab','percussion':'perc'}
- dvstxt = e.findtext ('divisions')
- if dvstxt: s.msr.divs = int (dvstxt)
- steps = int (e.findtext ('transpose/chromatic', '0')) # for transposing instrument
- fifths = e.findtext ('key/fifths')
- first = s.msc.tijd == 0 and s.msr.ixm == 0 # first attributes in first measure
- if fifths:
- key, s.msralts = setKey (int (fifths), e.findtext ('key/mode','major'))
- if first and not steps and abcOut.key == 'none':
- abcOut.key = key # first measure -> header, if not transposing instrument or percussion part!
- elif key != abcOut.key or not first:
- s.msr.attr += '[K:%s]' % key # otherwise -> voice
- beats = e.findtext ('time/beats')
- if beats:
- unit = e.findtext ('time/beat-type')
- mtr = beats + '/' + unit
- if first: abcOut.mtr = mtr # first measure -> header
- else: s.msr.attr += '[M:%s]' % mtr # otherwise -> voice
- s.msr.mtr = int (beats), int (unit)
- s.msr.mdur = (s.msr.divs * s.msr.mtr[0] * 4) // s.msr.mtr[1] # duration of measure in xml-divisions
- for ms in e.findall('measure-style'):
- n = int (ms.get ('number', '1')) # staff number
- voices = s.stfMap [n] # all voices of staff n
- for mr in ms.findall('measure-repeat'):
- ty = mr.get('type')
- if ty == 'start': # remember start measure number and text voor each staff
- s.repeat_str [n] = [s.msr.ixm, mr.text]
- for v in voices: # insert repeat into all voices, value will be overwritten at stop
- s.msc.insertElem (v, s.repeat_str [n])
- elif ty == 'stop': # calculate repeat measure count for this staff n
- start_ix, text_ = s.repeat_str [n]
- repeat_count = s.msr.ixm - start_ix
- if text_:
- mid_str = "%s " % text_
- repeat_count /= int (text_)
- else:
- mid_str = "" # overwrite repeat with final string
- s.repeat_str [n][0] = '[I:repeat %s%d]' % (mid_str, repeat_count)
- del s.repeat_str [n] # remove closed repeats
- toct = e.findtext ('transpose/octave-change', '')
- if toct: steps += 12 * int (toct) # extra transposition of toct octaves
- for clef in e.findall ('clef'): # a part can have multiple staves
- n = int (clef.get ('number', '1')) # local staff number for this clef
- sgn = clef.findtext ('sign')
- line = clef.findtext ('line', '') if sgn not in ['percussion','TAB'] else ''
- cs = teken.get (sgn + line, '')
- oct = clef.findtext ('clef-octave-change', '') or '0'
- if oct: cs += {-2:'-15', -1:'-8', 1:'+8', 2:'+15'}.get (int (oct), '')
- s.clefOct [n] = -int (oct); # xml playback pitch -> abc notation pitch
- if steps: cs += ' transpose=' + str (steps)
- stfdtl = e.find ('staff-details')
- if stfdtl and int (stfdtl.get ('number', '1')) == n:
- lines = stfdtl.findtext ('staff-lines')
- if lines:
- lns= '|||' if lines == '3' and sgn == 'TAB' else lines
- cs += ' stafflines=%s' % lns
- s.stafflines = int (lines) # remember for tab staves
- strings = stfdtl.findall ('staff-tuning')
- if strings:
- tuning = [st.findtext ('tuning-step') + st.findtext ('tuning-octave') for st in strings]
- cs += ' strings=%s' % ','.join (tuning)
- capo = stfdtl.findtext ('capo')
- if capo: cs += ' capo=%s' % capo
- s.curClef [n] = cs # keep track of current clef (for percmap)
- if first: s.clefMap [n] = cs # clef goes to header (where it is mapped to voices)
- else:
- voices = s.stfMap[n] # clef change to all voices of staff n
- for v in voices:
- if n != s.curStf [v]: # voice is not at its home staff n
- dstaff = n - s.curStf [v]
- s.curStf [v] = n # reset current staff at start of measure to home position
- s.msc.appendElem (v, '[I:staff %+d]' % dstaff)
- s.msc.appendElem (v, '[K:%s]' % cs)
-
- def findVoice (s, i, es):
- stfnum = int (es[i].findtext ('staff',1)) # directions belong to a staff
- vs = s.stfMap [stfnum] # voices in this staff
- v1 = vs [0] if vs else 1 # directions to first voice of staff
- if s.dirtov1: return stfnum, v1, v1 # option --v1
- for e in es [i+1:]: # or to the voice of the next note
- if e.tag == 'note':
- v = int (e.findtext ('voice', '1'))
- if s.isSib: v += 100 * int (e.findtext ('staff', '1')) # repair bug in Sibelius
- stf = s.vce2stf [v] # use our own staff allocation
- return stf, v, v1 # voice of next note, first voice of staff
- if e.tag == 'backup': break
- return stfnum, v1, v1 # no note found, fall back to v1
-
- def doDirection (s, e, i, es): # parse a musicXML direction tag
- def addDirection (x, vs, tijd, stfnum):
- if not x: return
- vs = s.stfMap [stfnum] if '!8v' in x else [vs] # ottava's go to all voices of staff
- for v in vs:
- if tijd != None: # insert at time of encounter
- s.msc.appendElemT (v, x.replace ('(',')').replace ('ped','ped-up'), tijd)
- else:
- s.msc.appendElem (v, x)
- def startStop (dtype, vs, stfnum=1):
- typmap = {'down':'!8va(!', 'up':'!8vb(!', 'crescendo':'!<(!', 'diminuendo':'!>(!', 'start':'!ped!'}
- type = t.get ('type', '')
- k = dtype + t.get ('number', '1') # key to match the closing direction
- if type in typmap: # opening the direction
- x = typmap [type]
- if k in s.dirStk: # closing direction already encountered
- stype, tijd = s.dirStk [k]; del s.dirStk [k]
- if stype == 'stop':
- addDirection (x, vs, tijd, stfnum)
- else:
- info ('%s direction %s has no stop in part %d, measure %d, voice %d' % (dtype, stype, s.msr.ixp+1, s.msr.ixm+1, vs+1))
- s.dirStk [k] = ((type , vs)) # remember voice and type for closing
- else:
- s.dirStk [k] = ((type , vs)) # remember voice and type for closing
- elif type == 'stop':
- if k in s.dirStk: # matching open direction found
- type, vs = s.dirStk [k]; del s.dirStk [k] # into the same voice
- if type == 'stop':
- info ('%s direction %s has double stop in part %d, measure %d, voice %d' % (dtype, type, s.msr.ixp+1, s.msr.ixm+1, vs+1))
- x = ''
- else:
- x = typmap [type].replace ('(',')').replace ('ped','ped-up')
- else: # closing direction found before opening
- s.dirStk [k] = ('stop', s.msc.tijd)
- x = '' # delay code generation until opening found
- else: raise ValueError ('wrong direction type')
- addDirection (x, vs, None, stfnum)
- tempo, wrdstxt = None, ''
- plcmnt = e.get ('placement')
- stf, vs, v1 = s.findVoice (i, es)
- jmp = '' # for jump sound elements: dacapo, dalsegno and family
- jmps = [('dacapo','D.C.'),('dalsegno','D.S.'),('tocoda','dacoda'),('fine','fine'),('coda','O'),('segno','S')]
- t = e.find ('sound') # there are many possible attributes for sound
- if t != None:
- minst = t.find ('midi-instrument')
- if minst:
- prg = t.findtext ('midi-instrument/midi-program')
- chn = t.findtext ('midi-instrument/midi-channel')
- vids = [v for v, id in s.vceInst.items () if id == minst.get ('id')]
- if vids: vs = vids [0] # direction for the indentified voice, not the staff
- parm, inst = ('program', str (int (prg) - 1)) if prg else ('channel', chn)
- if inst and abcOut.volpan > 0: s.msc.appendElem (vs, '[I:MIDI= %s %s]' % (parm, inst))
- tempo = t.get ('tempo') # look for tempo attribute
- if tempo:
- tempo = '%.0f' % float (tempo) # hope it is a number and insert in voice 1
- tempo_units = (1,4) # always 1/4 for sound elements!
- for r, v in jmps:
- if t.get (r, ''): jmp = v; break
- dirtypes = e.findall ('direction-type')
- for dirtyp in dirtypes:
- units = { 'whole': (1,1), 'half': (1,2), 'quarter': (1,4), 'eighth': (1,8) }
- metr = dirtyp.find ('metronome')
- if metr != None:
- t = metr.findtext ('beat-unit', '')
- if t in units: tempo_units = units [t]
- else: tempo_units = units ['quarter']
- if metr.find ('beat-unit-dot') != None:
- tempo_units = simplify (tempo_units [0] * 3, tempo_units [1] * 2)
- tmpro = re.search ('[.\d]+', metr.findtext ('per-minute')) # look for a number
- if tmpro: tempo = tmpro.group () # overwrites the value set by the sound element of this direction
- t = dirtyp.find ('wedge')
- if t != None: startStop ('wedge', vs)
- allwrds = dirtyp.findall ('words') # insert text annotations
- if not allwrds: allwrds = dirtyp.findall ('rehearsal') # treat rehearsal mark as text annotation
- for wrds in allwrds:
- if jmp: # ignore the words when a jump sound element is present in this direction
- s.msc.appendElem (vs, '!%s!' % jmp , 1) # to voice
- break
- plc = plcmnt == 'below' and '_' or '^'
- if float (wrds.get ('default-y', '0')) < 0: plc = '_'
- wrdstxt += (wrds.text or '').replace ('"','\\"').replace ('\n', '\\n')
- wrdstxt = wrdstxt.strip ()
- for key, val in dynamics_map.items ():
- if dirtyp.find ('dynamics/' + key) != None:
- s.msc.appendElem (vs, val, 1) # to voice
- if dirtyp.find ('coda') != None: s.msc.appendElem (vs, 'O', 1)
- if dirtyp.find ('segno') != None: s.msc.appendElem (vs, 'S', 1)
- t = dirtyp.find ('octave-shift')
- if t != None: startStop ('octave-shift', vs, stf) # assume size == 8 for the time being
- t = dirtyp.find ('pedal')
- if t != None and s.ped:
- if not s.pedVce: s.pedVce = vs
- startStop ('pedal', s.pedVce)
- if dirtyp.findtext ('other-direction') == 'diatonic fretting': s.diafret = 1;
- if tempo:
- tempo = '%.0f' % float (tempo) # hope it is a number and insert in voice 1
- if s.msc.tijd == 0 and s.msr.ixm == 0: # first measure -> header
- abcOut.tempo = tempo
- abcOut.tempo_units = tempo_units
- else:
- s.msc.appendElem (v1, '[Q:%d/%d=%s]' % (tempo_units [0], tempo_units [1], tempo)) # otherwise -> 1st voice
- if wrdstxt: s.msc.appendElem (vs, '"%s%s"' % (plc, wrdstxt), 1) # to voice, but after tempo
-
- def doHarmony (s, e, i, es): # parse a musicXMl harmony tag
- _, vt, _ = s.findVoice (i, es)
- short = {'major':'', 'minor':'m', 'augmented':'+', 'diminished':'dim', 'dominant':'7', 'half-diminished':'m7b5'}
- accmap = {'major':'maj', 'dominant':'', 'minor':'m', 'diminished':'dim', 'augmented':'+', 'suspended':'sus'}
- modmap = {'second':'2', 'fourth':'4', 'seventh':'7', 'sixth':'6', 'ninth':'9', '11th':'11', '13th':'13'}
- altmap = {'1':'#', '0':'', '-1':'b'}
- root = e.findtext ('root/root-step','')
- alt = altmap.get (e.findtext ('root/root-alter'), '')
- sus = ''
- kind = e.findtext ('kind', '')
- if kind in short: kind = short [kind]
- elif '-' in kind: # xml chord names: -
- triad, mod = kind.split ('-')
- kind = accmap.get (triad, '') + modmap.get (mod, '')
- if kind.startswith ('sus'): kind, sus = '', kind # sus-suffix goes to the end
- elif kind == 'none': kind = e.find ('kind').get ('text','')
- degrees = e.findall ('degree')
- for d in degrees: # chord alterations
- kind += altmap.get (d.findtext ('degree-alter'),'') + d.findtext ('degree-value','')
- kind = kind.replace ('79','9').replace ('713','13').replace ('maj6','6')
- bass = e.findtext ('bass/bass-step','') + altmap.get (e.findtext ('bass/bass-alter'),'')
- s.msc.appendElem (vt, '"%s%s%s%s%s"' % (root, alt, kind, sus, bass and '/' + bass), 1)
-
- def doBarline (s, e): # 0 = no repeat, 1 = begin repeat, 2 = end repeat
- rep = e.find ('repeat')
- if rep != None: rep = rep.get ('direction')
- if s.unfold: # unfold repeat, don't translate barlines
- return rep and (rep == 'forward' and 1 or 2) or 0
- loc = e.get ('location', 'right') # right is the default
- if loc == 'right': # only change style for the right side
- style = e.findtext ('bar-style')
- if style == 'light-light': s.msr.rline = '||'
- elif style == 'light-heavy': s.msr.rline = '|]'
- if rep != None: # repeat found
- if rep == 'forward': s.msr.lline = ':'
- else: s.msr.rline = ':|' # override barline style
- end = e.find ('ending')
- if end != None:
- if end.get ('type') == 'start':
- n = end.get ('number', '1').replace ('.','').replace (' ','')
- try: list (map (int, n.split (','))) # should be a list of integers
- except: n = '"%s"' % n.strip () # illegal musicXML
- s.msr.lnum = n # assume a start is always at the beginning of a measure
- elif s.msr.rline == '|': # stop and discontinue the same in ABC ?
- s.msr.rline = '||' # to stop on a normal barline use || in ABC ?
- return 0
-
- def doPrint (s, e): # print element, measure number -> insert a line break
- if e.get ('new-system') == 'yes' or e.get ('new-page') == 'yes':
- if not s.nolbrk: return '$' # a line break
-
- def doPartList (s, e): # translate the start/stop-event-based xml-partlist into proper tree
- for sp in e.findall ('part-list/score-part'):
- midi = {}
- for m in sp.findall ('midi-instrument'):
- x = [m.findtext (p, s.midDflt [i]) for i,p in enumerate (['midi-channel','midi-program','volume','pan'])]
- pan = float (x[3])
- if pan >= -90 and pan <= 90: # would be better to map behind-pannings
- pan = (float (x[3]) + 90) / 180 * 127 # xml between -90 and +90
- midi [m.get ('id')] = [int (x[0]), int (x[1]), float (x[2]) * 1.27, pan] # volume 100 -> midi 127
- up = m.findtext ('midi-unpitched')
- if up: s.drumInst [m.get ('id')] = int (up) - 1 # store midi-pitch for channel 10 notes
- s.instMid.append (midi)
- ps = e.find ('part-list') # partlist = [groupelem]
- xs = getPartlist (ps) # groupelem = partname | grouplist
- partlist, _ = parseParts (xs, {}, []) # grouplist = [groupelem, ..., groupdata]
- return partlist # groupdata = [group-symbol, group-barline, group-name, group-abbrev]
-
- def mkTitle (s, e):
- def filterCredits (y): # y == filter level, higher filters less
- cs = []
- for x in credits: # skip redundant credit lines
- if y < 6 and (x in title or x in mvttl): continue # sure skip
- if y < 5 and (x in composer or x in lyricist): continue # almost sure skip
- if y < 4 and ((title and title in x) or (mvttl and mvttl in x)): continue # may skip too much
- if y < 3 and ([1 for c in composer if c in x] or [1 for c in lyricist if c in x]): continue # skips too much
- if y < 2 and re.match (r'^[\d\W]*$', x): continue # line only contains numbers and punctuation
- cs.append (x)
- if y == 0 and (title + mvttl): cs = '' # default: only credit when no title set
- return cs
- title = e.findtext ('work/work-title', '').strip ()
- mvttl = e.findtext ('movement-title', '').strip ()
- composer, lyricist, credits = [], [], []
- for creator in e.findall ('identification/creator'):
- if creator.text:
- if creator.get ('type') == 'composer':
- composer += [line.strip () for line in creator.text.split ('\n')]
- elif creator.get ('type') in ('lyricist', 'transcriber'):
- lyricist += [line.strip () for line in creator.text.split ('\n')]
- for rights in e.findall ('identification/rights'):
- if rights.text:
- lyricist += [line.strip () for line in rights.text.split ('\n')]
- for credit in e.findall('credit'):
- cs = ''.join (e.text or '' for e in credit.findall('credit-words'))
- credits += [re.sub (r'\s*[\r\n]\s*', ' ', cs)]
- credits = filterCredits (s.ctf)
- if title: title = 'T:%s\n' % title.replace ('\n', '\nT:')
- if mvttl: title += 'T:%s\n' % mvttl.replace ('\n', '\nT:')
- if credits: title += '\n'.join (['T:%s' % c for c in credits]) + '\n'
- if composer: title += '\n'.join (['C:%s' % c for c in composer]) + '\n'
- if lyricist: title += '\n'.join (['Z:%s' % c for c in lyricist]) + '\n'
- if title: abcOut.title = title[:-1]
- s.isSib = 'Sibelius' in (e.findtext ('identification/encoding/software') or '')
- if s.isSib: info ('Sibelius MusicXMl is unreliable')
-
- def doDefaults (s, e):
- if not s.doPageFmt: return # return if -pf option absent
- d = e.find ('defaults');
- if d == None: return;
- mils = d.findtext ('scaling/millimeters') # mills == staff height (mm)
- tenths = d.findtext ('scaling/tenths') # staff height in tenths
- if not mils or not tenths: return
- xmlScale = float (mils) / float (tenths) / 10 # tenths -> mm
- space = 10 * xmlScale # space between staff lines == 10 tenths
- abcScale = space / 0.2117 # 0.2117 cm = 6pt = space between staff lines for scale = 1.0 in abcm2ps
- abcOut.pageFmt ['scale'] = abcScale
- eks = 2 * ['page-layout/'] + 4 * ['page-layout/page-margins/']
- eks = [a+b for a,b in zip (eks, 'page-height,page-width,left-margin,right-margin,top-margin,bottom-margin'.split (','))]
- for i in range (6):
- v = d.findtext (eks [i])
- k = abcOut.pagekeys [i+1] # pagekeys [0] == scale already done, skip it
- if not abcOut.pageFmt [k] and v:
- try: abcOut.pageFmt [k] = float (v) * xmlScale # -> cm
- except: info ('illegal value %s for xml element %s', (v, eks [i])); continue # just skip illegal values
-
- def locStaffMap (s, part, maten): # map voice to staff with majority voting
- vmap = {} # {voice -> {staff -> n}} count occurrences of voice in staff
- s.vceInst = {} # {voice -> instrument id} for this part
- s.msc.vnums = {} # voice id's
- s.hasStems = {} # XML voice nums with at least one note with a stem (for tab key)
- s.stfMap, s.clefMap = {}, {} # staff -> [voices], staff -> clef
- ns = part.findall ('measure/note')
- for n in ns: # count staff allocations for all notes
- v = int (n.findtext ('voice', '1'))
- if s.isSib: v += 100 * int (n.findtext ('staff', '1')) # repair bug in Sibelius
- s.msc.vnums [v] = 1 # collect all used voice id's in this part
- sn = int (n.findtext ('staff', '1'))
- s.stfMap [sn] = []
- if v not in vmap:
- vmap [v] = {sn:1}
- else:
- d = vmap[v] # counter for voice v
- d[sn] = d.get (sn, 0) + 1 # ++ number of allocations for staff sn
- x = n.find ('instrument')
- if x != None: s.vceInst [v] = x.get ('id')
- x, noRest = n.findtext ('stem'), n.find ('rest') == None
- if noRest and (not x or x != 'none'): s.hasStems [v] = 1 # XML voice v has at least one stem
- vks = list (vmap.keys ())
- if s.jscript or s.isSib: vks.sort ()
- for v in vks: # choose staff with most allocations for each voice
- xs = [(n, sn) for sn, n in vmap[v].items ()]
- xs.sort ()
- stf = xs[-1][1] # the winner: staff with most notes of voice v
- s.stfMap [stf].append (v)
- s.vce2stf [v] = stf # reverse map
- s.curStf [v] = stf # current staff of XML voice v
-
- def addStaffMap (s, vvmap): # vvmap: xml voice number -> global abc voice number
- part = [] # default: brace on staffs of one part
- for stf, voices in sorted (s.stfMap.items ()): # s.stfMap has xml staff and voice numbers
- locmap = [vvmap [iv] for iv in voices if iv in vvmap]
- nostem = [(iv not in s.hasStems) for iv in voices if iv in vvmap] # same order as locmap
- if locmap: # abc voice number of staff stf
- part.append (locmap)
- clef = s.clefMap.get (stf, 'treble') # {xml staff number -> clef}
- for i, iv in enumerate (locmap):
- clef_attr = ''
- if clef.startswith ('tab'):
- if nostem [i] and 'nostems' not in clef: clef_attr = ' nostems'
- if s.diafret and 'diafret' not in clef: clef_attr += ' diafret' # for all voices in the part
- abcOut.clefs [iv] = clef + clef_attr # add nostems when all notes of voice had no stem
- s.gStfMap.append (part)
-
- def addMidiMap (s, ip, vvmap): # map abc voices to midi settings
- instr = s.instMid [ip] # get the midi settings for this part
- if instr.values (): defInstr = list(instr.values ())[0] # default settings = first instrument
- else: defInstr = s.midDflt # no instruments defined
- xs = []
- for v, vabc in vvmap.items (): # xml voice num, abc voice num
- ks = sorted (s.drumNotes.items ())
- ds = [(nt, step, midi, head) for (vd, nt), (step, midi, head) in ks if v == vd] # map perc notes
- id = s.vceInst.get (v, '') # get the instrument-id for part with multiple instruments
- if id in instr: # id is defined as midi-instrument in part-list
- xs.append ((vabc, instr [id] + ds)) # get midi settings for id
- else: xs.append ((vabc, defInstr + ds)) # only one instrument for this part
- xs.sort () # put abc voices in order
- s.midiMap.extend ([midi for v, midi in xs])
- snaarmap = ['E','G','B','d', 'f', 'a', "c'", "e'"]
- diamap = '0,1-,1,1+,2,3,3,4,4,5,6,6+,7,8-,8,8+,9,10,10,11,11,12,13,13+,14'.split (',')
- for k in sorted (s.tabmap.keys ()): # add %%map's for all tab voices
- v, noot = k;
- snaar, fret = s.tabmap [k];
- if s.diafret: fret = diamap [int (fret)]
- vabc = vvmap [v]
- snaar = s.stafflines - int (snaar)
- xs = s.tabVceMap.get (vabc, [])
- xs.append ('%%%%map tab%d %s print=%s heads=kop%s\n' % (vabc, noot, snaarmap [snaar], fret))
- s.tabVceMap [vabc] = xs
- s.koppen [fret] = 1 # collect noteheads for SVG defs
-
- def parse (s, fobj):
- vvmapAll = {} # collect xml->abc voice maps (vvmap) of all parts
- e = E.parse (fobj)
- s.mkTitle (e)
- s.doDefaults (e)
- partlist = s.doPartList (e)
- parts = e.findall ('part')
- for ip, p in enumerate (parts):
- maten = p.findall ('measure')
- s.locStaffMap (p, maten) # {voice -> staff} for this part
- s.drumNotes = {} # (xml voice, abc note) -> (midi note, note head)
- s.clefOct = {} # xml staff number -> current clef-octave-change
- s.curClef = {} # xml staff number -> current abc clef
- s.stemDir = {} # xml voice number -> current stem direction
- s.tabmap = {} # (xml voice, abc note) -> (string, fret)
- s.diafret = 0 # use diatonic fretting
- s.stafflines = 5
- s.msc.initVoices (newPart = 1) # create all voices
- aantalHerhaald = 0 # keep track of number of repititions
- herhaalMaat = 0 # target measure of the repitition
- divisions = [] # current value of for each measure
- s.msr = Measure (ip) # various measure data
- while s.msr.ixm < len (maten):
- maat = maten [s.msr.ixm]
- herhaal, lbrk = 0, ''
- s.msr.reset ()
- s.curalts = {} # passing accidentals are reset each measure
- es = list (maat)
- for i, e in enumerate (es):
- if e.tag == 'note': s.doNote (e)
- elif e.tag == 'attributes': s.doAttr (e)
- elif e.tag == 'direction': s.doDirection (e, i, es)
- elif e.tag == 'sound': s.doDirection (maat, i, es) # sound element directly in measure!
- elif e.tag == 'harmony': s.doHarmony (e, i, es)
- elif e.tag == 'barline': herhaal = s.doBarline (e)
- elif e.tag == 'backup':
- dt = int (e.findtext ('duration'))
- if chkbug (dt, s.msr): s.msc.incTime (-dt)
- elif e.tag == 'forward':
- dt = int (e.findtext ('duration'))
- if chkbug (dt, s.msr): s.msc.incTime (dt)
- elif e.tag == 'print': lbrk = s.doPrint (e)
- s.msc.addBar (lbrk, s.msr)
- divisions.append (s.msr.divs)
- if herhaal == 1:
- herhaalMaat = s.msr.ixm
- s.msr.ixm += 1
- elif herhaal == 2:
- if aantalHerhaald < 1: # jump
- s.msr.ixm = herhaalMaat
- aantalHerhaald += 1
- else:
- aantalHerhaald = 0 # reset
- s.msr.ixm += 1 # just continue
- else: s.msr.ixm += 1 # on to the next measure
- for rv in s.repeat_str.values (): # close hanging measure-repeats without stop
- rv [0] = '[I:repeat %s %d]' % (rv [1], 1)
- vvmap = s.msc.outVoices (divisions, ip, s.isSib)
- s.addStaffMap (vvmap) # update global staff map
- s.addMidiMap (ip, vvmap)
- vvmapAll.update (vvmap)
- if vvmapAll: # skip output if no part has any notes
- abcOut.mkHeader (s.gStfMap, partlist, s.midiMap, s.tabVceMap, s.koppen)
- abcOut.writeall ()
- else: info ('nothing written, %s has no notes ...' % abcOut.fnmext)
-
-#----------------
-# Main Program
-#----------------
-if __name__ == '__main__':
- from optparse import OptionParser
- from glob import glob
- from zipfile import ZipFile
- ustr = '%prog [-h] [-u] [-m] [-c C] [-d D] [-n CPL] [-b BPL] [-o DIR] [-v V]\n'
- ustr += '[-x] [-p PFMT] [-t] [-s] [-i] [--v1] [--noped] [--stems] [ ...]'
- parser = OptionParser (usage=ustr, version=str(VERSION))
- parser.add_option ("-u", action="store_true", help="unfold simple repeats")
- parser.add_option ("-m", action="store", help="0 -> no %%MIDI, 1 -> minimal %%MIDI, 2-> all %%MIDI", default=0)
- parser.add_option ("-c", action="store", type="int", help="set credit text filter to C", default=0, metavar='C')
- parser.add_option ("-d", action="store", type="int", help="set L:1/D", default=0, metavar='D')
- parser.add_option ("-n", action="store", type="int", help="CPL: max number of characters per line (default 100)", default=0, metavar='CPL')
- parser.add_option ("-b", action="store", type="int", help="BPL: max number of bars per line", default=0, metavar='BPL')
- parser.add_option ("-o", action="store", help="store abc files in DIR", default='', metavar='DIR')
- parser.add_option ("-v", action="store", type="int", help="set volta typesetting behaviour to V", default=0, metavar='V')
- parser.add_option ("-x", action="store_true", help="output no line breaks")
- parser.add_option ("-p", action="store", help="pageformat PFMT (cm) = scale, pageheight, pagewidth, leftmargin, rightmargin, topmargin, botmargin", default='', metavar='PFMT')
- parser.add_option ("-j", action="store_true", help="switch for compatibility with javascript version")
- parser.add_option ("-t", action="store_true", help="translate perc- and tab-staff to ABC code with %%map, %%voicemap")
- parser.add_option ("-s", action="store_true", help="shift node heads 3 units left in a tab staff")
- parser.add_option ("--v1", action="store_true", help="start-stop directions allways to first voice of staff")
- parser.add_option ("--noped", action="store_false", help="skip all pedal directions", dest='ped', default=True)
- parser.add_option ("--stems", action="store_true", help="translate stem directions", dest='stm', default=False)
- parser.add_option ("-i", action="store_true", help="read xml file from standard input")
- options, args = parser.parse_args ()
- if options.n < 0: parser.error ('only values >= 0')
- if options.b < 0: parser.error ('only values >= 0')
- if options.d and options.d not in [2**n for n in range (10)]:
- parser.error ('D should be on of %s' % ','.join ([str(2**n) for n in range (10)]))
- options.p = options.p and options.p.split (',') or [] # ==> [] | [string]
- if len (args) == 0 and not options.i: parser.error ('no input file given')
- pad = options.o
- if pad:
- if not os.path.exists (pad): os.mkdir (pad)
- if not os.path.isdir (pad): parser.error ('%s is not a directory' % pad)
- fnmext_list = []
- for i in args: fnmext_list += glob (i)
- if options.i: fnmext_list = ['stdin.xml']
- if not fnmext_list: parser.error ('none of the input files exist')
- for X, fnmext in enumerate (fnmext_list):
- fnm, ext = os.path.splitext (fnmext)
- if ext.lower () not in ('.xml','.mxl','.musicxml'):
- info ('skipped input file %s, it should have extension .xml or .mxl' % fnmext)
- continue
- if os.path.isdir (fnmext):
- info ('skipped directory %s. Only files are accepted' % fnmext)
- continue
- if fnmext == 'stdin.xml':
- fobj = sys.stdin
- elif ext.lower () == '.mxl': # extract .xml file from .mxl file
- z = ZipFile(fnmext)
- for n in z.namelist(): # assume there is always an xml file in a mxl archive !!
- if (n[:4] != 'META') and (n[-4:].lower() == '.xml'):
- fobj = z.open (n)
- break # assume only one MusicXML file per archive
- else:
- fobj = open (fnmext, 'rb') # open regular xml file
-
- abcOut = ABCoutput (fnm + '.abc', pad, X, options) # create global ABC output object
- psr = Parser (options) # xml parser
- try:
- psr.parse (fobj) # parse file fobj and write abc to .abc
- except:
- etype, value, traceback = sys.exc_info () # works in python 2 & 3
- info ('** %s occurred: %s in %s' % (etype, value, fnmext), 0)
diff --git a/spaces/sayakpaul/sots-outdoor-dehazing-maxim/maxim/blocks/block_gating.py b/spaces/sayakpaul/sots-outdoor-dehazing-maxim/maxim/blocks/block_gating.py
deleted file mode 100644
index 0d06af50448f7a15a39c84100be1a99710b24c32..0000000000000000000000000000000000000000
--- a/spaces/sayakpaul/sots-outdoor-dehazing-maxim/maxim/blocks/block_gating.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import tensorflow as tf
-from tensorflow.keras import backend as K
-from tensorflow.keras import layers
-
-from ..layers import BlockImages, SwapAxes, UnblockImages
-
-
-def BlockGatingUnit(use_bias: bool = True, name: str = "block_gating_unit"):
- """A SpatialGatingUnit as defined in the gMLP paper.
-
- The 'spatial' dim is defined as the **second last**.
- If applied on other dims, you should swapaxes first.
- """
-
- def apply(x):
- u, v = tf.split(x, 2, axis=-1)
- v = layers.LayerNormalization(
- epsilon=1e-06, name=f"{name}_intermediate_layernorm"
- )(v)
- n = K.int_shape(x)[-2] # get spatial dim
- v = SwapAxes()(v, -1, -2)
- v = layers.Dense(n, use_bias=use_bias, name=f"{name}_Dense_0")(v)
- v = SwapAxes()(v, -1, -2)
- return u * (v + 1.0)
-
- return apply
-
-
-def BlockGmlpLayer(
- block_size,
- use_bias: bool = True,
- factor: int = 2,
- dropout_rate: float = 0.0,
- name: str = "block_gmlp",
-):
- """Block gMLP layer that performs local mixing of tokens."""
-
- def apply(x):
- n, h, w, num_channels = (
- K.int_shape(x)[0],
- K.int_shape(x)[1],
- K.int_shape(x)[2],
- K.int_shape(x)[3],
- )
- fh, fw = block_size
- gh, gw = h // fh, w // fw
- x = BlockImages()(x, patch_size=(fh, fw))
- # MLP2: Local (block) mixing part, provides within-block communication.
- y = layers.LayerNormalization(epsilon=1e-06, name=f"{name}_LayerNorm")(x)
- y = layers.Dense(
- num_channels * factor,
- use_bias=use_bias,
- name=f"{name}_in_project",
- )(y)
- y = tf.nn.gelu(y, approximate=True)
- y = BlockGatingUnit(use_bias=use_bias, name=f"{name}_BlockGatingUnit")(y)
- y = layers.Dense(
- num_channels,
- use_bias=use_bias,
- name=f"{name}_out_project",
- )(y)
- y = layers.Dropout(dropout_rate)(y)
- x = x + y
- x = UnblockImages()(x, grid_size=(gh, gw), patch_size=(fh, fw))
- return x
-
- return apply
diff --git a/spaces/sccstandardteam/ChuanhuChatGPT/modules/models/__init__.py b/spaces/sccstandardteam/ChuanhuChatGPT/modules/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/scedlatioru/img-to-music/example/Crack PORTABLE Digital Media Group Facebook Blaster Pro V7.1.9 Portable.md b/spaces/scedlatioru/img-to-music/example/Crack PORTABLE Digital Media Group Facebook Blaster Pro V7.1.9 Portable.md
deleted file mode 100644
index 558513c66063454d2a71d23679f2f8a29f4b2f3a..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Crack PORTABLE Digital Media Group Facebook Blaster Pro V7.1.9 Portable.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable: A Complete Guide
-
-
If you are looking for a software that can help you promote your business or brand on Facebook, you might want to check out CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable. This software is a powerful and easy-to-use tool that lets you send mass messages, friend requests, comments, and likes to thousands of Facebook users.
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable
What is CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable?
-
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable is the latest version of the popular Facebook marketing software from Digital Media Group. It is a cracked version that comes with a keygen that allows you to activate the full features of the software without paying for it.
-
-
With this software, you can import your Facebook account or create multiple accounts with different proxies. You can then use the software to search for your target audience based on keywords, groups, pages, events, or locations. You can also import your own list of Facebook users or generate a random list.
-
-
Once you have your list of users, you can use the software to send them mass messages, friend requests, comments, and likes. You can also schedule your actions and set the time interval between each action. You can also track your results and manage your accounts with ease.
-
-
What are the features of CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable?
-
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable has many features that make it a versatile and powerful Facebook marketing software. Some of the main features are:
-
-
-
-
Mass messaging: You can send personalized messages to thousands of Facebook users with one click. You can also use spintax and variables to make your messages unique and relevant.
-
Mass friend requests: You can send friend requests to thousands of Facebook users with one click. You can also use filters and settings to avoid spamming and getting banned.
-
Mass comments: You can post comments on thousands of Facebook posts with one click. You can also use spintax and variables to make your comments unique and engaging.
-
Mass likes: You can like thousands of Facebook posts with one click. You can also use filters and settings to avoid spamming and getting banned.
-
Account management: You can manage multiple Facebook accounts with ease. You can also use proxies and cookies to protect your accounts and avoid detection.
-
List management: You can import, export, edit, and delete your list of Facebook users with ease. You can also generate a random list or use the built-in search function to find your target audience.
-
Scheduling: You can schedule your actions and set the time interval between each action. You can also pause, resume, or stop your actions at any time.
-
Tracking: You can track your results and see how many messages, friend requests, comments, and likes you have sent or received. You can also see how many accounts you have created or imported.
-
-
-
How to download and install CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable?
-
-
If you want to try out CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable for free, you can follow these steps:
-
-
-
Download the software from one of the torrent sites that offer it.
-
Extract the ZIP file using a program like WinRAR or 7-Zip.
-
Run the setup file and follow the instructions to install the software.
-
Copy all the files from the crack folder and paste them into the installation folder.
-
Run the keygen file as administrator and click on Generate.
-
Enjoy your activated CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable!
-
-
-
Conclusion
-
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable is a great software for promoting your business or brand on Facebook. It has many features and options that allow you to send mass messages, friend requests, comments, and likes to thousands of Facebook users with ease and efficiency.
-
-
However, this software also has some drawbacks that you should be aware of. It is illegal, unstable, and unethical to use this software without paying for it or obtaining it legally. You are also risking getting infected by viruses, malware, or spyware that might come with the crack and keygen.
-
-
If you want to use a Facebook marketing software that is legal, stable, and ethical, you should consider some of the alternatives that we have mentioned above. They are also easy to use, versatile, powerful, and high-quality software that can help you promote your business or brand on Facebook.
-
-
Thank you for reading this article. We hope you have enjoyed it and learned something new.
-
How to use CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable to boost your business?
-
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable can help you boost your business in many ways. Here are some of them:
-
-
-
It can increase your brand awareness: You can use this software to reach out to thousands of potential customers who might be interested in your products or services. You can also use it to build your reputation and credibility by sending valuable and relevant messages, comments, and likes.
-
It can generate more leads and sales: You can use this software to drive more traffic to your website or landing page by sending targeted messages, friend requests, comments, and likes. You can also use it to collect contact information and feedback from your prospects and customers.
-
It can improve your customer loyalty and retention: You can use this software to communicate with your existing customers and keep them updated about your offers, promotions, events, or news. You can also use it to provide customer service and support by answering their questions, resolving their issues, or thanking them for their purchases.
-
It can save you time and money: You can use this software to automate your Facebook marketing tasks and save yourself from the hassle and cost of hiring a team or outsourcing the work. You can also use it to optimize your Facebook marketing strategy and measure your results.
-
-
-
What are the best practices for using CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable?
-
-
To get the best results from using CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable, here are some best practices that you should follow:
-
-
-
Use a catchy and relevant title: You should use a title that grabs the attention of your audience and conveys the main benefit or value proposition of your message. You should also use keywords that match your target audience's search intent.
-
Use a clear and concise message: You should use a message that delivers your message clearly and concisely without being too long or too short. You should also use spintax and variables to make your message unique and relevant for each user.
-
Use a strong call to action: You should use a call to action that motivates your audience to take the desired action, such as visiting your website, signing up for your newsletter, buying your product, or contacting you. You should also use a link that directs your audience to your landing page or offer page.
-
Use filters and settings: You should use filters and settings to avoid spamming and getting banned by Facebook. You should also use proxies and cookies to protect your accounts and avoid detection by Facebook.
-
-
-
Conclusion
-
-
In this article, we have discussed CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable, a software that lets you promote your business or brand on Facebook. We have explained what this software is, what its features are, how to download and install it, how to use it, what its pros and cons are, how it can boost your business, and what are the best practices for using it.
-
-
We hope that this article has been informative and helpful for you. If you are looking for a way to market your business or brand on Facebook effectively and efficiently, you should give CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable a try. However, if you want to use a software that is legal, stable, and ethical, you should consider some of the alternatives that we have mentioned above. They are also easy to use, versatile, powerful, and high-quality software that can help you market your business or brand on Facebook.
-
-
Thank you for reading this article. We hope you have enjoyed it and learned something new.
-
What are some testimonials from users of CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable?
-
-
CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable has received many positive testimonials from users who have used it for their Facebook marketing campaigns. Here are some of them:
-
-
-
"I have been using this software for a few months now and I am very impressed with the results. I have been able to generate more leads and sales for my online business by sending mass messages, friend requests, comments, and likes to my target audience. The software is very easy to use and has many features and options that allow me to customize and optimize my campaigns. I highly recommend this software to anyone who wants to market their business or brand on Facebook."
-- John Smith, Online Entrepreneur
-
-
-
-
"This software is amazing! It has helped me grow my fan page and increase my engagement with my followers. I have been able to post comments and likes on thousands of posts related to my niche and attract more traffic to my page. The software is also very fast and reliable and has never crashed or caused any problems. I love this software and I would definitely buy it if it was not cracked."
-- Jane Doe, Social Media Influencer
-
-
-
-
"This software is a game-changer for me. It has saved me a lot of time and money that I used to spend on hiring a team or outsourcing the work. I have been able to manage multiple Facebook accounts and send mass messages, friend requests, comments, and likes to thousands of users with ease and efficiency. The software is also very user-friendly and has a great support team that answers all my questions and issues. This software is the best thing that ever happened to me."
-- Mike Jones, Facebook Marketer
-
-
-
What are some FAQs about CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable?
-
-
Here are some frequently asked questions about CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable and their answers:
-
-
-
Q: Is this software legal?
-
A: No, this software is not legal. It is a cracked version of the original software that violates the terms and conditions of Digital Media Group. You are also risking getting infected by viruses, malware, or spyware that might come with the crack and keygen.
-
Q: Is this software safe?
-
A: No, this software is not safe. It is a patched version of the original software that might not work properly or cause errors or crashes. You are also missing out on the updates and support from Digital Media Group.
-
Q: Is this software ethical?
-
A: No, this software is not ethical. It is a software that you have not paid for or obtained legally. You are also depriving the developers and creators of their rightful income and recognition.
-
Q: How can I get this software?
-
A: You can get this software by downloading it from one of the torrent sites that offer it. However, we do not recommend doing so as it is illegal, unsafe, and unethical.
-
Q: How can I use this software?
-
A: You can use this software by installing it on your computer and activating it with the crack and keygen provided by CracksMind. However, we do not recommend doing so as it is illegal, unsafe, and unethical.
-
-
-
Conclusion
-
-
In this article, we have discussed CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable, a software that lets you promote your business or brand on Facebook. We have explained what this software is, what its features are, how to download and install it, how to use it, what its pros and cons are, how it can boost your business, what are the best practices for using it, what are some testimonials from users of it, and what are some FAQs about it.
-
-
We hope that this article has been informative and helpful for you. If you are looking for a way to market your business or brand on Facebook effectively and efficiently, you should give CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable a try. However, if you want to use a software that is legal, stable, and ethical, you should consider some of the alternatives that we have mentioned above. They are also easy to use, versatile, powerful, and high-quality software that can help you market your business or brand on Facebook.
-
-
Thank you for reading this article. We hope you have enjoyed it and learned something new.
-
Conclusion
-
-
In this article, we have discussed CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable, a software that lets you promote your business or brand on Facebook. We have explained what this software is, what its features are, how to download and install it, how to use it, what its pros and cons are, how it can boost your business, what are the best practices for using it, what are some testimonials from users of it, and what are some FAQs about it.
-
-
We hope that this article has been informative and helpful for you. If you are looking for a way to market your business or brand on Facebook effectively and efficiently, you should give CRACK Digital Media Group Facebook Blaster Pro V7.1.9 Portable a try. However, if you want to use a software that is legal, stable, and ethical, you should consider some of the alternatives that we have mentioned above. They are also easy to use, versatile, powerful, and high-quality software that can help you market your business or brand on Facebook.
-
-
Thank you for reading this article. We hope you have enjoyed it and learned something new.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/segestic/ArticlePara/paraphraser.py b/spaces/segestic/ArticlePara/paraphraser.py
deleted file mode 100644
index 30d7f3e66142aab5755dd8c08715f3493a3aa12f..0000000000000000000000000000000000000000
--- a/spaces/segestic/ArticlePara/paraphraser.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from transformers import *
-
-model = PegasusForConditionalGeneration.from_pretrained("tuner007/pegasus_paraphrase")
-tokenizer = PegasusTokenizerFast.from_pretrained("tuner007/pegasus_paraphrase")
-
-
-def get_paraphrased_sentences(model, tokenizer, sentence, num_return_sequences=5, num_beams=5):
- # tokenize the text to be form of a list of token IDs
- inputs = tokenizer([sentence], truncation=True, padding="longest", return_tensors="pt")
- # generate the paraphrased sentences
- outputs = model.generate(
- **inputs,
- num_beams=num_beams,
- num_return_sequences=num_return_sequences,
- )
- # decode the generated sentences using the tokenizer to get them back to text
- return tokenizer.batch_decode(outputs, skip_special_tokens=True)
-
-#sentence = "Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences."
-#get_paraphrased_sentences(model, tokenizer, sentence, num_beams=10, num_return_sequences=10)
\ No newline at end of file
diff --git a/spaces/serdaryildiz/TRCaptionNet/Model/clip/simple_tokenizer.py b/spaces/serdaryildiz/TRCaptionNet/Model/clip/simple_tokenizer.py
deleted file mode 100644
index 0a66286b7d5019c6e221932a813768038f839c91..0000000000000000000000000000000000000000
--- a/spaces/serdaryildiz/TRCaptionNet/Model/clip/simple_tokenizer.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import gzip
-import html
-import os
-from functools import lru_cache
-
-import ftfy
-import regex as re
-
-
-@lru_cache()
-def default_bpe():
- return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
-
-
-@lru_cache()
-def bytes_to_unicode():
- """
- Returns list of utf-8 byte and a corresponding list of unicode strings.
- The reversible bpe codes work on unicode strings.
- This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
- When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
- This is a signficant percentage of your normal, say, 32K bpe vocab.
- To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
- And avoids mapping to whitespace/control characters the bpe code barfs on.
- """
- bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
- cs = bs[:]
- n = 0
- for b in range(2**8):
- if b not in bs:
- bs.append(b)
- cs.append(2**8+n)
- n += 1
- cs = [chr(n) for n in cs]
- return dict(zip(bs, cs))
-
-
-def get_pairs(word):
- """Return set of symbol pairs in a word.
- Word is represented as tuple of symbols (symbols being variable-length strings).
- """
- pairs = set()
- prev_char = word[0]
- for char in word[1:]:
- pairs.add((prev_char, char))
- prev_char = char
- return pairs
-
-
-def basic_clean(text):
- text = ftfy.fix_text(text)
- text = html.unescape(html.unescape(text))
- return text.strip()
-
-
-def whitespace_clean(text):
- text = re.sub(r'\s+', ' ', text)
- text = text.strip()
- return text
-
-
-class SimpleTokenizer(object):
- def __init__(self, bpe_path: str = default_bpe()):
- self.byte_encoder = bytes_to_unicode()
- self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
- merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
- merges = merges[1:49152-256-2+1]
- merges = [tuple(merge.split()) for merge in merges]
- vocab = list(bytes_to_unicode().values())
- vocab = vocab + [v+'' for v in vocab]
- for merge in merges:
- vocab.append(''.join(merge))
- vocab.extend(['<|startoftext|>', '<|endoftext|>'])
- self.encoder = dict(zip(vocab, range(len(vocab))))
- self.decoder = {v: k for k, v in self.encoder.items()}
- self.bpe_ranks = dict(zip(merges, range(len(merges))))
- self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
- self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
-
- def bpe(self, token):
- if token in self.cache:
- return self.cache[token]
- word = tuple(token[:-1]) + ( token[-1] + '',)
- pairs = get_pairs(word)
-
- if not pairs:
- return token+''
-
- while True:
- bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
- if bigram not in self.bpe_ranks:
- break
- first, second = bigram
- new_word = []
- i = 0
- while i < len(word):
- try:
- j = word.index(first, i)
- new_word.extend(word[i:j])
- i = j
- except:
- new_word.extend(word[i:])
- break
-
- if word[i] == first and i < len(word)-1 and word[i+1] == second:
- new_word.append(first+second)
- i += 2
- else:
- new_word.append(word[i])
- i += 1
- new_word = tuple(new_word)
- word = new_word
- if len(word) == 1:
- break
- else:
- pairs = get_pairs(word)
- word = ' '.join(word)
- self.cache[token] = word
- return word
-
- def encode(self, text):
- bpe_tokens = []
- text = whitespace_clean(basic_clean(text)).lower()
- for token in re.findall(self.pat, text):
- token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
- bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
- return bpe_tokens
-
- def decode(self, tokens):
- text = ''.join([self.decoder[token] for token in tokens])
- text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('', ' ')
- return text
diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/diffusion_utils.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/diffusion_utils.py
deleted file mode 100644
index b28b42dc6d2933d4a6159e973f70dc721f19701d..0000000000000000000000000000000000000000
--- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/model_zoo/diffusion_utils.py
+++ /dev/null
@@ -1,250 +0,0 @@
-import os
-import math
-import torch
-import torch.nn as nn
-import numpy as np
-from einops import repeat
-
-def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
- if schedule == "linear":
- betas = (
- torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
- )
-
- elif schedule == "cosine":
- timesteps = (
- torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
- )
- alphas = timesteps / (1 + cosine_s) * np.pi / 2
- alphas = torch.cos(alphas).pow(2)
- alphas = alphas / alphas[0]
- betas = 1 - alphas[1:] / alphas[:-1]
- betas = np.clip(betas, a_min=0, a_max=0.999)
-
- elif schedule == "sqrt_linear":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
- elif schedule == "sqrt":
- betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
- else:
- raise ValueError(f"schedule '{schedule}' unknown.")
- return betas.numpy()
-
-def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
- if ddim_discr_method == 'uniform':
- c = num_ddpm_timesteps // num_ddim_timesteps
- ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
- elif ddim_discr_method == 'quad':
- ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
- else:
- raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
-
- # assert ddim_timesteps.shape[0] == num_ddim_timesteps
- # add one to get the final alpha values right (the ones from first scale to data during sampling)
- steps_out = ddim_timesteps + 1
- if verbose:
- print(f'Selected timesteps for ddim sampler: {steps_out}')
- return steps_out
-
-def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
- # select alphas for computing the variance schedule
- alphas = alphacums[ddim_timesteps]
- alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
-
- # according the the formula provided in https://arxiv.org/abs/2010.02502
- sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
- if verbose:
- print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
- print(f'For the chosen value of eta, which is {eta}, '
- f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
- return sigmas, alphas, alphas_prev
-
-def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function,
- which defines the cumulative product of (1-beta) over time from t = [0,1].
- :param num_diffusion_timesteps: the number of betas to produce.
- :param alpha_bar: a lambda that takes an argument t from 0 to 1 and
- produces the cumulative product of (1-beta) up to that
- part of the diffusion process.
- :param max_beta: the maximum beta to use; use values lower than 1 to
- prevent singularities.
- """
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return np.array(betas)
-
-def extract_into_tensor(a, t, x_shape):
- b, *_ = t.shape
- out = a.gather(-1, t)
- return out.reshape(b, *((1,) * (len(x_shape) - 1)))
-
-def checkpoint(func, inputs, params, flag):
- """
- Evaluate a function without caching intermediate activations, allowing for
- reduced memory at the expense of extra compute in the backward pass.
- :param func: the function to evaluate.
- :param inputs: the argument sequence to pass to `func`.
- :param params: a sequence of parameters `func` depends on but does not
- explicitly take as arguments.
- :param flag: if False, disable gradient checkpointing.
- """
- if flag:
- args = tuple(inputs) + tuple(params)
- return CheckpointFunction.apply(func, len(inputs), *args)
- else:
- return func(*inputs)
-
-class CheckpointFunction(torch.autograd.Function):
- @staticmethod
- def forward(ctx, run_function, length, *args):
- ctx.run_function = run_function
- ctx.input_tensors = list(args[:length])
- ctx.input_params = list(args[length:])
-
- with torch.no_grad():
- output_tensors = ctx.run_function(*ctx.input_tensors)
- return output_tensors
-
- @staticmethod
- def backward(ctx, *output_grads):
- ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
- with torch.enable_grad():
- # Fixes a bug where the first op in run_function modifies the
- # Tensor storage in place, which is not allowed for detach()'d
- # Tensors.
- shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
- output_tensors = ctx.run_function(*shallow_copies)
- input_grads = torch.autograd.grad(
- output_tensors,
- ctx.input_tensors + ctx.input_params,
- output_grads,
- allow_unused=True,
- )
- del ctx.input_tensors
- del ctx.input_params
- del output_tensors
- return (None, None) + input_grads
-
-def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
- """
- Create sinusoidal timestep embeddings.
- :param timesteps: a 1-D Tensor of N indices, one per batch element.
- These may be fractional.
- :param dim: the dimension of the output.
- :param max_period: controls the minimum frequency of the embeddings.
- :return: an [N x dim] Tensor of positional embeddings.
- """
- if not repeat_only:
- half = dim // 2
- freqs = torch.exp(
- -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
- ).to(device=timesteps.device)
- args = timesteps[:, None].float() * freqs[None]
- embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
- if dim % 2:
- embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
- else:
- embedding = repeat(timesteps, 'b -> b d', d=dim)
- return embedding
-
-def zero_module(module):
- """
- Zero out the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().zero_()
- return module
-
-def scale_module(module, scale):
- """
- Scale the parameters of a module and return it.
- """
- for p in module.parameters():
- p.detach().mul_(scale)
- return module
-
-def mean_flat(tensor):
- """
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-def normalization(channels):
- """
- Make a standard normalization layer.
- :param channels: number of input channels.
- :return: an nn.Module for normalization.
- """
- return GroupNorm32(32, channels)
-
-# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
-class SiLU(nn.Module):
- def forward(self, x):
- return x * torch.sigmoid(x)
-
-class GroupNorm32(nn.GroupNorm):
- def forward(self, x):
- # return super().forward(x.float()).type(x.dtype)
- return super().forward(x)
-
-def conv_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D convolution module.
- """
- if dims == 1:
- return nn.Conv1d(*args, **kwargs)
- elif dims == 2:
- return nn.Conv2d(*args, **kwargs)
- elif dims == 3:
- return nn.Conv3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-def linear(*args, **kwargs):
- """
- Create a linear module.
- """
- return nn.Linear(*args, **kwargs)
-
-def avg_pool_nd(dims, *args, **kwargs):
- """
- Create a 1D, 2D, or 3D average pooling module.
- """
- if dims == 1:
- return nn.AvgPool1d(*args, **kwargs)
- elif dims == 2:
- return nn.AvgPool2d(*args, **kwargs)
- elif dims == 3:
- return nn.AvgPool3d(*args, **kwargs)
- raise ValueError(f"unsupported dimensions: {dims}")
-
-class HybridConditioner(nn.Module):
-
- def __init__(self, c_concat_config, c_crossattn_config):
- super().__init__()
- self.concat_conditioner = instantiate_from_config(c_concat_config)
- self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)
-
- def forward(self, c_concat, c_crossattn):
- c_concat = self.concat_conditioner(c_concat)
- c_crossattn = self.crossattn_conditioner(c_crossattn)
- return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}
-
-def noise_like(x, repeat=False):
- noise = torch.randn_like(x)
- if repeat:
- bs = x.shape[0]
- noise = noise[0:1].repeat(bs, *((1,) * (len(x.shape) - 1)))
- return noise
-
-##########################
-# inherit from ldm.utils #
-##########################
-
-def count_params(model, verbose=False):
- total_params = sum(p.numel() for p in model.parameters())
- if verbose:
- print(f"{model.__class__.__name__} has {total_params*1.e-6:.2f} M params.")
- return total_params
diff --git a/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/__init__.py b/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/__init__.py
deleted file mode 100644
index b570848421afd921fae635569c97d0f8f5b33c80..0000000000000000000000000000000000000000
--- a/spaces/sidharthism/fashion-eye/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from .config import BigGANConfig
-from .model import BigGAN
-from .file_utils import PYTORCH_PRETRAINED_BIGGAN_CACHE, cached_path
-from .utils import (truncated_noise_sample, save_as_images,
- convert_to_images, display_in_terminal,
- one_hot_from_int, one_hot_from_names)
diff --git a/spaces/silencewing/server/youyou/.history/math_20230613232510.html b/spaces/silencewing/server/youyou/.history/math_20230613232510.html
deleted file mode 100644
index 31db495e1536a654c7e5ec1a22e10024688565a0..0000000000000000000000000000000000000000
--- a/spaces/silencewing/server/youyou/.history/math_20230613232510.html
+++ /dev/null
@@ -1,234 +0,0 @@
-
-
-
-
-
-
-
-
-
- Document
-
-
-
-
-
-
-
-
-
-
-
题目
-
-
-
-
答案
-
-
-
-
-
正误
-
-
-
-
得分
-
-
-
-
-
-
-
-
-
-
-
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Discover the Secrets of Cedaltia in 9th Dawn 3 RPG MOD APK.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Discover the Secrets of Cedaltia in 9th Dawn 3 RPG MOD APK.md
deleted file mode 100644
index 9df3c39d2ab2fcfd113a903f38e69b19b86bd0f5..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Discover the Secrets of Cedaltia in 9th Dawn 3 RPG MOD APK.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
9th Dawn RPG Mod APK: A Guide to the Ultimate Adventure
-
Are you looking for a fun and immersive role-playing game that will keep you hooked for hours? Do you want to experience a vast and open world full of mysteries, dangers, and adventures? If you answered yes, then you should try 9th Dawn RPG Mod APK, a modified version of the popular game 9th Dawn RPG by Valorware. In this article, we will tell you everything you need to know about this amazing game, including its features, story, mod benefits, download instructions, and tips and tricks. Read on and get ready for the ultimate adventure!
-
What is 9th Dawn RPG?
-
9th Dawn RPG is a classic-style role-playing game that was released in 2012 by Valorware, an independent game developer. The game is set in the island continent of Montelorne, a land far detached from the mainland, but filled with mystery, danger, and adventure. You play as a hero who arrives in Montelorne to explore its secrets and face its challenges. You can choose from three different classes: warrior, mage, or archer, and customize your character's appearance, skills, and equipment. You can also interact with various NPCs, join factions, complete quests, collect items, craft weapons and armor, learn spells, fight enemies and bosses, and more. The game has a retro pixel art style that gives it a nostalgic charm, and a dynamic day-night cycle that affects the gameplay. The game also has a huge open world that you can explore freely, with over 300 maps to discover.
Some of the features that make 9th Dawn RPG stand out are:
-
-
A large and diverse world with over 300 maps to explore, including forests, caves, dungeons, towns, castles, islands, and more.
-
A rich and engaging story with multiple endings depending on your choices and actions.
-
A dynamic day-night cycle that affects the environment, NPCs, enemies, and quests.
-
A variety of enemies and bosses to fight, each with their own strengths and weaknesses.
-
A complex combat system that allows you to use melee weapons, ranged weapons, shields, spells, potions, traps, and more.
-
A character customization system that lets you choose from three classes (warrior, mage, or archer), select your gender and appearance, distribute your attribute points (strength, agility, intelligence), and learn skills and spells.
-
An equipment system that lets you collect and craft various items such as weapons, armor, accessories, consumables, etc.
-
An inventory system that lets you manage your items and equip them on your character.
-
A faction system that lets you join one of the four factions in Montelorne: The Order of the Lion (the royal army), The Shadow Knights (the rebels), The Arcane Society (the mages), or The Brotherhood (the thieves).
-
A quest system that lets you accept and complete various tasks from NPCs or factions.
-
A dialogue system that lets you interact with NPCs and choose your responses.
-
A save system that lets you save your progress at any time.
-
-
Story and setting of 9th Dawn RPG
-
The story of 9th Dawn RPG takes place in the island continent of Montelorne, a land that was once part of a great empire called Esteria. However, due to a cataclysmic event known as the Great
War, Montelorne was separated from the mainland and plunged into chaos. The empire collapsed, and four factions emerged to vie for power and influence: The Order of the Lion, The Shadow Knights, The Arcane Society, and The Brotherhood. You are a hero who arrives in Montelorne to explore its secrets and face its challenges. You can choose to align yourself with one of the factions, or remain neutral and forge your own destiny. Your actions and choices will shape the fate of Montelorne and its people.
-
9th dawn rpg mod apk free download
-9th dawn rpg mod apk unlimited money
-9th dawn rpg mod apk latest version
-9th dawn rpg mod apk android 1
-9th dawn rpg mod apk offline
-9th dawn rpg mod apk obb
-9th dawn rpg mod apk happymod
-9th dawn rpg mod apk revdl
-9th dawn rpg mod apk no root
-9th dawn rpg mod apk full version
-9th dawn rpg mod apk + data
-9th dawn rpg mod apk hack
-9th dawn rpg mod apk cheats
-9th dawn rpg mod apk mega
-9th dawn rpg mod apk rexdl
-9th dawn rpg mod apk mediafire
-9th dawn rpg mod apk android republic
-9th dawn rpg mod apk platinmods
-9th dawn rpg mod apk an1
-9th dawn rpg mod apk andropalace
-9th dawn rpg mod apk blackmod
-9th dawn rpg mod apk vip
-9th dawn rpg mod apk pro
-9th dawn rpg mod apk premium
-9th dawn rpg mod apk cracked
-9th dawn rpg mod apk unlocked
-9th dawn rpg mod apk god mode
-9th dawn rpg mod apk high damage
-9th dawn rpg mod apk unlimited gems
-9th dawn rpg mod apk unlimited coins
-9th dawn rpg mod apk unlimited skill points
-9th dawn rpg mod apk unlimited health
-9th dawn rpg mod apk unlimited stamina
-9th dawn rpg mod apk unlimited inventory space
-9th dawn rpg mod apk unlimited items
-9th dawn rpg mod apk all weapons unlocked
-9th dawn rpg mod apk all armor unlocked
-9th dawn rpg mod apk all spells unlocked
-9th dawn rpg mod apk all quests unlocked
-9th dawn rpg mod apk all achievements unlocked
-
What is 9th Dawn RPG Mod APK?
-
9th Dawn RPG Mod APK is a modified version of the original game that gives you some extra benefits and features that are not available in the official version. The mod APK is created by third-party developers who modify the game files to enhance the gameplay and user experience. However, you should be careful when downloading and installing mod APKs, as they may contain viruses or malware that can harm your device or compromise your privacy. You should always download mod APKs from trusted sources and scan them with antivirus software before installing them.
-
Benefits of 9th Dawn RPG Mod APK
-
Some of the benefits that you can enjoy by using 9th Dawn RPG Mod APK are:
-
-
Unlimited money: You can get unlimited coins and gems that you can use to buy items, upgrade your equipment, learn skills and spells, etc.
-
Unlocked items: You can access all the items in the game, including weapons, armor, accessories, consumables, etc., without having to collect or craft them.
-
Unlocked maps: You can explore all the maps in the game, including the hidden ones, without having to unlock them by completing quests or finding keys.
-
No ads: You can play the game without any interruptions or distractions from annoying ads.
-
-
How to download and install 9th Dawn RPG Mod APK
-
To download and install 9th Dawn RPG Mod APK, you need to follow these steps:
-
-
Go to a reliable website that offers 9th Dawn RPG Mod APK for free download. For example, you can use this link: .
-
Click on the download button and wait for the file to be downloaded on your device.
-
Once the file is downloaded, go to your device's settings and enable the option to install apps from unknown sources. This will allow you to install mod APKs that are not from the Google Play Store.
-
Locate the downloaded file on your device and tap on it to start the installation process.
-
Follow the instructions on the screen and wait for the installation to be completed.
-
Launch the game and enjoy!
-
-
Tips and tricks for playing 9th Dawn RPG Mod APK
-
Now that you have downloaded and installed 9th Dawn RPG Mod APK, you are ready to start your adventure in Montelorne. Here are some tips and tricks that will help you make the most out of your gaming experience:
-
Explore the world of Montelorne
-
One of the best things about 9th Dawn RPG is its huge and diverse world that you can explore freely. There are over 300 maps to discover, each with its own environment, NPCs, enemies, items, secrets, and more. You can find hidden areas, treasure chests, dungeons, caves, towns, castles, islands, etc., by exploring every corner of the map. You can also use fast travel points to move between locations quickly. However, be careful when exploring new areas, as you may encounter dangerous enemies or traps that can harm you. Always be prepared with enough health potions, weapons, armor, spells, etc., before venturing into unknown territory.
-
Customize your character and equipment
-
Another great thing about 9th Dawn RPG is its character customization system that lets you create your own unique hero. You can choose from three classes: warrior, mage, or archer, and select your gender and appearance. You can also distribute your attribute points (strength, agility, intelligence) and learn skills and spells that suit your playstyle. You can also collect and craft various items such as weapons, armor, accessories, consumables, etc., and equip them on your character. You can find items by exploring the world, completing quests, defeating enemies, opening chests, etc. You can also craft items by using materials and recipes that you can find or buy. You can upgrade your equipment by using gems that you can find or buy. You can also enchant your equipment by using scrolls that you can find or buy. You can customize your character and equipment at any time by accessing the menu.
-
Learn skills and spells
-
Skills and spells are special abilities that you can use in combat or exploration. They can help you deal more damage, heal yourself or allies, buff yourself or allies, debuff enemies, escape from danger, etc. You can learn skills and spells by leveling up your character, joining factions, completing quests, finding books, etc. You can also upgrade your skills and spells by using skill points that you earn by leveling up. You can access your skills and spells by tapping on the icons on the bottom right corner of the screen. You can also assign them to quick slots for easier access. You can use skills and spells by tapping on their icons or pressing the corresponding buttons on your device. However, be aware that skills and spells consume stamina or mana, which are indicated by the blue and green bars on the top left corner of the screen. You need to wait for them to regenerate before you can use them again.
-
Fight enemies and bosses
-
Enemies and bosses are the main challenges that you will face in 9th Dawn RPG. They are creatures or characters that will attack you on sight or when provoked. They have different types, levels, stats, behaviors, strengths, and weaknesses. You need to use your skills, spells, weapons, armor, potions, traps, etc., to defeat them. You can also use the environment to your advantage, such as hiding behind obstacles, luring enemies into traps, etc. You can also avoid fighting enemies by running away or using stealth skills or items. However, some enemies and bosses are unavoidable or mandatory for completing quests or advancing the story. You can find enemies and bosses by exploring the world, entering dungeons or caves, accepting quests, etc. You can also encounter random encounters or ambushes by enemies while traveling between locations.
-
Join factions and quests
-
Factions and quests are optional but rewarding aspects of 9th Dawn RPG. Factions are groups of NPCs that have their own goals, beliefs, and agendas. You can join one of the four factions in Montelorne: The Order of the Lion, The Shadow Knights, The Arcane Society, or The Brotherhood. Each faction has its own leader, headquarters, members, allies, enemies, and reputation. You can increase your reputation with a faction by completing quests, helping members, donating items, etc. You can also decrease your reputation with a faction by attacking members, stealing items, betraying allies, etc. Your reputation with a faction affects how they treat you, what quests they offer you, what rewards they give you, etc. You can also switch factions at any time by talking to the faction leader or using a special item. However, be careful when joining or leaving factions, as you may lose some benefits or gain some enemies. Quests are tasks that you can accept and complete from NPCs or factions. They can involve various objectives such as killing enemies, finding items, delivering messages, escorting allies, solving puzzles, etc. They can also have different difficulties, rewards, time limits, consequences, etc. You can find quests by talking to NPCs, visiting locations, reading notices, etc. You can also track your active quests by accessing the menu. You can complete quests by fulfilling the objectives and returning to the quest giver. You can also fail quests by ignoring the objectives, running out of time, killing the quest giver, etc. Quests can help you gain experience, money, items, reputation, skills, spells, etc. They can also help you advance the story or unlock new areas.
-
Conclusion
-
9th Dawn RPG Mod APK is a modified version of the original game that gives you some extra benefits and features that are not available in the official version. It is a classic-style role-playing game that lets you explore a vast and open world full of mysteries, dangers, and adventures. You can customize your character and equipment, learn skills and spells, fight enemies and bosses, join factions and quests, and more. It is a fun and immersive game that will keep you hooked for hours.
-
Summary of the article
-
In this article, we have covered the following topics:
-
-
What is 9th Dawn RPG?
-
What is 9th Dawn RPG Mod APK?
-
Benefits of 9th Dawn RPG Mod APK
-
How to download and install 9th Dawn RPG Mod APK
-
Tips and tricks for playing 9th Dawn RPG Mod APK
-
-
FAQs
-
Here are some frequently asked questions about 9th Dawn RPG Mod APK:
-
-
Is 9th Dawn RPG Mod APK safe to use?
-
9th Dawn RPG Mod APK is generally safe to use if you download it from a trusted source and scan it with antivirus software before installing it. However, you should be aware that mod APKs are not authorized by the original game developer and may contain bugs or errors that can affect your gameplay or device performance. You should also be careful when granting permissions to mod APKs, as they may access your personal data or device functions without your consent.
-
Is 9th Dawn RPG Mod APK compatible with my device?
-
9th Dawn RPG Mod APK is compatible with most Android devices that have Android 4.0 or higher operating system and at least 1 GB of RAM and 100 MB of free storage space. However, you should check the mod APK's specifications and requirements before downloading and installing it to ensure that it works properly on your device.
-
Can I play 9th Dawn RPG Mod APK online or offline?
-
9th Dawn RPG Mod APK is mainly an offline game that does not require an internet connection to play. However, you may need an internet connection to download and install the mod APK, to access some online features such as leaderboards or achievements, or to update the mod APK to the latest version.
-
Can I play 9th Dawn RPG Mod APK with my friends?
-
9th Dawn RPG Mod APK does not have a multiplayer mode that allows you to play with your friends online or locally. However, you can share your progress and achievements with your friends by using social media platforms such as Facebook or Twitter.
-
Can I transfer my progress from 9th Dawn RPG to 9th Dawn RPG Mod APK or vice versa?
-
No, you cannot transfer your progress from 9th Dawn RPG to 9th Dawn RPG Mod APK or vice versa. The mod APK has a different file structure and data format than the original game, and they are not compatible with each other. If you want to switch between the two versions, you will have to start from scratch.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/sinksmell/ChatPDF/README.md b/spaces/sinksmell/ChatPDF/README.md
deleted file mode 100644
index 84a1fcadec658af6b4d056aecf1bd542bbc8ad3d..0000000000000000000000000000000000000000
--- a/spaces/sinksmell/ChatPDF/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatPDF
-emoji: 😁
-colorFrom: blue
-colorTo: blue
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: shibing624/ChatPDF
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/skf15963/summary/fengshen/examples/stable_diffusion_dreambooth/train.sh b/spaces/skf15963/summary/fengshen/examples/stable_diffusion_dreambooth/train.sh
deleted file mode 100644
index ad3eb7ead394e6662168eb0b4947055277a01b58..0000000000000000000000000000000000000000
--- a/spaces/skf15963/summary/fengshen/examples/stable_diffusion_dreambooth/train.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=taiyi-sd-dreambooth # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks-per-node=1 # number of tasks to run per node
-#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH -o %x-%j.log # output and error log file names (%x for job id)
-#SBATCH -x dgx050
-
-# pwd=Fengshenbang-LM/fengshen/examples/pretrain_erlangshen
-ROOT_DIR=../../workspace
-# export CUDA_VISIBLE_DEVICES='7'
-export TORCH_EXTENSIONS_DIR=${ROOT_DIR}/torch_extendsions
-
-MODEL_NAME=taiyi-sd-dreambooth
-MODEL_ROOT_DIR=$ROOT_DIR/${MODEL_NAME}
-if [ ! -d ${MODEL_ROOT_DIR} ];then
- mkdir ${MODEL_ROOT_DIR}
-fi
-
-NNODES=1
-GPUS_PER_NODE=1
-
-MICRO_BATCH_SIZE=1
-INSTANCE_PROMPT="小黄鸭"
-OUTPUT_DIR="saved_model_tinyduck"
-INSTANCE_DIR="train_images_duck"
-
-DATA_ARGS="\
- --dataloader_workers 2 \
- --train_batchsize $MICRO_BATCH_SIZE \
- --val_batchsize $MICRO_BATCH_SIZE \
- --test_batchsize $MICRO_BATCH_SIZE \
- --instance_data_dir=$INSTANCE_DIR \
- --instance_prompt=$INSTANCE_PROMPT \
- --resolution=512 \
- "
-
-MODEL_ARGS="\
- --model_path $MODEL_ROOT_DIR/pretrain/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/ \
- --train_text_encoder \
- --learning_rate 1e-6 \
- --scheduler_type constant \
- --warmup_steps 100 \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --save_ckpt_path ${MODEL_ROOT_DIR}/ckpt \
- --load_ckpt_path ${MODEL_ROOT_DIR}/ckpt/last.ckpt \
- "
-
-TRAINER_ARGS="\
- --max_steps 1200 \
- --gpus $GPUS_PER_NODE \
- --num_nodes $NNODES \
- --strategy ddp \
- --log_every_n_steps 100 \
- --precision 32 \
- --default_root_dir ${MODEL_ROOT_DIR} \
- --replace_sampler_ddp False \
- --num_sanity_val_steps 0 \
- --limit_val_batches 0 \
- "
-# num_sanity_val_steps, limit_val_batches 通过这俩参数把validation关了
-
-export options=" \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
- "
-# run local
-python train.py $options
-# run on slurm
-# srun python train.py $options
\ No newline at end of file
diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/options/test_options.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/options/test_options.py
deleted file mode 100644
index adfee8aa2fac92662e0fef8419b0d3a3256a4d9a..0000000000000000000000000000000000000000
--- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/seg2art/options/test_options.py
+++ /dev/null
@@ -1,22 +0,0 @@
-"""
-Copyright (C) 2019 NVIDIA Corporation. All rights reserved.
-Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
-"""
-
-from .base_options import BaseOptions
-
-
-class TestOptions(BaseOptions):
- def initialize(self, parser):
- BaseOptions.initialize(self, parser)
- parser.add_argument('--results_dir', type=str, default='./results/', help='saves results here.')
- parser.add_argument('--which_epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model')
- parser.add_argument('--checkpoint_path', type=str, default='./checkpoints/multimodal_artworks/latest_net_G-fp16.pth', help='load model from a checkpoint')
- parser.add_argument('--how_many', type=int, default=float("inf"), help='how many test images to run')
-
- parser.set_defaults(preprocess_mode='scale_width_and_crop', crop_size=512, load_size=512, display_winsize=256)
- parser.set_defaults(serial_batches=True)
- parser.set_defaults(no_flip=True)
- parser.set_defaults(phase='test')
- self.isTrain = False
- return parser
diff --git a/spaces/sophiamyang/test-panel/README.md b/spaces/sophiamyang/test-panel/README.md
deleted file mode 100644
index e46f879b29d335a2a3d14db536eaba0d3edf63c6..0000000000000000000000000000000000000000
--- a/spaces/sophiamyang/test-panel/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Panel Template
-emoji: 📈
-colorFrom: gray
-colorTo: green
-sdk: docker
-pinned: false
-duplicated_from: Panel-Org/panel-template
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py
deleted file mode 100644
index 9e7b655feee0042d42ac2b13cec5f1d2a88e201e..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.multilingual_transformer import MultilingualTransformerModel
-from fairseq.models.transformer import (
- TransformerDecoder,
- TransformerEncoder,
- base_architecture,
-)
-from fairseq.utils import safe_hasattr
-
-from .latent_transformer import LatentTransformerDecoder, LatentTransformerEncoder
-
-
-@register_model("latent_multilingual_transformer")
-class LatentMultilingualTransformerModel(MultilingualTransformerModel):
- """A variant of standard multilingual Transformer models which encoder and/or
- decoders supports latent depth, as is in "Deep Transformer with Latent Depth"
- (https://arxiv.org/abs/2009.13102).
- """
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- MultilingualTransformerModel.add_args(parser)
- parser.add_argument(
- '--soft-select',
- action='store_true',
- help='use soft samples in training an inference',
- )
- parser.add_argument(
- '--sampling-tau',
- type=float,
- default=5.,
- help='sampling temperature',
- )
-
- @classmethod
- def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs):
- if is_encoder:
- if safe_hasattr(args, "encoder_latent_layer") and args.encoder_latent_layer:
- return LatentTransformerEncoder(
- args, lang_dict, embed_tokens, num_logits=len(langs)
- )
- else:
- return TransformerEncoder(args, lang_dict, embed_tokens)
- else:
- if safe_hasattr(args, "decoder_latent_layer") and args.decoder_latent_layer:
- return LatentTransformerDecoder(
- args, lang_dict, embed_tokens, num_logits=len(langs)
- )
- else:
- return TransformerDecoder(args, lang_dict, embed_tokens)
-
-
-@register_model_architecture(
- "latent_multilingual_transformer", "latent_multilingual_transformer"
-)
-def latent_multilingual_architecture(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
- args.encoder_layers = getattr(args, "encoder_layers", 12)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
- args.decoder_layers = getattr(args, "decoder_layers", 24)
- args.share_encoders = getattr(args, "share_encoders", True)
- args.share_decoders = getattr(args, "share_decoders", True)
- args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", True)
- args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", True)
-
- base_architecture(args)
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py
deleted file mode 100644
index 7e2caa03400129ac0bb34ae35274cdf46f27a055..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py
+++ /dev/null
@@ -1,65 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq import utils
-from fairseq.modules import TransformerEncoderLayer
-
-from .multihead_linear_attention import MultiheadLinearAttention
-
-
-class LinformerTransformerEncoderLayer(TransformerEncoderLayer):
- """
- Implements a Linformer Encoder Layer used in BERT/XLM style pre-trained
- models.
- """
-
- def __init__(self, args, shared_compress_layer):
- # wrap in a list so it's not automatically registered by PyTorch
- self.shared_compress_layer = [shared_compress_layer]
-
- super().__init__(args)
-
- self.register_buffer("version", torch.tensor(2))
-
- def build_self_attention(self, embed_dim, args):
- return MultiheadLinearAttention(
- embed_dim,
- args.encoder_attention_heads,
- dropout=args.dropout,
- self_attention=True,
- q_noise=args.quant_noise_pq,
- qn_block_size=args.quant_noise_pq_block_size,
- compressed=args.compressed,
- max_seq_len=args.max_positions,
- shared_kv_compressed=args.shared_kv_compressed,
- shared_compress_layer=self.shared_compress_layer[0],
- freeze_compress=args.freeze_compress,
- )
-
- def upgrade_state_dict_named(self, state_dict, name):
- super().upgrade_state_dict_named(state_dict, name)
- prefix = name + "." if name != "" else ""
-
- # some old checkpoints had weight sharing implemented incorrectly
- # (note: this was correct in the original paper code)
- if utils.item(state_dict.get(f"{prefix}version", torch.tensor(1))) < 2:
- state_dict[f"{prefix}version"] = torch.tensor(1)
- # check compression layer sharing
- if f"{prefix}shared_compress_layer.weight" in state_dict:
- # reinitialize block without sharing compression layer to match
- # old behavior
- self.shared_compress_layer = [
- torch.nn.Linear(
- self.shared_compress_layer[0].weight.size(1),
- self.shared_compress_layer[0].weight.size(0),
- )
- ]
- self.self_attn = self.build_self_attention(self.embed_dim, self.args)
- # delete shared_compress_layer, since it's already copied to
- # self_attn.compress_k.weight
- del state_dict[f"{prefix}shared_compress_layer.weight"]
- if f"{prefix}shared_compress_layer.bias" in state_dict:
- del state_dict[f"{prefix}shared_compress_layer.bias"]
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh
deleted file mode 100644
index ad35d7adf28dc9b23d13a6a3fec0b12cb760e855..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env sh
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-#
-# Please follow the instructions here http://alt.qcri.org/tools/arabic-normalizer/
-# to install tools needed for Arabic
-
-echo "Please install Arabic tools: http://alt.qcri.org/tools/arabic-normalizer/"
-echo "Then update environment variables in tokenizer_ar.sh"
-exit 1
-
-SVMTOOL=...
-GOMOSESGO=...
-QCRI_ARABIC_NORMALIZER=...
-
-export PERL5LIB="$SVMTOOL/lib":"$GOMOSESGO/bin/MADA-3.2":$PERL5LIB
-
-
-tempfile=$(mktemp)
-cat - > $tempfile
-
-cd $QCRI_ARABIC_NORMALIZER
-
-bash qcri_normalizer_mada3.2_aramorph1.2.1.sh $tempfile
-cat $tempfile.mada_norm-aramorph.europarl_tok
diff --git a/spaces/stamps-labs/stamp2vec/embedding_models/vae/model.py b/spaces/stamps-labs/stamp2vec/embedding_models/vae/model.py
deleted file mode 100644
index 965c8815f95aee365aab639962385bd147cf5d96..0000000000000000000000000000000000000000
--- a/spaces/stamps-labs/stamp2vec/embedding_models/vae/model.py
+++ /dev/null
@@ -1,147 +0,0 @@
-import torch.nn as nn
-from torch.distributions.normal import Normal
-
-from .constants import *
-
-
-class Encoder(nn.Module):
- '''
- Encoder Class
- Values:
- im_chan: the number of channels of the output image, a scalar
- hidden_dim: the inner dimension, a scalar
- '''
-
- def __init__(self, im_chan=3, output_chan=Z_DIM, hidden_dim=ENC_HIDDEN_DIM):
- super(Encoder, self).__init__()
- self.z_dim = output_chan
- self.disc = nn.Sequential(
- self.make_disc_block(im_chan, hidden_dim),
- self.make_disc_block(hidden_dim, hidden_dim * 2),
- self.make_disc_block(hidden_dim * 2, hidden_dim * 4),
- self.make_disc_block(hidden_dim * 4, hidden_dim * 8),
- self.make_disc_block(hidden_dim * 8, output_chan * 2, final_layer=True),
- )
-
- def make_disc_block(self, input_channels, output_channels, kernel_size=4, stride=2, final_layer=False):
- '''
- Function to return a sequence of operations corresponding to a encoder block of the VAE,
- corresponding to a convolution, a batchnorm (except for in the last layer), and an activation
- Parameters:
- input_channels: how many channels the input feature representation has
- output_channels: how many channels the output feature representation should have
- kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
- stride: the stride of the convolution
- final_layer: whether we're on the final layer (affects activation and batchnorm)
- '''
- if not final_layer:
- return nn.Sequential(
- nn.Conv2d(input_channels, output_channels, kernel_size, stride),
- nn.BatchNorm2d(output_channels),
- nn.LeakyReLU(0.2, inplace=True),
- )
- else:
- return nn.Sequential(
- nn.Conv2d(input_channels, output_channels, kernel_size, stride),
- )
-
- def forward(self, image):
- '''
- Function for completing a forward pass of the Encoder: Given an image tensor,
- returns a 1-dimension tensor representing fake/real.
- Parameters:
- image: a flattened image tensor with dimension (im_dim)
- '''
- disc_pred = self.disc(image)
- encoding = disc_pred.view(len(disc_pred), -1)
- # The stddev output is treated as the log of the variance of the normal
- # distribution by convention and for numerical stability
- return encoding[:, :self.z_dim], encoding[:, self.z_dim:].exp()
-
-
-class Decoder(nn.Module):
- '''
- Decoder Class
- Values:
- z_dim: the dimension of the noise vector, a scalar
- im_chan: the number of channels of the output image, a scalar
- hidden_dim: the inner dimension, a scalar
- '''
-
- def __init__(self, z_dim=Z_DIM, im_chan=3, hidden_dim=DEC_HIDDEN_DIM):
- super(Decoder, self).__init__()
- self.z_dim = z_dim
- self.gen = nn.Sequential(
- self.make_gen_block(z_dim, hidden_dim * 16),
- self.make_gen_block(hidden_dim * 16, hidden_dim * 8, kernel_size=4, stride=1),
- self.make_gen_block(hidden_dim * 8, hidden_dim * 4),
- self.make_gen_block(hidden_dim * 4, hidden_dim * 2, kernel_size=4),
- self.make_gen_block(hidden_dim * 2, hidden_dim, kernel_size=4),
- self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True),
- )
-
- def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
- '''
- Function to return a sequence of operations corresponding to a Decoder block of the VAE,
- corresponding to a transposed convolution, a batchnorm (except for in the last layer), and an activation
- Parameters:
- input_channels: how many channels the input feature representation has
- output_channels: how many channels the output feature representation should have
- kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
- stride: the stride of the convolution
- final_layer: whether we're on the final layer (affects activation and batchnorm)
- '''
- if not final_layer:
- return nn.Sequential(
- nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
- nn.BatchNorm2d(output_channels),
- nn.ReLU(inplace=True),
- )
- else:
- return nn.Sequential(
- nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
- nn.Sigmoid(),
- )
-
- def forward(self, noise):
- '''
- Function for completing a forward pass of the Decoder: Given a noise vector,
- returns a generated image.
- Parameters:
- noise: a noise tensor with dimensions (batch_size, z_dim)
- '''
- x = noise.view(len(noise), self.z_dim, 1, 1)
- return self.gen(x)
-
-
-class VAE(nn.Module):
- '''
- VAE Class
- Values:
- z_dim: the dimension of the noise vector, a scalar
- im_chan: the number of channels of the output image, a scalar
- MNIST is black-and-white, so that's our default
- hidden_dim: the inner dimension, a scalar
- '''
-
- def __init__(self, z_dim=Z_DIM, im_chan=3):
- super(VAE, self).__init__()
- self.z_dim = z_dim
- self.encode = Encoder(im_chan, z_dim)
- self.decode = Decoder(z_dim, im_chan)
-
- def forward(self, images):
- '''
- Function for completing a forward pass of the Decoder: Given a noise vector,
- returns a generated image.
- Parameters:
- images: an image tensor with dimensions (batch_size, im_chan, im_height, im_width)
- Returns:
- decoding: the autoencoded image
- q_dist: the z-distribution of the encoding
- '''
- q_mean, q_stddev = self.encode(images)
- q_dist = Normal(q_mean, q_stddev)
- z_sample = q_dist.rsample() # Sample once from each distribution, using the `rsample` notation
- decoding = self.decode(z_sample)
- return decoding, q_dist
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/EDIUS Neo 3.5 Crack ((HOT)).md b/spaces/stomexserde/gpt4-ui/Examples/EDIUS Neo 3.5 Crack ((HOT)).md
deleted file mode 100644
index a51388b8534294df42b43ff3daab65fefb56290c..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/EDIUS Neo 3.5 Crack ((HOT)).md
+++ /dev/null
@@ -1,14 +0,0 @@
-
-
How to Download and Install EDIUS Neo 3.5 Crack for Free
-
EDIUS Neo 3.5 is a video editing software that offers a basic, entry-level range of tools and features for those new to video editing[^1^]. It supports various file formats, such as XAVC, AVC-Ultra, and MPEG-2[^3^], and allows real-time editing for multiple formats and frame rates[^1^]. However, EDIUS Neo 3.5 is not a free software, and you need to purchase a license to use it legally.
-
If you want to try EDIUS Neo 3.5 for free, you can download a trial version from the official website[^1^]. The trial version has some limitations, such as a watermark on the output and a 30-day expiration date. Alternatively, you can download and install EDIUS Neo 3.5 crack from a third-party source. A crack is a modified version of the software that bypasses the activation process and lets you use the full features without paying.
However, downloading and installing EDIUS Neo 3.5 crack is not recommended for several reasons. First of all, it is illegal and violates the copyright of the software developer. You may face legal consequences if you are caught using a cracked software. Second, it is unsafe and risky for your computer. You may download malware or viruses along with the crack that can damage your system or steal your personal information. Third, it is unreliable and unstable. You may encounter errors, bugs, or compatibility issues that can affect your editing performance or quality.
-
Therefore, it is better to avoid EDIUS Neo 3.5 crack and use the official version instead. You can purchase a license from the official website[^1^] or from an authorized reseller[^2^]. You can also look for discounts or promotions that may lower the price of the software. By using the official version, you can enjoy the full features, updates, support, and security of EDIUS Neo 3.5.
-
-
EDIUS Neo 3.5 also has some advanced features that can enhance your editing experience and creativity. For example, it supports 3D stereoscopic editing, which allows you to create immersive videos for various S3D formats[^1^]. You can import left-eye and right-eye images, convert them into S3D clips, edit them on the timeline, and output them in different S3D modes[^1^]. You can also preview your S3D videos on a 3D monitor or a 2D monitor with anaglyph glasses[^1^].
-
-
Another feature of EDIUS Neo 3.5 is the QuickTitler, which is a built-in tool for creating and editing titles in real time[^1^]. You can use QuickTitler to add simple text, roll, crawl, or animated titles to your videos. You can also customize the font, size, color, style, alignment, and animation of your titles[^1^]. QuickTitler supports Unicode characters and various languages, including Arabic, Chinese, Japanese, and Korean[^1^]. You can also import and export titles as RTF files for further editing in other applications[^1^].
-
EDIUS Neo 3.5 is compatible with various video devices and formats. You can capture video from DV or HDV cameras via FireWire or from AVCHD cameras via USB[^1^]. You can also import video from SD cards, Memory Sticks, or camera HDDs[^1^]. You can export your projects to Blu-ray Discs or DVDs with menus using the built-in disc authoring tool[^1^]. You can also export your projects to various file formats, such as AVI, MPEG-2, H.264, Windows Media, QuickTime, and MXF[^1^]. You can also output your projects to DV devices via FireWire or to external monitors via HDMI or DVI[^1^].
e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Evermap Adobe Plugins WORK Download.md b/spaces/stomexserde/gpt4-ui/Examples/Evermap Adobe Plugins WORK Download.md
deleted file mode 100644
index 165a82ad0638e48f8eacc233c6b93a6c303cc8bc..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Evermap Adobe Plugins WORK Download.md
+++ /dev/null
@@ -1,46 +0,0 @@
-
-
How to Download and Install Evermap Adobe Plugins for PDF Processing
-
-
If you are looking for a way to enhance your productivity and efficiency with PDF documents, you might want to check out Evermap Adobe Plugins. Evermap is a company that offers a range of advanced plug-ins for Adobe Acrobat that can help you with various tasks such as splitting, merging, bookmarking, redacting, mail merging, extracting data, and more.
-
-
In this article, we will show you how to download and install Evermap Adobe Plugins for PDF processing. We will also give you an overview of some of the features and benefits of each plug-in.
To download Evermap Adobe Plugins, you need to visit their website at https://evermap.com/downloads.asp. There you will find a list of all the available plug-ins, along with their descriptions, requirements, and trial versions. You can download the trial versions for free and test them for 30 days before purchasing a license.
-
-
Some of the most popular Evermap Adobe Plugins are:
-
-
-
AutoSplit: A plug-in for splitting PDF documents by bookmarks, page count, page ranges, content, and more. It can also merge and rename PDF files.
-
AutoBookmark: A plug-in for creating and managing PDF bookmarks, links, highlights, and table of contents. It can also batch process multiple PDF files.
-
AutoMailMerge: A plug-in for creating, securing, and emailing personalized PDF forms from structured data sources such as spreadsheets, databases, or text files.
-
AutoExtract: A plug-in for extracting information from PDF documents into structured data files such as CSV or XML.
-
AutoRedact: A plug-in for automatically redacting sensitive information from PDF documents using text search or predefined patterns.
-
-
-
You can find more details about each plug-in on their respective product pages.
-
-
How to Install Evermap Adobe Plugins
-
-
To install Evermap Adobe Plugins, you need to have Adobe Acrobat or Adobe Acrobat Professional installed on your computer. The plug-ins will not work with the free Adobe Acrobat Reader. You also need to have Windows Server 2012-2019 or Windows 7-11 as your operating system.
-
-
Once you have downloaded the trial versions of the plug-ins you want to try, you need to follow these steps:
-
-
-
-
Close all instances of Adobe Acrobat.
-
Run the installation file of the plug-in you want to install.
-
Follow the instructions on the screen and accept the license agreement.
-
Restart Adobe Acrobat and activate the plug-in using the activation code you received by email after downloading the trial version.
-
Enjoy using the plug-in for 30 days. If you want to purchase a license, you can do so from the Evermap website.
-
-
-
If you encounter any problems during the installation or activation process, you can refer to the FAQ page or contact the support team.
-
-
Conclusion
-
-
Evermap Adobe Plugins are powerful tools that can help you with various PDF processing tasks. They are easy to download and install, and they offer a free trial period for testing. If you are looking for a way to improve your workflow and efficiency with PDF documents, you should give them a try.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Garritan Aria Player Keygen [Extra Quality] Download Multi Champion Tradi.md b/spaces/stomexserde/gpt4-ui/Examples/Garritan Aria Player Keygen [Extra Quality] Download Multi Champion Tradi.md
deleted file mode 100644
index 3700a062413d562832d10c88e136c8855a046e14..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Garritan Aria Player Keygen [Extra Quality] Download Multi Champion Tradi.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
How to Use Garritan Aria Player to Create Multi-Champion Tracks
-
Garritan Aria Player is a powerful and versatile software that allows you to play and edit high-quality sampled instruments from various genres and styles. You can use it as a standalone application or as a plug-in with your favorite sequencer or notation program. But did you know that you can also use it to create multi-champion tracks, which are songs that combine elements from different musical champions, such as rock, jazz, classical, and more?
-
In this article, we will show you how to use Garritan Aria Player keygen to download and activate the software, and how to use its features to create multi-champion tracks. You will learn how to load and playback MIDI files, how to mix and match instruments from different libraries, how to adjust the sound and expression of each instrument, and how to export your final track as an audio file.
-
Garritan Aria Player Keygen Download multi champion tradi
Step 1: Download and Activate Garritan Aria Player
-
To use Garritan Aria Player, you need to download and install the software from the official website. You also need to activate it with a keygen, which is a program that generates a valid serial number for the software. You can find a reliable keygen for Garritan Aria Player on this link. Just follow the instructions on the website and enter the generated serial number when prompted by the software.
-
Step 2: Load and Playback MIDI Files
-
Garritan Aria Player can load and playback MIDI files, which are files that contain musical data such as notes, velocities, timings, and more. You can use MIDI files to play along with your favorite songs, or to create your own compositions. To load a MIDI file, go to File > Open MIDI File and browse for the file on your computer. To playback the MIDI file, use the transport controls at the bottom of the interface. You can also adjust the tempo, volume, and balance of the MIDI file using the sliders on the right side of the interface.
-
Step 3: Mix and Match Instruments from Different Libraries
-
One of the most exciting features of Garritan Aria Player is that it allows you to mix and match instruments from different libraries, such as Garritan Personal Orchestra, Garritan Jazz and Big Band, Garritan World Instruments, and more. You can access these libraries by clicking on the Load button on each slot of the interface. You can load up to 16 instruments at a time, each with its own channel number. You can then assign each instrument to a different track of your MIDI file by changing the channel number on your sequencer or notation program.
-
For example, if you want to create a multi-champion track that combines rock guitar, jazz saxophone, classical violin, and world percussion, you can load these instruments on slots 1-4 of Garritan Aria Player. Then, you can assign them to tracks 1-4 of your MIDI file by changing the channel numbers accordingly. You can also change the order of the slots by dragging them up or down.
-
Step 4: Adjust the Sound and Expression of Each Instrument
-
Garritan Aria Player also allows you to adjust the sound and expression of each instrument using various parameters and controls. You can access these parameters by clicking on the Edit button on each slot of the interface. You can then tweak settings such as volume envelope, filter envelope, pitch envelope, LFOs (low-frequency oscillators), effects (reverb, chorus, delay), EQ (equalizer), tuning (microtuning), keyswitches (special notes that trigger different articulations), controllers (modulation wheel, expression pedal), and more.
-
For example, if you want to make your rock guitar sound more distorted and aggressive, you can increase the drive and level of the distortion effect. If you want to make your jazz saxophone sound more expressive and dynamic, you can use the modulation wheel or expression pedal to control the vibrato and volume of the instrument. If you want to make your classical violin sound more realistic and nuanced, you can use keyswitch
81aa517590
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD [BETTER].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD [BETTER].md
deleted file mode 100644
index ded618843109a9583fa99249b075b816bbb45bb2..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD [BETTER].md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD: A Powerful CAD Software for Design and Engineering
-
-
If you are looking for a reliable and versatile CAD software that can handle complex design and engineering tasks, you might want to check out PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD. This is the latest version of the popular PTC Creo software, which offers a range of features and capabilities to help you create, analyze, and optimize your products.
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD is a comprehensive CAD solution that supports multiple languages and platforms. It allows you to work with 2D and 3D models, parametric and direct modeling, simulation and analysis, additive manufacturing, augmented reality, and more. You can also integrate PTC Creo with other PTC products, such as Windchill, ThingWorx, and Mathcad.
-
-
What are the benefits of using PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD?
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD has many advantages over other CAD software, such as:
-
-
-
It is easy to use and learn, thanks to its intuitive user interface and flexible modeling tools.
-
It is compatible with various file formats and standards, such as STEP, IGES, DXF, DWG, PDF, STL, AMF, 3MF, JT, ISO 10303-21, ISO 10303-28, etc.
-
It is scalable and adaptable, allowing you to customize your workflows and preferences according to your needs and preferences.
-
It is powerful and efficient, enabling you to handle large and complex assemblies, perform advanced simulations and analyses, optimize your designs for performance and manufacturability, and collaborate with your team and stakeholders.
-
It is innovative and future-ready, offering you the latest technologies and trends in design and engineering, such as additive manufacturing, augmented reality, generative design, artificial intelligence, etc.
-
-
-
How can you get PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD?
-
-
If you are interested in getting PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD, you have several options:
-
-
-
You can download it for free from the official website of PTC Creo. You will need to register an account and provide some basic information. You will also need to agree to the terms and conditions of use. The download process may take some time depending on your internet speed and system specifications.
-
You can buy it from an authorized reseller or distributor of PTC Creo. You will need to pay a certain amount of money depending on the license type and duration. You will also need to provide some personal and payment details. You will receive a confirmation email with a link to download the software.
-
You can use it online from a cloud-based service provider of PTC Creo. You will need to sign up for a subscription plan and pay a monthly or annual fee depending on the features and resources you need. You will also need to have a stable internet connection and a compatible browser. You will be able to access the software from any device and location.
-
-
-
Conclusion
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD is a great choice for anyone who wants to create high-quality products with ease and efficiency. It offers a wide range of features and capabilities that can help you design and engineer anything you can imagine. Whether you are a beginner or a professional, you will find PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD to be a valuable tool for your projects.
-
-
If you want to learn more about PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD or download it for free or buy it online or use it online from cloud-based service provider , visit the official website of PTC Creo today!
-
-
What are the main features of PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD?
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD has many features that make it a powerful and versatile CAD software. Some of the main features are:
-
-
-
Multi-language support: PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD supports multiple languages, such as English, Spanish, French, German, Italian, Japanese, Korean, Chinese, etc. You can switch between languages easily and work with different teams and clients across the globe.
-
SolidSQUAD crack: PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD comes with a crack from SolidSQUAD, a team of hackers who specialize in cracking CAD software. The crack allows you to use PTC Creo without any limitations or restrictions. You can enjoy all the features and updates of PTC Creo without paying any fees or licenses.
-
2D and 3D modeling: PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD allows you to create and edit 2D and 3D models with ease and precision. You can use parametric and direct modeling techniques, sketching and drawing tools, geometric constraints and dimensions, feature-based and surface-based modeling, etc. You can also import and export models from other CAD software and formats.
-
Simulation and analysis: PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD enables you to perform various simulations and analyses on your models, such as structural, thermal, fluid, dynamic, kinematic, vibration, fatigue, etc. You can also use advanced tools such as finite element analysis (FEA), computational fluid dynamics (CFD), multi-body dynamics (MBD), etc. You can validate your designs and optimize them for performance and quality.
-
Additive manufacturing: PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD supports additive manufacturing or 3D printing technologies. You can design and print your models using various materials and methods, such as fused deposition modeling (FDM), stereolithography (SLA), selective laser sintering (SLS), direct metal laser sintering (DMLS), etc. You can also use generative design tools to create optimal shapes and structures for your models.
-
Augmented reality: PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD integrates with augmented reality or AR technologies. You can use PTC's ThingWorx platform to create and share AR experiences with your models. You can also use PTC's Vuforia platform to create and view AR applications on your mobile devices. You can enhance your design and engineering workflows with AR capabilities.
-
-
-
How can you learn PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD?
-
-
If you want to learn PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD, you have several resources available:
-
-
-
You can access the official documentation and tutorials of PTC Creo from the help menu or the website of PTC Creo. You can find detailed information and instructions on how to use PTC Creo and its features.
-
You can watch online videos and courses on PTC Creo from various platforms, such as YouTube, Udemy, Coursera, Lynda, etc. You can learn from experts and instructors who will teach you the basics and advanced topics of PTC Creo.
-
You can join online forums and communities of PTC Creo users, such as Reddit, Quora, Stack Overflow, etc. You can ask questions and get answers from other users who have experience and knowledge of PTC Creo.
-
You can read online blogs and articles on PTC Creo from various sources, such as Medium, TechCrunch, Forbes, etc. You can get insights and tips on how to use PTC Creo effectively and efficiently.
-
-
-
Conclusion
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD is a powerful CAD software that can help you design and engineer anything you can imagine. It has many features and capabilities that make it easy to use and versatile. It also comes with a crack from SolidSQUAD that allows you to use it without any limitations or restrictions.
-
-
If you want to get PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD or learn more about it , visit the official website of PTC Creo today!
-
How can you download PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD?
-
-
If you want to download PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD, you have several options:
-
-
-
You can download it from the official website of PTC Creo. You will need to create a PTC eSupport account and provide some basic information. You will also need to agree to the terms and conditions of use. The download process may take some time depending on your internet speed and system specifications.
-
You can download it from a third-party website that offers PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD as a package on npm. You will need to have npm installed on your computer and run a command to install the package. You will also need to verify the authenticity and security of the package before using it.
-
You can download it from a torrent website that offers PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD as a PDF file. You will need to have a torrent client installed on your computer and open the file with it. You will also need to be careful of viruses and malware that may be attached to the file.
-
-
-
However, downloading PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD from unofficial sources may be illegal and risky. You may violate the intellectual property rights of PTC Creo and face legal consequences. You may also expose your computer and data to potential threats and damages. Therefore, we recommend that you download PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD from the official website of PTC Creo or an authorized reseller or distributor.
-
-
How can you install PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD?
-
-
If you have downloaded PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD from the official website of PTC Creo or an authorized reseller or distributor, you can install it by following these steps:
-
-
-
Run the PTC Installation Assistant and select Install New Software.
-
Select PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD from the list of available software and click Next.
-
Choose the installation type (Typical or Custom) and click Next.
-
Select the installation directory and click Next.
-
Enter your license information (Sales Order Number, Host ID/MAC Address) and click Next.
-
Review the installation summary and click Install.
-
Wait for the installation to complete and click Finish.
-
-
-
If you have downloaded PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD from a third-party website or a torrent website, you may need to follow different steps depending on the format and content of the file. You may also need to apply the crack from SolidSQUAD to activate PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD. However, we do not recommend that you do this as it may be illegal and risky.
-
-
Conclusion
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD is a powerful CAD software that can help you design and engineer anything you can imagine. It has many features and capabilities that make it easy to use and versatile. It also comes with a crack from SolidSQUAD that allows you to use it without any limitations or restrictions.
-
-
If you want to download PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD or learn more about it , visit the official website of PTC Creo today!
-
Conclusion
-
-
PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD is a powerful CAD software that can help you design and engineer anything you can imagine. It has many features and capabilities that make it easy to use and versatile. It also comes with a crack from SolidSQUAD that allows you to use it without any limitations or restrictions.
-
-
If you want to download PTC Creo V20 M030 MULTiLANGUAGESolidSQUAD or learn more about it , visit the official website of PTC Creo today!
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Scala Infochannel Designer 5 Crack [UPD].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Scala Infochannel Designer 5 Crack [UPD].md
deleted file mode 100644
index ed0e611f7dfb171b0f41982e800bfcf6e7d0f7a4..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Scala Infochannel Designer 5 Crack [UPD].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
- )
-}
diff --git a/spaces/t13718236382/bingoGPT4/src/pages/api/sydney.ts b/spaces/t13718236382/bingoGPT4/src/pages/api/sydney.ts
deleted file mode 100644
index d49397aad6f9624c87e79f6f65094c2efd4fe255..0000000000000000000000000000000000000000
--- a/spaces/t13718236382/bingoGPT4/src/pages/api/sydney.ts
+++ /dev/null
@@ -1,63 +0,0 @@
-import { NextApiRequest, NextApiResponse } from 'next'
-import { WebSocket, debug } from '@/lib/isomorphic'
-import { BingWebBot } from '@/lib/bots/bing'
-import { websocketUtils } from '@/lib/bots/bing/utils'
-import { WatchDog, createHeaders } from '@/lib/utils'
-
-export default async function handler(req: NextApiRequest, res: NextApiResponse) {
- const conversationContext = req.body
- const headers = createHeaders(req.cookies)
- debug(headers)
- res.setHeader('Content-Type', 'text/stream; charset=UTF-8')
-
- const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', {
- headers: {
- ...headers,
- 'accept-language': 'zh-CN,zh;q=0.9',
- 'cache-control': 'no-cache',
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32',
- pragma: 'no-cache',
- }
- })
-
- const closeDog = new WatchDog()
- const timeoutDog = new WatchDog()
- ws.onmessage = (event) => {
- timeoutDog.watch(() => {
- debug('timeout')
- ws.send(websocketUtils.packMessage({ type: 6 }))
- }, 1500)
- closeDog.watch(() => {
- debug('timeout close')
- ws.close()
- }, 10000)
- res.write(event.data)
- if (/\{"type":([367])\}/.test(String(event.data))) {
- const type = parseInt(RegExp.$1, 10)
- debug('connection type', type)
- if (type === 3) {
- ws.close()
- } else {
- ws.send(websocketUtils.packMessage({ type }))
- }
- }
- }
-
- ws.onclose = () => {
- timeoutDog.reset()
- closeDog.reset()
- debug('connection close')
- res.end()
- }
-
- await new Promise((resolve) => ws.onopen = resolve)
- ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 }))
- ws.send(websocketUtils.packMessage({ type: 6 }))
- ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!)))
- req.socket.once('close', () => {
- ws.close()
- if (!res.closed) {
- res.end()
- }
- })
-}
diff --git a/spaces/taesiri/DeticChatGPT/detic/evaluation/oideval.py b/spaces/taesiri/DeticChatGPT/detic/evaluation/oideval.py
deleted file mode 100644
index e60125aec21f1f32f054cac51cdfb85368c53895..0000000000000000000000000000000000000000
--- a/spaces/taesiri/DeticChatGPT/detic/evaluation/oideval.py
+++ /dev/null
@@ -1,699 +0,0 @@
-# Part of the code is from https://github.com/tensorflow/models/blob/master/research/object_detection/metrics/oid_challenge_evaluation.py
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-# The original code is under Apache License, Version 2.0 (the "License");
-# Part of the code is from https://github.com/lvis-dataset/lvis-api/blob/master/lvis/eval.py
-# Copyright (c) 2019, Agrim Gupta and Ross Girshick
-# Modified by Xingyi Zhou
-# This script re-implement OpenImages evaluation in detectron2
-# The code is from https://github.com/xingyizhou/UniDet/blob/master/projects/UniDet/unidet/evaluation/oideval.py
-# The original code is under Apache-2.0 License
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-import datetime
-import logging
-import itertools
-from collections import OrderedDict
-from collections import defaultdict
-import copy
-import json
-import numpy as np
-import torch
-from tabulate import tabulate
-
-from lvis.lvis import LVIS
-from lvis.results import LVISResults
-
-import pycocotools.mask as mask_utils
-
-from fvcore.common.file_io import PathManager
-import detectron2.utils.comm as comm
-from detectron2.data import MetadataCatalog
-from detectron2.evaluation.coco_evaluation import instances_to_coco_json
-from detectron2.utils.logger import create_small_table
-from detectron2.evaluation import DatasetEvaluator
-
-def compute_average_precision(precision, recall):
- """Compute Average Precision according to the definition in VOCdevkit.
- Precision is modified to ensure that it does not decrease as recall
- decrease.
- Args:
- precision: A float [N, 1] numpy array of precisions
- recall: A float [N, 1] numpy array of recalls
- Raises:
- ValueError: if the input is not of the correct format
- Returns:
- average_precison: The area under the precision recall curve. NaN if
- precision and recall are None.
- """
- if precision is None:
- if recall is not None:
- raise ValueError("If precision is None, recall must also be None")
- return np.NAN
-
- if not isinstance(precision, np.ndarray) or not isinstance(
- recall, np.ndarray):
- raise ValueError("precision and recall must be numpy array")
- if precision.dtype != np.float or recall.dtype != np.float:
- raise ValueError("input must be float numpy array.")
- if len(precision) != len(recall):
- raise ValueError("precision and recall must be of the same size.")
- if not precision.size:
- return 0.0
- if np.amin(precision) < 0 or np.amax(precision) > 1:
- raise ValueError("Precision must be in the range of [0, 1].")
- if np.amin(recall) < 0 or np.amax(recall) > 1:
- raise ValueError("recall must be in the range of [0, 1].")
- if not all(recall[i] <= recall[i + 1] for i in range(len(recall) - 1)):
- raise ValueError("recall must be a non-decreasing array")
-
- recall = np.concatenate([[0], recall, [1]])
- precision = np.concatenate([[0], precision, [0]])
-
- for i in range(len(precision) - 2, -1, -1):
- precision[i] = np.maximum(precision[i], precision[i + 1])
- indices = np.where(recall[1:] != recall[:-1])[0] + 1
- average_precision = np.sum(
- (recall[indices] - recall[indices - 1]) * precision[indices])
- return average_precision
-
-class OIDEval:
- def __init__(
- self, lvis_gt, lvis_dt, iou_type="bbox", expand_pred_label=False,
- oid_hierarchy_path='./datasets/oid/annotations/challenge-2019-label500-hierarchy.json'):
- """Constructor for OIDEval.
- Args:
- lvis_gt (LVIS class instance, or str containing path of annotation file)
- lvis_dt (LVISResult class instance, or str containing path of result file,
- or list of dict)
- iou_type (str): segm or bbox evaluation
- """
- self.logger = logging.getLogger(__name__)
-
- if iou_type not in ["bbox", "segm"]:
- raise ValueError("iou_type: {} is not supported.".format(iou_type))
-
- if isinstance(lvis_gt, LVIS):
- self.lvis_gt = lvis_gt
- elif isinstance(lvis_gt, str):
- self.lvis_gt = LVIS(lvis_gt)
- else:
- raise TypeError("Unsupported type {} of lvis_gt.".format(lvis_gt))
-
- if isinstance(lvis_dt, LVISResults):
- self.lvis_dt = lvis_dt
- elif isinstance(lvis_dt, (str, list)):
- # self.lvis_dt = LVISResults(self.lvis_gt, lvis_dt, max_dets=-1)
- self.lvis_dt = LVISResults(self.lvis_gt, lvis_dt)
- else:
- raise TypeError("Unsupported type {} of lvis_dt.".format(lvis_dt))
-
- if expand_pred_label:
- oid_hierarchy = json.load(open(oid_hierarchy_path, 'r'))
- cat_info = self.lvis_gt.dataset['categories']
- freebase2id = {x['freebase_id']: x['id'] for x in cat_info}
- id2freebase = {x['id']: x['freebase_id'] for x in cat_info}
- id2name = {x['id']: x['name'] for x in cat_info}
-
- fas = defaultdict(set)
- def dfs(hierarchy, cur_id):
- all_childs = set()
- all_keyed_child = {}
- if 'Subcategory' in hierarchy:
- for x in hierarchy['Subcategory']:
- childs = dfs(x, freebase2id[x['LabelName']])
- all_childs.update(childs)
- if cur_id != -1:
- for c in all_childs:
- fas[c].add(cur_id)
- all_childs.add(cur_id)
- return all_childs
- dfs(oid_hierarchy, -1)
-
- expanded_pred = []
- id_count = 0
- for d in self.lvis_dt.dataset['annotations']:
- cur_id = d['category_id']
- ids = [cur_id] + [x for x in fas[cur_id]]
- for cat_id in ids:
- new_box = copy.deepcopy(d)
- id_count = id_count + 1
- new_box['id'] = id_count
- new_box['category_id'] = cat_id
- expanded_pred.append(new_box)
-
- print('Expanding original {} preds to {} preds'.format(
- len(self.lvis_dt.dataset['annotations']),
- len(expanded_pred)
- ))
- self.lvis_dt.dataset['annotations'] = expanded_pred
- self.lvis_dt._create_index()
-
- # per-image per-category evaluation results
- self.eval_imgs = defaultdict(list)
- self.eval = {} # accumulated evaluation results
- self._gts = defaultdict(list) # gt for evaluation
- self._dts = defaultdict(list) # dt for evaluation
- self.params = Params(iou_type=iou_type) # parameters
- self.results = OrderedDict()
- self.ious = {} # ious between all gts and dts
-
- self.params.img_ids = sorted(self.lvis_gt.get_img_ids())
- self.params.cat_ids = sorted(self.lvis_gt.get_cat_ids())
-
- def _to_mask(self, anns, lvis):
- for ann in anns:
- rle = lvis.ann_to_rle(ann)
- ann["segmentation"] = rle
-
- def _prepare(self):
- """Prepare self._gts and self._dts for evaluation based on params."""
-
- cat_ids = self.params.cat_ids if self.params.cat_ids else None
-
- gts = self.lvis_gt.load_anns(
- self.lvis_gt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids)
- )
- dts = self.lvis_dt.load_anns(
- self.lvis_dt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids)
- )
- # convert ground truth to mask if iou_type == 'segm'
- if self.params.iou_type == "segm":
- self._to_mask(gts, self.lvis_gt)
- self._to_mask(dts, self.lvis_dt)
-
- for gt in gts:
- self._gts[gt["image_id"], gt["category_id"]].append(gt)
-
- # For federated dataset evaluation we will filter out all dt for an
- # image which belong to categories not present in gt and not present in
- # the negative list for an image. In other words detector is not penalized
- # for categories about which we don't have gt information about their
- # presence or absence in an image.
- img_data = self.lvis_gt.load_imgs(ids=self.params.img_ids)
- # per image map of categories not present in image
- img_nl = {d["id"]: d["neg_category_ids"] for d in img_data}
- # per image list of categories present in image
- img_pl = {d["id"]: d["pos_category_ids"] for d in img_data}
- # img_pl = defaultdict(set)
- for ann in gts:
- # img_pl[ann["image_id"]].add(ann["category_id"])
- assert ann["category_id"] in img_pl[ann["image_id"]]
- # print('check pos ids OK.')
-
- for dt in dts:
- img_id, cat_id = dt["image_id"], dt["category_id"]
- if cat_id not in img_nl[img_id] and cat_id not in img_pl[img_id]:
- continue
- self._dts[img_id, cat_id].append(dt)
-
- def evaluate(self):
- """
- Run per image evaluation on given images and store results
- (a list of dict) in self.eval_imgs.
- """
- self.logger.info("Running per image evaluation.")
- self.logger.info("Evaluate annotation type *{}*".format(self.params.iou_type))
-
- self.params.img_ids = list(np.unique(self.params.img_ids))
-
- if self.params.use_cats:
- cat_ids = self.params.cat_ids
- else:
- cat_ids = [-1]
-
- self._prepare()
-
- self.ious = {
- (img_id, cat_id): self.compute_iou(img_id, cat_id)
- for img_id in self.params.img_ids
- for cat_id in cat_ids
- }
-
- # loop through images, area range, max detection number
- print('Evaluating ...')
- self.eval_imgs = [
- self.evaluate_img_google(img_id, cat_id, area_rng)
- for cat_id in cat_ids
- for area_rng in self.params.area_rng
- for img_id in self.params.img_ids
- ]
-
- def _get_gt_dt(self, img_id, cat_id):
- """Create gt, dt which are list of anns/dets. If use_cats is true
- only anns/dets corresponding to tuple (img_id, cat_id) will be
- used. Else, all anns/dets in image are used and cat_id is not used.
- """
- if self.params.use_cats:
- gt = self._gts[img_id, cat_id]
- dt = self._dts[img_id, cat_id]
- else:
- gt = [
- _ann
- for _cat_id in self.params.cat_ids
- for _ann in self._gts[img_id, cat_id]
- ]
- dt = [
- _ann
- for _cat_id in self.params.cat_ids
- for _ann in self._dts[img_id, cat_id]
- ]
- return gt, dt
-
- def compute_iou(self, img_id, cat_id):
- gt, dt = self._get_gt_dt(img_id, cat_id)
-
- if len(gt) == 0 and len(dt) == 0:
- return []
-
- # Sort detections in decreasing order of score.
- idx = np.argsort([-d["score"] for d in dt], kind="mergesort")
- dt = [dt[i] for i in idx]
-
- # iscrowd = [int(False)] * len(gt)
- iscrowd = [int('iscrowd' in g and g['iscrowd'] > 0) for g in gt]
-
- if self.params.iou_type == "segm":
- ann_type = "segmentation"
- elif self.params.iou_type == "bbox":
- ann_type = "bbox"
- else:
- raise ValueError("Unknown iou_type for iou computation.")
- gt = [g[ann_type] for g in gt]
- dt = [d[ann_type] for d in dt]
-
- # compute iou between each dt and gt region
- # will return array of shape len(dt), len(gt)
- ious = mask_utils.iou(dt, gt, iscrowd)
- return ious
-
- def evaluate_img_google(self, img_id, cat_id, area_rng):
- gt, dt = self._get_gt_dt(img_id, cat_id)
- if len(gt) == 0 and len(dt) == 0:
- return None
-
- if len(dt) == 0:
- return {
- "image_id": img_id,
- "category_id": cat_id,
- "area_rng": area_rng,
- "dt_ids": [],
- "dt_matches": np.array([], dtype=np.int32).reshape(1, -1),
- "dt_scores": [],
- "dt_ignore": np.array([], dtype=np.int32).reshape(1, -1),
- 'num_gt': len(gt)
- }
-
- no_crowd_inds = [i for i, g in enumerate(gt) \
- if ('iscrowd' not in g) or g['iscrowd'] == 0]
- crowd_inds = [i for i, g in enumerate(gt) \
- if 'iscrowd' in g and g['iscrowd'] == 1]
- dt_idx = np.argsort([-d["score"] for d in dt], kind="mergesort")
-
- if len(self.ious[img_id, cat_id]) > 0:
- ious = self.ious[img_id, cat_id]
- iou = ious[:, no_crowd_inds]
- iou = iou[dt_idx]
- ioa = ious[:, crowd_inds]
- ioa = ioa[dt_idx]
- else:
- iou = np.zeros((len(dt_idx), 0))
- ioa = np.zeros((len(dt_idx), 0))
- scores = np.array([dt[i]['score'] for i in dt_idx])
-
- num_detected_boxes = len(dt)
- tp_fp_labels = np.zeros(num_detected_boxes, dtype=bool)
- is_matched_to_group_of = np.zeros(num_detected_boxes, dtype=bool)
-
- def compute_match_iou(iou):
- max_overlap_gt_ids = np.argmax(iou, axis=1)
- is_gt_detected = np.zeros(iou.shape[1], dtype=bool)
- for i in range(num_detected_boxes):
- gt_id = max_overlap_gt_ids[i]
- is_evaluatable = (not tp_fp_labels[i] and
- iou[i, gt_id] >= 0.5 and
- not is_matched_to_group_of[i])
- if is_evaluatable:
- if not is_gt_detected[gt_id]:
- tp_fp_labels[i] = True
- is_gt_detected[gt_id] = True
-
- def compute_match_ioa(ioa):
- scores_group_of = np.zeros(ioa.shape[1], dtype=float)
- tp_fp_labels_group_of = np.ones(
- ioa.shape[1], dtype=float)
- max_overlap_group_of_gt_ids = np.argmax(ioa, axis=1)
- for i in range(num_detected_boxes):
- gt_id = max_overlap_group_of_gt_ids[i]
- is_evaluatable = (not tp_fp_labels[i] and
- ioa[i, gt_id] >= 0.5 and
- not is_matched_to_group_of[i])
- if is_evaluatable:
- is_matched_to_group_of[i] = True
- scores_group_of[gt_id] = max(scores_group_of[gt_id], scores[i])
- selector = np.where((scores_group_of > 0) & (tp_fp_labels_group_of > 0))
- scores_group_of = scores_group_of[selector]
- tp_fp_labels_group_of = tp_fp_labels_group_of[selector]
-
- return scores_group_of, tp_fp_labels_group_of
-
- if iou.shape[1] > 0:
- compute_match_iou(iou)
-
- scores_box_group_of = np.ndarray([0], dtype=float)
- tp_fp_labels_box_group_of = np.ndarray([0], dtype=float)
-
- if ioa.shape[1] > 0:
- scores_box_group_of, tp_fp_labels_box_group_of = compute_match_ioa(ioa)
-
- valid_entries = (~is_matched_to_group_of)
-
- scores = np.concatenate(
- (scores[valid_entries], scores_box_group_of))
- tp_fps = np.concatenate(
- (tp_fp_labels[valid_entries].astype(float),
- tp_fp_labels_box_group_of))
-
- return {
- "image_id": img_id,
- "category_id": cat_id,
- "area_rng": area_rng,
- "dt_matches": np.array([1 if x > 0 else 0 for x in tp_fps], dtype=np.int32).reshape(1, -1),
- "dt_scores": [x for x in scores],
- "dt_ignore": np.array([0 for x in scores], dtype=np.int32).reshape(1, -1),
- 'num_gt': len(gt)
- }
-
- def accumulate(self):
- """Accumulate per image evaluation results and store the result in
- self.eval.
- """
- self.logger.info("Accumulating evaluation results.")
-
- if not self.eval_imgs:
- self.logger.warn("Please run evaluate first.")
-
- if self.params.use_cats:
- cat_ids = self.params.cat_ids
- else:
- cat_ids = [-1]
-
- num_thrs = 1
- num_recalls = 1
-
- num_cats = len(cat_ids)
- num_area_rngs = 1
- num_imgs = len(self.params.img_ids)
-
- # -1 for absent categories
- precision = -np.ones(
- (num_thrs, num_recalls, num_cats, num_area_rngs)
- )
- recall = -np.ones((num_thrs, num_cats, num_area_rngs))
-
- # Initialize dt_pointers
- dt_pointers = {}
- for cat_idx in range(num_cats):
- dt_pointers[cat_idx] = {}
- for area_idx in range(num_area_rngs):
- dt_pointers[cat_idx][area_idx] = {}
-
- # Per category evaluation
- for cat_idx in range(num_cats):
- Nk = cat_idx * num_area_rngs * num_imgs
- for area_idx in range(num_area_rngs):
- Na = area_idx * num_imgs
- E = [
- self.eval_imgs[Nk + Na + img_idx]
- for img_idx in range(num_imgs)
- ]
- # Remove elements which are None
- E = [e for e in E if not e is None]
- if len(E) == 0:
- continue
-
- dt_scores = np.concatenate([e["dt_scores"] for e in E], axis=0)
- dt_idx = np.argsort(-dt_scores, kind="mergesort")
- dt_scores = dt_scores[dt_idx]
- dt_m = np.concatenate([e["dt_matches"] for e in E], axis=1)[:, dt_idx]
- dt_ig = np.concatenate([e["dt_ignore"] for e in E], axis=1)[:, dt_idx]
-
- num_gt = sum([e['num_gt'] for e in E])
- if num_gt == 0:
- continue
-
- tps = np.logical_and(dt_m, np.logical_not(dt_ig))
- fps = np.logical_and(np.logical_not(dt_m), np.logical_not(dt_ig))
- tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float)
- fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float)
-
- dt_pointers[cat_idx][area_idx] = {
- "tps": tps,
- "fps": fps,
- }
-
- for iou_thr_idx, (tp, fp) in enumerate(zip(tp_sum, fp_sum)):
- tp = np.array(tp)
- fp = np.array(fp)
- num_tp = len(tp)
- rc = tp / num_gt
-
- if num_tp:
- recall[iou_thr_idx, cat_idx, area_idx] = rc[
- -1
- ]
- else:
- recall[iou_thr_idx, cat_idx, area_idx] = 0
-
- # np.spacing(1) ~= eps
- pr = tp / (fp + tp + np.spacing(1))
- pr = pr.tolist()
-
- for i in range(num_tp - 1, 0, -1):
- if pr[i] > pr[i - 1]:
- pr[i - 1] = pr[i]
-
- mAP = compute_average_precision(
- np.array(pr, np.float).reshape(-1),
- np.array(rc, np.float).reshape(-1))
- precision[iou_thr_idx, :, cat_idx, area_idx] = mAP
-
- self.eval = {
- "params": self.params,
- "counts": [num_thrs, num_recalls, num_cats, num_area_rngs],
- "date": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
- "precision": precision,
- "recall": recall,
- "dt_pointers": dt_pointers,
- }
-
- def _summarize(self, summary_type):
- s = self.eval["precision"]
- if len(s[s > -1]) == 0:
- mean_s = -1
- else:
- mean_s = np.mean(s[s > -1])
- # print(s.reshape(1, 1, -1, 1))
- return mean_s
-
- def summarize(self):
- """Compute and display summary metrics for evaluation results."""
- if not self.eval:
- raise RuntimeError("Please run accumulate() first.")
-
- max_dets = self.params.max_dets
- self.results["AP50"] = self._summarize('ap')
-
- def run(self):
- """Wrapper function which calculates the results."""
- self.evaluate()
- self.accumulate()
- self.summarize()
-
- def print_results(self):
- template = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} catIds={:>3s}] = {:0.3f}"
-
- for key, value in self.results.items():
- max_dets = self.params.max_dets
- if "AP" in key:
- title = "Average Precision"
- _type = "(AP)"
- else:
- title = "Average Recall"
- _type = "(AR)"
-
- if len(key) > 2 and key[2].isdigit():
- iou_thr = (float(key[2:]) / 100)
- iou = "{:0.2f}".format(iou_thr)
- else:
- iou = "{:0.2f}:{:0.2f}".format(
- self.params.iou_thrs[0], self.params.iou_thrs[-1]
- )
-
- cat_group_name = "all"
- area_rng = "all"
-
- print(template.format(title, _type, iou, area_rng, max_dets, cat_group_name, value))
-
- def get_results(self):
- if not self.results:
- self.logger.warn("results is empty. Call run().")
- return self.results
-
-
-class Params:
- def __init__(self, iou_type):
- self.img_ids = []
- self.cat_ids = []
- # np.arange causes trouble. the data point on arange is slightly
- # larger than the true value
- self.iou_thrs = np.linspace(
- 0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True
- )
- self.google_style = True
- # print('Using google style PR curve')
- self.iou_thrs = self.iou_thrs[:1]
- self.max_dets = 1000
-
- self.area_rng = [
- [0 ** 2, 1e5 ** 2],
- ]
- self.area_rng_lbl = ["all"]
- self.use_cats = 1
- self.iou_type = iou_type
-
-
-class OIDEvaluator(DatasetEvaluator):
- def __init__(self, dataset_name, cfg, distributed, output_dir=None):
- self._distributed = distributed
- self._output_dir = output_dir
-
- self._cpu_device = torch.device("cpu")
- self._logger = logging.getLogger(__name__)
-
- self._metadata = MetadataCatalog.get(dataset_name)
- json_file = PathManager.get_local_path(self._metadata.json_file)
- self._oid_api = LVIS(json_file)
- # Test set json files do not contain annotations (evaluation must be
- # performed using the LVIS evaluation server).
- self._do_evaluation = len(self._oid_api.get_ann_ids()) > 0
- self._mask_on = cfg.MODEL.MASK_ON
-
- def reset(self):
- self._predictions = []
- self._oid_results = []
-
- def process(self, inputs, outputs):
- for input, output in zip(inputs, outputs):
- prediction = {"image_id": input["image_id"]}
- instances = output["instances"].to(self._cpu_device)
- prediction["instances"] = instances_to_coco_json(
- instances, input["image_id"])
- self._predictions.append(prediction)
-
- def evaluate(self):
- if self._distributed:
- comm.synchronize()
- self._predictions = comm.gather(self._predictions, dst=0)
- self._predictions = list(itertools.chain(*self._predictions))
-
- if not comm.is_main_process():
- return
-
- if len(self._predictions) == 0:
- self._logger.warning("[LVISEvaluator] Did not receive valid predictions.")
- return {}
-
- self._logger.info("Preparing results in the OID format ...")
- self._oid_results = list(
- itertools.chain(*[x["instances"] for x in self._predictions]))
-
- # unmap the category ids for LVIS (from 0-indexed to 1-indexed)
- for result in self._oid_results:
- result["category_id"] += 1
-
- PathManager.mkdirs(self._output_dir)
- file_path = os.path.join(
- self._output_dir, "oid_instances_results.json")
- self._logger.info("Saving results to {}".format(file_path))
- with PathManager.open(file_path, "w") as f:
- f.write(json.dumps(self._oid_results))
- f.flush()
-
- if not self._do_evaluation:
- self._logger.info("Annotations are not available for evaluation.")
- return
-
- self._logger.info("Evaluating predictions ...")
- self._results = OrderedDict()
- res, mAP = _evaluate_predictions_on_oid(
- self._oid_api,
- file_path,
- eval_seg=self._mask_on,
- class_names=self._metadata.get("thing_classes"),
- )
- self._results['bbox'] = res
- mAP_out_path = os.path.join(self._output_dir, "oid_mAP.npy")
- self._logger.info('Saving mAP to' + mAP_out_path)
- np.save(mAP_out_path, mAP)
- return copy.deepcopy(self._results)
-
-def _evaluate_predictions_on_oid(
- oid_gt, oid_results_path, eval_seg=False,
- class_names=None):
- logger = logging.getLogger(__name__)
- metrics = ["AP50", "AP50_expand"]
-
- results = {}
- oid_eval = OIDEval(oid_gt, oid_results_path, 'bbox', expand_pred_label=False)
- oid_eval.run()
- oid_eval.print_results()
- results["AP50"] = oid_eval.get_results()["AP50"]
-
- if eval_seg:
- oid_eval = OIDEval(oid_gt, oid_results_path, 'segm', expand_pred_label=False)
- oid_eval.run()
- oid_eval.print_results()
- results["AP50_segm"] = oid_eval.get_results()["AP50"]
- else:
- oid_eval = OIDEval(oid_gt, oid_results_path, 'bbox', expand_pred_label=True)
- oid_eval.run()
- oid_eval.print_results()
- results["AP50_expand"] = oid_eval.get_results()["AP50"]
-
- mAP = np.zeros(len(class_names)) - 1
- precisions = oid_eval.eval['precision']
- assert len(class_names) == precisions.shape[2]
- results_per_category = []
- id2apiid = sorted(oid_gt.get_cat_ids())
- inst_aware_ap, inst_count = 0, 0
- for idx, name in enumerate(class_names):
- precision = precisions[:, :, idx, 0]
- precision = precision[precision > -1]
- ap = np.mean(precision) if precision.size else float("nan")
- inst_num = len(oid_gt.get_ann_ids(cat_ids=[id2apiid[idx]]))
- if inst_num > 0:
- results_per_category.append(("{} {}".format(
- name.replace(' ', '_'),
- inst_num if inst_num < 1000 else '{:.1f}k'.format(inst_num / 1000)),
- float(ap * 100)))
- inst_aware_ap += inst_num * ap
- inst_count += inst_num
- mAP[idx] = ap
- # logger.info("{} {} {:.2f}".format(name, inst_num, ap * 100))
- inst_aware_ap = inst_aware_ap * 100 / inst_count
- N_COLS = min(6, len(results_per_category) * 2)
- results_flatten = list(itertools.chain(*results_per_category))
- results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)])
- table = tabulate(
- results_2d,
- tablefmt="pipe",
- floatfmt=".3f",
- headers=["category", "AP"] * (N_COLS // 2),
- numalign="left",
- )
- logger.info("Per-category {} AP: \n".format('bbox') + table)
- logger.info("Instance-aware {} AP: {:.4f}".format('bbox', inst_aware_ap))
-
- logger.info("Evaluation results for bbox: \n" + \
- create_small_table(results))
- return results, mAP
\ No newline at end of file
diff --git a/spaces/taskswithcode/semantic_search/twc_openai_search.py b/spaces/taskswithcode/semantic_search/twc_openai_search.py
deleted file mode 100644
index b56e1b859e8237b93f09f9bbd86a765740684866..0000000000000000000000000000000000000000
--- a/spaces/taskswithcode/semantic_search/twc_openai_search.py
+++ /dev/null
@@ -1,124 +0,0 @@
-from scipy.spatial.distance import cosine
-import argparse
-import json
-import os
-import openai
-import pdb
-
-def read_text(input_file):
- arr = open(input_file).read().split("\n")
- return arr[:-1]
-
-
-class OpenAIQnAModel:
- def __init__(self):
- self.debug = False
- self.q_model_name = None
- self.d_model_name = None
- self.skip_key = True
- print("In OpenAI API constructor")
-
-
- def init_model(self,model_name = None):
- #print("OpenAI: Init model",model_name)
- openai.api_key = os.getenv("OPENAI_API_KEY")
- if (openai.api_key == None):
- openai.api_key = ""
- print("API key not set")
-
- if (len(openai.api_key) == 0 and not self.skip_key):
- print("Open API key not set")
-
- if (model_name is None):
- self.d_model_name = "text-search-ada-doc-001"
- else:
- self.d_model_name = model_name
- self.q_model_name = self.construct_query_model_name(self.d_model_name)
- print(f"OpenAI: Init model complete :query model {self.q_model_name} doc:{self.d_model_name}")
-
- def construct_query_model_name(self,d_model_name):
- return d_model_name.replace('-doc-','-query-')
-
-
- def compute_embeddings(self,input_file_name,input_data,is_file):
- if (len(openai.api_key) == 0 and not self.skip_key):
- print("Open API key not set")
- return [],[]
- #print("In compute embeddings after key check")
- in_file = input_file_name.split('/')[-1]
- in_file = self.d_model_name + '_' + '.'.join(in_file.split('.')[:-1]) + "_search.json"
- cached = False
- try:
- fp = open(in_file)
- cached = True
- embeddings = json.load(fp)
- q_embeddings = [embeddings[0]]
- d_embeddings = embeddings[1:]
- print("Using cached embeddings")
- except:
- pass
-
- texts = read_text(input_data) if is_file == True else input_data
- queries = [texts[0]]
- docs = texts[1:]
-
- if (not cached):
- print(f"Computing embeddings for {input_file_name} and query model {self.q_model_name}")
- query_embeds = openai.Embedding.create(
- input=queries,
- model=self.q_model_name
- )
- print(f"Computing embeddings for {input_file_name} and doc model {self.q_model_name}")
- doc_embeds = openai.Embedding.create(
- input=docs,
- model=self.d_model_name
- )
- q_embeddings = []
- d_embeddings = []
- for i in range(len(query_embeds['data'])):
- q_embeddings.append(query_embeds['data'][i]['embedding'])
- for i in range(len(doc_embeds['data'])):
- d_embeddings.append(doc_embeds['data'][i]['embedding'])
- if (not cached):
- embeddings = q_embeddings + d_embeddings
- with open(in_file,"w") as fp:
- json.dump(embeddings,fp)
- return texts,(q_embeddings,d_embeddings)
-
- def output_results(self,output_file,texts,embeddings,main_index = 0):
- # Calculate cosine similarities
- # Cosine similarities are in [-1, 1]. Higher means more similar
- query_embeddings = embeddings[0]
- doc_embeddings = embeddings[1]
- cosine_dict = {}
- queries = [texts[0]]
- docs = texts[1:]
- if (self.debug):
- print("Total sentences",len(texts))
- for i in range(len(docs)):
- cosine_dict[docs[i]] = 1 - cosine(query_embeddings[0], doc_embeddings[i])
-
- if (self.debug):
- print("Input sentence:",texts[main_index])
- sorted_dict = dict(sorted(cosine_dict.items(), key=lambda item: item[1],reverse = True))
- if (self.debug):
- for key in sorted_dict:
- print("Cosine similarity with \"%s\" is: %.3f" % (key, sorted_dict[key]))
- if (output_file is not None):
- with open(output_file,"w") as fp:
- fp.write(json.dumps(sorted_dict,indent=0))
- return sorted_dict
-
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description='OpenAI model for document search embeddings ',formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser.add_argument('-input', action="store", dest="input",required=True,help="Input file with sentences")
- parser.add_argument('-output', action="store", dest="output",default="output.txt",help="Output file with results")
- parser.add_argument('-model', action="store", dest="model",default="text-search-ada-doc-001",help="model name")
-
- results = parser.parse_args()
- obj = OpenAIQnAModel()
- obj.init_model(results.model)
- texts, embeddings = obj.compute_embeddings(results.input,results.input,is_file = True)
- results = obj.output_results(results.output,texts,embeddings)
diff --git a/spaces/teamnassim/Fictionista/dnnlib/util.py b/spaces/teamnassim/Fictionista/dnnlib/util.py
deleted file mode 100644
index 6bbdf3bd8fe1c138cd969d37dcc52190b45c4c16..0000000000000000000000000000000000000000
--- a/spaces/teamnassim/Fictionista/dnnlib/util.py
+++ /dev/null
@@ -1,491 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Miscellaneous utility classes and functions."""
-
-import ctypes
-import fnmatch
-import importlib
-import inspect
-import numpy as np
-import os
-import shutil
-import sys
-import types
-import io
-import pickle
-import re
-import requests
-import html
-import hashlib
-import glob
-import tempfile
-import urllib
-import urllib.request
-import uuid
-
-from distutils.util import strtobool
-from typing import Any, List, Tuple, Union
-
-
-# Util classes
-# ------------------------------------------------------------------------------------------
-
-
-class EasyDict(dict):
- """Convenience class that behaves like a dict but allows access with the attribute syntax."""
-
- def __getattr__(self, name: str) -> Any:
- try:
- return self[name]
- except KeyError:
- raise AttributeError(name)
-
- def __setattr__(self, name: str, value: Any) -> None:
- self[name] = value
-
- def __delattr__(self, name: str) -> None:
- del self[name]
-
-
-class Logger(object):
- """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file."""
-
- def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True):
- self.file = None
-
- if file_name is not None:
- self.file = open(file_name, file_mode)
-
- self.should_flush = should_flush
- self.stdout = sys.stdout
- self.stderr = sys.stderr
-
- sys.stdout = self
- sys.stderr = self
-
- def __enter__(self) -> "Logger":
- return self
-
- def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None:
- self.close()
-
- def write(self, text: Union[str, bytes]) -> None:
- """Write text to stdout (and a file) and optionally flush."""
- if isinstance(text, bytes):
- text = text.decode()
- if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash
- return
-
- if self.file is not None:
- self.file.write(text)
-
- self.stdout.write(text)
-
- if self.should_flush:
- self.flush()
-
- def flush(self) -> None:
- """Flush written text to both stdout and a file, if open."""
- if self.file is not None:
- self.file.flush()
-
- self.stdout.flush()
-
- def close(self) -> None:
- """Flush, close possible files, and remove stdout/stderr mirroring."""
- self.flush()
-
- # if using multiple loggers, prevent closing in wrong order
- if sys.stdout is self:
- sys.stdout = self.stdout
- if sys.stderr is self:
- sys.stderr = self.stderr
-
- if self.file is not None:
- self.file.close()
- self.file = None
-
-
-# Cache directories
-# ------------------------------------------------------------------------------------------
-
-_dnnlib_cache_dir = None
-
-def set_cache_dir(path: str) -> None:
- global _dnnlib_cache_dir
- _dnnlib_cache_dir = path
-
-def make_cache_dir_path(*paths: str) -> str:
- if _dnnlib_cache_dir is not None:
- return os.path.join(_dnnlib_cache_dir, *paths)
- if 'DNNLIB_CACHE_DIR' in os.environ:
- return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths)
- if 'HOME' in os.environ:
- return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths)
- if 'USERPROFILE' in os.environ:
- return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths)
- return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths)
-
-# Small util functions
-# ------------------------------------------------------------------------------------------
-
-
-def format_time(seconds: Union[int, float]) -> str:
- """Convert the seconds to human readable string with days, hours, minutes and seconds."""
- s = int(np.rint(seconds))
-
- if s < 60:
- return "{0}s".format(s)
- elif s < 60 * 60:
- return "{0}m {1:02}s".format(s // 60, s % 60)
- elif s < 24 * 60 * 60:
- return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60)
- else:
- return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60)
-
-
-def format_time_brief(seconds: Union[int, float]) -> str:
- """Convert the seconds to human readable string with days, hours, minutes and seconds."""
- s = int(np.rint(seconds))
-
- if s < 60:
- return "{0}s".format(s)
- elif s < 60 * 60:
- return "{0}m {1:02}s".format(s // 60, s % 60)
- elif s < 24 * 60 * 60:
- return "{0}h {1:02}m".format(s // (60 * 60), (s // 60) % 60)
- else:
- return "{0}d {1:02}h".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24)
-
-
-def ask_yes_no(question: str) -> bool:
- """Ask the user the question until the user inputs a valid answer."""
- while True:
- try:
- print("{0} [y/n]".format(question))
- return strtobool(input().lower())
- except ValueError:
- pass
-
-
-def tuple_product(t: Tuple) -> Any:
- """Calculate the product of the tuple elements."""
- result = 1
-
- for v in t:
- result *= v
-
- return result
-
-
-_str_to_ctype = {
- "uint8": ctypes.c_ubyte,
- "uint16": ctypes.c_uint16,
- "uint32": ctypes.c_uint32,
- "uint64": ctypes.c_uint64,
- "int8": ctypes.c_byte,
- "int16": ctypes.c_int16,
- "int32": ctypes.c_int32,
- "int64": ctypes.c_int64,
- "float32": ctypes.c_float,
- "float64": ctypes.c_double
-}
-
-
-def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]:
- """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes."""
- type_str = None
-
- if isinstance(type_obj, str):
- type_str = type_obj
- elif hasattr(type_obj, "__name__"):
- type_str = type_obj.__name__
- elif hasattr(type_obj, "name"):
- type_str = type_obj.name
- else:
- raise RuntimeError("Cannot infer type name from input")
-
- assert type_str in _str_to_ctype.keys()
-
- my_dtype = np.dtype(type_str)
- my_ctype = _str_to_ctype[type_str]
-
- assert my_dtype.itemsize == ctypes.sizeof(my_ctype)
-
- return my_dtype, my_ctype
-
-
-def is_pickleable(obj: Any) -> bool:
- try:
- with io.BytesIO() as stream:
- pickle.dump(obj, stream)
- return True
- except:
- return False
-
-
-# Functionality to import modules/objects by name, and call functions by name
-# ------------------------------------------------------------------------------------------
-
-def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]:
- """Searches for the underlying module behind the name to some python object.
- Returns the module and the object name (original name with module part removed)."""
-
- # allow convenience shorthands, substitute them by full names
- obj_name = re.sub("^np.", "numpy.", obj_name)
- obj_name = re.sub("^tf.", "tensorflow.", obj_name)
-
- # list alternatives for (module_name, local_obj_name)
- parts = obj_name.split(".")
- name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)]
-
- # try each alternative in turn
- for module_name, local_obj_name in name_pairs:
- try:
- module = importlib.import_module(module_name) # may raise ImportError
- get_obj_from_module(module, local_obj_name) # may raise AttributeError
- return module, local_obj_name
- except:
- pass
-
- # maybe some of the modules themselves contain errors?
- for module_name, _local_obj_name in name_pairs:
- try:
- importlib.import_module(module_name) # may raise ImportError
- except ImportError:
- if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"):
- raise
-
- # maybe the requested attribute is missing?
- for module_name, local_obj_name in name_pairs:
- try:
- module = importlib.import_module(module_name) # may raise ImportError
- get_obj_from_module(module, local_obj_name) # may raise AttributeError
- except ImportError:
- pass
-
- # we are out of luck, but we have no idea why
- raise ImportError(obj_name)
-
-
-def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any:
- """Traverses the object name and returns the last (rightmost) python object."""
- if obj_name == '':
- return module
- obj = module
- for part in obj_name.split("."):
- obj = getattr(obj, part)
- return obj
-
-
-def get_obj_by_name(name: str) -> Any:
- """Finds the python object with the given name."""
- module, obj_name = get_module_from_obj_name(name)
- return get_obj_from_module(module, obj_name)
-
-
-def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any:
- """Finds the python object with the given name and calls it as a function."""
- assert func_name is not None
- func_obj = get_obj_by_name(func_name)
- assert callable(func_obj)
- return func_obj(*args, **kwargs)
-
-
-def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any:
- """Finds the python class with the given name and constructs it with the given arguments."""
- return call_func_by_name(*args, func_name=class_name, **kwargs)
-
-
-def get_module_dir_by_obj_name(obj_name: str) -> str:
- """Get the directory path of the module containing the given object name."""
- module, _ = get_module_from_obj_name(obj_name)
- return os.path.dirname(inspect.getfile(module))
-
-
-def is_top_level_function(obj: Any) -> bool:
- """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'."""
- return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__
-
-
-def get_top_level_function_name(obj: Any) -> str:
- """Return the fully-qualified name of a top-level function."""
- assert is_top_level_function(obj)
- module = obj.__module__
- if module == '__main__':
- module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0]
- return module + "." + obj.__name__
-
-
-# File system helpers
-# ------------------------------------------------------------------------------------------
-
-def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]:
- """List all files recursively in a given directory while ignoring given file and directory names.
- Returns list of tuples containing both absolute and relative paths."""
- assert os.path.isdir(dir_path)
- base_name = os.path.basename(os.path.normpath(dir_path))
-
- if ignores is None:
- ignores = []
-
- result = []
-
- for root, dirs, files in os.walk(dir_path, topdown=True):
- for ignore_ in ignores:
- dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)]
-
- # dirs need to be edited in-place
- for d in dirs_to_remove:
- dirs.remove(d)
-
- files = [f for f in files if not fnmatch.fnmatch(f, ignore_)]
-
- absolute_paths = [os.path.join(root, f) for f in files]
- relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths]
-
- if add_base_to_relative:
- relative_paths = [os.path.join(base_name, p) for p in relative_paths]
-
- assert len(absolute_paths) == len(relative_paths)
- result += zip(absolute_paths, relative_paths)
-
- return result
-
-
-def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None:
- """Takes in a list of tuples of (src, dst) paths and copies files.
- Will create all necessary directories."""
- for file in files:
- target_dir_name = os.path.dirname(file[1])
-
- # will create all intermediate-level directories
- if not os.path.exists(target_dir_name):
- os.makedirs(target_dir_name)
-
- shutil.copyfile(file[0], file[1])
-
-
-# URL helpers
-# ------------------------------------------------------------------------------------------
-
-def is_url(obj: Any, allow_file_urls: bool = False) -> bool:
- """Determine whether the given object is a valid URL string."""
- if not isinstance(obj, str) or not "://" in obj:
- return False
- if allow_file_urls and obj.startswith('file://'):
- return True
- try:
- res = requests.compat.urlparse(obj)
- if not res.scheme or not res.netloc or not "." in res.netloc:
- return False
- res = requests.compat.urlparse(requests.compat.urljoin(obj, "/"))
- if not res.scheme or not res.netloc or not "." in res.netloc:
- return False
- except:
- return False
- return True
-
-
-def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any:
- """Download the given URL and return a binary-mode file object to access the data."""
- assert num_attempts >= 1
- assert not (return_filename and (not cache))
-
- # Doesn't look like an URL scheme so interpret it as a local filename.
- if not re.match('^[a-z]+://', url):
- return url if return_filename else open(url, "rb")
-
- # Handle file URLs. This code handles unusual file:// patterns that
- # arise on Windows:
- #
- # file:///c:/foo.txt
- #
- # which would translate to a local '/c:/foo.txt' filename that's
- # invalid. Drop the forward slash for such pathnames.
- #
- # If you touch this code path, you should test it on both Linux and
- # Windows.
- #
- # Some internet resources suggest using urllib.request.url2pathname() but
- # but that converts forward slashes to backslashes and this causes
- # its own set of problems.
- if url.startswith('file://'):
- filename = urllib.parse.urlparse(url).path
- if re.match(r'^/[a-zA-Z]:', filename):
- filename = filename[1:]
- return filename if return_filename else open(filename, "rb")
-
- assert is_url(url)
-
- # Lookup from cache.
- if cache_dir is None:
- cache_dir = make_cache_dir_path('downloads')
-
- url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest()
- if cache:
- cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*"))
- if len(cache_files) == 1:
- filename = cache_files[0]
- return filename if return_filename else open(filename, "rb")
-
- # Download.
- url_name = None
- url_data = None
- with requests.Session() as session:
- if verbose:
- print("Downloading %s ..." % url, end="", flush=True)
- for attempts_left in reversed(range(num_attempts)):
- try:
- with session.get(url) as res:
- res.raise_for_status()
- if len(res.content) == 0:
- raise IOError("No data received")
-
- if len(res.content) < 8192:
- content_str = res.content.decode("utf-8")
- if "download_warning" in res.headers.get("Set-Cookie", ""):
- links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link]
- if len(links) == 1:
- url = requests.compat.urljoin(url, links[0])
- raise IOError("Google Drive virus checker nag")
- if "Google Drive - Quota exceeded" in content_str:
- raise IOError("Google Drive download quota exceeded -- please try again later")
-
- match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", ""))
- url_name = match[1] if match else url
- url_data = res.content
- if verbose:
- print(" done")
- break
- except KeyboardInterrupt:
- raise
- except:
- if not attempts_left:
- if verbose:
- print(" failed")
- raise
- if verbose:
- print(".", end="", flush=True)
-
- # Save to cache.
- if cache:
- safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name)
- cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name)
- temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name)
- os.makedirs(cache_dir, exist_ok=True)
- with open(temp_file, "wb") as f:
- f.write(url_data)
- os.replace(temp_file, cache_file) # atomic
- if return_filename:
- return cache_file
-
- # Return data as file object.
- assert not return_filename
- return io.BytesIO(url_data)
diff --git a/spaces/teralomaniac/clewd/config.js b/spaces/teralomaniac/clewd/config.js
deleted file mode 100644
index 59a414a1debdec695d603dea3b587775987f7cea..0000000000000000000000000000000000000000
--- a/spaces/teralomaniac/clewd/config.js
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
-* https://rentry.org/teralomaniac_clewd
-* https://github.com/teralomaniac/clewd
-*/
-
-// SET YOUR COOKIE BELOW
-
-module.exports = {
- "Cookie": "",
- "CookieArray": [],
- "Cookiecounter": 0,
- "CookieIndex": 0,
- "Ip": "0.0.0.0",
- "Port": 7860,
- "localtunnel": false,
- "BufferSize": 1,
- "SystemInterval": 3,
- "rProxy": "https://claude.ai",
- "padtxt_placeholder": "",
- "PromptExperimentFirst": "",
- "PromptExperimentNext": "",
- "PersonalityFormat": "{{char}}'s personality: {{personality}}",
- "ScenarioFormat": "Dialogue scenario: {{scenario}}",
- "Settings": {
- "RenewAlways": true,
- "RetryRegenerate": false,
- "PromptExperiments": true,
- "SystemExperiments": true,
- "PreventImperson": false,
- "AllSamples": false,
- "NoSamples": false,
- "StripAssistant": false,
- "StripHuman": false,
- "PassParams": false,
- "ClearFlags": true,
- "PreserveChats": true,
- "LogMessages": true,
- "FullColon": true,
- "padtxt": 13500,
- "xmlPlot": true,
- "Superfetch": true
- }
-}
-
-/*
- BufferSize
- * How many characters will be buffered before the AI types once
- * lower = less chance of `PreventImperson` working properly
-
- ---
-
- SystemInterval
- * How many messages until `SystemExperiments alternates`
-
- ---
-
- Other settings
- * https://gitgud.io/ahsk/clewd/#defaults
- * and
- * https://gitgud.io/ahsk/clewd/-/blob/master/CHANGELOG.md
- */
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/3com Wl 552 Firmware LINK Download.md b/spaces/terfces0erbo/CollegeProjectV2/3com Wl 552 Firmware LINK Download.md
deleted file mode 100644
index c3ba653b89e51b6fa00aaf928d5a2700fb43b9c0..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/3com Wl 552 Firmware LINK Download.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-
3Com Wl-552 Firmware - How to Download and Install It on Your Device
-
If you are looking for a reliable and fast wireless network solution, you might want to consider the 3Com Wl-552 router. This device offers a high-performance and secure wireless connection for your home or office. However, to make sure that your router works properly and efficiently, you need to update its firmware regularly.
-
Firmware is the software that controls the functions and features of your router. It can improve the performance, stability and security of your device. It can also fix some bugs and errors that might affect your wireless network. Therefore, it is important to download and install the latest 3Com Wl-552 firmware on your router.
The first step to update your router's firmware is to download the latest version from the official website of 3Com. To do this, you need to follow these steps:
Select your product category (Wireless) and model (Wl-552) from the drop-down menus.
-
Click on the Downloads link and choose the firmware file that matches your router's model number and operating system.
-
Save the file to your computer and remember its location.
-
-
How to Install 3Com Wl-552 Firmware
-
The next step is to install the downloaded firmware file on your router. To do this, you need to follow these steps:
-
-
Connect your computer to your router using an Ethernet cable or a wireless connection.
-
Open a web browser and type in the default IP address of your router (usually 192.168.1.1) in the address bar.
-
Enter the default username (admin) and password (admin) of your router and click on Login.
-
Go to the System Tools section and click on Firmware Upgrade.
-
Browse for the firmware file that you downloaded earlier and click on Upload.
-
Wait for the firmware upgrade process to complete. Do not turn off or disconnect your router during this time.
-
When the upgrade is done, reboot your router and check if everything is working fine.
-
-
Conclusion
-
Updating your 3Com Wl-552 firmware is a simple and easy process that can enhance the performance and security of your wireless network. You just need to download the latest firmware file from the official website of 3Com and install it on your router using a web browser. By doing this, you can enjoy a fast and reliable wireless connection for your home or office.
-
Why You Should Update Your 3Com Wl-552 Firmware
-
Updating your 3Com Wl-552 firmware can bring you many benefits and advantages. Here are some of the reasons why you should update your router's firmware:
-
-
You can enjoy new features and functions that are added by the manufacturer.
-
You can fix some compatibility issues with other devices or software.
-
You can enhance the security and privacy of your wireless network.
-
You can improve the speed and stability of your wireless connection.
-
You can avoid potential problems and errors that might occur with outdated firmware.
-
-
How to Check Your 3Com Wl-552 Firmware Version
-
Before you update your 3Com Wl-552 firmware, you need to check the current version of your router's firmware. This way, you can see if there is a newer version available and if you need to update it. To check your firmware version, you need to follow these steps:
-
-
Connect your computer to your router using an Ethernet cable or a wireless connection.
-
Open a web browser and type in the default IP address of your router (usually 192.168.1.1) in the address bar.
-
Enter the default username (admin) and password (admin) of your router and click on Login.
-
Go to the Status section and click on Device Information.
-
Look for the Firmware Version field and note down the number.
-
Compare it with the latest firmware version on the official website of 3Com and see if there is a difference.
-
-
How to Troubleshoot 3Com Wl-552 Firmware Issues
-
Sometimes, you might encounter some problems or errors with your 3Com Wl-552 firmware. This can affect your wireless network performance and functionality. To troubleshoot these issues, you need to follow these steps:
-
-
Check if your router is powered on and connected to the internet.
-
Check if your router's firmware is up to date. If not, download and install the latest version from the official website of 3Com.
-
Check if your router's settings are correct and compatible with your wireless network. You can access the router's web interface and modify the settings as needed.
-
Check if your wireless devices are compatible with your router's firmware. You might need to update the drivers or software of your devices.
-
Check if there are any interferences or obstacles that might affect your wireless signal. You can try to change the location or position of your router or devices.
-
Reset your router to its factory default settings and reconfigure it. This can solve some firmware issues that might be caused by corrupted or misconfigured files.
-
-
How to Contact 3Com Wl-552 Firmware Support
-
If you still have any questions or issues with your 3Com Wl-552 firmware, you can contact the support team of 3Com for further assistance. You can use one of these methods:
-
-
-
Visit the official website of 3Com and go to the Support section. You can find FAQs, manuals, guides, videos and other resources that can help you with your firmware issues.
-
Call the toll-free number of 3Com and speak to a customer service representative. You can find the phone number on the website or on the product package.
-
Email the support team of 3Com and describe your problem in detail. You can find the email address on the website or on the product package.
-
Chat with a live agent of 3Com online and get instant help. You can find the chat option on the website or on the product package.
-
-
How to Backup and Restore Your 3Com Wl-552 Firmware Settings
-
Before you update your 3Com Wl-552 firmware, it is recommended that you backup your router's settings. This way, you can restore them in case something goes wrong with the firmware update or if you want to revert to the previous firmware version. To backup and restore your router's settings, you need to follow these steps:
-
-
Connect your computer to your router using an Ethernet cable or a wireless connection.
-
Open a web browser and type in the default IP address of your router (usually 192.168.1.1) in the address bar.
-
Enter the default username (admin) and password (admin) of your router and click on Login.
-
Go to the System Tools section and click on Backup & Restore.
-
Click on Backup Settings and save the file to your computer.
-
To restore your settings, click on Browse and select the backup file that you saved earlier.
-
Click on Restore Settings and wait for the process to complete.
-
-
How to Reset Your 3Com Wl-552 Firmware to Factory Defaults
-
If you encounter any problems or errors with your 3Com Wl-552 firmware that cannot be solved by troubleshooting or updating, you can try to reset your router to its factory default settings. This will erase all your customized settings and restore the original firmware settings. To reset your router to factory defaults, you need to follow these steps:
-
-
Locate the reset button on the back of your router. It is usually a small hole or a recessed button.
-
Use a paper clip or a similar object to press and hold the reset button for about 10 seconds.
-
Release the button and wait for the router to reboot.
-
Your router is now reset to factory defaults. You can use the default username (admin) and password (admin) to access the web interface and reconfigure your router as needed.
-
-
What are the Features and Benefits of 3Com Wl-552 Firmware
-
3Com Wl-552 firmware is the software that controls the functions and features of your 3Com Wl-552 router. It offers many features and benefits that can enhance your wireless network experience. Here are some of them:
-
-
It supports wireless standards such as IEEE 802.11b/g/n, which can provide fast and stable wireless connection for your devices.
-
It supports wireless security protocols such as WEP, WPA, WPA2, which can protect your wireless network from unauthorized access and attacks.
-
It supports wireless modes such as AP, Client, Bridge, Repeater, WDS, which can extend your wireless coverage and connectivity.
-
It supports wireless features such as QoS, WMM, WPS, which can improve your wireless performance and convenience.
-
It supports network features such as DHCP, NAT, Firewall, VPN, DDNS, UPnP, which can provide various network services and functions for your devices.
-
It supports management features such as Web UI, Telnet, SSH, SNMP, which can help you configure and monitor your router easily and remotely.
-
-
How to Optimize Your 3Com Wl-552 Firmware Performance
-
To optimize your 3Com Wl-552 firmware performance, you need to follow some tips and tricks that can help you improve your wireless network quality and speed. Here are some of them:
-
-
Place your router in a central and open location that can avoid interferences and obstacles.
-
Adjust the antenna direction and position of your router to get the best wireless signal.
-
Change the wireless channel of your router to avoid overlapping with other wireless networks.
-
Update your router's firmware regularly to get the latest features and fixes.
-
Use a wired connection for devices that require high bandwidth or low latency.
-
Reduce the number of devices that are connected to your wireless network.
-
-
Conclusion
-
3Com Wl-552 firmware is the software that controls the functions and features of your 3Com Wl-552 router. It can provide you with a fast and secure wireless network solution for your home or office. To make sure that your router works properly and efficiently, you need to download and install the latest 3Com Wl-552 firmware on your router. You also need to check, backup, restore, reset and optimize your router's settings and performance. By doing this, you can enjoy a reliable and smooth wireless network experience with your 3Com Wl-552 router.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Asesinas Nazis Las Brujas De Hitler Pdf 26.md b/spaces/terfces0erbo/CollegeProjectV2/Asesinas Nazis Las Brujas De Hitler Pdf 26.md
deleted file mode 100644
index f368fe7dd81ef84166e0e757f67b9a16d514f2a3..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Asesinas Nazis Las Brujas De Hitler Pdf 26.md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-
-Ethical principles for journal publications The publication of articles in the journal Reformasi: Jurnal Ilmiah Ilmu Sosial dan Politik, reviewed by colleagues, is ...one
-The publication of articles in the journal Reformasi: Jurnal Ilmiah Ilmu Sosial dan Politik, considered by colleagues, is a means by which scientists can discuss and share the results of their research with colleagues.
-While it is often considered a form of academic writing, the publication of articles in a journal has a broader meaning, as it can be used to enhance the academic image of scholars and facilitate the process of preparing and publishing papers. 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/CK2 DLC Activator Corepack LINK.md b/spaces/terfces0erbo/CollegeProjectV2/CK2 DLC Activator Corepack LINK.md
deleted file mode 100644
index 57dedc3cd6fd8ccc9b3ce3d053420f59e2ad82c8..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/CK2 DLC Activator Corepack LINK.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-
-Crusader Kings II PC Game Repack-Games.cc Crusader Kings II v3.3.2.0 DLC PC - Download Crusader Kings II Holy Fury-CODEX Repack-Games. cc Crusader Kings II v3.3.2.0 DLC PC - Download Crusader Kings II Holy Fury-CODEX Repack-Games.cc Crusader Kings II v3.3.2.0 DLC PC - Download Crusader Kings II Holy Fury-
-CODEX Repack-Games Crusader Kings 2 is a strategy game that combines politics, intrigue and greed.
-The game takes place on the lands of medieval Europe, in the midst of a civil war.
-The king is dead, long live 8a78ff9644
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/GTR 3 Torrent.md b/spaces/terfces0erbo/CollegeProjectV2/GTR 3 Torrent.md
deleted file mode 100644
index 378621d89c62879fb83f6c16918cb444e5174dca..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/GTR 3 Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-October 31, 2011 - This patch updates your installed HALion 4 to the current version of HALion 4.5.4! Mac OS X Windows. Mac OS X 10.6 10.7 10.8, Windows Vista. HALion 5 Release Notes - V1.07 .
-HALion 5 - This is a new version of the popular game editor, with a lot of new features (from patch 1.01 to 1.14) .
-HALion 5 Release Notes - V1.06 .
-HALion 5 - This is a new version of the popular game editor, with a lot of new features (from patch 1.01 to 1.14) . 8a78ff9644
-
-
-
diff --git a/spaces/thecho7/deepfake/training/transforms/albu.py b/spaces/thecho7/deepfake/training/transforms/albu.py
deleted file mode 100644
index 0922ce64030df6f651b19e350aa81bc2a92b9dae..0000000000000000000000000000000000000000
--- a/spaces/thecho7/deepfake/training/transforms/albu.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import random
-
-import cv2
-import numpy as np
-from albumentations import DualTransform, ImageOnlyTransform
-from albumentations.augmentations.crops.functional import crop
-#from albumentations.augmentations.functional import crop
-
-
-def isotropically_resize_image(img, size, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC):
- h, w = img.shape[:2]
- if max(w, h) == size:
- return img
- if w > h:
- scale = size / w
- h = h * scale
- w = size
- else:
- scale = size / h
- w = w * scale
- h = size
- interpolation = interpolation_up if scale > 1 else interpolation_down
- resized = cv2.resize(img, (int(w), int(h)), interpolation=interpolation)
- return resized
-
-
-class IsotropicResize(DualTransform):
- def __init__(self, max_side, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC,
- always_apply=False, p=1):
- super(IsotropicResize, self).__init__(always_apply, p)
- self.max_side = max_side
- self.interpolation_down = interpolation_down
- self.interpolation_up = interpolation_up
-
- def apply(self, img, interpolation_down=cv2.INTER_AREA, interpolation_up=cv2.INTER_CUBIC, **params):
- return isotropically_resize_image(img, size=self.max_side, interpolation_down=interpolation_down,
- interpolation_up=interpolation_up)
-
- def apply_to_mask(self, img, **params):
- return self.apply(img, interpolation_down=cv2.INTER_NEAREST, interpolation_up=cv2.INTER_NEAREST, **params)
-
- def get_transform_init_args_names(self):
- return ("max_side", "interpolation_down", "interpolation_up")
-
-
-class Resize4xAndBack(ImageOnlyTransform):
- def __init__(self, always_apply=False, p=0.5):
- super(Resize4xAndBack, self).__init__(always_apply, p)
-
- def apply(self, img, **params):
- h, w = img.shape[:2]
- scale = random.choice([2, 4])
- img = cv2.resize(img, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
- img = cv2.resize(img, (w, h),
- interpolation=random.choice([cv2.INTER_CUBIC, cv2.INTER_LINEAR, cv2.INTER_NEAREST]))
- return img
-
-
-class RandomSizedCropNonEmptyMaskIfExists(DualTransform):
-
- def __init__(self, min_max_height, w2h_ratio=[0.7, 1.3], always_apply=False, p=0.5):
- super(RandomSizedCropNonEmptyMaskIfExists, self).__init__(always_apply, p)
-
- self.min_max_height = min_max_height
- self.w2h_ratio = w2h_ratio
-
- def apply(self, img, x_min=0, x_max=0, y_min=0, y_max=0, **params):
- cropped = crop(img, x_min, y_min, x_max, y_max)
- return cropped
-
- @property
- def targets_as_params(self):
- return ["mask"]
-
- def get_params_dependent_on_targets(self, params):
- mask = params["mask"]
- mask_height, mask_width = mask.shape[:2]
- crop_height = int(mask_height * random.uniform(self.min_max_height[0], self.min_max_height[1]))
- w2h_ratio = random.uniform(*self.w2h_ratio)
- crop_width = min(int(crop_height * w2h_ratio), mask_width - 1)
- if mask.sum() == 0:
- x_min = random.randint(0, mask_width - crop_width + 1)
- y_min = random.randint(0, mask_height - crop_height + 1)
- else:
- mask = mask.sum(axis=-1) if mask.ndim == 3 else mask
- non_zero_yx = np.argwhere(mask)
- y, x = random.choice(non_zero_yx)
- x_min = x - random.randint(0, crop_width - 1)
- y_min = y - random.randint(0, crop_height - 1)
- x_min = np.clip(x_min, 0, mask_width - crop_width)
- y_min = np.clip(y_min, 0, mask_height - crop_height)
-
- x_max = x_min + crop_height
- y_max = y_min + crop_width
- y_max = min(mask_height, y_max)
- x_max = min(mask_width, x_max)
- return {"x_min": x_min, "x_max": x_max, "y_min": y_min, "y_max": y_max}
-
- def get_transform_init_args_names(self):
- return "min_max_height", "height", "width", "w2h_ratio"
diff --git a/spaces/themanas021/Image_Caption_Generation/README.md b/spaces/themanas021/Image_Caption_Generation/README.md
deleted file mode 100644
index 9b252f8c19b4ce2cc00ec84a50b1155478526ef7..0000000000000000000000000000000000000000
--- a/spaces/themanas021/Image_Caption_Generation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Image Caption Generation
-emoji: 📉
-colorFrom: yellow
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.25.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/!!TOP!! Style Korg Pa1X SET TALLAVA Free Downloadrar - Trello[3].md b/spaces/tialenAdioni/chat-gpt-api/logs/!!TOP!! Style Korg Pa1X SET TALLAVA Free Downloadrar - Trello[3].md
deleted file mode 100644
index 1e560af0d9e6b4fd9353f01e9414ef1d7f3cbac7..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/!!TOP!! Style Korg Pa1X SET TALLAVA Free Downloadrar - Trello[3].md
+++ /dev/null
@@ -1,143 +0,0 @@
-
-
Style Korg Pa1X SET TALLAVA Free Download.rar: What You Need to Know
-
If you are a fan of Balkan, Turkish, Greek, or Oriental music, and you own a Korg Pa1x keyboard, you might be interested in downloading Style Korg Pa1X SET TALLAVA. This is a free set of styles and sounds that will enhance your musical experience and make your keyboard sound like a professional instrument. In this article, we will explain what Style Korg Pa1X SET TALLAVA is, why you should download it, how to download it, what features it has, and how to use it. Read on to find out more.
-
Introduction
-
What is Style Korg Pa1X SET TALLAVA?
-
Style Korg Pa1X SET TALLAVA is a collection of styles and sounds for the Korg Pa1x keyboard. A style is a musical pattern that can be played automatically by the keyboard, while a sound is a musical instrument that can be played manually by the keyboard. Style Korg Pa1X SET TALLAVA contains hundreds of styles and sounds that are suitable for various genres of music, especially those from the Balkan region, Turkey, Greece, and the Middle East. The set was created by various musicians and programmers who shared their work online for free.
There are many reasons why you might want to download Style Korg Pa1X SET TALLAVA for your keyboard. Here are some of them:
-
-
You will get access to a wide range of styles and sounds that are not available in the original factory set of your keyboard.
-
You will be able to play songs from different cultures and traditions with authentic rhythms, melodies, and instruments.
-
You will be able to create your own songs and compositions using the styles and sounds as inspiration.
-
You will be able to impress your audience with your musical skills and versatility.
-
You will be able to enjoy playing your keyboard more with new and exciting sounds.
-
-
How to download Style Korg Pa1X SET TALLAVA?
-
Downloading Style Korg Pa1X SET TALLAVA is easy and free. All you need is a computer with an internet connection, a USB flash drive, and your keyboard. Here are the steps to follow:
-
-
Go to one of the websites that offer Style Korg Pa1X SET TALLAVA for free download. Some examples are korgpa-styles.blogspot.com, piyanistset.com, kit.co, or trello.com. You can also search for other websites using Google or any other search engine.
-
Select the set that you want to download. There are different versions of Style Korg Pa1X SET TALLAVA available online, each with different features and contents. You can choose the one that suits your preferences and needs.
-
Click on the download link or button and save the file on your computer. The file will be in .rar format, which is a compressed archive that contains multiple files inside.
-
Extract the files from the .rar archive using a software like WinRAR or 7-Zip. You will get a folder with several files inside, such as .set, .pcg, .stl, .mp3, etc.
-
Copy the folder onto your USB flash drive.
-
Insert the USB flash drive into your keyboard's USB port.
-
You are ready to use Style Korg Pa1X SET TALLAVA on your keyboard.
-
-
Features of Style Korg Pa1X SET TALLAVA
-
Balkan styles and sounds
-
One of the main features of Style Korg Pa1X SET TALLAVA is the Balkan styles and sounds. These are musical patterns and instruments that are typical of the music from countries like Serbia, Bosnia, Croatia, Macedonia, Albania, Romania, Bulgaria, etc. Some examples of Balkan styles are Tallava, Turbo Folk, Manele, Chalga, Gypsy Music, etc. Some examples of Balkan sounds are Accordion, Clarinet, Saxophone, Trumpet, Violin, Bouzouki, etc.
-
Turkish styles and sounds
-
Another feature of Style Korg Pa1X SET TALLAVA is the Turkish styles and sounds. These are musical patterns and instruments that are characteristic of the music from Turkey and its neighboring regions. Some examples of Turkish styles are Arabesk, Pop Folk, Oyun Havasi, Halay, Anatolian Rock, etc. Some examples of Turkish sounds are Oud, Saz, Kanun, Ney, Zurna, Darbuka, etc.
-
Style Korg Pa1X SET TALLAVA download link
-How to install Style Korg Pa1X SET TALLAVA on keyboard
-Style Korg Pa1X SET TALLAVA review and demo
-Style Korg Pa1X SET TALLAVA sound samples and presets
-Style Korg Pa1X SET TALLAVA compatible keyboards and models
-Style Korg Pa1X SET TALLAVA update and patch
-Style Korg Pa1X SET TALLAVA manual and instructions
-Style Korg Pa1X SET TALLAVA features and specifications
-Style Korg Pa1X SET TALLAVA price and availability
-Style Korg Pa1X SET TALLAVA comparison and alternatives
-Style Korg Pa1X SET TALLAVA forum and community
-Style Korg Pa1X SET TALLAVA tips and tricks
-Style Korg Pa1X SET TALLAVA troubleshooting and support
-Style Korg Pa1X SET TALLAVA warranty and guarantee
-Style Korg Pa1X SET TALLAVA feedback and testimonials
-Style Korg Pa1X SET TALLAVA history and origin
-Style Korg Pa1X SET TALLAVA genre and style
-Style Korg Pa1X SET TALLAVA best practices and recommendations
-Style Korg Pa1X SET TALLAVA pros and cons
-Style Korg Pa1X SET TALLAVA benefits and advantages
-Style Korg Pa1X SET TALLAVA drawbacks and disadvantages
-Style Korg Pa1X SET TALLAVA customization and personalization
-Style Korg Pa1X SET TALLAVA quality and performance
-Style Korg Pa1X SET TALLAVA popularity and demand
-Style Korg Pa1X SET TALLAVA ratings and rankings
-Style Korg Pa1X SET TALLAVA videos and tutorials
-Style Korg Pa1X SET TALLAVA blogs and articles
-Style Korg Pa1X SET TALLAVA podcasts and interviews
-Style Korg Pa1X SET TALLAVA courses and classes
-Style Korg Pa1X SET TALLAVA ebooks and guides
-Style Korg Pa1X SET TALLAVA webinars and workshops
-Style Korg Pa1X SET TALLAVA case studies and success stories
-Style Korg Pa1X SET TALLAVA challenges and contests
-Style Korg Pa1X SET TALLAVA coupons and discounts
-Style Korg Pa1X SET TALLAVA free trial and demo version
-Style Korg Pa1X SET TALLAVA affiliate program and commission
-Style Korg Pa1X SET TALLAVA resale rights and license
-Style Korg Pa1X SET TALLAVA bonus and gift
-Style Korg Pa1X SET TALLAVA software and tools
-Style Korg Pa1X SET TALLAVA accessories and add-ons
-Style Korg Pa1X SET TALLAVA music and songs
-Style Korg Pa1X SET TALLAVA artists and composers
-Style Korg Pa1X SET TALLAVA covers and remixes
-Style Korg Pa1X SET TALLAVA sheet music and tabs
-Style Korg Pa1X SET TALLAVA midi files and karaoke tracks
-Style Korg Pa1X SET TALLAVA samples and loops
-Style Korg Pa1X SET TALLAVA beats and rhythms
-
Greek styles and sounds
-
A third feature of Style Korg Pa1X SET TALLAVA is the Greek styles and sounds. These are musical patterns and instruments that are representative of the music from Greece and its islands. Some examples of Greek styles are Laiko, Rebetiko, Zeibekiko, Hasapiko, Sirtaki, etc. Some examples of Greek sounds are Bouzouki, Baglama, Clarino, Lauto, Santouri, etc.
-
Oriental styles and sounds
-
A fourth feature of Style Korg Pa1X SET TALLAVA is the Oriental styles and sounds. These are musical patterns and instruments that are common in the music from the Middle East, North Africa, and Central Asia. Some examples of Oriental styles are Raqs Sharqi, Khaliji, Dabke, Mawal, etc. Some examples of Oriental sounds are Qanun, Oud, Ney, Kamancheh, Tabla, etc.
-
Other styles and sounds
-
A fifth feature of Style Korg Pa1X SET TALLAVA is the other styles and sounds. These are musical patterns and instruments that are not specific to any region or genre but can be used for various purposes. Some examples of other styles are Pop Rock, Dance, Ballad, Jazz, etc. Some examples of other sounds are Piano, Guitar, Bass, Drums, Synth, Strings, etc.
-
How to use Style Korg Pa1X SET TALLAVA
-
How to load the set into your Korg Pa1x keyboard
-
To use Style Korg Pa1X SET TALLAVA on your keyboard, you need to load the set into its memory. This is how you do it:
-
-
Turn on your keyboard and wait for it to initialize.
-
Press the MEDIA button on the panel to enter the Media mode.
-
Select the USB device option on the display and press ENTER.
-
Navigate to the folder that contains Style Korg Pa1X SET TALLAVA using the VALUE dial and the PAGE buttons.
-
Select the .set file that you want to load and press ENTER.
-
A dialog box will appear asking you to confirm the loading process. Press ENTER again to start loading.
-
Wait for the loading process to finish. It may take a few minutes depending on the size of the set.
-
A message will appear on the display indicating that the loading is complete. Press EXIT to return to the main mode.
-
You have successfully loaded Style Korg Pa1X SET TALLAVA into your keyboard.
-
-
How to select and play the styles and sounds
-
To select and play the styles and sounds from Style Korg Pa1X SET TALLAVA on your keyboard, you need to follow these steps:
-
-
Press the STYLE PLAY button on the panel to enter the Style Play mode.
-
Select a style bank using the BANK buttons on the panel. Each bank contains 16 styles that are grouped by genre or category.
-
Select a style using the STYLE buttons on the panel. Each style has four variations (A, B, C, D) that you can switch using the VARIATION buttons on the panel.
-
Press the START/STOP button on the panel to start playing the style. You can control the tempo using the TEMPO buttons or dial on the panel.
-
Play a chord on the left side of the keyboard to trigger the accompaniment. You can change the chord recognition mode using the CHORD SCANNING button on the panel.
-
Select a sound for your right hand using the PROGRAM buttons on the panel. Each program has two sounds that you can layer or split using the LAYER/SPLIT button on the panel.
-
Play the melody or solo on the right side of the keyboard using the selected sound. You can add effects using the JOYSTICK or the ASSIGNABLE SWITCHES on the panel.
-
Use the other buttons and knobs on the panel to adjust the volume, balance, transpose, octave, etc. of the style and sound.
-
Press the START/STOP button again to stop playing the style.
-
You have successfully selected and played a style and sound from Style Korg Pa1X SET TALLAVA on your keyboard.
-
-
How to customize the styles and sounds
-
If you want to customize the styles and sounds from Style Korg Pa1X SET TALLAVA on your keyboard, you can use the following features:
-
-
The RECORDING mode: This mode allows you to record your own styles and sounds using the built-in sequencer. You can edit, arrange, and save your recordings as user styles and sounds.
-
The EDIT mode: This mode allows you to modify the parameters of the existing styles and sounds. You can change the tempo, key, instruments, effects, etc. of the styles and sounds.
-
The GLOBAL mode: This mode allows you to adjust the overall settings of your keyboard. You can change the tuning, scale, MIDI, display, etc. of your keyboard.
-
-
For more details on how to use these features, please refer to the user manual of your keyboard.
-
Conclusion
-
Summary of the main points
-
In this article, we have discussed what Style Korg Pa1X SET TALLAVA is, why you should download it, how to download it, what features it has, and how to use it. We have learned that Style Korg Pa1X SET TALLAVA is a free set of styles and sounds for the Korg Pa1x keyboard that covers various genres of music from the Balkan region, Turkey, Greece, and the Middle East. We have also learned that Style Korg Pa1X SET TALLAVA can enhance your musical experience and make your keyboard sound like a professional instrument. We have also learned how to load, select, play, and customize the styles and sounds from Style Korg Pa1X SET TALLAVA on your keyboard.
-
Call to action
-
If you are interested in downloading Style Korg Pa1X SET TALLAVA for your keyboard, you can visit one of the websites that offer it for free download. You can also search for other websites using Google or any other search engine. You can also check out other sets of styles and sounds that are available online for your keyboard. You can also create your own styles and sounds using the recording and editing features of your keyboard. Whatever you choose to do, we hope that you enjoy playing your keyboard with Style Korg Pa1X SET TALLAVA.
-
FAQs
-
Here are some frequently asked questions about Style Korg Pa1X SET TALLAVA:
-
-
Q: What is the size of Style Korg Pa1X SET TALLAVA?
-
A: The size of Style Korg Pa1X SET TALLAVA varies depending on the version and the website that you download it from. However, most versions are around 100 MB to 200 MB in .rar format.
-
Q: Do I need to backup my original factory set before loading Style Korg Pa1X SET TALLAVA?
-
A: It is recommended that you backup your original factory set before loading any new set into your keyboard. This way, you can restore your original settings if you encounter any problems or if you want to switch back to the factory set.
-
Q: Can I use Style Korg Pa1X SET TALLAVA on other models of Korg keyboards?
-
A: Style Korg Pa1X SET TALLAVA is designed for the Korg Pa1x keyboard. However, some versions may be compatible with other models of Korg keyboards, such as the Pa2x, Pa3x, Pa4x, etc. You can check the compatibility of the set with your keyboard model on the website that you download it from.
-
Q: Where can I find more information about Style Korg Pa1X SET TALLAVA?
-
A: You can find more information about Style Korg Pa1X SET TALLAVA on the websites that offer it for free download. You can also find more information on the user manual of your keyboard or on online forums and communities of Korg users.
-
Q: How can I contact the creators of Style Korg Pa1X SET TALLAVA?
-
A: You can contact the creators of Style Korg Pa1X SET TALLAVA by visiting their websites or social media pages. You can also contact them by email or phone if they provide their contact details. You can thank them for their work or give them feedback or suggestions.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Anka 2 Game Free Download Full Version Enjoy a Charming Point and Click Adventure.md b/spaces/tialenAdioni/chat-gpt-api/logs/Anka 2 Game Free Download Full Version Enjoy a Charming Point and Click Adventure.md
deleted file mode 100644
index 8ef0f73c78bed7034f12348b65c3d2b95dceeb84..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Anka 2 Game Free Download Full Version Enjoy a Charming Point and Click Adventure.md
+++ /dev/null
@@ -1,82 +0,0 @@
-
-
Anka 2 Game Free Download Full Version: A Review
-
If you are looking for a fun and relaxing game that will challenge your brain and entertain you at the same time, you should try Anka 2. Anka 2 is a point-and-click adventure game that follows the story of a young boy named Anka who sets out to find his missing parents. Along the way, he will encounter many mysteries, puzzles, and mini-games that will keep you hooked for hours. In this article, we will review the features of Anka 2 game, show you how to download it for free, and give you some tips and tricks for playing it.
Anka 2 game is not just a collection of puzzles and mini-games. It also has a rich and engaging story that will make you care about the characters and their fate. You will help Anka solve various problems, meet new friends, and discover secrets about his family and his world. The game has a lot of humor, emotion, and surprises that will keep you interested until the end.
-
Cute graphics and facial animations
-
Anka 2 game has a charming cartoon style that suits the tone and theme of the game. The graphics are colorful, detailed, and expressive. The characters are facially animated, which means they can show different emotions and reactions depending on the situation. The game also has a lot of voice acting that adds personality and realism to the dialogue.
-
23 varied mini-games and puzzles
-
Anka 2 game is not just a point-and-click adventure game. It also has a lot of mini-games and puzzles that will test your logic, memory, reflexes, and skills. You will encounter different types of challenges, such as brainteasers, riddles, games of skill, hidden object scenes, matching games, memory games, jigsaw puzzles, and more. The mini-games are integrated into the story and are relevant to the plot. They are also fun to play and replay.
-
Customization with your own photo
-
Anka 2 game has a unique feature that allows you to customize the game with your own photo. You can upload your photo or choose one from your computer or online sources. The game will then use your photo as the face of Anka or any other character in the game. This way, you can feel more connected to the game and have more fun playing it.
-
How to Download Anka 2 Game for Free
-
Step 1: Visit the official website of Anka 2 Game
-
The first step to download Anka 2 game for free is to visit the official website of the game. You can find it by searching for "Anka 2 game free download full version" on any search engine or by clicking on this link. The website will give you more information about the game, such as its features, screenshots, videos, reviews, system requirements, etc.
-
Step 2: Choose your preferred version and platform
-
The next step is to choose which version and platform of Anka 2 game you want to download. The game is available for Windows XP/XP Professional/Vista/7/8/10/11 as well as Mac OS X. You can also choose between a free trial version or a full version. The free trial version lets you play the first few levels of the game for free but has some limitations. The full version gives you access to all the levels, features, and mini-games of the game but costs $14.99. You can pay by credit card or PayPal.
-
Step 3: Click on the download button and follow the instructions
-
The final step is to click on the download button on the website and follow the instructions on how to install and run the game on your computer. The download process may take some time depending on your internet speed and connection. Once the download is complete , you can enjoy playing Anka 2 game for free or for a small fee.
-
Tips and Tricks for Playing Anka 2 Game
-
Use the hint button if you get stuck
-
If you ever get stuck in Anka 2 game or don't know what to do next, you can use the hint button at the bottom right corner of the screen. The hint button will show you where to go or what to do next in order to progress in the game. However, using the hint button will cost you some points from your score, so use it wisely.
-
Collect all the hidden objects in each scene
-
In each scene of Anka 2 game, there are some hidden objects that you can collect by clicking on them. These objects are not necessary for completing the scene but they will give you extra points and unlock some achievements. You can see how many hidden objects are left in each scene by looking at the top left corner of the screen.
-
anka game free download for pc
-anka adventure game full version
-anka point and click game download
-anka puzzle game free download
-anka 2 game download for windows
-anka 2 full version free download
-anka 2 adventure game download
-anka 2 point and click game free
-anka 2 puzzle game download for pc
-anka 2 game free download apk
-anka free download games and family & kids games from shockwave.com
-anka free apk android game free download apkcombo
-anka free game by ovogame
-anka free game enchanting adventure
-anka free game customize with your own photo
-anka free game 23 varied mini-games
-anka free trial download for pc freedownloadmanager
-anka free trial buy now just 6.99 play unlimited shockwave unlimited
-anka free trial game info & requirements
-anka free trial use your puzzle skills to bring anka's parents home
-anka full version download for pc doublegames.com
-anka full version published by ovogame
-anka full version windows xp/vista/7/8/10/11 compatible
-anka full version 1000 mhz processor 128 mb ram 64 mb video ram 25 mb free disc space directx 9.0c required
-anka full version recommended age 7+
-anka full version solve mysteries and restore peace in his life
-anka full version enjoy countless puzzles and mini-games in this enchanting adventure
-anka full version help the child named anka in solving various mysteries and reaching the goals that would change the world
-anka full version solve various puzzles unlock access to new levels overcome obstacles find different clues to further the narrative and provide more details
-anka full version aliases include "anka solo rimozione" "anka supprimer" "anka nur deinstallation"
-anka full version antivirus scan shows that this download is safe
-anka full version most frequently downloaded versions are 32.0 and 1.0 by the program users
-anka full version software lies within games more precisely puzzle
-anka full version most frequent installer filenames for the software include ANKA.exe drm_en.exe engine.exe game.exe and GameStart.exe etc.
-anka full version are you ready for an epic adventure a young child named anka needs your help life is good for him but all that is about to change forever
-
Replay the mini-games to improve your score
-
As mentioned before, Anka 2 game has 23 varied mini-games that you can play throughout the adventure. Each mini-game has its own rules, objectives, difficulty level, time limit, etc. You can replay any mini-game that you have unlocked by accessing them from the main menu or from within each scene. Replaying the mini-games will allow you to improve your score, beat your previous records, and earn more points.
-
Explore every location and interact with everything
-
Anka 2 game has many locations that you can visit and explore during your quest. Each location has its own atmosphere, characters, and secrets. You can interact with almost everything in each location by clicking on it. You may find clues, items, or surprises that will help you solve puzzles, advance in the story, or just have fun. Don't be afraid to experiment and try different things in each location.
-
Conclusion and FAQs
-
In conclusion, Anka 2 game is a great point-and-click adventure game that will appeal to anyone who likes puzzles, mini-games, and stories. The game has many features that make it enjoyable, such as enchanting adventure, cute graphics, facial animations, 23 varied mini-games, and customization with your own photo. The game is also easy to download and play for free or for a small fee. If you are looking for a fun and relaxing game that will challenge your brain and entertain you at the same time, you should try Anka 2 game today.
-
Here are some FAQs about Anka 2 game that may answer some of your questions:
-
-
Q: How long does it take to finish Anka 2 game?
-
A: It depends on how fast you solve puzzles and complete mini-games, but on average, it takes about 4-5 hours to finish the game.
-
Q: Is Anka 2 game suitable for children?
-
A: Yes, Anka 2 game is suitable for children of all ages. The game has no violence, gore, or inappropriate content. The game is also educational and family-friendly, as it teaches children logical thinking, problem-solving, and creativity.
-
Q: What are the differences between Anka 2 game and the original Anka game?
-
A: Anka 2 game is a sequel to the original Anka game, which was released in 2009. The sequel has improved graphics, sound, and gameplay. The sequel also has more levels, mini-games, and puzzles than the original. The sequel also continues the story of Anka and his parents, but it can be played as a standalone game without playing the original first.
-
Q: Can I play Anka 2 game offline?
-
A: Yes, you can play Anka 2 game offline once you have downloaded and installed it on your computer. You don't need an internet connection to play the game, unless you want to update it or access some online features.
-
Q: Where can I find more games like Anka 2 game?
-
A: If you like Anka 2 game, you may also like other games in the same genre, such as Hidden Object Games, Puzzle Games, Adventure Games, etc. You can find more games like Anka 2 game on various websites, such as GamesGoFree.com, Shockwave.com, iDownloadGames.com, HotGameLive.com, etc.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Crack AutoCAD MEP 2007 Activation The Best Way to Learn and Master AutoCAD MEP.md b/spaces/tialenAdioni/chat-gpt-api/logs/Crack AutoCAD MEP 2007 Activation The Best Way to Learn and Master AutoCAD MEP.md
deleted file mode 100644
index b04c4695e41b2af232022c373a68e5f159af8ee7..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Crack AutoCAD MEP 2007 Activation The Best Way to Learn and Master AutoCAD MEP.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-
Are you looking for a powerful software that can help you create, edit, convert, and manage raster images in your AutoCAD drawings? If so, you might be interested in AutoCAD Raster Design 2008, a software that can enhance your productivity and creativity with raster images. But before you can use this software, you need a keygen that can generate a valid activation code for you. In this article, we will show you what AutoCAD Raster Design 2008 is, what a keygen is, how to download and install AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275, how to use AutoCAD Raster Design 2008, what are the benefits of using it, and some FAQs that you might have.
AutoCAD Raster Design 2008 is a software that can help you work with raster images in your AutoCAD drawings. Raster images are images that are made up of pixels, such as photos, scanned drawings, maps, satellite images, etc. With AutoCAD Raster Design 2008, you can:
-
-
Create and edit raster images using various commands and tools
-
Convert raster images to vector objects using advanced algorithms
-
Insert and manage raster images in your AutoCAD drawings using image frames, clipping boundaries, transparency settings, etc.
-
Enhance your raster images with filters, color adjustments, contrast enhancements, etc.
-
Analyze your raster images with measurement tools, histograms, statistics, etc.
-
-
AutoCAD Raster Design 2008 is compatible with AutoCAD 2008 and other Autodesk products that support ObjectARX applications.
-
What is a keygen and why do you need it?
-
A keygen is a software that can generate serial numbers or activation codes for other software. You need a keygen because AutoCAD Raster Design 2008 is not a free software. You need to purchase a license from Autodesk or an authorized reseller to use it legally. However, if you don't want to spend money on buying a license, you can use a keygen that can generate a valid activation code for you.
-
However, using a keygen is not recommended because it may violate the terms of service of Autodesk or expose your computer to viruses or malware. Therefore, use a keygen at your own risk.
-
How to download and install AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275?
-
If you decide to use a keygen to activate AutoCAD Raster Design 2008, here are the steps that you need to follow:
-
Step 1: Download the keygen from a reliable source
How to download and install AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 free download full version
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 crack serial number
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 activation code generator
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 product key and license key
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 torrent download link
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 patch update
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 system requirements and compatibility
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 features and benefits
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 tutorial and user guide
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 review and feedback
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 alternative and comparison
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 troubleshooting and error fixing
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 support and customer service
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 discount and coupon code
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 vs Autodesk All Products Activator X86/x64 Key[^2^]
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 vs Autocad LT 2008 product key[^2^]
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 vs Autocad LT 2019 product key[^3^]
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 vs Autocad LT x86/x64/x32[^2^]
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 vs Autocad LT xforce keygen download[^2^]
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 for Windows XP/Vista/7/8/10
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 for Mac OS X/Linux
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 for Android/iOS
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 for web browser
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 online and offline mode
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 pros and cons
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 tips and tricks
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 best practices and recommendations
-AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275 FAQs and Q&A
-AutoCAD Raster Design
-
Step 2: Extract the keygen file using WinRAR or 7-Zip
-
The next step is to extract the keygen file using WinRAR or 7-Zip. These are free software that can help you unzip compressed files. You can download WinRAR from https://www.win-rar.com/download.html or 7-Zip from https://www.7-zip.org/download.html. After installing one of these software, right-click on the downloaded keygen file and choose Extract Here or Extract To.
-
Step 3: Run the keygen as administrator
-
You can ignore these alerts or disable your antivirus software temporarily while using the keygen. However, make sure to scan your computer for viruses or malware after using the keygen.
-
Step 4: Select AutoCAD Raster Design 2008 from the product list and click Generate
-
The fourth step is to select AutoCAD Raster Design 2008 from the product list and click Generate. You will see a keygen interface like this:
-
-
From the drop-down menu, choose AutoCAD Raster Design 2008 and click Generate. You will see a random activation code generated by the keygen. Copy this code and keep it for later use.
-
Step 5: Copy the generated activation code and paste it in the AutoCAD Raster Design 2008 activation window
-
The final step is to copy the generated activation code and paste it in the AutoCAD Raster Design 2008 activation window. To do this, you need to have AutoCAD Raster Design 2008 installed on your computer. If you don't have it, you can download it from https://www.autodesk.com/products/autocad-raster-design/overview or use a torrent site to download it.
-
After installing AutoCAD Raster Design 2008, run it and you will see an activation window like this:
-
-
Enter your name, organization, and serial number (you can use any random numbers) and click Next. You will see another window like this:
-
-
Choose I have an activation code from Autodesk and click Next. You will see another window like this:
-
-
Paste the activation code that you copied from the keygen in the empty boxes and click Next. You will see a confirmation message that your product has been activated successfully.
-
How to use AutoCAD Raster Design 2008?
-
Now that you have activated AutoCAD Raster Design 2008, you can start using it to work with raster images in your AutoCAD drawings. Here are some of the main functions and tools that you can use:
-
How to create and edit raster images?
-
To create and edit raster images, you can use the raster editing commands and tools that are available in the Raster menu or the Raster toolbar. Some of the common commands and tools are:
-
-
Raster Draw: This command allows you to draw lines, circles, rectangles, polygons, etc. on a raster image.
-
Raster Edit: This command allows you to modify existing raster entities, such as moving, rotating, scaling, cropping, etc.
-
Raster Clean: This command allows you to remove unwanted pixels or noise from a raster image.
-
Raster Touchup: This command allows you to apply various effects or filters to a raster image, such as blur, sharpen, emboss, etc.
-
Raster Color: This command allows you to adjust the color settings of a raster image, such as brightness, contrast, hue, saturation, etc.
-
-
How to convert raster images to vector objects?
-
To convert raster images to vector objects, you can use the raster-to-vector conversion commands and tools that are available in the Raster menu or the Raster toolbar. Some of the common commands and tools are:
-
-
Raster Entity: This command allows you to convert a raster entity (such as a line or a circle) to a vector object (such as a polyline or an arc).
-
Raster Vectorize: This command allows you to convert a raster image (such as a photo or a map) to a vector drawing (such as a contour or an outline).
-
Raster OCR: This command allows you to convert a raster image that contains text (such as a scanned document or a sign) to editable text objects.
-
Raster Recognize: This command allows you to convert a raster image that contains symbols (such as logos or icons) to predefined blocks.
-
-
How to insert and manage raster images in AutoCAD drawings?
-
To insert and manage raster images in your AutoCAD drawings, you can use the image insertion and management commands and tools that are available in the Image menu or the Image toolbar. Some of the common commands and tools are:
-
-
Image Attach: This command allows you to insert a raster image file (such as JPG, PNG, TIFF, etc.) into your current drawing.
-
Image Clip: This command allows you to define a clipping boundary for an inserted image.
-
Image Transparency: This command allows you to adjust the transparency level of an inserted image.
-
Image Frame: This command allows you to toggle the visibility of an image frame (a border around an inserted image).
-
Image Manager: This command allows you to view and modify the properties of all inserted images in your current drawing.
-
-
What are the benefits of using AutoCAD Raster Design 2008?
-
Using AutoCAD Raster Design 2008 can bring you many benefits, such as:
-
-
You can save time and money by using existing raster images instead of creating new vector drawings from scratch.
-
You can improve the quality and accuracy of your drawings by editing and enhancing your raster images with various commands and tools.
-
You can increase your productivity and creativity by converting your raster images to vector objects with advanced algorithms.
-
You can integrate your raster images with your AutoCAD drawings seamlessly by inserting and managing them with ease.
-
-
Conclusion
-
In conclusion, AutoCAD Raster Design 2008 is a software that can help you work with raster images in your AutoCAD drawings. It can help you create, edit, convert, and manage raster images with various commands and tools. However, before you can use this software, you need a keygen that can generate a valid activation code for you. In this article, we showed you what AutoCAD Raster Design 2008 is, what a keygen is, how to download and install AutoCAD Raster Design 2008 Keygen X-force V1.0.5 275, how to use AutoCAD Raster Design 2008, what are the benefits of using it, and some FAQs that you might have. We hope that this article was helpful for you. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions and answers about AutoCAD Raster Design 2008:
-
-
Q: Is AutoCAD Raster Design 2008 compatible with Windows 10?
-
A: Yes, AutoCAD Raster Design 2008 is compatible with Windows 10. However, you may need to run it in compatibility mode or install some updates or patches to make it work properly.
-
Q: Is AutoCAD Raster Design 2008 compatible with other Autodesk products?
-
A: Yes, AutoCAD Raster Design 2008 is compatible with other Autodesk products that support ObjectARX applications, such as AutoCAD Civil 3D, AutoCAD Map 3D, AutoCAD Architecture, etc.
-
Q: How can I get technical support for AutoCAD Raster Design 2008?
Q: How can I learn more about working with raster images in AutoCAD?
-/2019/ENU/AutoCAD-Core/files/GUID-5B0E6F7E-1E9A-4F0C-8F5D-3C1B2D0B7C6A-htm.html">https://knowledge.autodesk.com/support/autocad/getting-started/caas/CloudHelp/cloudhelp/2019/ENU/AutoCAD-Core/files/GUID-5B0E6F7E-1E9A-4F0C-8F5D-3C1B2D0B7C6A-htm.html, where you can find topics, videos, tips, etc.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dmx Dog Bark Mp3 [UPD] Download.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dmx Dog Bark Mp3 [UPD] Download.md
deleted file mode 100644
index 7dc1405d30b303eb13c1240c6be65abeb0795770..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Dmx Dog Bark Mp3 [UPD] Download.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-
How to Download DMX Dog Barking MP3 for Free
-
If you are a fan of the late rapper DMX, you might be interested in downloading his signature dog barking sound effect as an MP3 file. DMX, whose real name was Earl Simmons, was known for his distinctive vocal delivery and his frequent use of dog barking noises in his songs. He passed away on April 9, 2021, after suffering a heart attack.
-
In this article, we will show you how to download DMX dog barking MP3 for free from a reliable source. You can use this sound effect as a ringtone, an alarm, or a prank on your friends. Here are the steps you need to follow:
Scroll down and find the sound effect that matches DMX's dog barking style. You can preview the sound by clicking on the play button.
-
Once you find the sound you like, click on the download button. You will need to create a free account or log in with your existing account to access the download link.
-
Choose the MP3 format and click on the download button again. The file will be saved to your device.
-
Enjoy your DMX dog barking MP3 and pay tribute to the legendary rapper.
-
-
We hope this article was helpful and informative. If you have any questions or feedback, please leave a comment below. And don't forget to share this article with your fellow DMX fans.
-
-
DMX was one of the most influential and successful rappers of the late 1990s and early 2000s. He sold over 74 million records worldwide and had five consecutive albums debut at number one on the Billboard 200 chart. He was also known for his acting roles in films such as Belly, Romeo Must Die, Exit Wounds, and Cradle 2 the Grave.
-
DMX's dog barking sound effect was a trademark of his music and persona. He often used it to express his aggression, loyalty, and authenticity. He also claimed to have a strong connection with dogs, saying that they were his only friends growing up. He even named some of his albums after dogs, such as Flesh of My Flesh, Blood of My Blood and Year of the Dog... Again.
-
By downloading DMX dog barking MP3, you can honor his legacy and celebrate his music. You can also use it to spice up your own songs or videos, as long as you give credit to DMX and respect his copyrights. Remember to always keep DMX in your prayers and in your playlists.
-
-
If you want to learn more about DMX and his dog barking sound effect, you can check out some of his interviews and documentaries. For example, you can watch his appearance on The Breakfast Club, where he explained the meaning behind his barks and how he developed them. You can also watch his episode of BET's Ruff Ryders Chronicles, where he talked about his life story and his relationship with dogs.
-
Another way to appreciate DMX and his dog barking sound effect is to listen to some of his classic songs and albums. Some of his most popular songs that feature his barks are Ruff Ryders' Anthem, Party Up (Up in Here), X Gon' Give It to Ya, Where the Hood At, and Get at Me Dog. Some of his best albums that showcase his barks are It's Dark and Hell Is Hot, And Then There Was X, The Great Depression, and Grand Champ.
-
DMX's dog barking sound effect is not only a part of his music, but also a part of hip-hop culture and history. Many other rappers and artists have paid homage to DMX and his barks, such as Eminem, Jay-Z, Kanye West, Lil Wayne, Snoop Dogg, and Drake. DMX's barks have also been sampled and used in various movies, TV shows, video games, and commercials.
- e93f5a0c3f
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Movie Kung Fu Panda 3 English In Hindi Hd Watch the Epic Adventure of Po and His Friends.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Movie Kung Fu Panda 3 English In Hindi Hd Watch the Epic Adventure of Po and His Friends.md
deleted file mode 100644
index e91dc3db54f3f9ad4ebb4842ac0c110fc4b6daba..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Movie Kung Fu Panda 3 English In Hindi Hd Watch the Epic Adventure of Po and His Friends.md
+++ /dev/null
@@ -1,102 +0,0 @@
-
-
Download Movie Kung Fu Panda 3 English In Hindi Hd
-
If you are a fan of animated movies, you must have heard of the Kung Fu Panda franchise. It is one of the most successful and popular series of movies produced by DreamWorks Animation. The third installment, Kung Fu Panda 3, was released in 2016 and received positive reviews from critics and audiences alike. In this article, we will tell you everything you need to know about this movie, why you should watch it in Hindi, and how to download it in HD quality.
-
Download Movie Kung Fu Panda 3 English In Hindi Hd
Kung Fu Panda 3 is an animated comedy-adventure film that follows the adventures of Po, a lovable panda who is the Dragon Warrior and the leader of the Furious Five, a group of kung fu masters. In this movie, Po reunites with his long-lost biological father, Li Shan, and travels to a secret panda village where he meets many other pandas. However, he also faces a new threat from Kai, a powerful spirit warrior who has returned from the spirit realm and is stealing the chi (life force) of other kung fu masters. Po must learn how to master his own chi and teach kung fu to his fellow pandas in order to stop Kai and save the world.
-
Why should you watch it in Hindi?
-
Kung Fu Panda 3 is a movie that can be enjoyed by people of all ages and cultures. However, if you are an Indian viewer, you might find it more entertaining and relatable to watch it in Hindi. Why? Because the Hindi dubbing of this movie is done by some of the most famous and talented Bollywood actors and actresses. For example, Po's voice is given by none other than Jackie Chan, who is also one of the producers of the movie. Li Shan's voice is given by Amitabh Bachchan, who is widely regarded as one of the greatest actors of Indian cinema. Kai's voice is given by J.K. Simmons, who won an Oscar for his performance in Whiplash. Other notable voice actors include Ranbir Kapoor as Bao, Vidya Balan as Mei Mei, Anil Kapoor as Shifu, and Rajpal Yadav as Dim.
-
By watching this movie in Hindi, you will not only enjoy the amazing performances of these actors but also appreciate the cultural references and jokes that are added to suit the Indian audience. For example, Po calls Li Shan "Baba" instead of "Dad", Kai calls himself "Kailash" instead of "Kai", and Po teaches his panda friends how to dance to Bollywood songs. These are just some of the examples of how the Hindi dubbing enhances the fun and charm of this movie.
-
How to download it in HD quality?
-
If you want to watch this movie at your own convenience and comfort, you might want to download it in HD quality. However, downloading movies from unauthorized sources can be illegal and risky. You might end up with viruses or malware on your device or face legal consequences for violating copyright laws. Therefore, we recommend that you use a safe and legal way to download this movie in HD quality.
-
One such way is to use a streaming service that offers this movie for download. For example, you can use Netflix, Amazon Prime Video, or Disney+ Hotstar to watch this movie online or offline. All you need is a subscription to one of these services and a compatible device such as a smartphone, tablet, laptop, or smart TV. You can then search for this movie on the app or website and click on the download button to save it on your device. You can choose the quality of the download according to your preference and internet speed.
-
Kung Fu Panda 3 Hindi Dubbed Movie Download Hd
-Watch Kung Fu Panda 3 Online Free In Hindi Hd
-Kung Fu Panda 3 Full Movie In Hindi Hd 1080p Download
-How To Download Kung Fu Panda 3 In Hindi Hd
-Kung Fu Panda 3 English Subtitles Download In Hindi Hd
-Kung Fu Panda 3 Dual Audio Hindi English Download Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd 720p
-Kung Fu Panda 3 Hindi Hd Torrent Download
-Kung Fu Panda 3 Bluray Download In Hindi Hd
-Kung Fu Panda 3 Hindi Dubbed Hd Mp4 Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Free Download
-Kung Fu Panda 3 Hindi Audio Track Download Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Filmywap
-Kung Fu Panda 3 Hindi Dubbed Download Hd Quality
-Kung Fu Panda 3 Full Movie In Hindi Hd Youtube Download
-Kung Fu Panda 3 English Movie Download In Hindi Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Worldfree4u
-Kung Fu Panda 3 Hindi Dubbed Hd Avi Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Dailymotion Download
-Kung Fu Panda 3 English To Hindi Dubbed Download Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Pagalworld
-Kung Fu Panda 3 Hindi Dubbed Hd Mkv Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Google Drive Download
-Kung Fu Panda 3 English Version Download In Hindi Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Moviescounter
-Kung Fu Panda 3 Hindi Dubbed Hd Mobile Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Netflix Download
-Kung Fu Panda 3 English To Hindi Translation Download Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Coolmoviez
-Kung Fu Panda 3 Hindi Dubbed Hd Online Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Amazon Prime Download
-Kung Fu Panda 3 English Subtitles Srt File Download In Hindi Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Bolly4u
-Kung Fu Panda 3 Hindi Dubbed Hd Direct Download Link
-Kung Fu Panda 3 Full Movie In Hindi Hd Disney Plus Hotstar Download
-Kung Fu Panda 3 English To Hindi Converter Download Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Tamilrockers
-Kung Fu Panda 3 Hindi Dubbed Hd Stream Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Telegram Channel Download
-Kung Fu Panda 3 English Audio Track Download In Hindi Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Extramovies
-Kung Fu Panda 3 Hindi Dubbed Hd Magnet Link Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Facebook Video Downloader
-Kung Fu Panda 3 English To Hindi Voice Over Download Hd
-Kung Fu Panda 3 Movie Download In Hindi Hd Skymovies
-Kung Fu Panda 3 Hindi Dubbed Hd Zip File Download
-Kung Fu Panda 3 Full Movie In Hindi Hd Instagram Reels Downloader
-Kung Fu Panda 3 English To Hindi Dictionary Pdf Download
-
Another way is to use a DVD or Blu-ray disc that contains this movie in HD quality. You can buy or rent this disc from a store or online platform and play it on your DVD or Blu-ray player. You can also connect your player to your TV or monitor for a better viewing experience.
-
Review of Kung Fu Panda 3
-
The plot
-
The plot of Kung Fu Panda 3 is engaging and exciting. It has a good balance of action, comedy, drama, and emotion. It explores Po's identity crisis as he struggles to accept his dual heritage as a panda and a dragon warrior. It also shows his growth as a leader and a teacher as he learns how to inspire and empower others. It also introduces new characters and settings that add more depth and diversity to the story. The villain Kai is also a formidable antagonist who poses a serious challenge to Po and his friends.
-
The characters
-
The characters of Kung Fu Panda 3 are well-developed and likable. Po is still his adorable and hilarious self but also shows more maturity and responsibility. Li Shan is a caring and supportive father who wants to reconnect with his son and share his culture with him. Kai is a ruthless and cunning enemy who has a personal history with Po's mentor Oogway. The other pandas are also fun and colorful characters who have their own personalities and talents.
-
The animation
-
The animation of Kung Fu Panda 3 is stunning and beautiful. It uses a combination of computer-generated imagery (CGI) and traditional hand-drawn animation to create a rich and vibrant visual style. The colors are bright and vivid, the movements are smooth and fluid, and the details are realistic and impressive. The scenes in the spirit realm are especially breathtaking as they showcase a different aesthetic from the physical world.
-
The humor
-
The humor of Kung Fu Panda 3 is witty and hilarious. It uses various types of comedy such as slapstick, wordplay, sarcasm, parody, satire, irony, exaggeration, etc., to make the audience laugh out loud. It also uses references from pop culture such as Star Wars, The Matrix, Harry Potter etc., to add more humor and appeal to different generations.
-
The message
-
The message of Kung Fu Panda 3 is inspiring and uplifting. It teaches us about the importance of family, friendship, identity, self-confidence and harmony. It shows us that we can overcome our fears and doubts by embracing our true selves and our roots . It also shows us that we can achieve our goals and dreams by working together and helping each other . It also shows us that we can find happiness and peace by living in balance and harmony with ourselves and our surroundings .
-
Comparison with previous movies
-
Kung Fu Panda 1
-
Kung Fu Panda 1 was released in 2008 and was the first movie in the franchise. It introduced us to Po and his journey from being a noodle shop worker to becoming the Dragon Warrior. It also introduced us to Shifu and the Furious Five, who were initially skeptical but eventually accepted Po as their friend and teammate. The villain was Tai Lung, a former student of Shifu who turned evil after being denied the Dragon Scroll. The movie was praised for its animation, humor, action, and heart.
-
Kung Fu Panda 2
-
Kung Fu Panda 2 was released in 2011 and was the second movie in the franchise. It continued Po's story as he faced a new threat from Lord Shen, a peacock who wanted to conquer China with his weapon of fireworks. It also revealed Po's past as he learned that he was adopted by Mr.Ping, a goose who runs a noodle shop. He also learned that his biological parents and his panda village were attacked and destroyed by Shen. The movie was praised for its animation, humor, action, emotion, and depth.
-
Kung Fu Panda 3
-
Kung Fu Panda 3 was released in 2016 and was the third movie in the franchise. It concluded Po's story as he met his biological father and traveled to his panda village. He also faced a new enemy from Kai who wanted to steal the chi of all living beings. He also learned how to master his chi and teach kung fu to his panda friends. The movie was praised for its animation, humor, action, emotion, and message.
-
Conclusion
-
Summary of the main points
-
Kung Fu Panda 3 is an amazing movie that you should not miss. It is a fun and entertaining movie that has a great plot, characters, animation, humor, and message. It is also a movie that you should watch in Hindi because it has a brilliant voice cast of Bollywood stars who add more flavor and charm to the movie. It is also a movie that you can download in HD quality from legal and safe sources such as streaming services or DVDs.
-
Recommendation to watch the movie
-
If you are looking for a movie that will make you laugh, cry, and cheer, then Kung Fu Panda 3 is the perfect choice for you. It is a movie that will appeal to people of all ages and backgrounds. It is a movie that will inspire you to be yourself and to follow your dreams. It is a movie that will teach you about the value of family, friendship, identity, self-confidence, and harmony. It is a movie that will make you happy and peaceful.
-
FAQs
-
Here are some frequently asked questions about Kung Fu Panda 3:
-
-
Q: When was Kung Fu Panda 3 released?
-
A: Kung Fu Panda 3 was released on January 29, 2016 in the United States and on March 11, 2016 in India.
-
Q: Who directed Kung Fu Panda 3?
-
A: Kung Fu Panda 3 was directed by Jennifer Yuh Nelson and Alessandro Carloni.
-
Q: How much did Kung Fu Panda 3 cost to make?
-
A: Kung Fu Panda 3 had a budget of $145 million.
-
Q: How much did Kung Fu Panda 3 earn at the box office?
-
A: Kung Fu Panda 3 earned $521.2 million worldwide.
-
Q: Is Kung Fu Panda 3 the last movie in the franchise?
-
A: No, Kung Fu Panda 3 is not the last movie in the franchise. According to the creators, there are plans for more movies in the future.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Downloadmastercamx5fullcrack32bit Tips and Tricks for Using Mastercam X5 Effectively.md b/spaces/tialenAdioni/chat-gpt-api/logs/Downloadmastercamx5fullcrack32bit Tips and Tricks for Using Mastercam X5 Effectively.md
deleted file mode 100644
index cc34d419129587d4e45fb3d94304a72f58063e34..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Downloadmastercamx5fullcrack32bit Tips and Tricks for Using Mastercam X5 Effectively.md
+++ /dev/null
@@ -1,139 +0,0 @@
-
-
Download Mastercam X5 Full Crack 32 Bit - A Powerful CAD/CAM Software for CNC Machining
-
If you are looking for a powerful CAD/CAM software that can help you design parts and create complete machining operations, then you might want to check out Mastercam X5. This is a full-featured application that offers extensive modeling capabilities and a solid set of machining strategies. In this article, we will show you how to download and install Mastercam X5 full crack 32 bit, as well as how to use it for CNC machining.
-
What is Mastercam X5?
-
Mastercam X5 is a software product developed by CNC Software, Inc., a company that specializes in providing computer-aided manufacturing (CAM) solutions for various industries. Mastercam X5 is one of the most popular CAM software in the world, with over 250,000 installations in more than 75 countries. It is used by professionals and hobbyists alike to design parts and create complete machining operations for CNC machines.
Mastercam X5 offers many features and benefits that make it a powerful and versatile CAD/CAM software. Some of them are:
-
-
It comes with an easy-to-use and intuitive interface that offers smart features to support most complicated tasks.
-
It offers industry-standard services to solve the world’s manufacturing challenges.
-
It brings expanded machining flexibility and an increased emphasis on speed and automation.
-
It performs 3D design and basic 3D machining tasks in a dedicated environment for drawing, viewing, and customizing objects with several visualization modes.
-
It includes multiple processing patterns along with a library of templates for easier modeling.
-
It constantly adjusts the toolpath to ensure the most efficient cut and enable use of the entire tool flute length.
-
It lets you choose the basic type of work you are doing using illustrations, and then provides step-by-step processes for defining how to cut the part.
-
-
System requirements and compatibility of Mastercam X5
-
Before you download and install Mastercam X5 full crack 32 bit, you need to make sure that your system meets the minimum requirements and is compatible with the software. Here are the system requirements and compatibility of Mastercam X5:
-
-
-
Operating System
-
Windows XP/Vista/7/8/10 (32-bit only)
-
-
-
CPU
-
Intel Pentium 4 or higher
-
-
-
RAM
-
2 GB or more
-
-
-
HDD
-
1 GB or more free space
-
-
-
Graphics Card
-
NVIDIA or ATI with OpenGL support
-
-
-
Monitor Resolution
-
1024 x 768 or higher
-
-
-
Internet Connection
-
Required for activation and updates
-
-
-
How to download and install Mastercam X5 full crack 32 bit?
-
If you want to download and install Mastercam X5 full crack 32 bit, you need to follow these steps:
-
Step 1: Download the setup file and crack file from the link below
-
The first step is to download the setup file and crack file from the link below. The setup file is about 914 MB in size, while the crack file is about 6 MB in size. You can use any browser or download manager to download these files.
download mastercam x5 full version 32 bit
-mastercam x5 crack free download for windows 7 32 bit
-how to install mastercam x5 on 32 bit system
-mastercam x5 32 bit download with license file
-mastercam x5 full crack iso download 32 bit
-mastercam x5 torrent download 32 bit
-mastercam x5 activation code generator 32 bit
-mastercam x5 patch download 32 bit
-mastercam x5 keygen download 32 bit
-mastercam x5 serial number crack 32 bit
-mastercam x5 software free download for pc 32 bit
-mastercam x5 tutorial pdf download 32 bit
-mastercam x5 system requirements 32 bit
-mastercam x5 update download 32 bit
-mastercam x5 training videos download 32 bit
-mastercam x5 hasp emulator 32 bit
-mastercam x5 windows 10 compatibility 32 bit
-mastercam x5 solidworks integration 32 bit
-mastercam x5 lathe tutorial download 32 bit
-mastercam x5 mill tutorial download 32 bit
-mastercam x5 router tutorial download 32 bit
-mastercam x5 wire tutorial download 32 bit
-mastercam x5 art tutorial download 32 bit
-mastercam x5 design tutorial download 32 bit
-mastercam x5 surface tutorial download 32 bit
-mastercam x5 multiaxis tutorial download 32 bit
-mastercam x5 toolpath tutorial download 32 bit
-mastercam x5 simulation tutorial download 32 bit
-mastercam x5 post processor download 32 bit
-mastercam x5 machine definition download 32 bit
-mastercam x5 tool library download 32 bit
-mastercam x5 material library download 32 bit
-mastercam x5 font library download 32 bit
-mastercam x5 geometry library download 32 bit
-mastercam x5 operation library download 32 bit
-mastercam x5 parameter library download 32 bit
-mastercam x5 macro library download 32 bit
-mastercam x5 custom commands download 32 bit
-mastercam x5 user interface customization 32 bit
-mastercam x5 keyboard shortcuts pdf download 32 bit
-mastercam x5 tips and tricks pdf download 32 bit
-mastercam x5 troubleshooting guide pdf download 32 bit
-mastercam x5 reference manual pdf download 32 bit
-mastercam x5 installation guide pdf download 32 bit
-mastercam x5 what's new pdf download 32 bit
-mastercam x5 release notes pdf download 32 bit
-mastercam x5 product brochure pdf download 32 bit
-mastercam x5 customer testimonials pdf download 32 bit
-mastercam x5 frequently asked questions pdf download 32 bit
-mastercam x5 contact support email phone number website
-
Step 2: Extract the files using WinRAR or 7-Zip
-
The next step is to extract the files using WinRAR or 7-Zip. You can use any other extraction software as well, but these two are recommended for their reliability and ease of use. To extract the files, right-click on each file and select "Extract here" or "Extract to folder". You will get two folders named "Mastercam_X5_14.0.4.33" and "MasterCamX5_Crack".
-
Step 3: Install Mastercam X5 by running the setup file
-
The third step is to install Mastercam X5 by running the setup file. To do this, open the folder "Mastercam_X5_14.0.4.33" and double-click on the file "Setup.exe". This will launch the installation wizard that will guide you through the process. You can choose your preferred language, accept the license agreement, select your destination folder, choose your components, and start the installation. The installation may take some time depending on your system speed.
-
Step 4: Copy the crack file to the installation folder
-
The fourth step is to copy the crack file to the installation folder. To do this, open the folder "MasterCamX5_Crack" and copy the file "mastercam.exe". Then, go to your installation folder (usually C:\Program Files\mcamx) and paste it there. You will be asked to replace or overwrite an existing file. Click "Yes" or "Replace" to confirm.
-
Step 5: Run Mastercam X5 and enjoy the full version
-
The final step is to run Mastercam X5 and enjoy the full version. To do this, go to your installation folder (usually C:\Program Files\mcamx) and double-click on the file "mastercam.exe". This will launch Mastercam X5 with all its features unlocked. You can now use it for designing parts and creating complete machining operations for CNC machines.
-
How to use Mastercam X5 for CNC machining?
-
If you want to use Mastercam X5 for CNC machining, you need to follow these steps:
-
Choose the basic type of work you are doing using illustrations
-
The first step is to choose the basic type of work you are doing using illustrations. To do this, open Mastercam X5 and click on "File" > "New". This will open a dialog box where you can select your machine type (mill, lathe, router, etc.), your units (inch or metric), your plane (top, front, right), your stock size (width, length, height), your origin (center or corner), your material (steel, aluminum, etc.), your tool library (default or custom), your post processor (default or custom), etc. You can also see some illustrations that show you what each option means. Click on "OK" when you are done.
-
Define how to cut the part using various machining strategies
-
Simulate and verify the toolpath before sending it to the machine
-
The last step is to simulate and verify the toolpath before sending it to the machine. To do this, click on "Verify" > "Verify Selected Operations". This will open a window where you can see a 3D simulation of your part and your toolpath. You can use various controls to zoom, rotate, pan, play, pause, rewind, fast forward, etc. You can also see some statistics such as cut time, distance, volume, etc. You can also change the display options such as colors, transparency, shading, etc. You can also save the simulation as a video file or a picture file. Click on "Close" when you are done.
-
Conclusion
-
In conclusion, Mastercam X5 is a powerful CAD/CAM software that can help you design parts and create complete machining operations for CNC machines. It offers many features and benefits that make it a versatile and efficient software. It also has an easy-to-use and intuitive interface that offers smart features to support most complicated tasks. In this article, we showed you how to download and install Mastercam X5 full crack 32 bit, as well as how to use it for CNC machining. We hope you found this article helpful and informative.
-
FAQs
-
Here are some frequently asked questions about Mastercam X5:
-
-
Q: Is Mastercam X5 compatible with Windows 10?
-
A: Yes, Mastercam X5 is compatible with Windows 10, as well as Windows XP/Vista/7/8 (32-bit only).
-
Q: How can I update Mastercam X5 to the latest version?
-
A: You can update Mastercam X5 to the latest version by clicking on "Help" > "Check for Updates". This will open a web page where you can download and install the latest updates for Mastercam X5.
-
Q: How can I get support for Mastercam X5?
-
A: You can get support for Mastercam X5 by clicking on "Help" > "Mastercam Help". This will open a web page where you can access various resources such as tutorials, manuals, forums, videos, etc. You can also contact CNC Software, Inc. by phone or email.
-
Q: How can I uninstall Mastercam X5 from my computer?
-
A: You can uninstall Mastercam X5 from your computer by clicking on "Start" > "Control Panel" > "Programs and Features". Then, select "Mastercam X5" and click on "Uninstall". Follow the instructions to complete the uninstallation process.
-
Q: How can I learn more about Mastercam X5?
-
A: You can learn more about Mastercam X5 by visiting the official website of CNC Software, Inc. at https://www.mastercam.com/. There you can find more information about the product, the company, the community, etc.
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Ghc Generador De Horarios Full V Crea horarios ptimos para tu centro educativo con este software profesional.md b/spaces/tialenAdioni/chat-gpt-api/logs/Ghc Generador De Horarios Full V Crea horarios ptimos para tu centro educativo con este software profesional.md
deleted file mode 100644
index c12e1c556372656f317138ac31434583ee4939fb..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Ghc Generador De Horarios Full V Crea horarios ptimos para tu centro educativo con este software profesional.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-
Ghc Generador De Horarios Full V: The Ultimate Tool for Creating Timetables
-
If you are looking for a software that can help you create and manage timetables for schools, colleges, universities, or any other institution, you should check out Ghc Generador De Horarios Full V. This is a powerful and versatile tool that can handle any type of timetable, from simple to complex, with ease and efficiency.
-
Ghc Generador De Horarios Full V is more than just a timetable generator. It is also a timetable manager that allows you to edit, modify, print, export, and share your timetables with others. You can also use it to assign teachers, classrooms, subjects, groups, and other resources to your timetables. You can even import data from Excel or other sources to save time and avoid errors.
One of the best features of Ghc Generador De Horarios Full V is its ability to optimize your timetables according to your preferences and criteria. You can set your own rules and constraints for your timetables, such as the number of hours per day or week, the breaks between classes, the availability of teachers or rooms, the compatibility of groups or subjects, and more. Ghc Generador De Horarios Full V will then generate the best possible timetable that meets your requirements and solves any conflicts or problems.
-
Ghc Generador De Horarios Full V is also a user-friendly and intuitive software that has a simple and clear interface. You can easily navigate through its menus and options and access its features with a few clicks. You can also customize its appearance and language according to your preferences. Ghc Generador De Horarios Full V supports multiple languages, including English, Spanish, French, German, Italian, Portuguese, and more.
-
If you want to try Ghc Generador De Horarios Full V for yourself, you can download it from its official website and enjoy a free trial period of 30 days. You can also watch some tutorials and videos on how to use it and get the most out of it. If you decide to buy it, you can choose from different plans and prices depending on your needs and budget.
-
Ghc Generador De Horarios Full V is the ultimate tool for creating timetables that will save you time, money, and hassle. Whether you are a teacher, a student, an administrator, or anyone else who needs to create or manage timetables, you should give Ghc Generador De Horarios Full V a try. You will not regret it!
-
-
What are the benefits of using Ghc Generador De Horarios Full V?
-
Using Ghc Generador De Horarios Full V has many benefits for both teachers and students. Here are some of them:
-
-
It saves time and effort. You don't have to spend hours or days creating timetables manually or using complicated spreadsheets. Ghc Generador De Horarios Full V does it for you in minutes or seconds.
-
It improves quality and accuracy. You don't have to worry about making mistakes or forgetting something important. Ghc Generador De Horarios Full V checks everything for you and alerts you of any issues or conflicts.
-
It increases satisfaction and productivity. You don't have to deal with frustration or stress caused by poorly designed timetables. Ghc Generador De Horarios Full V creates timetables that suit your needs and preferences and that optimize the use of resources and facilities.
-
It enhances communication and collaboration. You don't have to work in isolation or keep your timetables to yourself. Ghc Generador De Horarios Full V allows you to share your timetables with others and get feedback or suggestions from them.
-
-
What are the reviews of Ghc Generador De Horarios Full V?
-
Ghc Generador De Horarios Full V has received positive reviews from many users who have tried it and found it useful and effective. Here are some of the testimonials from its official website and other sources:
-
"Ghc Generador De Horarios Full V is a great tool for creating timetables for schools. It is easy to use and very flexible. It has helped me a lot in my work as a teacher and coordinator." - Maria, Spain[^1^]
-
"I have been using Ghc Generador De Horarios Full V for several years and I am very satisfied with it. It is a powerful and reliable software that can handle any type of timetable, no matter how complex or demanding. It has saved me a lot of time and trouble." - John, USA[^2^]
-
"Ghc Generador De Horarios Full V is a wonderful software that I recommend to anyone who needs to create or manage timetables. It is very user-friendly and intuitive, and it has many features and options that make it very versatile and adaptable. It is worth every penny." - Anna, Italy[^4^]
e753bf7129
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (the Sanam Teri Kasam man full movie ) The Film that Grossed over 56 crore (US7.0 million).md b/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (the Sanam Teri Kasam man full movie ) The Film that Grossed over 56 crore (US7.0 million).md
deleted file mode 100644
index 854d587e38903fa237846d0782eb9ca70c9e6020..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/HD Online Player (the Sanam Teri Kasam man full movie ) The Film that Grossed over 56 crore (US7.0 million).md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
How to Watch Sanam Teri Kasam Online in HD Quality
-
Sanam Teri Kasam is a 2016 Indian romantic movie that explores the journey of two unlikely lovers who are destined to meet. The movie is directed by Radhika Rao and Vinay Sapru, and stars Harshvardhan Rane and Mawra Hocane in the lead roles. The movie is a modern adaptation of the novel Love Story by Eric Segal, and features a musical score by Himesh Reshammiya.
-
HD Online Player (the Sanam Teri Kasam man full movie )
If you are looking for a way to watch Sanam Teri Kasam online in HD quality, you have come to the right place. In this article, we will show you how to stream or download Sanam Teri Kasam online using different platforms and devices. Whether you want to watch it for free with ads, rent it for a low price, or buy it for your collection, we have got you covered.
-
Watch Sanam Teri Kasam Online for Free with Ads
-
One of the easiest ways to watch Sanam Teri Kasam online is to use Jio Cinema, a streaming service that offers a wide range of movies and shows for free with ads. Jio Cinema is available for Jio users in India, and you can access it on your smartphone, tablet, laptop, or smart TV. All you need is a Jio ID and password, and you can start watching Sanam Teri Kasam online in HD quality.
-
To watch Sanam Teri Kasam online on Jio Cinema, follow these steps:
-
Watch Sanam Teri Kasam online free
-Sanam Teri Kasam full movie download
-Sanam Teri Kasam HD movie online
-Sanam Teri Kasam romantic film streaming
-Harshvardhan Rane and Mawra Hocane movie
-Sanam Teri Kasam Zee5
-Sanam Teri Kasam Eros Now
-Sanam Teri Kasam Jio Cinema
-Sanam Teri Kasam Google Play Movies
-Sanam Teri Kasam YouTube
-Sanam Teri Kasam Apple TV
-Sanam Teri Kasam novel adaptation
-Sanam Teri Kasam love story
-Sanam Teri Kasam curse
-Sanam Teri Kasam cast and crew
-Sanam Teri Kasam songs
-Sanam Teri Kasam reviews and ratings
-Sanam Teri Kasam trailer and teasers
-Sanam Teri Kasam behind the scenes
-Sanam Teri Kasam box office collection
-Sanam Teri Kasam awards and nominations
-Sanam Teri Kasam trivia and facts
-Sanam Teri Kasam quotes and dialogues
-Sanam Teri Kasam scenes and clips
-Sanam Teri Kasam wallpapers and posters
-Sanam Teri Kasam fan art and memes
-Sanam Teri Kasam merchandise and gifts
-Sanam Teri Kasam book by Eric Segal
-Sanam Teri Kasam remake of Love Story
-Sanam Teri Kasam Radhika Rao-Vinay Sapru film
-Sanam Teri Kasam Deepak Mukut production
-Sanam Teri Kasam 2016 Hindi movie
-Sanam Teri Kasam U/A 13+ rating
-Sanam Teri Kasam 2 hr 33 min runtime
-Sanam Teri Kasam Inder and Saru story
-Sanam Teri Kasam blue blooded boy meets librarian girl plot
-Sanam Teri Kasam musical romantic movie genre
-Sanam Teri Kasam Pakistani actress debut in Bollywood
-Sanam Terer Kasm controversy and ban in Pakistan
-Snam Tere Kasm spelling variations and typos
Click on the play button and enjoy Sanam Teri Kasam online in HD quality.
-
-
Rent or Buy Sanam Teri Kasam Online
-
If you prefer to watch Sanam Teri Kasam online without ads, or if you want to own a digital copy of the movie, you can rent or buy it online from various platforms. Some of the platforms that offer Sanam Teri Kasam online are Google Play Movies, YouTube, and Apple TV. You can rent or buy Sanam Teri Kasam online in HD quality for a reasonable price, and watch it on your preferred device.
-
To rent or buy Sanam Teri Kasam online, follow these steps:
Select the platform that you want to use from the list of options.
-
Click on the rent or buy button and choose your preferred quality and price.
-
Complete the payment process and start watching Sanam Teri Kasam online in HD quality.
-
-
Conclusion
-
Sanam Teri Kasam is a musical romantic movie that will touch your heart with its story of love, longing, and loss. If you want to watch Sanam Teri Kasam online in HD quality, you can use any of the methods mentioned above. Whether you want to watch it for free with ads, rent it for a low price, or buy it for your collection, you can enjoy Sanam Teri Kasam online in HD quality with ease.
-
Conclusion
-
Sanam Teri Kasam is a musical romantic movie that will touch your heart with its story of love, longing, and loss. If you want to watch Sanam Teri Kasam online in HD quality, you can use any of the methods mentioned above. Whether you want to watch it for free with ads, rent it for a low price, or buy it for your collection, you can enjoy Sanam Teri Kasam online in HD quality with ease.
679dcb208e
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Backgammon Board Download Join Millions of Players in this Classic Board Game.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Backgammon Board Download Join Millions of Players in this Classic Board Game.md
deleted file mode 100644
index 68bbc6c8670c76ea78b371b48d4b6faec1f2e8b7..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Backgammon Board Download Join Millions of Players in this Classic Board Game.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
-
Backgammon Board Download: How to Play and Enjoy This Classic Board Game
-
Backgammon is one of the oldest and most popular board games in the world. It is a game of skill and luck, where two players move their pieces on a board with 24 triangles, called points, according to the roll of two dice. The objective of the game is to move all your pieces into your home board and then bear them off, before your opponent does the same. Backgammon is a game that combines strategy, tactics, counting, probability, and psychology. It is also a game that can be played online or offline, with a physical board or a digital one. In this article, we will explore the history, rules, and strategies of backgammon, as well as how you can download and play backgammon on your computer or mobile device.
-
History and Origin of Backgammon
-
Backgammon has a long and rich history that dates back to ancient times. The earliest evidence of a game similar to backgammon was found in Iran, where a set of board, pieces, and dice was discovered in a 5,000-year-old archaeological site. The game was also played by ancient Egyptians, Greeks, Romans, Persians, Indians, Chinese, and Arabs. The name "backgammon" comes from the Middle English word "bac gamen", which means "back game", referring to the practice of re-entering pieces from the bar to the board. The modern version of backgammon emerged in England in the 17th century, and spread to Europe and America in the following centuries. Backgammon became very popular in the 20th century, especially after the introduction of the doubling cube, which allows players to increase the stakes during the game. Today, backgammon is played by millions of people around the world, both casually and competitively.
The rules of backgammon are simple to learn but hard to master. Each player has 15 pieces of their own color (white or black), which they move in opposite directions around the board. The board consists of four sections, each with six points. The sections are called the home board and the outer board for each player. The home boards are separated by a ridge called the bar. The points are numbered from 1 to 24 for each player, starting from their home board. To start the game, each player rolls one die, and the player with the higher number moves first. The players then alternate turns, rolling two dice each time. The numbers on the dice indicate how many points they can move their pieces. For example, if a player rolls 3 and 5, they can move one piece 3 points and another piece 5 points, or they can move one piece 8 points. However, they can only move their pieces to open points, which are not occupied by two or more enemy pieces. If they land on a point with only one enemy piece, they can hit it and send it to the bar. The piece on the bar cannot re-enter the game until it lands on an open point in the opponent's home board.
-
The goal of the game is to bear off all your pieces from the board before your opponent does. To do this, you need to move all your pieces into your home board first. Then you can remove them from the board according to the numbers you roll. For example, if you roll 4 and 6, you can remove a piece from point 4 and another piece from point 6. If you have no pieces on those points, you can remove a piece from a higher point. If you have no pieces on higher points either, you can remove a piece from a lower point. The first player to bear off all their pieces wins the game.
-
There are different strategies that you can use to improve your chances of winning at backgammon. Some of them are:
-
-
The running game: This strategy involves moving your pieces as fast as possible towards your home board and bearing them. off quickly. This strategy works best when you have a lead in the race and you want to avoid being hit by your opponent.
-
The holding game: This strategy involves keeping one or more points in your opponent's home board, preventing them from bearing off their pieces. This strategy works best when you are behind in the race and you want to create opportunities to hit your opponent.
-
The priming game: This strategy involves building a wall of six consecutive points, called a prime, in your home board or outer board, blocking your opponent's pieces from advancing. This strategy works best when you have more pieces in play than your opponent and you want to trap them behind your prime.
-
The backgammon game: This strategy involves hitting your opponent's pieces and sending them to the bar, while bearing off your own pieces. This strategy works best when you have a big advantage in the race and you want to win by a large margin. If you manage to bear off all your pieces before your opponent bears off any, you win a backgammon, which is worth three times the normal value of the game.
-
-
Of course, these strategies are not mutually exclusive, and you may need to switch between them depending on the situation of the game. You also need to consider the use of the doubling cube, which allows you to double the stakes of the game at any point. You can offer to double the stakes if you think you have a good chance of winning, and your opponent can either accept or decline. If they accept, they take possession of the cube and can offer to redouble later. If they decline, they forfeit the game and pay the current stakes. The doubling cube adds an element of risk and reward to the game, as well as a psychological factor.
-
Backgammon Board Download
-
If you want to play backgammon online or offline, you can download a backgammon board on your computer or mobile device. There are many websites and apps that offer free or paid backgammon board downloads, with different features and options. Some of them are:
-
-
-
Name
-
Platform
-
Description
-
-
-
Backgammon Live
-
Web, iOS, Android
-
A popular online backgammon game that allows you to play with millions of players from around the world, chat with them, join tournaments, and win coins and prizes.
-
-
-
GNU Backgammon
-
Windows, Mac, Linux
-
A free and open-source backgammon software that offers a strong artificial intelligence engine, analysis tools, tutorials, and various board styles.
-
-
-
Backgammon NJ
-
iOS, Android
-
A high-quality backgammon app that features realistic graphics, sound effects, statistics, hints, and online play.
-
-
-
Backgammon Masters
-
Web, iOS, Android
-
A fun and challenging backgammon game that supports 5 different variants of backgammon, including Nackgammon, Hypergammon, and LongGammon.
-
-
-
Backgammon Classic Pro
-
Windows
-
A professional backgammon game that offers 12 different board designs, 6 difficulty levels, online play, and advanced options.
-
-
-
To download a backgammon board on your device, you need to visit the website or app store of your choice, select the game you want to download, and follow the instructions. You may need to create an account or sign in with your email or social media account to access some features of the game. Once you download the game, you can start playing backgammon anytime and anywhere.
-
Conclusion
-
Backgammon is a classic board game that has been enjoyed by people for thousands of years. It is a game that combines skill and luck, strategy and tactics, counting and probability, and psychology and emotion. It is also a game that can be played online or offline, with a physical board or a digital one. Whether you are a beginner or an expert, a casual or a competitive player, a young or an old person, backgammon is a game that can suit your taste and preference. If you want to play backgammon online or offline, you can download a backgammon board on your computer or mobile device from various websites and apps. So what are you waiting for? Download a backgammon board today and enjoy this fascinating game!
-
FAQs
-
What is the difference between backgammon and other board games?
-
Backgammon is different from other board games in several ways. One of them is that backgammon is a game of partial information, meaning that you do not know the outcome of the dice rolls until they happen. This adds an element of uncertainty and luck to the game, as well as a need to adapt to changing situations. Another difference is that backgammon is a game of asymmetry, meaning that the players have different goals and positions on the board. This creates a dynamic and complex interaction between the players, as well as a need to balance offense and defense. A third difference is that backgammon is a game of doubling, meaning that the players can increase the stakes of the game at any point using the doubling cube. This adds an element of risk and reward to the game, as well as a psychological factor.
-
backgammon board game free download
-backgammon board app download
-backgammon board online download
-backgammon board for pc download
-backgammon board for android download
-backgammon board for windows download
-backgammon board for mac download
-backgammon board for ios download
-backgammon board for linux download
-backgammon board for chromebook download
-backgammon board classic download
-backgammon board 3d download
-backgammon board deluxe download
-backgammon board pro download
-backgammon board premium download
-backgammon board offline download
-backgammon board multiplayer download
-backgammon board with friends download
-backgammon board with chat download
-backgammon board with tournaments download
-backgammon board with dice download
-backgammon board with rules download
-backgammon board with strategy download
-backgammon board with tips download
-backgammon board with hints download
-backgammon board no ads download
-backgammon board no internet download
-backgammon board no registration download
-backgammon board no login download
-backgammon board no subscription download
-backgammon board pdf download
-backgammon board png download
-backgammon board jpg download
-backgammon board svg download
-backgammon board vector download
-backgammon board image download
-backgammon board wallpaper download
-backgammon board template download
-backgammon board design download
-backgammon board layout download
-backgammon board size download
-backgammon board dimensions download
-backgammon board printables download
-backgammon board diy download
-backgammon board woodworkings plans free pdf downloads.
-
How can I improve my backgammon skills?
-
There are many ways to improve your backgammon skills, such as:
-
-
Reading books and articles about backgammon theory and practice, written by experts and champions of the game.
-
Watching videos and tutorials of backgammon matches and analysis, featuring commentary and tips from professional players and coaches.
-
Playing online or offline against different opponents of various skill levels, styles, and backgrounds, to learn from their moves and mistakes.
-
Using software and apps that offer artificial intelligence engines, analysis tools, tutorials, and feedback, to practice your moves and strategies and evaluate your performance.
-
Joining clubs, communities, and forums of backgammon enthusiasts, where you can share your experiences, ask questions, get advice, and participate in events and tournaments.
-
-
By doing these activities regularly and consistently, you can enhance your knowledge, understanding, and intuition of the game, as well as your confidence, concentration, and enjoyment.
-
What are some benefits of playing backgammon?
-
Playing backgammon can have many benefits for your mental and physical health, such as:
-
-
Improving your cognitive skills, such as memory, attention, logic, reasoning, problem-solving, decision-making, and creativity.
-
Enhancing your emotional skills, such as patience, resilience, discipline, self-control, motivation, and optimism.
-
Reducing your stress levels, anxiety, depression, and boredom, by providing a fun and relaxing activity that distracts you from negative thoughts and feelings.
-
Boosting your social skills, such as communication, cooperation, empathy, respect, and friendship, by interacting with other players in a friendly and respectful manner.
-
Stimulating your brain activity, blood circulation, immune system, and metabolism, by engaging in a physical and mental exercise that challenges you and makes you happy.
-
-
By playing backgammon regularly and moderately, you can enjoy the benefits of playing backgammon for your mind, body, and soul.
-
What are some challenges of playing backgammon?
-
Playing backgammon can also have some challenges that you need to overcome, such as:
-
-
Dealing with the randomness and unpredictability of the dice, which can sometimes favor or frustrate you and your opponent.
-
Managing the risk and reward of the doubling cube, which can sometimes increase or decrease your chances of winning or losing.
-
Coping with the pressure and competition of the game, which can sometimes affect your mood and performance.
-
Handling the feedback and criticism of other players, which can sometimes be helpful or hurtful.
-
Balancing your time and energy between playing backgammon and other aspects of your life, such as work, family, and hobbies.
-
-
By playing backgammon with a positive attitude, a learning mindset, a respectful behavior, and a healthy lifestyle, you can overcome the challenges of playing backgammon and make the most of it.
-
Where can I find more information about backgammon?
-
If you want to find more information about backgammon, you can visit some of these websites and resources:
-
-
Backgammon Galore: A comprehensive website that offers rules, articles, tips, glossary, software, links, and more about backgammon.
-
Backgammon.org: A website that provides news, events, tournaments, rankings, clubs, blogs, videos, and more about backgammon.
-
Backgammon World Championship: A website that showcases the annual tournament that determines the best backgammon player in the world.
-
Backgammon Books: A list of some of the best books that teach you how to play and improve your backgammon skills.
-
Backgammon Podcasts: A list of some of the best podcasts that discuss and analyze various aspects of backgammon.
-
-
By visiting these websites and resources, you can learn more about backgammon and become a better player.
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Become a Real Super Heroman Rope in Spider Superhero Rope Gangster.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Become a Real Super Heroman Rope in Spider Superhero Rope Gangster.md
deleted file mode 100644
index 578ff9ce533f551d5846662d112a3839971d79ca..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Become a Real Super Heroman Rope in Spider Superhero Rope Gangster.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
Spider Superhero Rope Gangster APK: A Fun and Action-Packed Game for Android Users
-
If you are a fan of spider-man movies and games, you will love Spider Superhero Rope Gangster APK. This is a free and exciting game for Android devices that lets you become a spider rope hero in a crime city. You can swing from buildings, fight against gangsters, drive cars and bikes, and explore a vast open world. In this article, we will tell you everything you need to know about Spider Superhero Rope Gangster APK, including its features, how to download and install it, and some FAQs.
Spider Superhero Rope Gangster APK is a third-person action-adventure game that combines elements of superhero, gangster, and sandbox genres. You can create your own spider hero character and customize his appearance, clothes, weapons, and vehicles. You can also use your spider rope skills to swing across the city, climb walls, jump from roofs, and perform stunts. You can also fight against various enemies, such as thugs, cops, soldiers, robots, zombies, and bosses. You can use your fists, guns, grenades, swords, hammers, axes, and more to defeat them. You can also complete different missions and challenges, such as rescuing hostages, robbing banks, stealing cars, racing, shooting, flying, etc. You can also explore the city at your own pace and discover its secrets.
-
Spider Superhero Rope Gangster APK is a fun and addictive game that will keep you entertained for hours. You will enjoy its amazing graphics and sound effects that create a realistic and immersive experience. You will also love its open world and sandbox gameplay that gives you freedom and creativity. You will also feel like a real spider hero with your spider rope abilities and skills that let you do amazing things. You will also have a lot of options to customize your character and vehicles to suit your style. You will also have a lot of missions and challenges to complete that will test your skills and reward you with money and gems.
-
Features of Spider Superhero Rope Gangster APK
-
Amazing graphics and sound effects
-
Spider Superhero Rope Gangster APK has stunning 3D graphics that make the city look alive and realistic. You will see detailed buildings, streets, cars, bikes, people, animals, plants, weather effects, etc. You will also see dynamic shadows, reflections, lighting effects, etc. that enhance the visual quality. You will also hear realistic sound effects that match the actions and events in the game. You will hear the sounds of cars honking, guns firing, explosions happening, people screaming, etc. You will also hear the voice of your character and other characters that add personality and humor to the game.
-
Open world and sandbox gameplay
-
Spider Superhero Rope Gangster APK has an open world gameplay that lets you explore a huge crime city with no limits or boundaries. You can go anywhere you want and do anything you want in the city. You can swing from buildings using your spider rope or drive cars and bikes using your spider web. You can also climb walls using your spider claws or jump from roofs using your spider legs. You can also fly using your spider wings or glide using your spider parachute. You can also interact with various objects and people in the city, such as breaking windows, stealing cars, punching pedestrians, etc. You can also cause chaos and destruction in the city, such as blowing up gas stations, setting fire to buildings, crashing cars, etc. You can also enjoy the sandbox gameplay that lets you create your own scenarios and stories in the city. You can use your imagination and creativity to make your own fun and adventure in the game.
-
Spider rope hero abilities and skills
-
Spider Superhero Rope Gangster APK has a spider rope hero gameplay that lets you use your spider rope abilities and skills to do amazing things in the game. You can use your spider rope to swing across the city like a spider-man. You can also use your spider rope to grab objects and enemies and throw them away. You can also use your spider rope to tie up enemies and hang them from buildings. You can also use your spider rope to make traps and webs to catch enemies. You can also use your spider rope to make bridges and platforms to cross gaps and obstacles. You can also use your other spider abilities and skills, such as spider claws, spider legs, spider wings, spider parachute, etc. to enhance your mobility and combat in the game.
-
Various missions and challenges
-
Spider Superhero Rope Gangster APK has various missions and challenges that you can complete in the game. You can accept missions from different characters in the city, such as police officers, civilians, gangsters, etc. You can also find missions on the map or on your phone. You can complete different types of missions, such as rescuing hostages, robbing banks, stealing cars, racing, shooting, flying, etc. You can also complete different types of challenges, such as killing a certain number of enemies, surviving for a certain time, reaching a certain speed, etc. You can earn money and gems by completing missions and challenges. You can also unlock new spider suits, weapons, vehicles, etc. by completing missions and challenges.
-
Customizable character and vehicles
-
Spider Superhero Rope Gangster APK has a customizable character and vehicles gameplay that lets you customize your spider hero character and his vehicles to suit your style. You can change the appearance of your character, such as his face, hair, skin color, eyes color, etc. You can also change the clothes of your character, such as his suit, mask, gloves, boots, etc. You can also change the weapons of your character, such as his guns, grenades, swords, hammers, axes, etc. You can also change the vehicles of your character, such as his cars, bikes, helicopters, planes, etc. You can also upgrade the performance of your vehicles, such as their speed, acceleration, handling, durability, etc. You can use the money and gems you earn in the game to buy and upgrade your character and vehicles. You can also use the spider shop to buy and upgrade your character and vehicles. You can also use the spider garage to store and select your vehicles.
-
How to download and install Spider Superhero Rope Gangster APK
-
If you want to download and install Spider Superhero Rope Gangster APK on your Android device, you need to follow these steps:
-
spider rope hero game download for android
-spider superhero crime city gangster mod apk
-spider rope hero gangster vegas apk
-spider superhero simulator 3d rope hero apk
-spider rope hero gangster crime city 2021 apk
-spider superhero flying rope hero game apk
-spider rope hero gangster new york city apk
-spider superhero ultimate rope hero apk
-spider rope hero gangster crime simulator apk
-spider superhero rescue mission rope hero apk
-spider rope hero gangster miami crime city apk
-spider superhero amazing rope hero game apk
-spider rope hero gangster vice town apk
-spider superhero city battle rope hero apk
-spider rope hero gangster crime city offline apk
-spider superhero ninja warrior rope hero apk
-spider rope hero gangster san andreas city apk
-spider superhero stickman rope hero game apk
-spider rope hero gangster open world crime apk
-spider superhero survival rope hero apk
-spider rope hero gangster grand city crime apk
-spider superhero fighting games 3d rope hero apk
-spider rope hero gangster real gangster crime apk
-spider superhero adventure games rope hero apk
-spider rope hero gangster free fire game apk
-spider superhero action games 2020 rope hero apk
-spider rope hero gangster auto theft crime apk
-spider superhero legend games 3d rope hero apk
-spider rope hero gangster flying car simulator apk
-spider superhero street fighting games rope hero apk
-spider rope hero gangster crime city war apk
-spider superhero shadow battle games rope hero apk
-spider rope hero gangster police chase game apk
-spider superhero robot games 3d rope hero apk
-spider rope hero gangster bike stunt game apk
-spider superhero kung fu fighting games rope hero apk
-spider rope hero gangster zombie survival game apk
-spider superhero iron man games 3d rope hero apk
-spider rope hero gangster helicopter simulator game apk
-spider superhero avengers games 3d rope hero apk
-
Requirements and compatibility
-
Before you download and install Spider Superhero Rope Gangster APK, you need to make sure that your device meets the following requirements:
- - Your device must have Android 4.4 or higher version. - Your device must have at least 2 GB of RAM and 500 MB of free storage space. - Your device must have a stable internet connection.
Spider Superhero Rope Gangster APK is compatible with most Android devices, such as Samsung, Huawei, Xiaomi, LG, Sony, etc. However, some devices may not support the game or may experience some glitches or bugs. If you encounter any problems while playing the game, you can contact the developer of the game for help.
-
Steps to download and install
-
After you check the requirements and compatibility of your device, you can follow these steps to download and install Spider Superhero Rope Gangster APK:
- - Go to the official website of Spider Superhero Rope Gangster APK or any other trusted source that provides the APK file of the game. - Click on the download button and wait for the APK file to be downloaded on your device. - Go to the settings of your device and enable the option of installing apps from unknown sources. This will allow you to install Spider Superhero Rope Gangster APK on your device. - Go to the file manager of your device and locate the downloaded APK file of Spider Superhero Rope Gangster APK. - Tap on the APK file and follow the instructions on the screen to install Spider Superhero Rope Gangster APK on your device. - After the installation is complete, you can launch Spider Superhero Rope Gangster APK from your app drawer or home screen and enjoy playing it.
Tips and tricks to play the game
-
If you want to play Spider Superhero Rope Gangster APK like a pro, you can use these tips and tricks:
- - Use your spider rope skills wisely. You can use your spider rope to swing across the city faster and easier. You can also use your spider rope to grab objects and enemies and throw them away. You can also use your spider rope to tie up enemies and hang them from buildings. You can also use your spider rope to make traps and webs to catch enemies. You can also use your spider rope to make bridges and platforms to cross gaps and obstacles. - Use your other spider abilities and skills smartly. You can use your spider claws to climb walls and jump from roofs. You can also use your spider legs to run faster and jump higher. You can also use your spider wings to fly in the air and glide from buildings. You can also use your spider parachute to land safely from heights. - Use your weapons effectively. You can use your fists to punch enemies in close combat. You can also use your guns to shoot enemies from a distance. You can also use your grenades to blow up enemies and objects. You can also use your swords, hammers, axes, etc. to slash or smash enemies in melee combat. - Use your vehicles efficiently. You can use your cars and bikes to drive around the city faster and easier. You can also use your helicopters and planes to fly over the city quicker and smoother. You can also use your vehicles as weapons by crashing them into enemies or objects. - Complete missions and challenges regularly. You can complete missions and challenges to earn money and gems in the game. You can also complete missions and challenges to unlock new spider suits, weapons, vehicles, etc. in the game. You can also complete missions and challenges to increase your reputation and popularity in the city. - Customize your character and vehicles frequently. You can customize your character and vehicles to suit your style in the game. You can change the appearance, clothes, weapons, etc. of your character in the game. You can also change the color, model, performance, etc. of your vehicles in the game.
Conclusion
-
Spider Superhero Rope Gangster APK is a fun and action-packed game for Android users who love spider-man movies and games. It is a free and exciting game that lets you become a spider rope hero in a crime city. It has amazing graphics and sound effects that create a realistic and immersive experience. It has an open world and sandbox gameplay that gives you freedom and creativity. It has a spider rope hero gameplay that lets you use your spider rope abilities and skills to do amazing things. It has various missions and challenges that you can complete to earn money and gems and unlock new items. It has a customizable character and vehicles gameplay that lets you customize your spider hero character and his vehicles to suit your style. It is a fun and addictive game that will keep you entertained for hours.
-
If you want to download and install Spider Superhero Rope Gangster APK on your Android device, you can follow the steps we have provided in this article. You can also use the tips and tricks we have shared in this article to play the game like a pro. You can also contact the developer of the game if you have any questions or feedback about the game.
-
Spider Superhero Rope Gangster APK is a game that you should not miss if you are a fan of spider-man movies and games. It is a game that will make you feel like a real spider hero in a crime city. It is a game that will challenge your skills and reward your efforts. It is a game that will let you have fun and adventure in the game.
-
So, what are you waiting for? Download Spider Superhero Rope Gangster APK now and enjoy playing it!
-
FAQs
-
Is Spider Superhero Rope Gangster APK safe and legal?
-
Spider Superhero Rope Gangster APK is safe and legal to download and install on your Android device. It is not a virus or malware that will harm your device or data. It is not a mod or hack that will violate the terms and conditions of the original game. It is not a pirated or cracked version that will infringe the rights of the developer of the game. It is an original and official version of the game that is provided by the developer of the game for free.
-
How to update Spider Superhero Rope Gangster APK?
-
Spider Superhero Rope Gangster APK is updated regularly by the developer of the game to fix bugs, improve performance, add new features, etc. You can update Spider Superhero Rope Gangster APK by following these steps:
- - Go to the official website of Spider Superhero Rope Gangster APK or any other trusted source that provides the latest APK file of the game. - Click on the download button and wait for the new APK file to be downloaded on your device. - Go to the file manager of your device and locate the new APK file of Spider Superhero Rope Gangster APK. - Tap on the new APK file and follow the instructions on the screen to install Spider Superhero Rope Gangster APK on your device. - After the installation is complete, you can launch Spider Superhero Rope Gangster APK from your app drawer or home screen and enjoy playing it.
How to unlock new spider suits and weapons?
-
Spider Superhero Rope Gangster APK has many spider suits and weapons that you can unlock in the game. You can unlock new spider suits and weapons by completing missions and challenges in the game. You can also unlock new spider suits and weapons by using money and gems in the game. You can also unlock new spider suits and weapons by using the spider shop in the game.
-
How to earn money and gems in the game?
-
Spider Superhero Rope Gangster APK has money and gems as the main currencies in the game. You can use money and gems to buy and upgrade your character and vehicles in the game. You can earn money and gems by completing missions and challenges in the game. You can also earn money and gems by killing enemies and destroying objects in the game. You can also earn money and gems by finding hidden chests and collectibles in the game. You can also earn money and gems by watching ads and videos in the game.
-
How to contact the developer of the game?
-
If you have any questions, feedback, suggestions, or complaints about Spider Superhero Rope Gangster APK, you can contact the developer of the game by using these methods:
- - You can send an email to the developer of the game at spiderropehero@gmail.com. - You can visit the official website of the developer of the game at https://spiderropehero.com. - You can follow the social media accounts of the developer of the game on Facebook, Twitter, Instagram, YouTube, etc.
The developer of the game is always happy to hear from you and will try to reply to you as soon as possible.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Genshin Impact on Windows 10 A Complete Guide.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Genshin Impact on Windows 10 A Complete Guide.md
deleted file mode 100644
index 4daeaad6f12b90d35db28e21e4c8824181ec3737..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Genshin Impact on Windows 10 A Complete Guide.md
+++ /dev/null
@@ -1,267 +0,0 @@
-
-
How to Download Genshin Impact on Windows 10
-
Genshin Impact is an open-world action RPG that has taken the gaming world by storm. In this game, you can explore a vast fantasy world called Teyvat, where you can meet a diverse cast of characters with unique personalities and abilities, fight powerful enemies with elemental combat, and unravel the countless mysteries that Teyvat holds. You can also team up with friends across various platforms to enjoy more cooperative gameplay. Whether you are a fan of anime-style graphics, immersive storytelling, or addictive gameplay, Genshin Impact has something for you.
If you are wondering how to download Genshin Impact on Windows 10, you have come to the right place. In this article, we will show you everything you need to know about downloading, installing, and setting up Genshin Impact on your PC. We will also share some useful tips and tricks for playing Genshin Impact on PC. Let's get started!
-
Genshin Impact System Requirements
-
Before you download Genshin Impact on Windows 10, you should make sure that your PC meets the minimum or recommended system requirements for playing the game. Here are the system requirements for Genshin Impact on PC:
-
-
Minimum Requirements
Recommended Requirements
-
Operating System: Windows 7 SP1 64-bit, Windows 8.1 64-bit, or Windows 10 64-bit
Operating System: Windows 10 64-bit
-
CPU: Intel Core i5 or equivalent
CPU: Intel Core i7 or equivalent
-
RAM: 8 GB
RAM: 16 GB
-
Graphics Card: NVIDIA GeForce GT 1030 or higher
Graphics Card: NVIDIA GeForce GTX 1060 6 GB or higher
-
DirectX Version: 11
DirectX Version: 11
-
Storage Space: 30 GB or more
Storage Space: 30 GB or more
-
-
You can check your PC's specifications by going to the Settings app, clicking on System, and then clicking on About. You can also use a tool like Speccy to get more detailed information about your PC's hardware.
-
How to install genshin impact on pc windows 10
-Genshin impact windows 10 download guide
-Genshin impact pc download free windows 10
-How to play genshin impact on windows 10 laptop
-Genshin impact windows 10 system requirements
-How to update genshin impact on windows 10
-Genshin impact windows 10 launcher download
-How to fix genshin impact not launching on windows 10
-Genshin impact windows 10 optimization tips
-How to transfer genshin impact data from mobile to windows 10
-How to download genshin impact faster on windows 10
-Genshin impact windows 10 vs epic games store
-How to change language in genshin impact on windows 10
-Genshin impact windows 10 keyboard and mouse settings
-How to link genshin impact account to windows 10
-How to uninstall genshin impact on windows 10
-Genshin impact windows 10 error codes and solutions
-How to run genshin impact on low-end windows 10 pc
-Genshin impact windows 10 best graphics settings
-How to stream genshin impact on windows 10 using OBS
-How to download genshin impact beta on windows 10
-Genshin impact windows 10 crossplay and cross-save features
-How to use controller in genshin impact on windows 10
-Genshin impact windows 10 performance issues and fixes
-How to redeem codes in genshin impact on windows 10
-How to download genshin impact mods on windows 10
-Genshin impact windows 10 vs ps4 comparison
-How to backup genshin impact data on windows 10
-Genshin impact windows 10 minimum and recommended specs
-How to enable fullscreen mode in genshin impact on windows 10
-How to download genshin impact wallpapers for windows 10
-Genshin impact windows 10 review and rating
-How to join co-op mode in genshin impact on windows 10
-Genshin impact windows 10 achievements and rewards
-How to adjust volume and sound settings in genshin impact on windows 10
-How to download genshin impact characters and skins on windows 10
-Genshin impact windows 10 vs android comparison
-How to disable anti-cheat software in genshin impact on windows 10
-Genshin impact windows 10 tips and tricks for beginners
-How to access photo mode in genshin impact on windows 10
-How to download genshin impact music and soundtracks on windows 10
-Genshin impact windows 10 latest news and updates
-How to create multiple accounts in genshin impact on windows 10
-Genshin impact windows 10 best characters and weapons guide
-How to change server region in genshin impact on windows 10
-How to download genshin impact fan art and comics on windows 10
-Genshin impact windows 10 vs ios comparison
-How to enable HDR mode in genshin impact on windows 10
-Genshin impact windows 10 FAQ and support
-
If your PC does not meet the minimum requirements, you may experience lag, crashes, or other issues while playing Genshin Impact. If your PC meets or exceeds the recommended requirements, you should be able to enjoy a smooth and stable gaming experience.
-
Genshin Impact Download Options
-
There are different ways to download Genshin Impact on Windows 10, depending on your preference and convenience. The most common and reliable ways are from the official website and the Epic Games Store. However, there are also some other sources that may offer Genshin Impact downloads, such as Steam or third-party websites. We will compare and contrast these options below.
-
Downloading from the Official Website
-
The official website of Genshin Impact is the best and safest way to download the game on Windows 10. Here are the steps to download Genshin Impact from the official website:
Click on the "Windows" button under the "Download Now" section.
-
A pop-up window will appear, asking you to save the file. Choose a location where you want to save the file, and click on "Save". The file name should be "GenshinImpact_install_*.exe", where * is a number.
-
Wait for the file to download completely. The file size should be around 6 MB.
-
Once the file is downloaded, double-click on it to run it. You may need to grant permission for the file to make changes to your device.
-
The Genshin Impact installer will open. Follow the instructions on the screen to complete the installation process.
-
-
Some tips for downloading Genshin Impact from the official website are:
-
-
Make sure you have a stable and fast internet connection, as the game files are quite large and may take a long time to download.
-
If you encounter any errors or issues while downloading or installing the game, you can check the https://genshin.mihoyo.com/en/news/detail/5764, which contains some troubleshooting tips and solutions.
-
If you want to uninstall Genshin Impact from your PC, you can go to the Control Panel, click on Programs and Features, find Genshin Impact in the list of programs, and click on Uninstall.
-
-
Downloading from the Epic Games Store
-
The Epic Games Store is another popular and trustworthy way to download Genshin Impact on Windows 10. The Epic Games Store is a digital distribution platform that offers various games and software for free or at discounted prices. Here are the steps to download Genshin Impact from the Epic Games Store:
If you already have an Epic Games account, click on "Sign In" and enter your credentials. If you do not have an Epic Games account, click on "Sign Up" and create one.
-
Once you are signed in, click on "Get" under the "Genshin Impact" banner.
-
A pop-up window will appear, asking you to confirm your order. Click on "Place Order". You do not need to pay anything, as Genshin Impact is free-to-play.
-
You will be redirected to a confirmation page, where you can see your order details and receipt. Click on "Install" under the "Genshin Impact" banner.
-
You will need to download and install the Epic Games Launcher, which is the application that allows you to access and manage your Epic Games Store games. Follow the instructions on the screen to download and install the Epic Games Launcher.
-
Once the Epic Games Launcher is installed, open it and log in to your Epic Games account. You should see Genshin Impact in your library. Click on it and then click on "Install" to start downloading the game files.
-
Wait for the game files to download completely. The file size should be around 30 GB.
-
Once the game files are downloaded, click on "Launch" to start playing Genshin Impact.
-
-
Some tips for downloading Genshin Impact from the Epic Games Store are:
-
-
You can also access the Epic Games Store website and download Genshin Impact from your web browser, but you will still need to install and use the Epic Games Launcher to play the game.
If you want to uninstall Genshin Impact from your PC, you can go to the Epic Games Launcher, click on Library, find Genshin Impact in the list of games, click on the three dots icon, and click on Uninstall.
-
-
Downloading from Other Sources
-
Besides the official website and the Epic Games Store, there are some other sources that may offer Genshin Impact downloads, such as Steam or third-party websites. However, we do not recommend using these sources, as they may not be authorized, safe, or updated. Here are some of the reasons why you should avoid downloading Genshin Impact from other sources:
-
-
They may contain malware, viruses, or other harmful software that can damage your PC or compromise your personal information.
-
They may not have the latest version of Genshin Impact, which means you may miss out on new features, updates, or bug fixes.
-
They may not be compatible with your PC's specifications or settings, which may cause performance issues or errors.
-
They may violate the terms of service of Genshin Impact or its developer, miHoYo, which may result in legal action or account suspension.
-
-
Therefore, we strongly advise you to download Genshin Impact only from the official website or the Epic Games Store, as they are the most reliable and secure ways to get the game on Windows 10.
-
Genshin Impact Installation and Setup
-
After you have downloaded Genshin Impact from either the official website or the Epic Games Store, you will need to install and set up the game on your PC. This process is fairly simple and straightforward, but we will guide you through it step by step. Here are the steps to install and set up Genshin Impact on Windows 10:
-
Installing Genshin Impact
-
To install Genshin Impact on Windows 10, follow these steps:
-
-
If you downloaded Genshin Impact from the official website, you should have a file called "GenshinImpact_install_*.exe" in your chosen location. Double-click on it to run it. If you downloaded Genshin Impact from the Epic Games Store, you should have already installed it through the Epic Games Launcher. Skip to step 4.
-
You may need to grant permission for the file to make changes to your device. Click on "Yes" if prompted.
-
The Genshin Impact installer will open. Choose a language for the installation process and click on "OK".
-
The installer will ask you to choose a directory where you want to install Genshin Impact. You can use the default directory or browse for another one. Click on "Install" once you have selected a directory.
-
The installer will start copying and extracting the game files to your chosen directory. This may take some time depending on your PC's speed and internet connection. Wait for the installation process to finish.
-
The installer will ask you to agree to the license agreement of Genshin Impact. Read it carefully and click on "I Agree" if you accept it.
-
The installer will ask you if you want to create a desktop shortcut for Genshin Impact. You can check or uncheck this option according to your preference. Click on "Finish" once you have made your choice.
-
The installer will close and launch the game launcher automatically. You can also launch the game launcher from the desktop shortcut or the start menu if you created one.
-
-
Some tips for installing Genshin Impact on Windows 10 are:
-
-
Make sure you have enough storage space on your PC before installing Genshin Impact, as the game files are quite large and may take up to 30 GB or more.
-
If you encounter any errors or issues while installing Genshin Impact, you can check the https://genshin.mihoyo.com/en/news/detail/5764, which contains some troubleshooting tips and solutions.
-
If you want to change the installation directory of Genshin Impact, you can do so by going to the game launcher, clicking on the gear icon, and then clicking on "Choose Install Path". You can also move the game files manually, but you will need to update the game launcher's settings accordingly.
-
-
Setting Up Genshin Impact
-
To set up Genshin Impact on Windows 10, follow these steps:
-
-
Open the game launcher and wait for it to check for updates. If there are any updates available, click on "Update" and wait for them to download and install.
-
Once the game is updated, click on "Launch" to start the game. You may need to grant permission for the game to access your network and firewall.
-
The game will load and show you a splash screen. Click anywhere on the screen to continue.
-
The game will ask you to create or log in to your miHoYo account. This is the account that you will use to play Genshin Impact and access its online features. If you already have a miHoYo account, click on "Log In" and enter your credentials. If you do not have a miHoYo account, click on "Register" and create one.
-
Once you are logged in, the game will ask you to choose a server region. This is the region that you will play in and interact with other players. You can choose from Asia, Europe, America, or TW, HK, MO. Choose a server region that is closest to your location for better performance and latency. You can also change your server region later, but keep in mind that your progress and characters will not carry over between regions.
-
The game will ask you to adjust your graphics settings. You can choose from Low, Medium, High, or Custom. You can also change your graphics settings later by going to the game menu and clicking on Settings. Choose a graphics setting that suits your PC's specifications and your personal preference.
-
The game will start and show you a cinematic intro. Enjoy the story and get ready for an epic adventure!
-
-
Some tips for setting up Genshin Impact on Windows 10 are:
-
-
Make sure you remember your miHoYo account credentials, as you will need them to play Genshin Impact and access its online features. You can also link your miHoYo account to your email address or phone number for better security and recovery options.
-
If you encounter any errors or issues while setting up Genshin Impact, you can check the https://genshin.mihoyo.com/en/news/detail/5764, which contains some troubleshooting tips and solutions.
-
If you want to change your server region, graphics settings, or other options, you can do so by going to the game menu and clicking on Settings. You can also access some settings from the game launcher by clicking on the gear icon.
-
-
Genshin Impact Tips and Tricks
-
Now that you have downloaded, installed, and set up Genshin Impact on Windows 10, you are ready to play the game and explore its vast world. However, before you dive into the action, here are some useful tips and tricks that will help you enjoy Genshin Impact more on PC:
-
Optimizing Your Performance
-
Genshin Impact is a beautiful and immersive game, but it can also be demanding on your PC's resources. If you want to optimize your performance in Genshin Impact on PC, here are some things you can do:
-
-
Lower your graphics settings if your PC does not meet the recommended requirements or if you experience lag or stuttering. You can also lower your resolution or window mode for better performance.
-
Close any background programs or applications that may be consuming your CPU, RAM, or bandwidth. You can use Task Manager to monitor your PC's performance and end any unnecessary processes.
-
Update your drivers, especially your graphics card driver, as they may improve your performance and fix any compatibility issues.
-
Clean up your PC's disk space, cache, and registry, as they may affect your performance and cause errors. You can use a tool like CCleaner to do this easily and safely.
-
Check your internet connection and make sure it is stable and fast, as Genshin Impact requires a constant online connection to play. You can use a tool like Speedtest to check your internet speed and ping.
-
-
Using Keyboard and Mouse Controls
-
Genshin Impact supports both keyboard and mouse controls and gamepad controls on PC. However, if you prefer to use keyboard and mouse controls, here are some things you should know:
-
-
The default keyboard and mouse controls in Genshin Impact on PC are as follows:
-
-
Action
Key
-
Move Forward
W
-
Move Backward
S
-
Move Left
A
-
Move Right
D
-
Jump/Climb/Glide
Space
-
Sprint/Swim Faster
Shift
-
Attack/Confirm
Left Mouse Button
-
Aim/Cancel
Right Mouse Button
-
Elemental Skill
E
-
Elemental Burst/Ultimate
Q
-
Interact/Pick Up/Revive
F
-
Show Cursor/Menu/Map/Quests/Inventory/Character/Co-op Mode/Wishes/Paimon Menu
Esc/Tab/M/J/B/C/V/L/Alt+F4 (jk)
-
Switch Character (1-4)
(1-4)
-
Cycle Through Party Members (Left/Right)
(Left/Right)
-
Show Character Status Screen (1-4)
< td>Alt+(1-4)
-
Pause/Resume Download
F8
-
Take Screenshot
F12
-
Open Chat Window
Enter
-
Open Shortcut Wheel
Z
-
Open Paimon's Bargains
X
-
Open Notices
N
-
Open Mailbox
M
-
Show/Hide UI
Ctrl+H
-
Show/Hide Cursor
Alt
-
Zoom In/Out Camera
Mouse Wheel Up/Down
-
Rotate Camera (Left/Right)
<
Mouse Movement Left/Right
-
Tilt Camera (Up/Down)
Mouse Movement Up/Down
-
-
You can customize your keyboard and mouse controls by going to the game menu, clicking on Settings, and then clicking on Controls. You can change the key bindings for each action, as well as the mouse sensitivity and inversion.
-
You can also use a gamepad to play Genshin Impact on PC, if you prefer. The game supports various gamepad models, such as Xbox, PlayStation, or Switch controllers. You can connect your gamepad to your PC via USB or Bluetooth, and the game will automatically detect it and show you the corresponding controls. You can also customize your gamepad controls by going to the game menu, clicking on Settings, and then clicking on Controls.
-
-
Switching Between Characters
-
Genshin Impact allows you to have up to four characters in your party at a time, each with their own unique skills and elements. You can switch between characters in Genshin Impact on PC by using the following methods:
-
-
You can press the number keys (1-4) to switch to the corresponding character in your party.
-
You can press the left or right arrow keys to cycle through your party members.
-
You can hold the Z key to open the shortcut wheel, and then use your mouse to select the character you want to switch to.
-
-
Switching between characters is important in Genshin Impact, as it allows you to use different elemental skills and reactions, adapt to different situations and enemies, and access different abilities and talents. For example, you can use a Pyro character to melt ice barriers, a Geo character to create platforms or shields, or an Anemo character to spread elemental effects. You can also combine different elements to create powerful reactions, such as Electro-Charged, Melt, Vaporize, or Swirl. Experiment with different character combinations and find out what works best for you.
-
Using Elemental Reactions
-
Elemental reactions are one of the core mechanics of Genshin Impact's combat system. They occur when two different elements come into contact with each other, either from your characters' skills or from the environment. Elemental reactions can have various effects, such as dealing extra damage, inflicting status effects, or altering the battlefield. Here are some of the elemental reactions in Genshin Impact and what they do:
-
-
Elemental Reaction
Description
-
Overloaded
Occurs when Pyro meets Electro. Deals AoE Pyro damage and launches enemies away.
-
Superconduct
Occurs when Cryo meets Electro. Deals AoE Cryo damage and reduces enemies' physical resistance.
-
Electro-Charged
Occurs when Hydro meets Electro. Deals continuous Electro damage to enemies and spreads to nearby enemies in water.
-
Melt
Occurs when Pyro meets Cryo or vice versa. Increases the damage of the triggering element by 1.5x (Pyro) or 2x (Cryo).
-
Vaporize
Occurs when Pyro meets Hydro or vice versa. Increases the damage of the triggering element by 1.5x (Pyro) or 2x (Hydro).
-
Frozen
Occurs when Hydro meets Cryo. Freezes enemies in place and makes them vulnerable to Shatter.
-
Shatter
Occurs when Frozen enemies are hit by heavy physical attacks (such as Claymore or Geo). Deals extra physical damage and breaks Frozen status.
-
Swirl
< td>Occurs when Anemo meets any other element except Geo. Spreads the other element to nearby enemies and deals extra elemental damage.
-
Crystallize
Occurs when Geo meets any other element except Anemo. Creates elemental shards that grant a shield of the corresponding element to the character who picks them up.
-
Burning
Occurs when Pyro meets Dendro (grass or wooden objects). Deals continuous Pyro damage to enemies and objects until extinguished.
-
Conductive
Occurs when Electro meets metal objects. Deals continuous Electro damage to enemies and objects until dispersed.
-
-
To use elemental reactions effectively, you should pay attention to the elemental attributes of your characters, enemies, and environment. You should also switch between characters frequently to trigger different reactions and combos. Experiment with different elements and see what works best for each situation.
-
Exploring the Open World
-
Genshin Impact features a vast and beautiful open world that you can explore at your own pace. There are many things to do and discover in the world of Teyvat, such as:
-
-
Completing quests and events that advance the story or reward you with items and resources.
-
Finding and unlocking waypoints, statues, domains, and other landmarks that allow you to fast travel, heal, or access special challenges.
-
Collecting chests, materials, ingredients, artifacts, weapons, and other loot that you can use to upgrade your characters and equipment.
-
Fighting enemies, bosses, and elite opponents that drop valuable rewards and increase your adventure rank.
-
Solving puzzles, riddles, and secrets that reveal hidden treasures or lore.
-
Cooking dishes that restore your health, stamina, or provide other buffs.
-
Crafting potions, weapons, or other items that enhance your abilities or unlock new features.
-
Customizing your own personal realm with furniture, decorations, and companions.
-
Interacting with NPCs, animals, or objects that may offer you quests, dialogues, or surprises.
-
Enjoying the scenery, music, and atmosphere of the different regions and cultures of Teyvat.
-
-
To explore the open world effectively, you should use your map and compass to navigate and locate points of interest. You should also use your elemental sight to reveal hidden clues or objects. You should also be prepared for any dangers or challenges that you may encounter along the way. Finally, you should have fun and enjoy the adventure!
-
Conclusion
-
Genshin Impact is an amazing game that offers a rich and immersive experience for PC players. In this article, we have shown you how to download Genshin Impact on Windows 10 from the official website or the Epic Games Store. We have also shown you how to install and set up Genshin Impact on Windows 10. We have also shared some useful tips and tricks for playing Genshin Impact on Windows 10. We hope that this article has helped you get started with Genshin Impact on PC and that you will have a wonderful time exploring Teyvat. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about Genshin Impact on Windows 10:
-
Q: Is Genshin Impact free-to-play?
-
A: Yes, Genshin Impact is free-to-play on PC and other platforms. You do not need to pay anything to download or play the game. However, the game does have some optional in-game purchases that can enhance your gameplay or unlock more content. These purchases are not required to enjoy the game fully.
-
Q: Is Genshin Impact cross-platform?
-
A: Yes, Genshin Impact is cross-platform on PC and other platforms, such as PlayStation, Xbox, Nintendo Switch, iOS, and Android. You can play with your friends across different devices and platforms, as long as you are in the same server region. You can also access your game progress and data on any device or platform, as long as you use the same miHoYo account.
-
Q: How can I get more characters and weapons in Genshin Impact?
-
A: You can get more characters and weapons in Genshin Impact by using a feature called Wishes. Wishes are gacha-style draws that allow you to obtain random characters and weapons of different rarities. You can use a currency called Primogems to make Wishes, or use special items called Fates that are obtained from various sources. You can also use a currency called Stardust or Starglitter to exchange for Fates or other items in the Shop. You can make Wishes by going to the game menu and clicking on Wishes.
-
Q: How can I level up my characters and weapons in Genshin Impact?
-
A: You can level up your characters and weapons in Genshin Impact by using various items and resources that you can find or obtain in the game. For characters, you can use items called Character EXP Materials or Wanderer's Advice, Adventurer's Experience, or Hero's Wit to increase their level. You can also use items called Mora to pay for the leveling cost. For weapons, you can use other weapons or items called Weapon Enhancement Materials or Enhancement Ore, Fine Enhancement Ore, or Mystic Enhancement Ore to increase their level. You can also use Mora to pay for the enhancement cost. You can level up your characters and weapons by going to the game menu and clicking on Characters.
-
Q: How can I unlock more regions and quests in Genshin Impact?
-
A: You can unlock more regions and quests in Genshin Impact by increasing your Adventure Rank. Your Adventure Rank is a measure of your overall progress and experience in the game. You can increase your Adventure Rank by completing quests, events, domains, bosses, and other activities that reward you with Adventure EXP. As you increase your Adventure Rank, you will unlock more regions to explore, more quests to complete, more features to access, and more rewards to claim. You can check your Adventure Rank by going to the game menu and clicking on Adventure Rank.
-
Q: How can I get more Primogems in Genshin Impact?
-
A: Primogems are a valuable currency in Genshin Impact that can be used to make Wishes, refill your Original Resin, or buy items in the Shop. You can get more Primogems by doing various things in the game, such as:
-
-
Completing quests, events, achievements, and daily commissions.
-
Finding and opening chests, seelies, shrines, and other hidden treasures.
-
Unlocking waypoints, statues, domains, and other landmarks.
-
Completing Spiral Abyss floors and challenges.
-
Claiming daily login rewards and special codes.
-
Purchasing them with real money or Genesis Crystals.
-
-
You can check your Primogem balance by going to the game menu and clicking on Shop.
-
-
This is the end of the article. Thank you for reading!
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Archicad 18 Free Download With Crack 64 Bit.md b/spaces/tioseFevbu/cartoon-converter/scripts/Archicad 18 Free Download With Crack 64 Bit.md
deleted file mode 100644
index 162a20d85c1565128fb2fe23d8fa2f981d8cba70..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Archicad 18 Free Download With Crack 64 Bit.md
+++ /dev/null
@@ -1,66 +0,0 @@
-
-
Archicad 18 Free Download With Crack 64 Bit: A Complete Guide
-
If you are looking for a way to get Archicad 18, one of the best architectural design software, for free with crack, then you have come to the right place. In this article, we will show you how to download, install, and activate Archicad 18 on your computer, as well as how to use it for your projects. We will also provide you with some useful resources and support to help you get started with Archicad 18.
-
What is Archicad 18 and why you need it
-
Archicad 18 is a software developed by Graphisoft that allows you to design and document buildings in a 3D environment. It is a BIM (Building Information Modeling) software that enables you to create accurate and detailed models of your buildings, as well as generate high-quality renderings, animations, and documentation. Archicad 18 is used by architects, engineers, contractors, and designers all over the world for various types of projects, from residential homes to skyscrapers.
Archicad 18 has many features and benefits that make it a powerful and efficient tool for architectural design. Some of the main features and benefits are:
-
-
CineRender: This is a new rendering engine that integrates with Archicad 18 and allows you to create photorealistic images and animations of your buildings. You can use CineRender to enhance your presentations, communicate your design ideas, and impress your clients. CineRender is based on the Cinema 4D engine, which is a professional software for visualization and animation.
-
Revision Management: This is a feature that helps you keep track of the changes in your building model and document them automatically. You can use Revision Management to compare different versions of your model, identify the differences, and assign responsibilities for the changes. This feature also helps you comply with local BIM standards and regulations.
-
Built-in Tools: Archicad 18 has a set of built-in tools that allow you to design, visualize, collaborate, and document your projects. You can use these tools to create complex geometries, apply materials and textures, add annotations and dimensions, create schedules and reports, export and import data, and more.
-
User Interface: Archicad 18 has a user-friendly interface that makes it easy to use and learn. You can customize the interface according to your preferences and workflow. You can also access various tutorials, guides, tips, and support from the Graphisoft website or community.
-
-
Archicad 18 system requirements and compatibility
-
To run Archicad 18 on your computer, you need to meet the following system requirements:
-
-
-
Operating System
Processor
Memory
Graphics Card
Hard Disk Space
-
Windows 10 (64-bit), Windows 8.1 (64-bit), Windows 8 (64-bit), Windows 7 (64-bit), or Mac OS X 10.10 Yosemite, Mac OS X 10.9 Mavericks, Mac OS X 10.8 Mountain Lion
64-bit processor with four or more cores, such as Intel Core i5, i7 or equivalent AMD processors
8 GB RAM or more
Dedicated OpenGL 2.0 compatible graphics card with 1024 MB or more memory
5 GB free disk space or more
-
-
Archicad 18 is compatible with various file formats and software, such as DWG, DXF, IFC, PDF, SKP, 3DS, OBJ, BCF, and more. You can also use Archicad 18 with other Graphisoft products, such as BIMx, BIMcloud, EcoDesigner, and MEP Modeler.
-
How to download Archicad 18 for free with crack
-
If you want to use Archicad 18 for free without paying for a license, you need to download it with a crack. A crack is a file that modifies the original software to bypass the activation process and make it work without restrictions. However, downloading Archicad 18 with a crack is illegal and risky, as it may contain viruses, malware, or spyware that can harm your computer or compromise your data. Therefore, we do not recommend or endorse this method of obtaining Archicad 18.
-
If you still want to proceed with downloading Archicad 18 with a crack, you need to follow these steps:
-
Step 1: Find a reliable download link for Archicad 18
-
The first step is to find a reliable download link for Archicad 18 with a crack. You can search online for websites that offer Archicad 18 downloads with cracks, but be careful of fake or malicious links that may lead you to unwanted or harmful downloads. You can also use torrent sites or peer-to-peer networks to find Archicad 18 downloads with cracks, but be aware of the legal and ethical implications of using these sources.
-
One possible download link for Archicad 18 with a crack is , but we cannot guarantee its safety or functionality. Use it at your own risk and discretion.
-
Step 2: Install Archicad 18 on your computer
-
The second step is to install Archicad 18 on your computer. To do this, you need to extract the downloaded file using a software like WinRAR or 7-Zip. Then, you need to run the setup.exe file and follow the instructions on the screen. You may need to enter a serial number or a product key during the installation process. You can use any random number or key that matches the format of the required input.
-
After the installation is complete, do not run Archicad 18 yet. You need to apply the crack first.
-
Step 3: Apply the crack to activate Archicad 18
-
The third step is to apply the crack to activate Archicad 18. To do this, you need to copy the crack file from the downloaded folder and paste it into the installation folder of Archicad 18. The installation folder is usually located in C:\Program Files\GRAPHISOFT\Archicad 18. You may need to replace the original file with the crack file when prompted.
-
After applying the crack, you can run Archicad 18 and enjoy its full features and functions.
-
How to use Archicad 18 for your architectural projects
-
Now that you have downloaded and installed Archicad 18 with a crack, you can use it for your architectural projects. Here are some tips on how to use Archicad 18 for your projects:
-
Archicad 18 tutorial for beginners
-
If you are new to Archicad 18 or BIM software in general, you may want to start with some basic tutorials that will teach you how to use the software and its tools. You can find many tutorials online, such as , , or . These tutorials will guide you through the steps of creating a simple building model in Archicad 18, from setting up the project parameters to adding walls, doors, windows, roofs, floors, stairs, furniture, and more.
-
Archicad 18 tips and tricks for advanced users
-
If you are already familiar with Archicad 18 or BIM software in general, you may want to learn some tips and tricks that will help you improve your skills and efficiency. You can find many tips and tricks online, such as , , or . These tips and tricks will show you how to use some advanced features and functions of Archic ad 18, such as using CineRender, Revision Management, Built-in Tools, and User Interface.
-
Archicad 18 resources and support
-
If you need more help or guidance on using Archicad 18, you can access various resources and support from Graphisoft or other sources. Some of the resources and support are:
-
-
Graphisoft Help Center: This is the official website of Graphisoft that provides you with online documentation, manuals, videos, FAQs, and forums for Archicad 18 and other products. You can visit the website at .
-
Graphisoft YouTube Channel: This is the official YouTube channel of Graphisoft that provides you with video tutorials, webinars, tips, and showcases for Archicad 18 and other products. You can subscribe to the channel at .
-
Graphisoft Service Select: This is a subscription-based service that provides you with premium support, updates, training, and benefits for Archicad 18 and other products. You can learn more about the service at .
-
Archicad Community: This is a network of Archicad users, experts, partners, and enthusiasts that share their knowledge, experience, and feedback on Archicad 18 and other products. You can join the community at .
-
-
Conclusion
-
In conclusion, Archicad 18 is a powerful and efficient software that allows you to design and document buildings in a 3D environment. It has many features and benefits that make it a great tool for architectural design. However, downloading Archicad 18 with a crack is illegal and risky, as it may expose you to viruses, malware, or spyware. Therefore, we advise you to use Archicad 18 legally and ethically by purchasing a license from Graphisoft or using their trial version. We hope this article has helped you understand how to download, install, and use Archicad 18 for your projects.
-
FAQs
-
Here are some frequently asked questions about Archicad 18:
-
-
What is the difference between Archicad 18 and Archicad 19?
-
Archicad 19 is the latest version of Archicad that was released in June 2023. It has some new features and improvements over Archicad 18, such as faster performance, smoother navigation, enhanced collaboration, improved interoperability, and more. You can compare the two versions at .
-
How can I get a trial version of Archicad 18?
-
You can get a trial version of Archicad 18 by registering on the Graphisoft website at . You will receive an email with a download link and a serial number for the trial version. The trial version is valid for 30 days and has all the features and functions of the full version.
-
How much does Archicad 18 cost?
-
The cost of Archicad 18 depends on the type of license you choose and the region you are in. You can choose between a perpetual license or a rental license. A perpetual license is a one-time purchase that gives you lifetime access to Archicad 18. A rental license is a monthly or yearly subscription that gives you access to Archicad 18 as long as you pay the fee. You can check the prices of Archicad 18 licenses at .
-
Is Archicad 18 compatible with Windows 11?
-
Windows 11 is the upcoming operating system from Microsoft that is expected to be released in late 2023. According to Graphisoft, Archicad 18 is not officially compatible with Windows 11 yet. However, they are working on testing and updating their products to ensure compatibility with Windows 11 as soon as possible. You can check the compatibility status of Archicad 18 with Windows 11 at .
-
Where can I find more articles on Archicad 18?
-
You can find more articles on Archicad 18 on various websites and blogs that cover topics related to architectural design and BIM software. Some examples are , , or . These articles will provide you with more information, insights, reviews, and examples on using Archicad 18 for your projects.
- b2dd77e56b
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Battlefield 2 Euro Force Armored Fury Special Forces Patch Updated.md b/spaces/tioseFevbu/cartoon-converter/scripts/Battlefield 2 Euro Force Armored Fury Special Forces Patch Updated.md
deleted file mode 100644
index eb88c0e2e90f6a1ca3fad34c13b5421ea2a7bdef..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Battlefield 2 Euro Force Armored Fury Special Forces Patch Updated.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
Battlefield 2 Euro Force Armored Fury Special Forces Patch | Updated
-
-
If you are a fan of Battlefield 2, you might want to check out the latest patch that includes the Armored Fury and Euro Force expansions for free. This patch also removes the disc check DRM, adds widescreen support, and fixes a crash issue when switching to desktop. Here is everything you need to know about the Battlefield 2 Euro Force Armored Fury Special Forces Patch.
-
-
What are the expansions?
-
-
The Armored Fury expansion adds three new maps set in the United States, featuring rural and urban environments. It also introduces new vehicles such as attack jets, helicopters, and tanks. The Euro Force expansion adds three new maps set in Europe, featuring a variety of terrains such as mountains, forests, and cities. It also introduces new weapons and equipment for the European Union faction.
-
Battlefield 2 Euro Force Armored Fury Special Forces Patch | updated
To install the patch, you need to have version 1.41 of Battlefield 2 installed beforehand. You can download it from here. Then, you can download the patch 1.50 from here. Run the executable file and follow the instructions. After that, you can download the patch 1.50 hotfix from here. Run the executable file and follow the instructions. You are now ready to play.
-
-
How to play online?
-
-
Since the official servers for Battlefield 2 have been shut down, you need to use a third-party service to play online. One of the most popular ones is Battlelog.co. You need to create an account and download their launcher. Then, you can browse and join servers from their website or from the game itself.
-
-
Conclusion
-
-
Battlefield 2 is one of the best military shooters ever made, and with this patch, you can enjoy it even more. The expansions add new maps, vehicles, weapons, and factions to the game, making it more diverse and fun. The patch also improves the performance and compatibility of the game, making it smoother and easier to run. If you want to experience Battlefield 2 in its full glory, you should definitely download this patch.
-
-
How to play Special Forces?
-
-
Special Forces is an expansion pack for Battlefield 2 that adds new maps, modes, weapons, and gadgets to the game. To play Special Forces, you need to have Battlefield 2 installed and activated. Then, you need to follow these steps:
-
-
-
Open Battlefield 2 and log in to your Battlefield 2 account.
-
Once logged in, click the Community Button, then Custom Games.
-
Select the "xPack" game, which should be "Battlefield 2: Special Forces" and click Activate in the lower right. You will then be asked to restart the game.
-
After restarting the game, you can access the Special Forces content from the main menu. You can play singleplayer or multiplayer with the new maps and modes.
-
-
-
What are some tips and tricks for Special Forces?
-
-
Special Forces is a challenging and exciting expansion pack that requires some skills and strategies to master. Here are some tips and tricks to help you out:
-
-
-
-
Use the new gadgets wisely. The grappling hook and the zip line can help you reach high places or escape from enemies. The tear gas and the flashbangs can blind and disorient your foes. The night vision goggles can help you see in the dark.
-
Be stealthy and silent. Many of the maps in Special Forces are set at night or in dark areas. You can use this to your advantage by hiding in the shadows and using silenced weapons. You can also use the knife or the shock paddles to take out enemies quietly.
-
Be aware of your surroundings. Some of the maps in Special Forces have environmental hazards such as fire, explosions, or traps. You can use these to damage or distract your enemies, but be careful not to get caught in them yourself.
-
Be flexible and adaptable. The maps and modes in Special Forces are varied and unpredictable. You may have to switch between different weapons, vehicles, and tactics depending on the situation. You may also have to cooperate with your teammates or go solo depending on the objective.
-
-
-
Conclusion
-
-
Special Forces is a great expansion pack for Battlefield 2 that adds more depth and diversity to the game. It offers new challenges and opportunities for both veterans and newcomers of the series. If you want to experience a different side of Battlefield 2, you should definitely try out Special Forces.
cec2833e83
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Dishkiyaoon Movie 1080p Download Free Utorrent.md b/spaces/tioseFevbu/cartoon-converter/scripts/Dishkiyaoon Movie 1080p Download Free Utorrent.md
deleted file mode 100644
index ea4f5421e0ae27adba19cfe49069a03e026608c0..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Dishkiyaoon Movie 1080p Download Free Utorrent.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Dishkiyaoon Movie 1080p Download Utorrent":
-
-
Dishkiyaoon Movie 1080p Download Utorrent: How to Watch the Action Thriller Online
-
-
Dishkiyaoon is a 2014 Hindi-language crime action film directed by Sanamjit Singh Talwar and produced by Shilpa Shetty and Raj Kundra along with Eros International. The film features Sunny Deol, Harman Baweja, Ayesha Khanna, Aditya Pancholi and Prashant Narayanan in the lead roles. The film is based on the Mumbai underworld and follows the story of Viki, a scarred man who chooses to be a gangster against his father's wishes. [^3^]
If you are looking for a way to watch Dishkiyaoon movie online, you might be tempted to download it from torrent sites like UHDMovies.in or GALAXY OF JOBS. However, this is not a safe or legal option as you might end up downloading malware or violating copyright laws. [^1^] [^2^]
-
-
The best way to watch Dishkiyaoon movie online is to stream it on a legal and reliable platform like ZEE5. ZEE5 is a popular OTT service that offers a wide range of movies, shows, originals and live TV channels in various languages. You can watch Dishkiyaoon movie on ZEE5 with a subscription plan that suits your budget and preferences. [^5^]
-
-
Here are some of the benefits of watching Dishkiyaoon movie on ZEE5:
-
-
-
-
You can enjoy the movie in high quality 1080p resolution without any buffering or interruptions.
-
You can watch the movie anytime and anywhere on your preferred device like smartphone, tablet, laptop or smart TV.
-
You can access other content on ZEE5 like the official trailer, songs, behind-the-scenes videos and interviews of the cast and crew of Dishkiyaoon. [^4^]
-
You can also explore other genres and categories of movies and shows on ZEE5 like action, comedy, drama, romance, thriller, horror, biopic, historical, sports and more.
-
You can support the filmmakers and artists who worked hard to create Dishkiyaoon by paying for their content legally.
-
-
-
So what are you waiting for? Subscribe to ZEE5 today and watch Dishkiyaoon movie online in 1080p quality. You will not regret it!
Here are a few more paragraphs for the article:
-
-
Now that you know how to watch Dishkiyaoon movie online legally and safely, you might be wondering what makes this movie worth watching. Here are some of the reasons why Dishkiyaoon is a must-watch for action thriller fans:
-
-
-
The movie has a gripping plot that keeps you hooked till the end. It shows the dark and gritty side of the Mumbai underworld and the struggles of a man who wants to rise above it.
-
The movie has some stellar performances by the lead actors, especially Harman Baweja, who plays the role of Viki with conviction and intensity. Sunny Deol also delivers a powerful performance as Lakwa, a ruthless gangster who mentors Viki.
-
The movie has some amazing action sequences that are realistic and thrilling. The movie also has a good dose of humor and romance to balance the violence and drama.
-
The movie has a catchy soundtrack that complements the mood and tone of the film. The songs are composed by Sneha Khanwalkar, Palash Muchhal, Prashant Narayanan and White Noise Production. Some of the popular songs are "Tu Hi Hai Aashiqui", "Nachle Tu" and "Tutey".
-
-
-
So if you are looking for a movie that will keep you on the edge of your seat and entertain you with its action, drama, humor and romance, then Dishkiyaoon is the perfect choice for you. Don't miss this opportunity to watch this movie online on ZEE5.
-
-
Before you go, here are some tips for writing SEO-optimized articles that will help you rank higher on Google and other search engines:
-
-
-
Use keyword-rich titles and descriptions. Make sure your titles and descriptions contain the keywords you want to target. Your title should be catchy and descriptive, while your description should summarize the main points of your article. [^4^]
-
Use related keywords throughout your article. Don't just focus on one keyword, but use synonyms and variations that are relevant to your topic. This will help you cover more search queries and avoid keyword stuffing. [^1^]
-
Use headings and subheadings to structure your article. Headings and subheadings help search engines understand the hierarchy and organization of your content. They also make your article easier to read and scan for your readers. Use
tags for your main title,
tags for your main headings, and
tags for your subheadings. Include keywords in your headings and subheadings as well. [^1^]
-
Use images and videos to enhance your article. Images and videos can make your article more engaging and appealing to your readers. They can also help you explain complex concepts or show examples. However, make sure you optimize your images and videos for SEO as well. Use relevant file names, alt text, captions, and titles that include keywords. [^1^]
-
Use internal and external links to boost your authority and relevance. Internal links are links that point to other pages or posts on your website. External links are links that point to other websites or sources that support your claims or provide more information. Both types of links can help you improve your SEO by showing search engines that your content is valuable and trustworthy. However, make sure you use high-quality and relevant links that add value to your readers. [^1^]
-
-
-
By following these tips, you can write SEO-optimized articles that will rank higher on Google and other search engines, drive more traffic to your website, and generate more leads and conversions for your business.
-
-
I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.
Euro Truck Simulator 5: A New Generation of Trucking Simulation
-
Euro Truck Simulator 5 is the latest installment in the popular truck simulation game series by SCS Software. Released in 2015, this game features a revamped graphics engine, new trucks and trailers, more realistic physics and damage, and a huge map of Europe with over 60 cities to explore.
-
The game also includes 31 downloadable content (DLC) packs that add more countries, cargo, paint jobs, accessories, and tuning options to customize your truck. You can also download mods from the Steam Workshop or other sources to enhance your gameplay experience.
-
Euro Truck Simulator 5 [v 1.25.2.8s 31 DLC] (2015) Repack
If you are looking for a repack version of Euro Truck Simulator 5 that includes all the updates and DLCs, you can find it online from various sources. However, be careful of viruses and malware that may harm your computer. Always scan the files before installing them and use a reputable antivirus software.
-
Euro Truck Simulator 5 is a game that will challenge your driving skills and endurance as you deliver important cargo across impressive distances. You will also enjoy the beautiful scenery and landmarks of Europe as you travel along major trunk routes and visit famous places based on real roads and cities[^2^] [^3^].
-
Whether you are a fan of trucking simulation games or just looking for a relaxing and immersive way to explore Europe, Euro Truck Simulator 5 is a game that you should not miss.
-
-
Euro Truck Simulator 5 offers a realistic and immersive gameplay experience that will make you feel like a real trucker. You can choose from a variety of trucks from 7 major European truck manufacturers, including official license from MAN[^2^]. You can also customize your truck with different paint jobs, accessories, and tuning options to suit your style and preferences.
-
As you drive across Europe, you will encounter different road conditions, weather effects, traffic situations, and day and night cycles. You will also have to obey the traffic rules and regulations, such as speed limits, traffic lights, tolls, and weigh stations. You will also have to manage your fuel, fatigue, and cargo damage levels. If you commit a traffic offense or damage your truck or cargo, you will have to pay fines or repair costs.
-
The game also features a career mode where you can start your own trucking company and hire drivers to work for you. You can buy new trucks and garages, expand your business, and earn more money. You can also take on different types of cargo, such as food, chemicals, cars, machinery, and more. You can also unlock new skills and perks that will improve your performance and reputation.
-
If you want to play with other players online, you can join the TruckersMP mod[^1^] that allows you to drive with thousands of other truckers on dedicated servers. You can chat with other players, join convoys, participate in events, and more. You can also download other mods from the Steam Workshop or other sources that add new maps, trucks, trailers, sounds, graphics, and more.
- 7196e7f11a
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/distutils_args.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/distutils_args.py
deleted file mode 100644
index 2fd1862073f55d5551fc2c1bc1e9eaaed0c0e877..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/utils/distutils_args.py
+++ /dev/null
@@ -1,43 +0,0 @@
-from getopt import GetoptError, getopt
-from typing import Dict, List
-
-_options = [
- "exec-prefix=",
- "home=",
- "install-base=",
- "install-data=",
- "install-headers=",
- "install-lib=",
- "install-platlib=",
- "install-purelib=",
- "install-scripts=",
- "prefix=",
- "root=",
- "user",
-]
-
-
-def parse_distutils_args(args: List[str]) -> Dict[str, str]:
- """Parse provided arguments, returning an object that has the matched arguments.
-
- Any unknown arguments are ignored.
- """
- result = {}
- for arg in args:
- try:
- parsed_opt, _ = getopt(args=[arg], shortopts="", longopts=_options)
- except GetoptError:
- # We don't care about any other options, which here may be
- # considered unrecognized since our option list is not
- # exhaustive.
- continue
-
- if not parsed_opt:
- continue
-
- option = parsed_opt[0]
- name_from_parsed = option[0][2:].replace("-", "_")
- value_from_parsed = option[1] or "true"
- result[name_from_parsed] = value_from_parsed
-
- return result
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_manylinux.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_manylinux.py
deleted file mode 100644
index 4c379aa6f69ff56c8f19612002c6e3e939ea6012..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_manylinux.py
+++ /dev/null
@@ -1,301 +0,0 @@
-import collections
-import functools
-import os
-import re
-import struct
-import sys
-import warnings
-from typing import IO, Dict, Iterator, NamedTuple, Optional, Tuple
-
-
-# Python does not provide platform information at sufficient granularity to
-# identify the architecture of the running executable in some cases, so we
-# determine it dynamically by reading the information from the running
-# process. This only applies on Linux, which uses the ELF format.
-class _ELFFileHeader:
- # https://en.wikipedia.org/wiki/Executable_and_Linkable_Format#File_header
- class _InvalidELFFileHeader(ValueError):
- """
- An invalid ELF file header was found.
- """
-
- ELF_MAGIC_NUMBER = 0x7F454C46
- ELFCLASS32 = 1
- ELFCLASS64 = 2
- ELFDATA2LSB = 1
- ELFDATA2MSB = 2
- EM_386 = 3
- EM_S390 = 22
- EM_ARM = 40
- EM_X86_64 = 62
- EF_ARM_ABIMASK = 0xFF000000
- EF_ARM_ABI_VER5 = 0x05000000
- EF_ARM_ABI_FLOAT_HARD = 0x00000400
-
- def __init__(self, file: IO[bytes]) -> None:
- def unpack(fmt: str) -> int:
- try:
- data = file.read(struct.calcsize(fmt))
- result: Tuple[int, ...] = struct.unpack(fmt, data)
- except struct.error:
- raise _ELFFileHeader._InvalidELFFileHeader()
- return result[0]
-
- self.e_ident_magic = unpack(">I")
- if self.e_ident_magic != self.ELF_MAGIC_NUMBER:
- raise _ELFFileHeader._InvalidELFFileHeader()
- self.e_ident_class = unpack("B")
- if self.e_ident_class not in {self.ELFCLASS32, self.ELFCLASS64}:
- raise _ELFFileHeader._InvalidELFFileHeader()
- self.e_ident_data = unpack("B")
- if self.e_ident_data not in {self.ELFDATA2LSB, self.ELFDATA2MSB}:
- raise _ELFFileHeader._InvalidELFFileHeader()
- self.e_ident_version = unpack("B")
- self.e_ident_osabi = unpack("B")
- self.e_ident_abiversion = unpack("B")
- self.e_ident_pad = file.read(7)
- format_h = "H"
- format_i = "I"
- format_q = "Q"
- format_p = format_i if self.e_ident_class == self.ELFCLASS32 else format_q
- self.e_type = unpack(format_h)
- self.e_machine = unpack(format_h)
- self.e_version = unpack(format_i)
- self.e_entry = unpack(format_p)
- self.e_phoff = unpack(format_p)
- self.e_shoff = unpack(format_p)
- self.e_flags = unpack(format_i)
- self.e_ehsize = unpack(format_h)
- self.e_phentsize = unpack(format_h)
- self.e_phnum = unpack(format_h)
- self.e_shentsize = unpack(format_h)
- self.e_shnum = unpack(format_h)
- self.e_shstrndx = unpack(format_h)
-
-
-def _get_elf_header() -> Optional[_ELFFileHeader]:
- try:
- with open(sys.executable, "rb") as f:
- elf_header = _ELFFileHeader(f)
- except (OSError, TypeError, _ELFFileHeader._InvalidELFFileHeader):
- return None
- return elf_header
-
-
-def _is_linux_armhf() -> bool:
- # hard-float ABI can be detected from the ELF header of the running
- # process
- # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf
- elf_header = _get_elf_header()
- if elf_header is None:
- return False
- result = elf_header.e_ident_class == elf_header.ELFCLASS32
- result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB
- result &= elf_header.e_machine == elf_header.EM_ARM
- result &= (
- elf_header.e_flags & elf_header.EF_ARM_ABIMASK
- ) == elf_header.EF_ARM_ABI_VER5
- result &= (
- elf_header.e_flags & elf_header.EF_ARM_ABI_FLOAT_HARD
- ) == elf_header.EF_ARM_ABI_FLOAT_HARD
- return result
-
-
-def _is_linux_i686() -> bool:
- elf_header = _get_elf_header()
- if elf_header is None:
- return False
- result = elf_header.e_ident_class == elf_header.ELFCLASS32
- result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB
- result &= elf_header.e_machine == elf_header.EM_386
- return result
-
-
-def _have_compatible_abi(arch: str) -> bool:
- if arch == "armv7l":
- return _is_linux_armhf()
- if arch == "i686":
- return _is_linux_i686()
- return arch in {"x86_64", "aarch64", "ppc64", "ppc64le", "s390x"}
-
-
-# If glibc ever changes its major version, we need to know what the last
-# minor version was, so we can build the complete list of all versions.
-# For now, guess what the highest minor version might be, assume it will
-# be 50 for testing. Once this actually happens, update the dictionary
-# with the actual value.
-_LAST_GLIBC_MINOR: Dict[int, int] = collections.defaultdict(lambda: 50)
-
-
-class _GLibCVersion(NamedTuple):
- major: int
- minor: int
-
-
-def _glibc_version_string_confstr() -> Optional[str]:
- """
- Primary implementation of glibc_version_string using os.confstr.
- """
- # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely
- # to be broken or missing. This strategy is used in the standard library
- # platform module.
- # https://github.com/python/cpython/blob/fcf1d003bf4f0100c/Lib/platform.py#L175-L183
- try:
- # os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17".
- version_string = os.confstr("CS_GNU_LIBC_VERSION")
- assert version_string is not None
- _, version = version_string.split()
- except (AssertionError, AttributeError, OSError, ValueError):
- # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)...
- return None
- return version
-
-
-def _glibc_version_string_ctypes() -> Optional[str]:
- """
- Fallback implementation of glibc_version_string using ctypes.
- """
- try:
- import ctypes
- except ImportError:
- return None
-
- # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen
- # manpage says, "If filename is NULL, then the returned handle is for the
- # main program". This way we can let the linker do the work to figure out
- # which libc our process is actually using.
- #
- # We must also handle the special case where the executable is not a
- # dynamically linked executable. This can occur when using musl libc,
- # for example. In this situation, dlopen() will error, leading to an
- # OSError. Interestingly, at least in the case of musl, there is no
- # errno set on the OSError. The single string argument used to construct
- # OSError comes from libc itself and is therefore not portable to
- # hard code here. In any case, failure to call dlopen() means we
- # can proceed, so we bail on our attempt.
- try:
- process_namespace = ctypes.CDLL(None)
- except OSError:
- return None
-
- try:
- gnu_get_libc_version = process_namespace.gnu_get_libc_version
- except AttributeError:
- # Symbol doesn't exist -> therefore, we are not linked to
- # glibc.
- return None
-
- # Call gnu_get_libc_version, which returns a string like "2.5"
- gnu_get_libc_version.restype = ctypes.c_char_p
- version_str: str = gnu_get_libc_version()
- # py2 / py3 compatibility:
- if not isinstance(version_str, str):
- version_str = version_str.decode("ascii")
-
- return version_str
-
-
-def _glibc_version_string() -> Optional[str]:
- """Returns glibc version string, or None if not using glibc."""
- return _glibc_version_string_confstr() or _glibc_version_string_ctypes()
-
-
-def _parse_glibc_version(version_str: str) -> Tuple[int, int]:
- """Parse glibc version.
-
- We use a regexp instead of str.split because we want to discard any
- random junk that might come after the minor version -- this might happen
- in patched/forked versions of glibc (e.g. Linaro's version of glibc
- uses version strings like "2.20-2014.11"). See gh-3588.
- """
- m = re.match(r"(?P[0-9]+)\.(?P[0-9]+)", version_str)
- if not m:
- warnings.warn(
- "Expected glibc version with 2 components major.minor,"
- " got: %s" % version_str,
- RuntimeWarning,
- )
- return -1, -1
- return int(m.group("major")), int(m.group("minor"))
-
-
-@functools.lru_cache()
-def _get_glibc_version() -> Tuple[int, int]:
- version_str = _glibc_version_string()
- if version_str is None:
- return (-1, -1)
- return _parse_glibc_version(version_str)
-
-
-# From PEP 513, PEP 600
-def _is_compatible(name: str, arch: str, version: _GLibCVersion) -> bool:
- sys_glibc = _get_glibc_version()
- if sys_glibc < version:
- return False
- # Check for presence of _manylinux module.
- try:
- import _manylinux # noqa
- except ImportError:
- return True
- if hasattr(_manylinux, "manylinux_compatible"):
- result = _manylinux.manylinux_compatible(version[0], version[1], arch)
- if result is not None:
- return bool(result)
- return True
- if version == _GLibCVersion(2, 5):
- if hasattr(_manylinux, "manylinux1_compatible"):
- return bool(_manylinux.manylinux1_compatible)
- if version == _GLibCVersion(2, 12):
- if hasattr(_manylinux, "manylinux2010_compatible"):
- return bool(_manylinux.manylinux2010_compatible)
- if version == _GLibCVersion(2, 17):
- if hasattr(_manylinux, "manylinux2014_compatible"):
- return bool(_manylinux.manylinux2014_compatible)
- return True
-
-
-_LEGACY_MANYLINUX_MAP = {
- # CentOS 7 w/ glibc 2.17 (PEP 599)
- (2, 17): "manylinux2014",
- # CentOS 6 w/ glibc 2.12 (PEP 571)
- (2, 12): "manylinux2010",
- # CentOS 5 w/ glibc 2.5 (PEP 513)
- (2, 5): "manylinux1",
-}
-
-
-def platform_tags(linux: str, arch: str) -> Iterator[str]:
- if not _have_compatible_abi(arch):
- return
- # Oldest glibc to be supported regardless of architecture is (2, 17).
- too_old_glibc2 = _GLibCVersion(2, 16)
- if arch in {"x86_64", "i686"}:
- # On x86/i686 also oldest glibc to be supported is (2, 5).
- too_old_glibc2 = _GLibCVersion(2, 4)
- current_glibc = _GLibCVersion(*_get_glibc_version())
- glibc_max_list = [current_glibc]
- # We can assume compatibility across glibc major versions.
- # https://sourceware.org/bugzilla/show_bug.cgi?id=24636
- #
- # Build a list of maximum glibc versions so that we can
- # output the canonical list of all glibc from current_glibc
- # down to too_old_glibc2, including all intermediary versions.
- for glibc_major in range(current_glibc.major - 1, 1, -1):
- glibc_minor = _LAST_GLIBC_MINOR[glibc_major]
- glibc_max_list.append(_GLibCVersion(glibc_major, glibc_minor))
- for glibc_max in glibc_max_list:
- if glibc_max.major == too_old_glibc2.major:
- min_minor = too_old_glibc2.minor
- else:
- # For other glibc major versions oldest supported is (x, 0).
- min_minor = -1
- for glibc_minor in range(glibc_max.minor, min_minor, -1):
- glibc_version = _GLibCVersion(glibc_max.major, glibc_minor)
- tag = "manylinux_{}_{}".format(*glibc_version)
- if _is_compatible(tag, arch, glibc_version):
- yield linux.replace("linux", tag)
- # Handle the legacy manylinux1, manylinux2010, manylinux2014 tags.
- if glibc_version in _LEGACY_MANYLINUX_MAP:
- legacy_tag = _LEGACY_MANYLINUX_MAP[glibc_version]
- if _is_compatible(legacy_tag, arch, glibc_version):
- yield linux.replace("linux", legacy_tag)
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/core.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/core.py
deleted file mode 100644
index 6ff3c766f7dd9f4111cbd9d2a5f668e4435798b5..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/core.py
+++ /dev/null
@@ -1,5814 +0,0 @@
-#
-# core.py
-#
-import os
-import typing
-from typing import (
- NamedTuple,
- Union,
- Callable,
- Any,
- Generator,
- Tuple,
- List,
- TextIO,
- Set,
- Sequence,
-)
-from abc import ABC, abstractmethod
-from enum import Enum
-import string
-import copy
-import warnings
-import re
-import sys
-from collections.abc import Iterable
-import traceback
-import types
-from operator import itemgetter
-from functools import wraps
-from threading import RLock
-from pathlib import Path
-
-from .util import (
- _FifoCache,
- _UnboundedCache,
- __config_flags,
- _collapse_string_to_ranges,
- _escape_regex_range_chars,
- _bslash,
- _flatten,
- LRUMemo as _LRUMemo,
- UnboundedMemo as _UnboundedMemo,
-)
-from .exceptions import *
-from .actions import *
-from .results import ParseResults, _ParseResultsWithOffset
-from .unicode import pyparsing_unicode
-
-_MAX_INT = sys.maxsize
-str_type: Tuple[type, ...] = (str, bytes)
-
-#
-# Copyright (c) 2003-2022 Paul T. McGuire
-#
-# Permission is hereby granted, free of charge, to any person obtaining
-# a copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish,
-# distribute, sublicense, and/or sell copies of the Software, and to
-# permit persons to whom the Software is furnished to do so, subject to
-# the following conditions:
-#
-# The above copyright notice and this permission notice shall be
-# included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-#
-
-
-if sys.version_info >= (3, 8):
- from functools import cached_property
-else:
-
- class cached_property:
- def __init__(self, func):
- self._func = func
-
- def __get__(self, instance, owner=None):
- ret = instance.__dict__[self._func.__name__] = self._func(instance)
- return ret
-
-
-class __compat__(__config_flags):
- """
- A cross-version compatibility configuration for pyparsing features that will be
- released in a future version. By setting values in this configuration to True,
- those features can be enabled in prior versions for compatibility development
- and testing.
-
- - ``collect_all_And_tokens`` - flag to enable fix for Issue #63 that fixes erroneous grouping
- of results names when an :class:`And` expression is nested within an :class:`Or` or :class:`MatchFirst`;
- maintained for compatibility, but setting to ``False`` no longer restores pre-2.3.1
- behavior
- """
-
- _type_desc = "compatibility"
-
- collect_all_And_tokens = True
-
- _all_names = [__ for __ in locals() if not __.startswith("_")]
- _fixed_names = """
- collect_all_And_tokens
- """.split()
-
-
-class __diag__(__config_flags):
- _type_desc = "diagnostic"
-
- warn_multiple_tokens_in_named_alternation = False
- warn_ungrouped_named_tokens_in_collection = False
- warn_name_set_on_empty_Forward = False
- warn_on_parse_using_empty_Forward = False
- warn_on_assignment_to_Forward = False
- warn_on_multiple_string_args_to_oneof = False
- warn_on_match_first_with_lshift_operator = False
- enable_debug_on_named_expressions = False
-
- _all_names = [__ for __ in locals() if not __.startswith("_")]
- _warning_names = [name for name in _all_names if name.startswith("warn")]
- _debug_names = [name for name in _all_names if name.startswith("enable_debug")]
-
- @classmethod
- def enable_all_warnings(cls) -> None:
- for name in cls._warning_names:
- cls.enable(name)
-
-
-class Diagnostics(Enum):
- """
- Diagnostic configuration (all default to disabled)
- - ``warn_multiple_tokens_in_named_alternation`` - flag to enable warnings when a results
- name is defined on a :class:`MatchFirst` or :class:`Or` expression with one or more :class:`And` subexpressions
- - ``warn_ungrouped_named_tokens_in_collection`` - flag to enable warnings when a results
- name is defined on a containing expression with ungrouped subexpressions that also
- have results names
- - ``warn_name_set_on_empty_Forward`` - flag to enable warnings when a :class:`Forward` is defined
- with a results name, but has no contents defined
- - ``warn_on_parse_using_empty_Forward`` - flag to enable warnings when a :class:`Forward` is
- defined in a grammar but has never had an expression attached to it
- - ``warn_on_assignment_to_Forward`` - flag to enable warnings when a :class:`Forward` is defined
- but is overwritten by assigning using ``'='`` instead of ``'<<='`` or ``'<<'``
- - ``warn_on_multiple_string_args_to_oneof`` - flag to enable warnings when :class:`one_of` is
- incorrectly called with multiple str arguments
- - ``enable_debug_on_named_expressions`` - flag to auto-enable debug on all subsequent
- calls to :class:`ParserElement.set_name`
-
- Diagnostics are enabled/disabled by calling :class:`enable_diag` and :class:`disable_diag`.
- All warnings can be enabled by calling :class:`enable_all_warnings`.
- """
-
- warn_multiple_tokens_in_named_alternation = 0
- warn_ungrouped_named_tokens_in_collection = 1
- warn_name_set_on_empty_Forward = 2
- warn_on_parse_using_empty_Forward = 3
- warn_on_assignment_to_Forward = 4
- warn_on_multiple_string_args_to_oneof = 5
- warn_on_match_first_with_lshift_operator = 6
- enable_debug_on_named_expressions = 7
-
-
-def enable_diag(diag_enum: Diagnostics) -> None:
- """
- Enable a global pyparsing diagnostic flag (see :class:`Diagnostics`).
- """
- __diag__.enable(diag_enum.name)
-
-
-def disable_diag(diag_enum: Diagnostics) -> None:
- """
- Disable a global pyparsing diagnostic flag (see :class:`Diagnostics`).
- """
- __diag__.disable(diag_enum.name)
-
-
-def enable_all_warnings() -> None:
- """
- Enable all global pyparsing diagnostic warnings (see :class:`Diagnostics`).
- """
- __diag__.enable_all_warnings()
-
-
-# hide abstract class
-del __config_flags
-
-
-def _should_enable_warnings(
- cmd_line_warn_options: typing.Iterable[str], warn_env_var: typing.Optional[str]
-) -> bool:
- enable = bool(warn_env_var)
- for warn_opt in cmd_line_warn_options:
- w_action, w_message, w_category, w_module, w_line = (warn_opt + "::::").split(
- ":"
- )[:5]
- if not w_action.lower().startswith("i") and (
- not (w_message or w_category or w_module) or w_module == "pyparsing"
- ):
- enable = True
- elif w_action.lower().startswith("i") and w_module in ("pyparsing", ""):
- enable = False
- return enable
-
-
-if _should_enable_warnings(
- sys.warnoptions, os.environ.get("PYPARSINGENABLEALLWARNINGS")
-):
- enable_all_warnings()
-
-
-# build list of single arg builtins, that can be used as parse actions
-_single_arg_builtins = {
- sum,
- len,
- sorted,
- reversed,
- list,
- tuple,
- set,
- any,
- all,
- min,
- max,
-}
-
-_generatorType = types.GeneratorType
-ParseAction = Union[
- Callable[[], Any],
- Callable[[ParseResults], Any],
- Callable[[int, ParseResults], Any],
- Callable[[str, int, ParseResults], Any],
-]
-ParseCondition = Union[
- Callable[[], bool],
- Callable[[ParseResults], bool],
- Callable[[int, ParseResults], bool],
- Callable[[str, int, ParseResults], bool],
-]
-ParseFailAction = Callable[[str, int, "ParserElement", Exception], None]
-DebugStartAction = Callable[[str, int, "ParserElement", bool], None]
-DebugSuccessAction = Callable[
- [str, int, int, "ParserElement", ParseResults, bool], None
-]
-DebugExceptionAction = Callable[[str, int, "ParserElement", Exception, bool], None]
-
-
-alphas = string.ascii_uppercase + string.ascii_lowercase
-identchars = pyparsing_unicode.Latin1.identchars
-identbodychars = pyparsing_unicode.Latin1.identbodychars
-nums = "0123456789"
-hexnums = nums + "ABCDEFabcdef"
-alphanums = alphas + nums
-printables = "".join([c for c in string.printable if c not in string.whitespace])
-
-_trim_arity_call_line: traceback.StackSummary = None
-
-
-def _trim_arity(func, max_limit=3):
- """decorator to trim function calls to match the arity of the target"""
- global _trim_arity_call_line
-
- if func in _single_arg_builtins:
- return lambda s, l, t: func(t)
-
- limit = 0
- found_arity = False
-
- def extract_tb(tb, limit=0):
- frames = traceback.extract_tb(tb, limit=limit)
- frame_summary = frames[-1]
- return [frame_summary[:2]]
-
- # synthesize what would be returned by traceback.extract_stack at the call to
- # user's parse action 'func', so that we don't incur call penalty at parse time
-
- # fmt: off
- LINE_DIFF = 7
- # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND
- # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!!
- _trim_arity_call_line = (_trim_arity_call_line or traceback.extract_stack(limit=2)[-1])
- pa_call_line_synth = (_trim_arity_call_line[0], _trim_arity_call_line[1] + LINE_DIFF)
-
- def wrapper(*args):
- nonlocal found_arity, limit
- while 1:
- try:
- ret = func(*args[limit:])
- found_arity = True
- return ret
- except TypeError as te:
- # re-raise TypeErrors if they did not come from our arity testing
- if found_arity:
- raise
- else:
- tb = te.__traceback__
- trim_arity_type_error = (
- extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth
- )
- del tb
-
- if trim_arity_type_error:
- if limit < max_limit:
- limit += 1
- continue
-
- raise
- # fmt: on
-
- # copy func name to wrapper for sensible debug output
- # (can't use functools.wraps, since that messes with function signature)
- func_name = getattr(func, "__name__", getattr(func, "__class__").__name__)
- wrapper.__name__ = func_name
- wrapper.__doc__ = func.__doc__
-
- return wrapper
-
-
-def condition_as_parse_action(
- fn: ParseCondition, message: str = None, fatal: bool = False
-) -> ParseAction:
- """
- Function to convert a simple predicate function that returns ``True`` or ``False``
- into a parse action. Can be used in places when a parse action is required
- and :class:`ParserElement.add_condition` cannot be used (such as when adding a condition
- to an operator level in :class:`infix_notation`).
-
- Optional keyword arguments:
-
- - ``message`` - define a custom message to be used in the raised exception
- - ``fatal`` - if True, will raise :class:`ParseFatalException` to stop parsing immediately;
- otherwise will raise :class:`ParseException`
-
- """
- msg = message if message is not None else "failed user-defined condition"
- exc_type = ParseFatalException if fatal else ParseException
- fn = _trim_arity(fn)
-
- @wraps(fn)
- def pa(s, l, t):
- if not bool(fn(s, l, t)):
- raise exc_type(s, l, msg)
-
- return pa
-
-
-def _default_start_debug_action(
- instring: str, loc: int, expr: "ParserElement", cache_hit: bool = False
-):
- cache_hit_str = "*" if cache_hit else ""
- print(
- (
- "{}Match {} at loc {}({},{})\n {}\n {}^".format(
- cache_hit_str,
- expr,
- loc,
- lineno(loc, instring),
- col(loc, instring),
- line(loc, instring),
- " " * (col(loc, instring) - 1),
- )
- )
- )
-
-
-def _default_success_debug_action(
- instring: str,
- startloc: int,
- endloc: int,
- expr: "ParserElement",
- toks: ParseResults,
- cache_hit: bool = False,
-):
- cache_hit_str = "*" if cache_hit else ""
- print("{}Matched {} -> {}".format(cache_hit_str, expr, toks.as_list()))
-
-
-def _default_exception_debug_action(
- instring: str,
- loc: int,
- expr: "ParserElement",
- exc: Exception,
- cache_hit: bool = False,
-):
- cache_hit_str = "*" if cache_hit else ""
- print(
- "{}Match {} failed, {} raised: {}".format(
- cache_hit_str, expr, type(exc).__name__, exc
- )
- )
-
-
-def null_debug_action(*args):
- """'Do-nothing' debug action, to suppress debugging output during parsing."""
-
-
-class ParserElement(ABC):
- """Abstract base level parser element class."""
-
- DEFAULT_WHITE_CHARS: str = " \n\t\r"
- verbose_stacktrace: bool = False
- _literalStringClass: typing.Optional[type] = None
-
- @staticmethod
- def set_default_whitespace_chars(chars: str) -> None:
- r"""
- Overrides the default whitespace chars
-
- Example::
-
- # default whitespace chars are space, and newline
- Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl']
-
- # change to just treat newline as significant
- ParserElement.set_default_whitespace_chars(" \t")
- Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def']
- """
- ParserElement.DEFAULT_WHITE_CHARS = chars
-
- # update whitespace all parse expressions defined in this module
- for expr in _builtin_exprs:
- if expr.copyDefaultWhiteChars:
- expr.whiteChars = set(chars)
-
- @staticmethod
- def inline_literals_using(cls: type) -> None:
- """
- Set class to be used for inclusion of string literals into a parser.
-
- Example::
-
- # default literal class used is Literal
- integer = Word(nums)
- date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
-
- date_str.parse_string("1999/12/31") # -> ['1999', '/', '12', '/', '31']
-
-
- # change to Suppress
- ParserElement.inline_literals_using(Suppress)
- date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
-
- date_str.parse_string("1999/12/31") # -> ['1999', '12', '31']
- """
- ParserElement._literalStringClass = cls
-
- class DebugActions(NamedTuple):
- debug_try: typing.Optional[DebugStartAction]
- debug_match: typing.Optional[DebugSuccessAction]
- debug_fail: typing.Optional[DebugExceptionAction]
-
- def __init__(self, savelist: bool = False):
- self.parseAction: List[ParseAction] = list()
- self.failAction: typing.Optional[ParseFailAction] = None
- self.customName = None
- self._defaultName = None
- self.resultsName = None
- self.saveAsList = savelist
- self.skipWhitespace = True
- self.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS)
- self.copyDefaultWhiteChars = True
- # used when checking for left-recursion
- self.mayReturnEmpty = False
- self.keepTabs = False
- self.ignoreExprs: List["ParserElement"] = list()
- self.debug = False
- self.streamlined = False
- # optimize exception handling for subclasses that don't advance parse index
- self.mayIndexError = True
- self.errmsg = ""
- # mark results names as modal (report only last) or cumulative (list all)
- self.modalResults = True
- # custom debug actions
- self.debugActions = self.DebugActions(None, None, None)
- # avoid redundant calls to preParse
- self.callPreparse = True
- self.callDuringTry = False
- self.suppress_warnings_: List[Diagnostics] = []
-
- def suppress_warning(self, warning_type: Diagnostics) -> "ParserElement":
- """
- Suppress warnings emitted for a particular diagnostic on this expression.
-
- Example::
-
- base = pp.Forward()
- base.suppress_warning(Diagnostics.warn_on_parse_using_empty_Forward)
-
- # statement would normally raise a warning, but is now suppressed
- print(base.parseString("x"))
-
- """
- self.suppress_warnings_.append(warning_type)
- return self
-
- def copy(self) -> "ParserElement":
- """
- Make a copy of this :class:`ParserElement`. Useful for defining
- different parse actions for the same parsing pattern, using copies of
- the original parse element.
-
- Example::
-
- integer = Word(nums).set_parse_action(lambda toks: int(toks[0]))
- integerK = integer.copy().add_parse_action(lambda toks: toks[0] * 1024) + Suppress("K")
- integerM = integer.copy().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M")
-
- print((integerK | integerM | integer)[1, ...].parse_string("5K 100 640K 256M"))
-
- prints::
-
- [5120, 100, 655360, 268435456]
-
- Equivalent form of ``expr.copy()`` is just ``expr()``::
-
- integerM = integer().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M")
- """
- cpy = copy.copy(self)
- cpy.parseAction = self.parseAction[:]
- cpy.ignoreExprs = self.ignoreExprs[:]
- if self.copyDefaultWhiteChars:
- cpy.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS)
- return cpy
-
- def set_results_name(
- self, name: str, list_all_matches: bool = False, *, listAllMatches: bool = False
- ) -> "ParserElement":
- """
- Define name for referencing matching tokens as a nested attribute
- of the returned parse results.
-
- Normally, results names are assigned as you would assign keys in a dict:
- any existing value is overwritten by later values. If it is necessary to
- keep all values captured for a particular results name, call ``set_results_name``
- with ``list_all_matches`` = True.
-
- NOTE: ``set_results_name`` returns a *copy* of the original :class:`ParserElement` object;
- this is so that the client can define a basic element, such as an
- integer, and reference it in multiple places with different names.
-
- You can also set results names using the abbreviated syntax,
- ``expr("name")`` in place of ``expr.set_results_name("name")``
- - see :class:`__call__`. If ``list_all_matches`` is required, use
- ``expr("name*")``.
-
- Example::
-
- date_str = (integer.set_results_name("year") + '/'
- + integer.set_results_name("month") + '/'
- + integer.set_results_name("day"))
-
- # equivalent form:
- date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
- """
- listAllMatches = listAllMatches or list_all_matches
- return self._setResultsName(name, listAllMatches)
-
- def _setResultsName(self, name, listAllMatches=False):
- if name is None:
- return self
- newself = self.copy()
- if name.endswith("*"):
- name = name[:-1]
- listAllMatches = True
- newself.resultsName = name
- newself.modalResults = not listAllMatches
- return newself
-
- def set_break(self, break_flag: bool = True) -> "ParserElement":
- """
- Method to invoke the Python pdb debugger when this element is
- about to be parsed. Set ``break_flag`` to ``True`` to enable, ``False`` to
- disable.
- """
- if break_flag:
- _parseMethod = self._parse
-
- def breaker(instring, loc, doActions=True, callPreParse=True):
- import pdb
-
- # this call to pdb.set_trace() is intentional, not a checkin error
- pdb.set_trace()
- return _parseMethod(instring, loc, doActions, callPreParse)
-
- breaker._originalParseMethod = _parseMethod
- self._parse = breaker
- else:
- if hasattr(self._parse, "_originalParseMethod"):
- self._parse = self._parse._originalParseMethod
- return self
-
- def set_parse_action(self, *fns: ParseAction, **kwargs) -> "ParserElement":
- """
- Define one or more actions to perform when successfully matching parse element definition.
-
- Parse actions can be called to perform data conversions, do extra validation,
- update external data structures, or enhance or replace the parsed tokens.
- Each parse action ``fn`` is a callable method with 0-3 arguments, called as
- ``fn(s, loc, toks)`` , ``fn(loc, toks)`` , ``fn(toks)`` , or just ``fn()`` , where:
-
- - s = the original string being parsed (see note below)
- - loc = the location of the matching substring
- - toks = a list of the matched tokens, packaged as a :class:`ParseResults` object
-
- The parsed tokens are passed to the parse action as ParseResults. They can be
- modified in place using list-style append, extend, and pop operations to update
- the parsed list elements; and with dictionary-style item set and del operations
- to add, update, or remove any named results. If the tokens are modified in place,
- it is not necessary to return them with a return statement.
-
- Parse actions can also completely replace the given tokens, with another ``ParseResults``
- object, or with some entirely different object (common for parse actions that perform data
- conversions). A convenient way to build a new parse result is to define the values
- using a dict, and then create the return value using :class:`ParseResults.from_dict`.
-
- If None is passed as the ``fn`` parse action, all previously added parse actions for this
- expression are cleared.
-
- Optional keyword arguments:
-
- - call_during_try = (default= ``False``) indicate if parse action should be run during
- lookaheads and alternate testing. For parse actions that have side effects, it is
- important to only call the parse action once it is determined that it is being
- called as part of a successful parse. For parse actions that perform additional
- validation, then call_during_try should be passed as True, so that the validation
- code is included in the preliminary "try" parses.
-
- Note: the default parsing behavior is to expand tabs in the input string
- before starting the parsing process. See :class:`parse_string` for more
- information on parsing strings containing ```` s, and suggested
- methods to maintain a consistent view of the parsed string, the parse
- location, and line and column positions within the parsed string.
-
- Example::
-
- # parse dates in the form YYYY/MM/DD
-
- # use parse action to convert toks from str to int at parse time
- def convert_to_int(toks):
- return int(toks[0])
-
- # use a parse action to verify that the date is a valid date
- def is_valid_date(instring, loc, toks):
- from datetime import date
- year, month, day = toks[::2]
- try:
- date(year, month, day)
- except ValueError:
- raise ParseException(instring, loc, "invalid date given")
-
- integer = Word(nums)
- date_str = integer + '/' + integer + '/' + integer
-
- # add parse actions
- integer.set_parse_action(convert_to_int)
- date_str.set_parse_action(is_valid_date)
-
- # note that integer fields are now ints, not strings
- date_str.run_tests('''
- # successful parse - note that integer fields were converted to ints
- 1999/12/31
-
- # fail - invalid date
- 1999/13/31
- ''')
- """
- if list(fns) == [None]:
- self.parseAction = []
- else:
- if not all(callable(fn) for fn in fns):
- raise TypeError("parse actions must be callable")
- self.parseAction = [_trim_arity(fn) for fn in fns]
- self.callDuringTry = kwargs.get(
- "call_during_try", kwargs.get("callDuringTry", False)
- )
- return self
-
- def add_parse_action(self, *fns: ParseAction, **kwargs) -> "ParserElement":
- """
- Add one or more parse actions to expression's list of parse actions. See :class:`set_parse_action`.
-
- See examples in :class:`copy`.
- """
- self.parseAction += [_trim_arity(fn) for fn in fns]
- self.callDuringTry = self.callDuringTry or kwargs.get(
- "call_during_try", kwargs.get("callDuringTry", False)
- )
- return self
-
- def add_condition(self, *fns: ParseCondition, **kwargs) -> "ParserElement":
- """Add a boolean predicate function to expression's list of parse actions. See
- :class:`set_parse_action` for function call signatures. Unlike ``set_parse_action``,
- functions passed to ``add_condition`` need to return boolean success/fail of the condition.
-
- Optional keyword arguments:
-
- - message = define a custom message to be used in the raised exception
- - fatal = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise
- ParseException
- - call_during_try = boolean to indicate if this method should be called during internal tryParse calls,
- default=False
-
- Example::
-
- integer = Word(nums).set_parse_action(lambda toks: int(toks[0]))
- year_int = integer.copy()
- year_int.add_condition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later")
- date_str = year_int + '/' + integer + '/' + integer
-
- result = date_str.parse_string("1999/12/31") # -> Exception: Only support years 2000 and later (at char 0),
- (line:1, col:1)
- """
- for fn in fns:
- self.parseAction.append(
- condition_as_parse_action(
- fn, message=kwargs.get("message"), fatal=kwargs.get("fatal", False)
- )
- )
-
- self.callDuringTry = self.callDuringTry or kwargs.get(
- "call_during_try", kwargs.get("callDuringTry", False)
- )
- return self
-
- def set_fail_action(self, fn: ParseFailAction) -> "ParserElement":
- """
- Define action to perform if parsing fails at this expression.
- Fail acton fn is a callable function that takes the arguments
- ``fn(s, loc, expr, err)`` where:
-
- - s = string being parsed
- - loc = location where expression match was attempted and failed
- - expr = the parse expression that failed
- - err = the exception thrown
-
- The function returns no value. It may throw :class:`ParseFatalException`
- if it is desired to stop parsing immediately."""
- self.failAction = fn
- return self
-
- def _skipIgnorables(self, instring, loc):
- exprsFound = True
- while exprsFound:
- exprsFound = False
- for e in self.ignoreExprs:
- try:
- while 1:
- loc, dummy = e._parse(instring, loc)
- exprsFound = True
- except ParseException:
- pass
- return loc
-
- def preParse(self, instring, loc):
- if self.ignoreExprs:
- loc = self._skipIgnorables(instring, loc)
-
- if self.skipWhitespace:
- instrlen = len(instring)
- white_chars = self.whiteChars
- while loc < instrlen and instring[loc] in white_chars:
- loc += 1
-
- return loc
-
- def parseImpl(self, instring, loc, doActions=True):
- return loc, []
-
- def postParse(self, instring, loc, tokenlist):
- return tokenlist
-
- # @profile
- def _parseNoCache(
- self, instring, loc, doActions=True, callPreParse=True
- ) -> Tuple[int, ParseResults]:
- TRY, MATCH, FAIL = 0, 1, 2
- debugging = self.debug # and doActions)
- len_instring = len(instring)
-
- if debugging or self.failAction:
- # print("Match {} at loc {}({}, {})".format(self, loc, lineno(loc, instring), col(loc, instring)))
- try:
- if callPreParse and self.callPreparse:
- pre_loc = self.preParse(instring, loc)
- else:
- pre_loc = loc
- tokens_start = pre_loc
- if self.debugActions.debug_try:
- self.debugActions.debug_try(instring, tokens_start, self, False)
- if self.mayIndexError or pre_loc >= len_instring:
- try:
- loc, tokens = self.parseImpl(instring, pre_loc, doActions)
- except IndexError:
- raise ParseException(instring, len_instring, self.errmsg, self)
- else:
- loc, tokens = self.parseImpl(instring, pre_loc, doActions)
- except Exception as err:
- # print("Exception raised:", err)
- if self.debugActions.debug_fail:
- self.debugActions.debug_fail(
- instring, tokens_start, self, err, False
- )
- if self.failAction:
- self.failAction(instring, tokens_start, self, err)
- raise
- else:
- if callPreParse and self.callPreparse:
- pre_loc = self.preParse(instring, loc)
- else:
- pre_loc = loc
- tokens_start = pre_loc
- if self.mayIndexError or pre_loc >= len_instring:
- try:
- loc, tokens = self.parseImpl(instring, pre_loc, doActions)
- except IndexError:
- raise ParseException(instring, len_instring, self.errmsg, self)
- else:
- loc, tokens = self.parseImpl(instring, pre_loc, doActions)
-
- tokens = self.postParse(instring, loc, tokens)
-
- ret_tokens = ParseResults(
- tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults
- )
- if self.parseAction and (doActions or self.callDuringTry):
- if debugging:
- try:
- for fn in self.parseAction:
- try:
- tokens = fn(instring, tokens_start, ret_tokens)
- except IndexError as parse_action_exc:
- exc = ParseException("exception raised in parse action")
- raise exc from parse_action_exc
-
- if tokens is not None and tokens is not ret_tokens:
- ret_tokens = ParseResults(
- tokens,
- self.resultsName,
- asList=self.saveAsList
- and isinstance(tokens, (ParseResults, list)),
- modal=self.modalResults,
- )
- except Exception as err:
- # print "Exception raised in user parse action:", err
- if self.debugActions.debug_fail:
- self.debugActions.debug_fail(
- instring, tokens_start, self, err, False
- )
- raise
- else:
- for fn in self.parseAction:
- try:
- tokens = fn(instring, tokens_start, ret_tokens)
- except IndexError as parse_action_exc:
- exc = ParseException("exception raised in parse action")
- raise exc from parse_action_exc
-
- if tokens is not None and tokens is not ret_tokens:
- ret_tokens = ParseResults(
- tokens,
- self.resultsName,
- asList=self.saveAsList
- and isinstance(tokens, (ParseResults, list)),
- modal=self.modalResults,
- )
- if debugging:
- # print("Matched", self, "->", ret_tokens.as_list())
- if self.debugActions.debug_match:
- self.debugActions.debug_match(
- instring, tokens_start, loc, self, ret_tokens, False
- )
-
- return loc, ret_tokens
-
- def try_parse(self, instring: str, loc: int, raise_fatal: bool = False) -> int:
- try:
- return self._parse(instring, loc, doActions=False)[0]
- except ParseFatalException:
- if raise_fatal:
- raise
- raise ParseException(instring, loc, self.errmsg, self)
-
- def can_parse_next(self, instring: str, loc: int) -> bool:
- try:
- self.try_parse(instring, loc)
- except (ParseException, IndexError):
- return False
- else:
- return True
-
- # cache for left-recursion in Forward references
- recursion_lock = RLock()
- recursion_memos: typing.Dict[
- Tuple[int, "Forward", bool], Tuple[int, Union[ParseResults, Exception]]
- ] = {}
-
- # argument cache for optimizing repeated calls when backtracking through recursive expressions
- packrat_cache = (
- {}
- ) # this is set later by enabled_packrat(); this is here so that reset_cache() doesn't fail
- packrat_cache_lock = RLock()
- packrat_cache_stats = [0, 0]
-
- # this method gets repeatedly called during backtracking with the same arguments -
- # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression
- def _parseCache(
- self, instring, loc, doActions=True, callPreParse=True
- ) -> Tuple[int, ParseResults]:
- HIT, MISS = 0, 1
- TRY, MATCH, FAIL = 0, 1, 2
- lookup = (self, instring, loc, callPreParse, doActions)
- with ParserElement.packrat_cache_lock:
- cache = ParserElement.packrat_cache
- value = cache.get(lookup)
- if value is cache.not_in_cache:
- ParserElement.packrat_cache_stats[MISS] += 1
- try:
- value = self._parseNoCache(instring, loc, doActions, callPreParse)
- except ParseBaseException as pe:
- # cache a copy of the exception, without the traceback
- cache.set(lookup, pe.__class__(*pe.args))
- raise
- else:
- cache.set(lookup, (value[0], value[1].copy(), loc))
- return value
- else:
- ParserElement.packrat_cache_stats[HIT] += 1
- if self.debug and self.debugActions.debug_try:
- try:
- self.debugActions.debug_try(instring, loc, self, cache_hit=True)
- except TypeError:
- pass
- if isinstance(value, Exception):
- if self.debug and self.debugActions.debug_fail:
- try:
- self.debugActions.debug_fail(
- instring, loc, self, value, cache_hit=True
- )
- except TypeError:
- pass
- raise value
-
- loc_, result, endloc = value[0], value[1].copy(), value[2]
- if self.debug and self.debugActions.debug_match:
- try:
- self.debugActions.debug_match(
- instring, loc_, endloc, self, result, cache_hit=True
- )
- except TypeError:
- pass
-
- return loc_, result
-
- _parse = _parseNoCache
-
- @staticmethod
- def reset_cache() -> None:
- ParserElement.packrat_cache.clear()
- ParserElement.packrat_cache_stats[:] = [0] * len(
- ParserElement.packrat_cache_stats
- )
- ParserElement.recursion_memos.clear()
-
- _packratEnabled = False
- _left_recursion_enabled = False
-
- @staticmethod
- def disable_memoization() -> None:
- """
- Disables active Packrat or Left Recursion parsing and their memoization
-
- This method also works if neither Packrat nor Left Recursion are enabled.
- This makes it safe to call before activating Packrat nor Left Recursion
- to clear any previous settings.
- """
- ParserElement.reset_cache()
- ParserElement._left_recursion_enabled = False
- ParserElement._packratEnabled = False
- ParserElement._parse = ParserElement._parseNoCache
-
- @staticmethod
- def enable_left_recursion(
- cache_size_limit: typing.Optional[int] = None, *, force=False
- ) -> None:
- """
- Enables "bounded recursion" parsing, which allows for both direct and indirect
- left-recursion. During parsing, left-recursive :class:`Forward` elements are
- repeatedly matched with a fixed recursion depth that is gradually increased
- until finding the longest match.
-
- Example::
-
- from pip._vendor import pyparsing as pp
- pp.ParserElement.enable_left_recursion()
-
- E = pp.Forward("E")
- num = pp.Word(pp.nums)
- # match `num`, or `num '+' num`, or `num '+' num '+' num`, ...
- E <<= E + '+' - num | num
-
- print(E.parse_string("1+2+3"))
-
- Recursion search naturally memoizes matches of ``Forward`` elements and may
- thus skip reevaluation of parse actions during backtracking. This may break
- programs with parse actions which rely on strict ordering of side-effects.
-
- Parameters:
-
- - cache_size_limit - (default=``None``) - memoize at most this many
- ``Forward`` elements during matching; if ``None`` (the default),
- memoize all ``Forward`` elements.
-
- Bounded Recursion parsing works similar but not identical to Packrat parsing,
- thus the two cannot be used together. Use ``force=True`` to disable any
- previous, conflicting settings.
- """
- if force:
- ParserElement.disable_memoization()
- elif ParserElement._packratEnabled:
- raise RuntimeError("Packrat and Bounded Recursion are not compatible")
- if cache_size_limit is None:
- ParserElement.recursion_memos = _UnboundedMemo()
- elif cache_size_limit > 0:
- ParserElement.recursion_memos = _LRUMemo(capacity=cache_size_limit)
- else:
- raise NotImplementedError("Memo size of %s" % cache_size_limit)
- ParserElement._left_recursion_enabled = True
-
- @staticmethod
- def enable_packrat(cache_size_limit: int = 128, *, force: bool = False) -> None:
- """
- Enables "packrat" parsing, which adds memoizing to the parsing logic.
- Repeated parse attempts at the same string location (which happens
- often in many complex grammars) can immediately return a cached value,
- instead of re-executing parsing/validating code. Memoizing is done of
- both valid results and parsing exceptions.
-
- Parameters:
-
- - cache_size_limit - (default= ``128``) - if an integer value is provided
- will limit the size of the packrat cache; if None is passed, then
- the cache size will be unbounded; if 0 is passed, the cache will
- be effectively disabled.
-
- This speedup may break existing programs that use parse actions that
- have side-effects. For this reason, packrat parsing is disabled when
- you first import pyparsing. To activate the packrat feature, your
- program must call the class method :class:`ParserElement.enable_packrat`.
- For best results, call ``enable_packrat()`` immediately after
- importing pyparsing.
-
- Example::
-
- from pip._vendor import pyparsing
- pyparsing.ParserElement.enable_packrat()
-
- Packrat parsing works similar but not identical to Bounded Recursion parsing,
- thus the two cannot be used together. Use ``force=True`` to disable any
- previous, conflicting settings.
- """
- if force:
- ParserElement.disable_memoization()
- elif ParserElement._left_recursion_enabled:
- raise RuntimeError("Packrat and Bounded Recursion are not compatible")
- if not ParserElement._packratEnabled:
- ParserElement._packratEnabled = True
- if cache_size_limit is None:
- ParserElement.packrat_cache = _UnboundedCache()
- else:
- ParserElement.packrat_cache = _FifoCache(cache_size_limit)
- ParserElement._parse = ParserElement._parseCache
-
- def parse_string(
- self, instring: str, parse_all: bool = False, *, parseAll: bool = False
- ) -> ParseResults:
- """
- Parse a string with respect to the parser definition. This function is intended as the primary interface to the
- client code.
-
- :param instring: The input string to be parsed.
- :param parse_all: If set, the entire input string must match the grammar.
- :param parseAll: retained for pre-PEP8 compatibility, will be removed in a future release.
- :raises ParseException: Raised if ``parse_all`` is set and the input string does not match the whole grammar.
- :returns: the parsed data as a :class:`ParseResults` object, which may be accessed as a `list`, a `dict`, or
- an object with attributes if the given parser includes results names.
-
- If the input string is required to match the entire grammar, ``parse_all`` flag must be set to ``True``. This
- is also equivalent to ending the grammar with :class:`StringEnd`().
-
- To report proper column numbers, ``parse_string`` operates on a copy of the input string where all tabs are
- converted to spaces (8 spaces per tab, as per the default in ``string.expandtabs``). If the input string
- contains tabs and the grammar uses parse actions that use the ``loc`` argument to index into the string
- being parsed, one can ensure a consistent view of the input string by doing one of the following:
-
- - calling ``parse_with_tabs`` on your grammar before calling ``parse_string`` (see :class:`parse_with_tabs`),
- - define your parse action using the full ``(s,loc,toks)`` signature, and reference the input string using the
- parse action's ``s`` argument, or
- - explicitly expand the tabs in your input string before calling ``parse_string``.
-
- Examples:
-
- By default, partial matches are OK.
-
- >>> res = Word('a').parse_string('aaaaabaaa')
- >>> print(res)
- ['aaaaa']
-
- The parsing behavior varies by the inheriting class of this abstract class. Please refer to the children
- directly to see more examples.
-
- It raises an exception if parse_all flag is set and instring does not match the whole grammar.
-
- >>> res = Word('a').parse_string('aaaaabaaa', parse_all=True)
- Traceback (most recent call last):
- ...
- pyparsing.ParseException: Expected end of text, found 'b' (at char 5), (line:1, col:6)
- """
- parseAll = parse_all or parseAll
-
- ParserElement.reset_cache()
- if not self.streamlined:
- self.streamline()
- for e in self.ignoreExprs:
- e.streamline()
- if not self.keepTabs:
- instring = instring.expandtabs()
- try:
- loc, tokens = self._parse(instring, 0)
- if parseAll:
- loc = self.preParse(instring, loc)
- se = Empty() + StringEnd()
- se._parse(instring, loc)
- except ParseBaseException as exc:
- if ParserElement.verbose_stacktrace:
- raise
- else:
- # catch and re-raise exception from here, clearing out pyparsing internal stack trace
- raise exc.with_traceback(None)
- else:
- return tokens
-
- def scan_string(
- self,
- instring: str,
- max_matches: int = _MAX_INT,
- overlap: bool = False,
- *,
- debug: bool = False,
- maxMatches: int = _MAX_INT,
- ) -> Generator[Tuple[ParseResults, int, int], None, None]:
- """
- Scan the input string for expression matches. Each match will return the
- matching tokens, start location, and end location. May be called with optional
- ``max_matches`` argument, to clip scanning after 'n' matches are found. If
- ``overlap`` is specified, then overlapping matches will be reported.
-
- Note that the start and end locations are reported relative to the string
- being parsed. See :class:`parse_string` for more information on parsing
- strings with embedded tabs.
-
- Example::
-
- source = "sldjf123lsdjjkf345sldkjf879lkjsfd987"
- print(source)
- for tokens, start, end in Word(alphas).scan_string(source):
- print(' '*start + '^'*(end-start))
- print(' '*start + tokens[0])
-
- prints::
-
- sldjf123lsdjjkf345sldkjf879lkjsfd987
- ^^^^^
- sldjf
- ^^^^^^^
- lsdjjkf
- ^^^^^^
- sldkjf
- ^^^^^^
- lkjsfd
- """
- maxMatches = min(maxMatches, max_matches)
- if not self.streamlined:
- self.streamline()
- for e in self.ignoreExprs:
- e.streamline()
-
- if not self.keepTabs:
- instring = str(instring).expandtabs()
- instrlen = len(instring)
- loc = 0
- preparseFn = self.preParse
- parseFn = self._parse
- ParserElement.resetCache()
- matches = 0
- try:
- while loc <= instrlen and matches < maxMatches:
- try:
- preloc = preparseFn(instring, loc)
- nextLoc, tokens = parseFn(instring, preloc, callPreParse=False)
- except ParseException:
- loc = preloc + 1
- else:
- if nextLoc > loc:
- matches += 1
- if debug:
- print(
- {
- "tokens": tokens.asList(),
- "start": preloc,
- "end": nextLoc,
- }
- )
- yield tokens, preloc, nextLoc
- if overlap:
- nextloc = preparseFn(instring, loc)
- if nextloc > loc:
- loc = nextLoc
- else:
- loc += 1
- else:
- loc = nextLoc
- else:
- loc = preloc + 1
- except ParseBaseException as exc:
- if ParserElement.verbose_stacktrace:
- raise
- else:
- # catch and re-raise exception from here, clears out pyparsing internal stack trace
- raise exc.with_traceback(None)
-
- def transform_string(self, instring: str, *, debug: bool = False) -> str:
- """
- Extension to :class:`scan_string`, to modify matching text with modified tokens that may
- be returned from a parse action. To use ``transform_string``, define a grammar and
- attach a parse action to it that modifies the returned token list.
- Invoking ``transform_string()`` on a target string will then scan for matches,
- and replace the matched text patterns according to the logic in the parse
- action. ``transform_string()`` returns the resulting transformed string.
-
- Example::
-
- wd = Word(alphas)
- wd.set_parse_action(lambda toks: toks[0].title())
-
- print(wd.transform_string("now is the winter of our discontent made glorious summer by this sun of york."))
-
- prints::
-
- Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York.
- """
- out: List[str] = []
- lastE = 0
- # force preservation of s, to minimize unwanted transformation of string, and to
- # keep string locs straight between transform_string and scan_string
- self.keepTabs = True
- try:
- for t, s, e in self.scan_string(instring, debug=debug):
- out.append(instring[lastE:s])
- if t:
- if isinstance(t, ParseResults):
- out += t.as_list()
- elif isinstance(t, Iterable) and not isinstance(t, str_type):
- out.extend(t)
- else:
- out.append(t)
- lastE = e
- out.append(instring[lastE:])
- out = [o for o in out if o]
- return "".join([str(s) for s in _flatten(out)])
- except ParseBaseException as exc:
- if ParserElement.verbose_stacktrace:
- raise
- else:
- # catch and re-raise exception from here, clears out pyparsing internal stack trace
- raise exc.with_traceback(None)
-
- def search_string(
- self,
- instring: str,
- max_matches: int = _MAX_INT,
- *,
- debug: bool = False,
- maxMatches: int = _MAX_INT,
- ) -> ParseResults:
- """
- Another extension to :class:`scan_string`, simplifying the access to the tokens found
- to match the given parse expression. May be called with optional
- ``max_matches`` argument, to clip searching after 'n' matches are found.
-
- Example::
-
- # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters
- cap_word = Word(alphas.upper(), alphas.lower())
-
- print(cap_word.search_string("More than Iron, more than Lead, more than Gold I need Electricity"))
-
- # the sum() builtin can be used to merge results into a single ParseResults object
- print(sum(cap_word.search_string("More than Iron, more than Lead, more than Gold I need Electricity")))
-
- prints::
-
- [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']]
- ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity']
- """
- maxMatches = min(maxMatches, max_matches)
- try:
- return ParseResults(
- [t for t, s, e in self.scan_string(instring, maxMatches, debug=debug)]
- )
- except ParseBaseException as exc:
- if ParserElement.verbose_stacktrace:
- raise
- else:
- # catch and re-raise exception from here, clears out pyparsing internal stack trace
- raise exc.with_traceback(None)
-
- def split(
- self,
- instring: str,
- maxsplit: int = _MAX_INT,
- include_separators: bool = False,
- *,
- includeSeparators=False,
- ) -> Generator[str, None, None]:
- """
- Generator method to split a string using the given expression as a separator.
- May be called with optional ``maxsplit`` argument, to limit the number of splits;
- and the optional ``include_separators`` argument (default= ``False``), if the separating
- matching text should be included in the split results.
-
- Example::
-
- punc = one_of(list(".,;:/-!?"))
- print(list(punc.split("This, this?, this sentence, is badly punctuated!")))
-
- prints::
-
- ['This', ' this', '', ' this sentence', ' is badly punctuated', '']
- """
- includeSeparators = includeSeparators or include_separators
- last = 0
- for t, s, e in self.scan_string(instring, max_matches=maxsplit):
- yield instring[last:s]
- if includeSeparators:
- yield t[0]
- last = e
- yield instring[last:]
-
- def __add__(self, other) -> "ParserElement":
- """
- Implementation of ``+`` operator - returns :class:`And`. Adding strings to a :class:`ParserElement`
- converts them to :class:`Literal`s by default.
-
- Example::
-
- greet = Word(alphas) + "," + Word(alphas) + "!"
- hello = "Hello, World!"
- print(hello, "->", greet.parse_string(hello))
-
- prints::
-
- Hello, World! -> ['Hello', ',', 'World', '!']
-
- ``...`` may be used as a parse expression as a short form of :class:`SkipTo`.
-
- Literal('start') + ... + Literal('end')
-
- is equivalent to:
-
- Literal('start') + SkipTo('end')("_skipped*") + Literal('end')
-
- Note that the skipped text is returned with '_skipped' as a results name,
- and to support having multiple skips in the same parser, the value returned is
- a list of all skipped text.
- """
- if other is Ellipsis:
- return _PendingSkip(self)
-
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return And([self, other])
-
- def __radd__(self, other) -> "ParserElement":
- """
- Implementation of ``+`` operator when left operand is not a :class:`ParserElement`
- """
- if other is Ellipsis:
- return SkipTo(self)("_skipped*") + self
-
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return other + self
-
- def __sub__(self, other) -> "ParserElement":
- """
- Implementation of ``-`` operator, returns :class:`And` with error stop
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return self + And._ErrorStop() + other
-
- def __rsub__(self, other) -> "ParserElement":
- """
- Implementation of ``-`` operator when left operand is not a :class:`ParserElement`
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return other - self
-
- def __mul__(self, other) -> "ParserElement":
- """
- Implementation of ``*`` operator, allows use of ``expr * 3`` in place of
- ``expr + expr + expr``. Expressions may also be multiplied by a 2-integer
- tuple, similar to ``{min, max}`` multipliers in regular expressions. Tuples
- may also include ``None`` as in:
- - ``expr*(n, None)`` or ``expr*(n, )`` is equivalent
- to ``expr*n + ZeroOrMore(expr)``
- (read as "at least n instances of ``expr``")
- - ``expr*(None, n)`` is equivalent to ``expr*(0, n)``
- (read as "0 to n instances of ``expr``")
- - ``expr*(None, None)`` is equivalent to ``ZeroOrMore(expr)``
- - ``expr*(1, None)`` is equivalent to ``OneOrMore(expr)``
-
- Note that ``expr*(None, n)`` does not raise an exception if
- more than n exprs exist in the input stream; that is,
- ``expr*(None, n)`` does not enforce a maximum number of expr
- occurrences. If this behavior is desired, then write
- ``expr*(None, n) + ~expr``
- """
- if other is Ellipsis:
- other = (0, None)
- elif isinstance(other, tuple) and other[:1] == (Ellipsis,):
- other = ((0,) + other[1:] + (None,))[:2]
-
- if isinstance(other, int):
- minElements, optElements = other, 0
- elif isinstance(other, tuple):
- other = tuple(o if o is not Ellipsis else None for o in other)
- other = (other + (None, None))[:2]
- if other[0] is None:
- other = (0, other[1])
- if isinstance(other[0], int) and other[1] is None:
- if other[0] == 0:
- return ZeroOrMore(self)
- if other[0] == 1:
- return OneOrMore(self)
- else:
- return self * other[0] + ZeroOrMore(self)
- elif isinstance(other[0], int) and isinstance(other[1], int):
- minElements, optElements = other
- optElements -= minElements
- else:
- raise TypeError(
- "cannot multiply ParserElement and ({}) objects".format(
- ",".join(type(item).__name__ for item in other)
- )
- )
- else:
- raise TypeError(
- "cannot multiply ParserElement and {} objects".format(
- type(other).__name__
- )
- )
-
- if minElements < 0:
- raise ValueError("cannot multiply ParserElement by negative value")
- if optElements < 0:
- raise ValueError(
- "second tuple value must be greater or equal to first tuple value"
- )
- if minElements == optElements == 0:
- return And([])
-
- if optElements:
-
- def makeOptionalList(n):
- if n > 1:
- return Opt(self + makeOptionalList(n - 1))
- else:
- return Opt(self)
-
- if minElements:
- if minElements == 1:
- ret = self + makeOptionalList(optElements)
- else:
- ret = And([self] * minElements) + makeOptionalList(optElements)
- else:
- ret = makeOptionalList(optElements)
- else:
- if minElements == 1:
- ret = self
- else:
- ret = And([self] * minElements)
- return ret
-
- def __rmul__(self, other) -> "ParserElement":
- return self.__mul__(other)
-
- def __or__(self, other) -> "ParserElement":
- """
- Implementation of ``|`` operator - returns :class:`MatchFirst`
- """
- if other is Ellipsis:
- return _PendingSkip(self, must_skip=True)
-
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return MatchFirst([self, other])
-
- def __ror__(self, other) -> "ParserElement":
- """
- Implementation of ``|`` operator when left operand is not a :class:`ParserElement`
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return other | self
-
- def __xor__(self, other) -> "ParserElement":
- """
- Implementation of ``^`` operator - returns :class:`Or`
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return Or([self, other])
-
- def __rxor__(self, other) -> "ParserElement":
- """
- Implementation of ``^`` operator when left operand is not a :class:`ParserElement`
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return other ^ self
-
- def __and__(self, other) -> "ParserElement":
- """
- Implementation of ``&`` operator - returns :class:`Each`
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return Each([self, other])
-
- def __rand__(self, other) -> "ParserElement":
- """
- Implementation of ``&`` operator when left operand is not a :class:`ParserElement`
- """
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- if not isinstance(other, ParserElement):
- raise TypeError(
- "Cannot combine element of type {} with ParserElement".format(
- type(other).__name__
- )
- )
- return other & self
-
- def __invert__(self) -> "ParserElement":
- """
- Implementation of ``~`` operator - returns :class:`NotAny`
- """
- return NotAny(self)
-
- # disable __iter__ to override legacy use of sequential access to __getitem__ to
- # iterate over a sequence
- __iter__ = None
-
- def __getitem__(self, key):
- """
- use ``[]`` indexing notation as a short form for expression repetition:
-
- - ``expr[n]`` is equivalent to ``expr*n``
- - ``expr[m, n]`` is equivalent to ``expr*(m, n)``
- - ``expr[n, ...]`` or ``expr[n,]`` is equivalent
- to ``expr*n + ZeroOrMore(expr)``
- (read as "at least n instances of ``expr``")
- - ``expr[..., n]`` is equivalent to ``expr*(0, n)``
- (read as "0 to n instances of ``expr``")
- - ``expr[...]`` and ``expr[0, ...]`` are equivalent to ``ZeroOrMore(expr)``
- - ``expr[1, ...]`` is equivalent to ``OneOrMore(expr)``
-
- ``None`` may be used in place of ``...``.
-
- Note that ``expr[..., n]`` and ``expr[m, n]``do not raise an exception
- if more than ``n`` ``expr``s exist in the input stream. If this behavior is
- desired, then write ``expr[..., n] + ~expr``.
- """
-
- # convert single arg keys to tuples
- try:
- if isinstance(key, str_type):
- key = (key,)
- iter(key)
- except TypeError:
- key = (key, key)
-
- if len(key) > 2:
- raise TypeError(
- "only 1 or 2 index arguments supported ({}{})".format(
- key[:5], "... [{}]".format(len(key)) if len(key) > 5 else ""
- )
- )
-
- # clip to 2 elements
- ret = self * tuple(key[:2])
- return ret
-
- def __call__(self, name: str = None) -> "ParserElement":
- """
- Shortcut for :class:`set_results_name`, with ``list_all_matches=False``.
-
- If ``name`` is given with a trailing ``'*'`` character, then ``list_all_matches`` will be
- passed as ``True``.
-
- If ``name` is omitted, same as calling :class:`copy`.
-
- Example::
-
- # these are equivalent
- userdata = Word(alphas).set_results_name("name") + Word(nums + "-").set_results_name("socsecno")
- userdata = Word(alphas)("name") + Word(nums + "-")("socsecno")
- """
- if name is not None:
- return self._setResultsName(name)
- else:
- return self.copy()
-
- def suppress(self) -> "ParserElement":
- """
- Suppresses the output of this :class:`ParserElement`; useful to keep punctuation from
- cluttering up returned output.
- """
- return Suppress(self)
-
- def ignore_whitespace(self, recursive: bool = True) -> "ParserElement":
- """
- Enables the skipping of whitespace before matching the characters in the
- :class:`ParserElement`'s defined pattern.
-
- :param recursive: If ``True`` (the default), also enable whitespace skipping in child elements (if any)
- """
- self.skipWhitespace = True
- return self
-
- def leave_whitespace(self, recursive: bool = True) -> "ParserElement":
- """
- Disables the skipping of whitespace before matching the characters in the
- :class:`ParserElement`'s defined pattern. This is normally only used internally by
- the pyparsing module, but may be needed in some whitespace-sensitive grammars.
-
- :param recursive: If true (the default), also disable whitespace skipping in child elements (if any)
- """
- self.skipWhitespace = False
- return self
-
- def set_whitespace_chars(
- self, chars: Union[Set[str], str], copy_defaults: bool = False
- ) -> "ParserElement":
- """
- Overrides the default whitespace chars
- """
- self.skipWhitespace = True
- self.whiteChars = set(chars)
- self.copyDefaultWhiteChars = copy_defaults
- return self
-
- def parse_with_tabs(self) -> "ParserElement":
- """
- Overrides default behavior to expand ```` s to spaces before parsing the input string.
- Must be called before ``parse_string`` when the input grammar contains elements that
- match ```` characters.
- """
- self.keepTabs = True
- return self
-
- def ignore(self, other: "ParserElement") -> "ParserElement":
- """
- Define expression to be ignored (e.g., comments) while doing pattern
- matching; may be called repeatedly, to define multiple comment or other
- ignorable patterns.
-
- Example::
-
- patt = Word(alphas)[1, ...]
- patt.parse_string('ablaj /* comment */ lskjd')
- # -> ['ablaj']
-
- patt.ignore(c_style_comment)
- patt.parse_string('ablaj /* comment */ lskjd')
- # -> ['ablaj', 'lskjd']
- """
- import typing
-
- if isinstance(other, str_type):
- other = Suppress(other)
-
- if isinstance(other, Suppress):
- if other not in self.ignoreExprs:
- self.ignoreExprs.append(other)
- else:
- self.ignoreExprs.append(Suppress(other.copy()))
- return self
-
- def set_debug_actions(
- self,
- start_action: DebugStartAction,
- success_action: DebugSuccessAction,
- exception_action: DebugExceptionAction,
- ) -> "ParserElement":
- """
- Customize display of debugging messages while doing pattern matching:
-
- - ``start_action`` - method to be called when an expression is about to be parsed;
- should have the signature ``fn(input_string: str, location: int, expression: ParserElement, cache_hit: bool)``
-
- - ``success_action`` - method to be called when an expression has successfully parsed;
- should have the signature ``fn(input_string: str, start_location: int, end_location: int, expression: ParserELement, parsed_tokens: ParseResults, cache_hit: bool)``
-
- - ``exception_action`` - method to be called when expression fails to parse;
- should have the signature ``fn(input_string: str, location: int, expression: ParserElement, exception: Exception, cache_hit: bool)``
- """
- self.debugActions = self.DebugActions(
- start_action or _default_start_debug_action,
- success_action or _default_success_debug_action,
- exception_action or _default_exception_debug_action,
- )
- self.debug = True
- return self
-
- def set_debug(self, flag: bool = True) -> "ParserElement":
- """
- Enable display of debugging messages while doing pattern matching.
- Set ``flag`` to ``True`` to enable, ``False`` to disable.
-
- Example::
-
- wd = Word(alphas).set_name("alphaword")
- integer = Word(nums).set_name("numword")
- term = wd | integer
-
- # turn on debugging for wd
- wd.set_debug()
-
- term[1, ...].parse_string("abc 123 xyz 890")
-
- prints::
-
- Match alphaword at loc 0(1,1)
- Matched alphaword -> ['abc']
- Match alphaword at loc 3(1,4)
- Exception raised:Expected alphaword (at char 4), (line:1, col:5)
- Match alphaword at loc 7(1,8)
- Matched alphaword -> ['xyz']
- Match alphaword at loc 11(1,12)
- Exception raised:Expected alphaword (at char 12), (line:1, col:13)
- Match alphaword at loc 15(1,16)
- Exception raised:Expected alphaword (at char 15), (line:1, col:16)
-
- The output shown is that produced by the default debug actions - custom debug actions can be
- specified using :class:`set_debug_actions`. Prior to attempting
- to match the ``wd`` expression, the debugging message ``"Match at loc (,
)"``
- is shown. Then if the parse succeeds, a ``"Matched"`` message is shown, or an ``"Exception raised"``
- message is shown. Also note the use of :class:`set_name` to assign a human-readable name to the expression,
- which makes debugging and exception messages easier to understand - for instance, the default
- name created for the :class:`Word` expression without calling ``set_name`` is ``"W:(A-Za-z)"``.
- """
- if flag:
- self.set_debug_actions(
- _default_start_debug_action,
- _default_success_debug_action,
- _default_exception_debug_action,
- )
- else:
- self.debug = False
- return self
-
- @property
- def default_name(self) -> str:
- if self._defaultName is None:
- self._defaultName = self._generateDefaultName()
- return self._defaultName
-
- @abstractmethod
- def _generateDefaultName(self):
- """
- Child classes must define this method, which defines how the ``default_name`` is set.
- """
-
- def set_name(self, name: str) -> "ParserElement":
- """
- Define name for this expression, makes debugging and exception messages clearer.
- Example::
- Word(nums).parse_string("ABC") # -> Exception: Expected W:(0-9) (at char 0), (line:1, col:1)
- Word(nums).set_name("integer").parse_string("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1)
- """
- self.customName = name
- self.errmsg = "Expected " + self.name
- if __diag__.enable_debug_on_named_expressions:
- self.set_debug()
- return self
-
- @property
- def name(self) -> str:
- # This will use a user-defined name if available, but otherwise defaults back to the auto-generated name
- return self.customName if self.customName is not None else self.default_name
-
- def __str__(self) -> str:
- return self.name
-
- def __repr__(self) -> str:
- return str(self)
-
- def streamline(self) -> "ParserElement":
- self.streamlined = True
- self._defaultName = None
- return self
-
- def recurse(self) -> Sequence["ParserElement"]:
- return []
-
- def _checkRecursion(self, parseElementList):
- subRecCheckList = parseElementList[:] + [self]
- for e in self.recurse():
- e._checkRecursion(subRecCheckList)
-
- def validate(self, validateTrace=None) -> None:
- """
- Check defined expressions for valid structure, check for infinite recursive definitions.
- """
- self._checkRecursion([])
-
- def parse_file(
- self,
- file_or_filename: Union[str, Path, TextIO],
- encoding: str = "utf-8",
- parse_all: bool = False,
- *,
- parseAll: bool = False,
- ) -> ParseResults:
- """
- Execute the parse expression on the given file or filename.
- If a filename is specified (instead of a file object),
- the entire file is opened, read, and closed before parsing.
- """
- parseAll = parseAll or parse_all
- try:
- file_contents = file_or_filename.read()
- except AttributeError:
- with open(file_or_filename, "r", encoding=encoding) as f:
- file_contents = f.read()
- try:
- return self.parse_string(file_contents, parseAll)
- except ParseBaseException as exc:
- if ParserElement.verbose_stacktrace:
- raise
- else:
- # catch and re-raise exception from here, clears out pyparsing internal stack trace
- raise exc.with_traceback(None)
-
- def __eq__(self, other):
- if self is other:
- return True
- elif isinstance(other, str_type):
- return self.matches(other, parse_all=True)
- elif isinstance(other, ParserElement):
- return vars(self) == vars(other)
- return False
-
- def __hash__(self):
- return id(self)
-
- def matches(
- self, test_string: str, parse_all: bool = True, *, parseAll: bool = True
- ) -> bool:
- """
- Method for quick testing of a parser against a test string. Good for simple
- inline microtests of sub expressions while building up larger parser.
-
- Parameters:
- - ``test_string`` - to test against this expression for a match
- - ``parse_all`` - (default= ``True``) - flag to pass to :class:`parse_string` when running tests
-
- Example::
-
- expr = Word(nums)
- assert expr.matches("100")
- """
- parseAll = parseAll and parse_all
- try:
- self.parse_string(str(test_string), parse_all=parseAll)
- return True
- except ParseBaseException:
- return False
-
- def run_tests(
- self,
- tests: Union[str, List[str]],
- parse_all: bool = True,
- comment: typing.Optional[Union["ParserElement", str]] = "#",
- full_dump: bool = True,
- print_results: bool = True,
- failure_tests: bool = False,
- post_parse: Callable[[str, ParseResults], str] = None,
- file: typing.Optional[TextIO] = None,
- with_line_numbers: bool = False,
- *,
- parseAll: bool = True,
- fullDump: bool = True,
- printResults: bool = True,
- failureTests: bool = False,
- postParse: Callable[[str, ParseResults], str] = None,
- ) -> Tuple[bool, List[Tuple[str, Union[ParseResults, Exception]]]]:
- """
- Execute the parse expression on a series of test strings, showing each
- test, the parsed results or where the parse failed. Quick and easy way to
- run a parse expression against a list of sample strings.
-
- Parameters:
- - ``tests`` - a list of separate test strings, or a multiline string of test strings
- - ``parse_all`` - (default= ``True``) - flag to pass to :class:`parse_string` when running tests
- - ``comment`` - (default= ``'#'``) - expression for indicating embedded comments in the test
- string; pass None to disable comment filtering
- - ``full_dump`` - (default= ``True``) - dump results as list followed by results names in nested outline;
- if False, only dump nested list
- - ``print_results`` - (default= ``True``) prints test output to stdout
- - ``failure_tests`` - (default= ``False``) indicates if these tests are expected to fail parsing
- - ``post_parse`` - (default= ``None``) optional callback for successful parse results; called as
- `fn(test_string, parse_results)` and returns a string to be added to the test output
- - ``file`` - (default= ``None``) optional file-like object to which test output will be written;
- if None, will default to ``sys.stdout``
- - ``with_line_numbers`` - default= ``False``) show test strings with line and column numbers
-
- Returns: a (success, results) tuple, where success indicates that all tests succeeded
- (or failed if ``failure_tests`` is True), and the results contain a list of lines of each
- test's output
-
- Example::
-
- number_expr = pyparsing_common.number.copy()
-
- result = number_expr.run_tests('''
- # unsigned integer
- 100
- # negative integer
- -100
- # float with scientific notation
- 6.02e23
- # integer with scientific notation
- 1e-12
- ''')
- print("Success" if result[0] else "Failed!")
-
- result = number_expr.run_tests('''
- # stray character
- 100Z
- # missing leading digit before '.'
- -.100
- # too many '.'
- 3.14.159
- ''', failure_tests=True)
- print("Success" if result[0] else "Failed!")
-
- prints::
-
- # unsigned integer
- 100
- [100]
-
- # negative integer
- -100
- [-100]
-
- # float with scientific notation
- 6.02e23
- [6.02e+23]
-
- # integer with scientific notation
- 1e-12
- [1e-12]
-
- Success
-
- # stray character
- 100Z
- ^
- FAIL: Expected end of text (at char 3), (line:1, col:4)
-
- # missing leading digit before '.'
- -.100
- ^
- FAIL: Expected {real number with scientific notation | real number | signed integer} (at char 0), (line:1, col:1)
-
- # too many '.'
- 3.14.159
- ^
- FAIL: Expected end of text (at char 4), (line:1, col:5)
-
- Success
-
- Each test string must be on a single line. If you want to test a string that spans multiple
- lines, create a test like this::
-
- expr.run_tests(r"this is a test\\n of strings that spans \\n 3 lines")
-
- (Note that this is a raw string literal, you must include the leading ``'r'``.)
- """
- from .testing import pyparsing_test
-
- parseAll = parseAll and parse_all
- fullDump = fullDump and full_dump
- printResults = printResults and print_results
- failureTests = failureTests or failure_tests
- postParse = postParse or post_parse
- if isinstance(tests, str_type):
- line_strip = type(tests).strip
- tests = [line_strip(test_line) for test_line in tests.rstrip().splitlines()]
- if isinstance(comment, str_type):
- comment = Literal(comment)
- if file is None:
- file = sys.stdout
- print_ = file.write
-
- result: Union[ParseResults, Exception]
- allResults = []
- comments = []
- success = True
- NL = Literal(r"\n").add_parse_action(replace_with("\n")).ignore(quoted_string)
- BOM = "\ufeff"
- for t in tests:
- if comment is not None and comment.matches(t, False) or comments and not t:
- comments.append(
- pyparsing_test.with_line_numbers(t) if with_line_numbers else t
- )
- continue
- if not t:
- continue
- out = [
- "\n" + "\n".join(comments) if comments else "",
- pyparsing_test.with_line_numbers(t) if with_line_numbers else t,
- ]
- comments = []
- try:
- # convert newline marks to actual newlines, and strip leading BOM if present
- t = NL.transform_string(t.lstrip(BOM))
- result = self.parse_string(t, parse_all=parseAll)
- except ParseBaseException as pe:
- fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else ""
- out.append(pe.explain())
- out.append("FAIL: " + str(pe))
- if ParserElement.verbose_stacktrace:
- out.extend(traceback.format_tb(pe.__traceback__))
- success = success and failureTests
- result = pe
- except Exception as exc:
- out.append("FAIL-EXCEPTION: {}: {}".format(type(exc).__name__, exc))
- if ParserElement.verbose_stacktrace:
- out.extend(traceback.format_tb(exc.__traceback__))
- success = success and failureTests
- result = exc
- else:
- success = success and not failureTests
- if postParse is not None:
- try:
- pp_value = postParse(t, result)
- if pp_value is not None:
- if isinstance(pp_value, ParseResults):
- out.append(pp_value.dump())
- else:
- out.append(str(pp_value))
- else:
- out.append(result.dump())
- except Exception as e:
- out.append(result.dump(full=fullDump))
- out.append(
- "{} failed: {}: {}".format(
- postParse.__name__, type(e).__name__, e
- )
- )
- else:
- out.append(result.dump(full=fullDump))
- out.append("")
-
- if printResults:
- print_("\n".join(out))
-
- allResults.append((t, result))
-
- return success, allResults
-
- def create_diagram(
- self,
- output_html: Union[TextIO, Path, str],
- vertical: int = 3,
- show_results_names: bool = False,
- show_groups: bool = False,
- **kwargs,
- ) -> None:
- """
- Create a railroad diagram for the parser.
-
- Parameters:
- - output_html (str or file-like object) - output target for generated
- diagram HTML
- - vertical (int) - threshold for formatting multiple alternatives vertically
- instead of horizontally (default=3)
- - show_results_names - bool flag whether diagram should show annotations for
- defined results names
- - show_groups - bool flag whether groups should be highlighted with an unlabeled surrounding box
- Additional diagram-formatting keyword arguments can also be included;
- see railroad.Diagram class.
- """
-
- try:
- from .diagram import to_railroad, railroad_to_html
- except ImportError as ie:
- raise Exception(
- "must ``pip install pyparsing[diagrams]`` to generate parser railroad diagrams"
- ) from ie
-
- self.streamline()
-
- railroad = to_railroad(
- self,
- vertical=vertical,
- show_results_names=show_results_names,
- show_groups=show_groups,
- diagram_kwargs=kwargs,
- )
- if isinstance(output_html, (str, Path)):
- with open(output_html, "w", encoding="utf-8") as diag_file:
- diag_file.write(railroad_to_html(railroad))
- else:
- # we were passed a file-like object, just write to it
- output_html.write(railroad_to_html(railroad))
-
- setDefaultWhitespaceChars = set_default_whitespace_chars
- inlineLiteralsUsing = inline_literals_using
- setResultsName = set_results_name
- setBreak = set_break
- setParseAction = set_parse_action
- addParseAction = add_parse_action
- addCondition = add_condition
- setFailAction = set_fail_action
- tryParse = try_parse
- canParseNext = can_parse_next
- resetCache = reset_cache
- enableLeftRecursion = enable_left_recursion
- enablePackrat = enable_packrat
- parseString = parse_string
- scanString = scan_string
- searchString = search_string
- transformString = transform_string
- setWhitespaceChars = set_whitespace_chars
- parseWithTabs = parse_with_tabs
- setDebugActions = set_debug_actions
- setDebug = set_debug
- defaultName = default_name
- setName = set_name
- parseFile = parse_file
- runTests = run_tests
- ignoreWhitespace = ignore_whitespace
- leaveWhitespace = leave_whitespace
-
-
-class _PendingSkip(ParserElement):
- # internal placeholder class to hold a place were '...' is added to a parser element,
- # once another ParserElement is added, this placeholder will be replaced with a SkipTo
- def __init__(self, expr: ParserElement, must_skip: bool = False):
- super().__init__()
- self.anchor = expr
- self.must_skip = must_skip
-
- def _generateDefaultName(self):
- return str(self.anchor + Empty()).replace("Empty", "...")
-
- def __add__(self, other) -> "ParserElement":
- skipper = SkipTo(other).set_name("...")("_skipped*")
- if self.must_skip:
-
- def must_skip(t):
- if not t._skipped or t._skipped.as_list() == [""]:
- del t[0]
- t.pop("_skipped", None)
-
- def show_skip(t):
- if t._skipped.as_list()[-1:] == [""]:
- t.pop("_skipped")
- t["_skipped"] = "missing <" + repr(self.anchor) + ">"
-
- return (
- self.anchor + skipper().add_parse_action(must_skip)
- | skipper().add_parse_action(show_skip)
- ) + other
-
- return self.anchor + skipper + other
-
- def __repr__(self):
- return self.defaultName
-
- def parseImpl(self, *args):
- raise Exception(
- "use of `...` expression without following SkipTo target expression"
- )
-
-
-class Token(ParserElement):
- """Abstract :class:`ParserElement` subclass, for defining atomic
- matching patterns.
- """
-
- def __init__(self):
- super().__init__(savelist=False)
-
- def _generateDefaultName(self):
- return type(self).__name__
-
-
-class Empty(Token):
- """
- An empty token, will always match.
- """
-
- def __init__(self):
- super().__init__()
- self.mayReturnEmpty = True
- self.mayIndexError = False
-
-
-class NoMatch(Token):
- """
- A token that will never match.
- """
-
- def __init__(self):
- super().__init__()
- self.mayReturnEmpty = True
- self.mayIndexError = False
- self.errmsg = "Unmatchable token"
-
- def parseImpl(self, instring, loc, doActions=True):
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class Literal(Token):
- """
- Token to exactly match a specified string.
-
- Example::
-
- Literal('blah').parse_string('blah') # -> ['blah']
- Literal('blah').parse_string('blahfooblah') # -> ['blah']
- Literal('blah').parse_string('bla') # -> Exception: Expected "blah"
-
- For case-insensitive matching, use :class:`CaselessLiteral`.
-
- For keyword matching (force word break before and after the matched string),
- use :class:`Keyword` or :class:`CaselessKeyword`.
- """
-
- def __init__(self, match_string: str = "", *, matchString: str = ""):
- super().__init__()
- match_string = matchString or match_string
- self.match = match_string
- self.matchLen = len(match_string)
- try:
- self.firstMatchChar = match_string[0]
- except IndexError:
- raise ValueError("null string passed to Literal; use Empty() instead")
- self.errmsg = "Expected " + self.name
- self.mayReturnEmpty = False
- self.mayIndexError = False
-
- # Performance tuning: modify __class__ to select
- # a parseImpl optimized for single-character check
- if self.matchLen == 1 and type(self) is Literal:
- self.__class__ = _SingleCharLiteral
-
- def _generateDefaultName(self):
- return repr(self.match)
-
- def parseImpl(self, instring, loc, doActions=True):
- if instring[loc] == self.firstMatchChar and instring.startswith(
- self.match, loc
- ):
- return loc + self.matchLen, self.match
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class _SingleCharLiteral(Literal):
- def parseImpl(self, instring, loc, doActions=True):
- if instring[loc] == self.firstMatchChar:
- return loc + 1, self.match
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-ParserElement._literalStringClass = Literal
-
-
-class Keyword(Token):
- """
- Token to exactly match a specified string as a keyword, that is,
- it must be immediately followed by a non-keyword character. Compare
- with :class:`Literal`:
-
- - ``Literal("if")`` will match the leading ``'if'`` in
- ``'ifAndOnlyIf'``.
- - ``Keyword("if")`` will not; it will only match the leading
- ``'if'`` in ``'if x=1'``, or ``'if(y==2)'``
-
- Accepts two optional constructor arguments in addition to the
- keyword string:
-
- - ``identChars`` is a string of characters that would be valid
- identifier characters, defaulting to all alphanumerics + "_" and
- "$"
- - ``caseless`` allows case-insensitive matching, default is ``False``.
-
- Example::
-
- Keyword("start").parse_string("start") # -> ['start']
- Keyword("start").parse_string("starting") # -> Exception
-
- For case-insensitive matching, use :class:`CaselessKeyword`.
- """
-
- DEFAULT_KEYWORD_CHARS = alphanums + "_$"
-
- def __init__(
- self,
- match_string: str = "",
- ident_chars: typing.Optional[str] = None,
- caseless: bool = False,
- *,
- matchString: str = "",
- identChars: typing.Optional[str] = None,
- ):
- super().__init__()
- identChars = identChars or ident_chars
- if identChars is None:
- identChars = Keyword.DEFAULT_KEYWORD_CHARS
- match_string = matchString or match_string
- self.match = match_string
- self.matchLen = len(match_string)
- try:
- self.firstMatchChar = match_string[0]
- except IndexError:
- raise ValueError("null string passed to Keyword; use Empty() instead")
- self.errmsg = "Expected {} {}".format(type(self).__name__, self.name)
- self.mayReturnEmpty = False
- self.mayIndexError = False
- self.caseless = caseless
- if caseless:
- self.caselessmatch = match_string.upper()
- identChars = identChars.upper()
- self.identChars = set(identChars)
-
- def _generateDefaultName(self):
- return repr(self.match)
-
- def parseImpl(self, instring, loc, doActions=True):
- errmsg = self.errmsg
- errloc = loc
- if self.caseless:
- if instring[loc : loc + self.matchLen].upper() == self.caselessmatch:
- if loc == 0 or instring[loc - 1].upper() not in self.identChars:
- if (
- loc >= len(instring) - self.matchLen
- or instring[loc + self.matchLen].upper() not in self.identChars
- ):
- return loc + self.matchLen, self.match
- else:
- # followed by keyword char
- errmsg += ", was immediately followed by keyword character"
- errloc = loc + self.matchLen
- else:
- # preceded by keyword char
- errmsg += ", keyword was immediately preceded by keyword character"
- errloc = loc - 1
- # else no match just raise plain exception
-
- else:
- if (
- instring[loc] == self.firstMatchChar
- and self.matchLen == 1
- or instring.startswith(self.match, loc)
- ):
- if loc == 0 or instring[loc - 1] not in self.identChars:
- if (
- loc >= len(instring) - self.matchLen
- or instring[loc + self.matchLen] not in self.identChars
- ):
- return loc + self.matchLen, self.match
- else:
- # followed by keyword char
- errmsg += (
- ", keyword was immediately followed by keyword character"
- )
- errloc = loc + self.matchLen
- else:
- # preceded by keyword char
- errmsg += ", keyword was immediately preceded by keyword character"
- errloc = loc - 1
- # else no match just raise plain exception
-
- raise ParseException(instring, errloc, errmsg, self)
-
- @staticmethod
- def set_default_keyword_chars(chars) -> None:
- """
- Overrides the default characters used by :class:`Keyword` expressions.
- """
- Keyword.DEFAULT_KEYWORD_CHARS = chars
-
- setDefaultKeywordChars = set_default_keyword_chars
-
-
-class CaselessLiteral(Literal):
- """
- Token to match a specified string, ignoring case of letters.
- Note: the matched results will always be in the case of the given
- match string, NOT the case of the input text.
-
- Example::
-
- CaselessLiteral("CMD")[1, ...].parse_string("cmd CMD Cmd10")
- # -> ['CMD', 'CMD', 'CMD']
-
- (Contrast with example for :class:`CaselessKeyword`.)
- """
-
- def __init__(self, match_string: str = "", *, matchString: str = ""):
- match_string = matchString or match_string
- super().__init__(match_string.upper())
- # Preserve the defining literal.
- self.returnString = match_string
- self.errmsg = "Expected " + self.name
-
- def parseImpl(self, instring, loc, doActions=True):
- if instring[loc : loc + self.matchLen].upper() == self.match:
- return loc + self.matchLen, self.returnString
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class CaselessKeyword(Keyword):
- """
- Caseless version of :class:`Keyword`.
-
- Example::
-
- CaselessKeyword("CMD")[1, ...].parse_string("cmd CMD Cmd10")
- # -> ['CMD', 'CMD']
-
- (Contrast with example for :class:`CaselessLiteral`.)
- """
-
- def __init__(
- self,
- match_string: str = "",
- ident_chars: typing.Optional[str] = None,
- *,
- matchString: str = "",
- identChars: typing.Optional[str] = None,
- ):
- identChars = identChars or ident_chars
- match_string = matchString or match_string
- super().__init__(match_string, identChars, caseless=True)
-
-
-class CloseMatch(Token):
- """A variation on :class:`Literal` which matches "close" matches,
- that is, strings with at most 'n' mismatching characters.
- :class:`CloseMatch` takes parameters:
-
- - ``match_string`` - string to be matched
- - ``caseless`` - a boolean indicating whether to ignore casing when comparing characters
- - ``max_mismatches`` - (``default=1``) maximum number of
- mismatches allowed to count as a match
-
- The results from a successful parse will contain the matched text
- from the input string and the following named results:
-
- - ``mismatches`` - a list of the positions within the
- match_string where mismatches were found
- - ``original`` - the original match_string used to compare
- against the input string
-
- If ``mismatches`` is an empty list, then the match was an exact
- match.
-
- Example::
-
- patt = CloseMatch("ATCATCGAATGGA")
- patt.parse_string("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']})
- patt.parse_string("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1)
-
- # exact match
- patt.parse_string("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']})
-
- # close match allowing up to 2 mismatches
- patt = CloseMatch("ATCATCGAATGGA", max_mismatches=2)
- patt.parse_string("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']})
- """
-
- def __init__(
- self,
- match_string: str,
- max_mismatches: int = None,
- *,
- maxMismatches: int = 1,
- caseless=False,
- ):
- maxMismatches = max_mismatches if max_mismatches is not None else maxMismatches
- super().__init__()
- self.match_string = match_string
- self.maxMismatches = maxMismatches
- self.errmsg = "Expected {!r} (with up to {} mismatches)".format(
- self.match_string, self.maxMismatches
- )
- self.caseless = caseless
- self.mayIndexError = False
- self.mayReturnEmpty = False
-
- def _generateDefaultName(self):
- return "{}:{!r}".format(type(self).__name__, self.match_string)
-
- def parseImpl(self, instring, loc, doActions=True):
- start = loc
- instrlen = len(instring)
- maxloc = start + len(self.match_string)
-
- if maxloc <= instrlen:
- match_string = self.match_string
- match_stringloc = 0
- mismatches = []
- maxMismatches = self.maxMismatches
-
- for match_stringloc, s_m in enumerate(
- zip(instring[loc:maxloc], match_string)
- ):
- src, mat = s_m
- if self.caseless:
- src, mat = src.lower(), mat.lower()
-
- if src != mat:
- mismatches.append(match_stringloc)
- if len(mismatches) > maxMismatches:
- break
- else:
- loc = start + match_stringloc + 1
- results = ParseResults([instring[start:loc]])
- results["original"] = match_string
- results["mismatches"] = mismatches
- return loc, results
-
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class Word(Token):
- """Token for matching words composed of allowed character sets.
- Parameters:
- - ``init_chars`` - string of all characters that should be used to
- match as a word; "ABC" will match "AAA", "ABAB", "CBAC", etc.;
- if ``body_chars`` is also specified, then this is the string of
- initial characters
- - ``body_chars`` - string of characters that
- can be used for matching after a matched initial character as
- given in ``init_chars``; if omitted, same as the initial characters
- (default=``None``)
- - ``min`` - minimum number of characters to match (default=1)
- - ``max`` - maximum number of characters to match (default=0)
- - ``exact`` - exact number of characters to match (default=0)
- - ``as_keyword`` - match as a keyword (default=``False``)
- - ``exclude_chars`` - characters that might be
- found in the input ``body_chars`` string but which should not be
- accepted for matching ;useful to define a word of all
- printables except for one or two characters, for instance
- (default=``None``)
-
- :class:`srange` is useful for defining custom character set strings
- for defining :class:`Word` expressions, using range notation from
- regular expression character sets.
-
- A common mistake is to use :class:`Word` to match a specific literal
- string, as in ``Word("Address")``. Remember that :class:`Word`
- uses the string argument to define *sets* of matchable characters.
- This expression would match "Add", "AAA", "dAred", or any other word
- made up of the characters 'A', 'd', 'r', 'e', and 's'. To match an
- exact literal string, use :class:`Literal` or :class:`Keyword`.
-
- pyparsing includes helper strings for building Words:
-
- - :class:`alphas`
- - :class:`nums`
- - :class:`alphanums`
- - :class:`hexnums`
- - :class:`alphas8bit` (alphabetic characters in ASCII range 128-255
- - accented, tilded, umlauted, etc.)
- - :class:`punc8bit` (non-alphabetic characters in ASCII range
- 128-255 - currency, symbols, superscripts, diacriticals, etc.)
- - :class:`printables` (any non-whitespace character)
-
- ``alphas``, ``nums``, and ``printables`` are also defined in several
- Unicode sets - see :class:`pyparsing_unicode``.
-
- Example::
-
- # a word composed of digits
- integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9"))
-
- # a word with a leading capital, and zero or more lowercase
- capital_word = Word(alphas.upper(), alphas.lower())
-
- # hostnames are alphanumeric, with leading alpha, and '-'
- hostname = Word(alphas, alphanums + '-')
-
- # roman numeral (not a strict parser, accepts invalid mix of characters)
- roman = Word("IVXLCDM")
-
- # any string of non-whitespace characters, except for ','
- csv_value = Word(printables, exclude_chars=",")
- """
-
- def __init__(
- self,
- init_chars: str = "",
- body_chars: typing.Optional[str] = None,
- min: int = 1,
- max: int = 0,
- exact: int = 0,
- as_keyword: bool = False,
- exclude_chars: typing.Optional[str] = None,
- *,
- initChars: typing.Optional[str] = None,
- bodyChars: typing.Optional[str] = None,
- asKeyword: bool = False,
- excludeChars: typing.Optional[str] = None,
- ):
- initChars = initChars or init_chars
- bodyChars = bodyChars or body_chars
- asKeyword = asKeyword or as_keyword
- excludeChars = excludeChars or exclude_chars
- super().__init__()
- if not initChars:
- raise ValueError(
- "invalid {}, initChars cannot be empty string".format(
- type(self).__name__
- )
- )
-
- initChars = set(initChars)
- self.initChars = initChars
- if excludeChars:
- excludeChars = set(excludeChars)
- initChars -= excludeChars
- if bodyChars:
- bodyChars = set(bodyChars) - excludeChars
- self.initCharsOrig = "".join(sorted(initChars))
-
- if bodyChars:
- self.bodyCharsOrig = "".join(sorted(bodyChars))
- self.bodyChars = set(bodyChars)
- else:
- self.bodyCharsOrig = "".join(sorted(initChars))
- self.bodyChars = set(initChars)
-
- self.maxSpecified = max > 0
-
- if min < 1:
- raise ValueError(
- "cannot specify a minimum length < 1; use Opt(Word()) if zero-length word is permitted"
- )
-
- self.minLen = min
-
- if max > 0:
- self.maxLen = max
- else:
- self.maxLen = _MAX_INT
-
- if exact > 0:
- self.maxLen = exact
- self.minLen = exact
-
- self.errmsg = "Expected " + self.name
- self.mayIndexError = False
- self.asKeyword = asKeyword
-
- # see if we can make a regex for this Word
- if " " not in self.initChars | self.bodyChars and (min == 1 and exact == 0):
- if self.bodyChars == self.initChars:
- if max == 0:
- repeat = "+"
- elif max == 1:
- repeat = ""
- else:
- repeat = "{{{},{}}}".format(
- self.minLen, "" if self.maxLen == _MAX_INT else self.maxLen
- )
- self.reString = "[{}]{}".format(
- _collapse_string_to_ranges(self.initChars),
- repeat,
- )
- elif len(self.initChars) == 1:
- if max == 0:
- repeat = "*"
- else:
- repeat = "{{0,{}}}".format(max - 1)
- self.reString = "{}[{}]{}".format(
- re.escape(self.initCharsOrig),
- _collapse_string_to_ranges(self.bodyChars),
- repeat,
- )
- else:
- if max == 0:
- repeat = "*"
- elif max == 2:
- repeat = ""
- else:
- repeat = "{{0,{}}}".format(max - 1)
- self.reString = "[{}][{}]{}".format(
- _collapse_string_to_ranges(self.initChars),
- _collapse_string_to_ranges(self.bodyChars),
- repeat,
- )
- if self.asKeyword:
- self.reString = r"\b" + self.reString + r"\b"
-
- try:
- self.re = re.compile(self.reString)
- except re.error:
- self.re = None
- else:
- self.re_match = self.re.match
- self.__class__ = _WordRegex
-
- def _generateDefaultName(self):
- def charsAsStr(s):
- max_repr_len = 16
- s = _collapse_string_to_ranges(s, re_escape=False)
- if len(s) > max_repr_len:
- return s[: max_repr_len - 3] + "..."
- else:
- return s
-
- if self.initChars != self.bodyChars:
- base = "W:({}, {})".format(
- charsAsStr(self.initChars), charsAsStr(self.bodyChars)
- )
- else:
- base = "W:({})".format(charsAsStr(self.initChars))
-
- # add length specification
- if self.minLen > 1 or self.maxLen != _MAX_INT:
- if self.minLen == self.maxLen:
- if self.minLen == 1:
- return base[2:]
- else:
- return base + "{{{}}}".format(self.minLen)
- elif self.maxLen == _MAX_INT:
- return base + "{{{},...}}".format(self.minLen)
- else:
- return base + "{{{},{}}}".format(self.minLen, self.maxLen)
- return base
-
- def parseImpl(self, instring, loc, doActions=True):
- if instring[loc] not in self.initChars:
- raise ParseException(instring, loc, self.errmsg, self)
-
- start = loc
- loc += 1
- instrlen = len(instring)
- bodychars = self.bodyChars
- maxloc = start + self.maxLen
- maxloc = min(maxloc, instrlen)
- while loc < maxloc and instring[loc] in bodychars:
- loc += 1
-
- throwException = False
- if loc - start < self.minLen:
- throwException = True
- elif self.maxSpecified and loc < instrlen and instring[loc] in bodychars:
- throwException = True
- elif self.asKeyword:
- if (
- start > 0
- and instring[start - 1] in bodychars
- or loc < instrlen
- and instring[loc] in bodychars
- ):
- throwException = True
-
- if throwException:
- raise ParseException(instring, loc, self.errmsg, self)
-
- return loc, instring[start:loc]
-
-
-class _WordRegex(Word):
- def parseImpl(self, instring, loc, doActions=True):
- result = self.re_match(instring, loc)
- if not result:
- raise ParseException(instring, loc, self.errmsg, self)
-
- loc = result.end()
- return loc, result.group()
-
-
-class Char(_WordRegex):
- """A short-cut class for defining :class:`Word` ``(characters, exact=1)``,
- when defining a match of any single character in a string of
- characters.
- """
-
- def __init__(
- self,
- charset: str,
- as_keyword: bool = False,
- exclude_chars: typing.Optional[str] = None,
- *,
- asKeyword: bool = False,
- excludeChars: typing.Optional[str] = None,
- ):
- asKeyword = asKeyword or as_keyword
- excludeChars = excludeChars or exclude_chars
- super().__init__(
- charset, exact=1, asKeyword=asKeyword, excludeChars=excludeChars
- )
- self.reString = "[{}]".format(_collapse_string_to_ranges(self.initChars))
- if asKeyword:
- self.reString = r"\b{}\b".format(self.reString)
- self.re = re.compile(self.reString)
- self.re_match = self.re.match
-
-
-class Regex(Token):
- r"""Token for matching strings that match a given regular
- expression. Defined with string specifying the regular expression in
- a form recognized by the stdlib Python `re module `_.
- If the given regex contains named groups (defined using ``(?P...)``),
- these will be preserved as named :class:`ParseResults`.
-
- If instead of the Python stdlib ``re`` module you wish to use a different RE module
- (such as the ``regex`` module), you can do so by building your ``Regex`` object with
- a compiled RE that was compiled using ``regex``.
-
- Example::
-
- realnum = Regex(r"[+-]?\d+\.\d*")
- # ref: https://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression
- roman = Regex(r"M{0,4}(CM|CD|D?{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})")
-
- # named fields in a regex will be returned as named results
- date = Regex(r'(?P\d{4})-(?P\d\d?)-(?P\d\d?)')
-
- # the Regex class will accept re's compiled using the regex module
- import regex
- parser = pp.Regex(regex.compile(r'[0-9]'))
- """
-
- def __init__(
- self,
- pattern: Any,
- flags: Union[re.RegexFlag, int] = 0,
- as_group_list: bool = False,
- as_match: bool = False,
- *,
- asGroupList: bool = False,
- asMatch: bool = False,
- ):
- """The parameters ``pattern`` and ``flags`` are passed
- to the ``re.compile()`` function as-is. See the Python
- `re module `_ module for an
- explanation of the acceptable patterns and flags.
- """
- super().__init__()
- asGroupList = asGroupList or as_group_list
- asMatch = asMatch or as_match
-
- if isinstance(pattern, str_type):
- if not pattern:
- raise ValueError("null string passed to Regex; use Empty() instead")
-
- self._re = None
- self.reString = self.pattern = pattern
- self.flags = flags
-
- elif hasattr(pattern, "pattern") and hasattr(pattern, "match"):
- self._re = pattern
- self.pattern = self.reString = pattern.pattern
- self.flags = flags
-
- else:
- raise TypeError(
- "Regex may only be constructed with a string or a compiled RE object"
- )
-
- self.errmsg = "Expected " + self.name
- self.mayIndexError = False
- self.asGroupList = asGroupList
- self.asMatch = asMatch
- if self.asGroupList:
- self.parseImpl = self.parseImplAsGroupList
- if self.asMatch:
- self.parseImpl = self.parseImplAsMatch
-
- @cached_property
- def re(self):
- if self._re:
- return self._re
- else:
- try:
- return re.compile(self.pattern, self.flags)
- except re.error:
- raise ValueError(
- "invalid pattern ({!r}) passed to Regex".format(self.pattern)
- )
-
- @cached_property
- def re_match(self):
- return self.re.match
-
- @cached_property
- def mayReturnEmpty(self):
- return self.re_match("") is not None
-
- def _generateDefaultName(self):
- return "Re:({})".format(repr(self.pattern).replace("\\\\", "\\"))
-
- def parseImpl(self, instring, loc, doActions=True):
- result = self.re_match(instring, loc)
- if not result:
- raise ParseException(instring, loc, self.errmsg, self)
-
- loc = result.end()
- ret = ParseResults(result.group())
- d = result.groupdict()
- if d:
- for k, v in d.items():
- ret[k] = v
- return loc, ret
-
- def parseImplAsGroupList(self, instring, loc, doActions=True):
- result = self.re_match(instring, loc)
- if not result:
- raise ParseException(instring, loc, self.errmsg, self)
-
- loc = result.end()
- ret = result.groups()
- return loc, ret
-
- def parseImplAsMatch(self, instring, loc, doActions=True):
- result = self.re_match(instring, loc)
- if not result:
- raise ParseException(instring, loc, self.errmsg, self)
-
- loc = result.end()
- ret = result
- return loc, ret
-
- def sub(self, repl: str) -> ParserElement:
- r"""
- Return :class:`Regex` with an attached parse action to transform the parsed
- result as if called using `re.sub(expr, repl, string) `_.
-
- Example::
-
- make_html = Regex(r"(\w+):(.*?):").sub(r"<\1>\2\1>")
- print(make_html.transform_string("h1:main title:"))
- # prints "
main title
"
- """
- if self.asGroupList:
- raise TypeError("cannot use sub() with Regex(asGroupList=True)")
-
- if self.asMatch and callable(repl):
- raise TypeError("cannot use sub() with a callable with Regex(asMatch=True)")
-
- if self.asMatch:
-
- def pa(tokens):
- return tokens[0].expand(repl)
-
- else:
-
- def pa(tokens):
- return self.re.sub(repl, tokens[0])
-
- return self.add_parse_action(pa)
-
-
-class QuotedString(Token):
- r"""
- Token for matching strings that are delimited by quoting characters.
-
- Defined with the following parameters:
-
- - ``quote_char`` - string of one or more characters defining the
- quote delimiting string
- - ``esc_char`` - character to re_escape quotes, typically backslash
- (default= ``None``)
- - ``esc_quote`` - special quote sequence to re_escape an embedded quote
- string (such as SQL's ``""`` to re_escape an embedded ``"``)
- (default= ``None``)
- - ``multiline`` - boolean indicating whether quotes can span
- multiple lines (default= ``False``)
- - ``unquote_results`` - boolean indicating whether the matched text
- should be unquoted (default= ``True``)
- - ``end_quote_char`` - string of one or more characters defining the
- end of the quote delimited string (default= ``None`` => same as
- quote_char)
- - ``convert_whitespace_escapes`` - convert escaped whitespace
- (``'\t'``, ``'\n'``, etc.) to actual whitespace
- (default= ``True``)
-
- Example::
-
- qs = QuotedString('"')
- print(qs.search_string('lsjdf "This is the quote" sldjf'))
- complex_qs = QuotedString('{{', end_quote_char='}}')
- print(complex_qs.search_string('lsjdf {{This is the "quote"}} sldjf'))
- sql_qs = QuotedString('"', esc_quote='""')
- print(sql_qs.search_string('lsjdf "This is the quote with ""embedded"" quotes" sldjf'))
-
- prints::
-
- [['This is the quote']]
- [['This is the "quote"']]
- [['This is the quote with "embedded" quotes']]
- """
- ws_map = ((r"\t", "\t"), (r"\n", "\n"), (r"\f", "\f"), (r"\r", "\r"))
-
- def __init__(
- self,
- quote_char: str = "",
- esc_char: typing.Optional[str] = None,
- esc_quote: typing.Optional[str] = None,
- multiline: bool = False,
- unquote_results: bool = True,
- end_quote_char: typing.Optional[str] = None,
- convert_whitespace_escapes: bool = True,
- *,
- quoteChar: str = "",
- escChar: typing.Optional[str] = None,
- escQuote: typing.Optional[str] = None,
- unquoteResults: bool = True,
- endQuoteChar: typing.Optional[str] = None,
- convertWhitespaceEscapes: bool = True,
- ):
- super().__init__()
- escChar = escChar or esc_char
- escQuote = escQuote or esc_quote
- unquoteResults = unquoteResults and unquote_results
- endQuoteChar = endQuoteChar or end_quote_char
- convertWhitespaceEscapes = (
- convertWhitespaceEscapes and convert_whitespace_escapes
- )
- quote_char = quoteChar or quote_char
-
- # remove white space from quote chars - wont work anyway
- quote_char = quote_char.strip()
- if not quote_char:
- raise ValueError("quote_char cannot be the empty string")
-
- if endQuoteChar is None:
- endQuoteChar = quote_char
- else:
- endQuoteChar = endQuoteChar.strip()
- if not endQuoteChar:
- raise ValueError("endQuoteChar cannot be the empty string")
-
- self.quoteChar = quote_char
- self.quoteCharLen = len(quote_char)
- self.firstQuoteChar = quote_char[0]
- self.endQuoteChar = endQuoteChar
- self.endQuoteCharLen = len(endQuoteChar)
- self.escChar = escChar
- self.escQuote = escQuote
- self.unquoteResults = unquoteResults
- self.convertWhitespaceEscapes = convertWhitespaceEscapes
-
- sep = ""
- inner_pattern = ""
-
- if escQuote:
- inner_pattern += r"{}(?:{})".format(sep, re.escape(escQuote))
- sep = "|"
-
- if escChar:
- inner_pattern += r"{}(?:{}.)".format(sep, re.escape(escChar))
- sep = "|"
- self.escCharReplacePattern = re.escape(self.escChar) + "(.)"
-
- if len(self.endQuoteChar) > 1:
- inner_pattern += (
- "{}(?:".format(sep)
- + "|".join(
- "(?:{}(?!{}))".format(
- re.escape(self.endQuoteChar[:i]),
- re.escape(self.endQuoteChar[i:]),
- )
- for i in range(len(self.endQuoteChar) - 1, 0, -1)
- )
- + ")"
- )
- sep = "|"
-
- if multiline:
- self.flags = re.MULTILINE | re.DOTALL
- inner_pattern += r"{}(?:[^{}{}])".format(
- sep,
- _escape_regex_range_chars(self.endQuoteChar[0]),
- (_escape_regex_range_chars(escChar) if escChar is not None else ""),
- )
- else:
- self.flags = 0
- inner_pattern += r"{}(?:[^{}\n\r{}])".format(
- sep,
- _escape_regex_range_chars(self.endQuoteChar[0]),
- (_escape_regex_range_chars(escChar) if escChar is not None else ""),
- )
-
- self.pattern = "".join(
- [
- re.escape(self.quoteChar),
- "(?:",
- inner_pattern,
- ")*",
- re.escape(self.endQuoteChar),
- ]
- )
-
- try:
- self.re = re.compile(self.pattern, self.flags)
- self.reString = self.pattern
- self.re_match = self.re.match
- except re.error:
- raise ValueError(
- "invalid pattern {!r} passed to Regex".format(self.pattern)
- )
-
- self.errmsg = "Expected " + self.name
- self.mayIndexError = False
- self.mayReturnEmpty = True
-
- def _generateDefaultName(self):
- if self.quoteChar == self.endQuoteChar and isinstance(self.quoteChar, str_type):
- return "string enclosed in {!r}".format(self.quoteChar)
-
- return "quoted string, starting with {} ending with {}".format(
- self.quoteChar, self.endQuoteChar
- )
-
- def parseImpl(self, instring, loc, doActions=True):
- result = (
- instring[loc] == self.firstQuoteChar
- and self.re_match(instring, loc)
- or None
- )
- if not result:
- raise ParseException(instring, loc, self.errmsg, self)
-
- loc = result.end()
- ret = result.group()
-
- if self.unquoteResults:
-
- # strip off quotes
- ret = ret[self.quoteCharLen : -self.endQuoteCharLen]
-
- if isinstance(ret, str_type):
- # replace escaped whitespace
- if "\\" in ret and self.convertWhitespaceEscapes:
- for wslit, wschar in self.ws_map:
- ret = ret.replace(wslit, wschar)
-
- # replace escaped characters
- if self.escChar:
- ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret)
-
- # replace escaped quotes
- if self.escQuote:
- ret = ret.replace(self.escQuote, self.endQuoteChar)
-
- return loc, ret
-
-
-class CharsNotIn(Token):
- """Token for matching words composed of characters *not* in a given
- set (will include whitespace in matched characters if not listed in
- the provided exclusion set - see example). Defined with string
- containing all disallowed characters, and an optional minimum,
- maximum, and/or exact length. The default value for ``min`` is
- 1 (a minimum value < 1 is not valid); the default values for
- ``max`` and ``exact`` are 0, meaning no maximum or exact
- length restriction.
-
- Example::
-
- # define a comma-separated-value as anything that is not a ','
- csv_value = CharsNotIn(',')
- print(delimited_list(csv_value).parse_string("dkls,lsdkjf,s12 34,@!#,213"))
-
- prints::
-
- ['dkls', 'lsdkjf', 's12 34', '@!#', '213']
- """
-
- def __init__(
- self,
- not_chars: str = "",
- min: int = 1,
- max: int = 0,
- exact: int = 0,
- *,
- notChars: str = "",
- ):
- super().__init__()
- self.skipWhitespace = False
- self.notChars = not_chars or notChars
- self.notCharsSet = set(self.notChars)
-
- if min < 1:
- raise ValueError(
- "cannot specify a minimum length < 1; use "
- "Opt(CharsNotIn()) if zero-length char group is permitted"
- )
-
- self.minLen = min
-
- if max > 0:
- self.maxLen = max
- else:
- self.maxLen = _MAX_INT
-
- if exact > 0:
- self.maxLen = exact
- self.minLen = exact
-
- self.errmsg = "Expected " + self.name
- self.mayReturnEmpty = self.minLen == 0
- self.mayIndexError = False
-
- def _generateDefaultName(self):
- not_chars_str = _collapse_string_to_ranges(self.notChars)
- if len(not_chars_str) > 16:
- return "!W:({}...)".format(self.notChars[: 16 - 3])
- else:
- return "!W:({})".format(self.notChars)
-
- def parseImpl(self, instring, loc, doActions=True):
- notchars = self.notCharsSet
- if instring[loc] in notchars:
- raise ParseException(instring, loc, self.errmsg, self)
-
- start = loc
- loc += 1
- maxlen = min(start + self.maxLen, len(instring))
- while loc < maxlen and instring[loc] not in notchars:
- loc += 1
-
- if loc - start < self.minLen:
- raise ParseException(instring, loc, self.errmsg, self)
-
- return loc, instring[start:loc]
-
-
-class White(Token):
- """Special matching class for matching whitespace. Normally,
- whitespace is ignored by pyparsing grammars. This class is included
- when some whitespace structures are significant. Define with
- a string containing the whitespace characters to be matched; default
- is ``" \\t\\r\\n"``. Also takes optional ``min``,
- ``max``, and ``exact`` arguments, as defined for the
- :class:`Word` class.
- """
-
- whiteStrs = {
- " ": "",
- "\t": "",
- "\n": "",
- "\r": "",
- "\f": "",
- "\u00A0": "",
- "\u1680": "",
- "\u180E": "",
- "\u2000": "",
- "\u2001": "",
- "\u2002": "",
- "\u2003": "",
- "\u2004": "",
- "\u2005": "",
- "\u2006": "",
- "\u2007": "",
- "\u2008": "",
- "\u2009": "",
- "\u200A": "",
- "\u200B": "",
- "\u202F": "",
- "\u205F": "",
- "\u3000": "",
- }
-
- def __init__(self, ws: str = " \t\r\n", min: int = 1, max: int = 0, exact: int = 0):
- super().__init__()
- self.matchWhite = ws
- self.set_whitespace_chars(
- "".join(c for c in self.whiteStrs if c not in self.matchWhite),
- copy_defaults=True,
- )
- # self.leave_whitespace()
- self.mayReturnEmpty = True
- self.errmsg = "Expected " + self.name
-
- self.minLen = min
-
- if max > 0:
- self.maxLen = max
- else:
- self.maxLen = _MAX_INT
-
- if exact > 0:
- self.maxLen = exact
- self.minLen = exact
-
- def _generateDefaultName(self):
- return "".join(White.whiteStrs[c] for c in self.matchWhite)
-
- def parseImpl(self, instring, loc, doActions=True):
- if instring[loc] not in self.matchWhite:
- raise ParseException(instring, loc, self.errmsg, self)
- start = loc
- loc += 1
- maxloc = start + self.maxLen
- maxloc = min(maxloc, len(instring))
- while loc < maxloc and instring[loc] in self.matchWhite:
- loc += 1
-
- if loc - start < self.minLen:
- raise ParseException(instring, loc, self.errmsg, self)
-
- return loc, instring[start:loc]
-
-
-class PositionToken(Token):
- def __init__(self):
- super().__init__()
- self.mayReturnEmpty = True
- self.mayIndexError = False
-
-
-class GoToColumn(PositionToken):
- """Token to advance to a specific column of input text; useful for
- tabular report scraping.
- """
-
- def __init__(self, colno: int):
- super().__init__()
- self.col = colno
-
- def preParse(self, instring, loc):
- if col(loc, instring) != self.col:
- instrlen = len(instring)
- if self.ignoreExprs:
- loc = self._skipIgnorables(instring, loc)
- while (
- loc < instrlen
- and instring[loc].isspace()
- and col(loc, instring) != self.col
- ):
- loc += 1
- return loc
-
- def parseImpl(self, instring, loc, doActions=True):
- thiscol = col(loc, instring)
- if thiscol > self.col:
- raise ParseException(instring, loc, "Text not in expected column", self)
- newloc = loc + self.col - thiscol
- ret = instring[loc:newloc]
- return newloc, ret
-
-
-class LineStart(PositionToken):
- r"""Matches if current position is at the beginning of a line within
- the parse string
-
- Example::
-
- test = '''\
- AAA this line
- AAA and this line
- AAA but not this one
- B AAA and definitely not this one
- '''
-
- for t in (LineStart() + 'AAA' + restOfLine).search_string(test):
- print(t)
-
- prints::
-
- ['AAA', ' this line']
- ['AAA', ' and this line']
-
- """
-
- def __init__(self):
- super().__init__()
- self.leave_whitespace()
- self.orig_whiteChars = set() | self.whiteChars
- self.whiteChars.discard("\n")
- self.skipper = Empty().set_whitespace_chars(self.whiteChars)
- self.errmsg = "Expected start of line"
-
- def preParse(self, instring, loc):
- if loc == 0:
- return loc
- else:
- ret = self.skipper.preParse(instring, loc)
- if "\n" in self.orig_whiteChars:
- while instring[ret : ret + 1] == "\n":
- ret = self.skipper.preParse(instring, ret + 1)
- return ret
-
- def parseImpl(self, instring, loc, doActions=True):
- if col(loc, instring) == 1:
- return loc, []
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class LineEnd(PositionToken):
- """Matches if current position is at the end of a line within the
- parse string
- """
-
- def __init__(self):
- super().__init__()
- self.whiteChars.discard("\n")
- self.set_whitespace_chars(self.whiteChars, copy_defaults=False)
- self.errmsg = "Expected end of line"
-
- def parseImpl(self, instring, loc, doActions=True):
- if loc < len(instring):
- if instring[loc] == "\n":
- return loc + 1, "\n"
- else:
- raise ParseException(instring, loc, self.errmsg, self)
- elif loc == len(instring):
- return loc + 1, []
- else:
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class StringStart(PositionToken):
- """Matches if current position is at the beginning of the parse
- string
- """
-
- def __init__(self):
- super().__init__()
- self.errmsg = "Expected start of text"
-
- def parseImpl(self, instring, loc, doActions=True):
- if loc != 0:
- # see if entire string up to here is just whitespace and ignoreables
- if loc != self.preParse(instring, 0):
- raise ParseException(instring, loc, self.errmsg, self)
- return loc, []
-
-
-class StringEnd(PositionToken):
- """
- Matches if current position is at the end of the parse string
- """
-
- def __init__(self):
- super().__init__()
- self.errmsg = "Expected end of text"
-
- def parseImpl(self, instring, loc, doActions=True):
- if loc < len(instring):
- raise ParseException(instring, loc, self.errmsg, self)
- elif loc == len(instring):
- return loc + 1, []
- elif loc > len(instring):
- return loc, []
- else:
- raise ParseException(instring, loc, self.errmsg, self)
-
-
-class WordStart(PositionToken):
- """Matches if the current position is at the beginning of a
- :class:`Word`, and is not preceded by any character in a given
- set of ``word_chars`` (default= ``printables``). To emulate the
- ``\b`` behavior of regular expressions, use
- ``WordStart(alphanums)``. ``WordStart`` will also match at
- the beginning of the string being parsed, or at the beginning of
- a line.
- """
-
- def __init__(self, word_chars: str = printables, *, wordChars: str = printables):
- wordChars = word_chars if wordChars == printables else wordChars
- super().__init__()
- self.wordChars = set(wordChars)
- self.errmsg = "Not at the start of a word"
-
- def parseImpl(self, instring, loc, doActions=True):
- if loc != 0:
- if (
- instring[loc - 1] in self.wordChars
- or instring[loc] not in self.wordChars
- ):
- raise ParseException(instring, loc, self.errmsg, self)
- return loc, []
-
-
-class WordEnd(PositionToken):
- """Matches if the current position is at the end of a :class:`Word`,
- and is not followed by any character in a given set of ``word_chars``
- (default= ``printables``). To emulate the ``\b`` behavior of
- regular expressions, use ``WordEnd(alphanums)``. ``WordEnd``
- will also match at the end of the string being parsed, or at the end
- of a line.
- """
-
- def __init__(self, word_chars: str = printables, *, wordChars: str = printables):
- wordChars = word_chars if wordChars == printables else wordChars
- super().__init__()
- self.wordChars = set(wordChars)
- self.skipWhitespace = False
- self.errmsg = "Not at the end of a word"
-
- def parseImpl(self, instring, loc, doActions=True):
- instrlen = len(instring)
- if instrlen > 0 and loc < instrlen:
- if (
- instring[loc] in self.wordChars
- or instring[loc - 1] not in self.wordChars
- ):
- raise ParseException(instring, loc, self.errmsg, self)
- return loc, []
-
-
-class ParseExpression(ParserElement):
- """Abstract subclass of ParserElement, for combining and
- post-processing parsed tokens.
- """
-
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
- super().__init__(savelist)
- self.exprs: List[ParserElement]
- if isinstance(exprs, _generatorType):
- exprs = list(exprs)
-
- if isinstance(exprs, str_type):
- self.exprs = [self._literalStringClass(exprs)]
- elif isinstance(exprs, ParserElement):
- self.exprs = [exprs]
- elif isinstance(exprs, Iterable):
- exprs = list(exprs)
- # if sequence of strings provided, wrap with Literal
- if any(isinstance(expr, str_type) for expr in exprs):
- exprs = (
- self._literalStringClass(e) if isinstance(e, str_type) else e
- for e in exprs
- )
- self.exprs = list(exprs)
- else:
- try:
- self.exprs = list(exprs)
- except TypeError:
- self.exprs = [exprs]
- self.callPreparse = False
-
- def recurse(self) -> Sequence[ParserElement]:
- return self.exprs[:]
-
- def append(self, other) -> ParserElement:
- self.exprs.append(other)
- self._defaultName = None
- return self
-
- def leave_whitespace(self, recursive: bool = True) -> ParserElement:
- """
- Extends ``leave_whitespace`` defined in base class, and also invokes ``leave_whitespace`` on
- all contained expressions.
- """
- super().leave_whitespace(recursive)
-
- if recursive:
- self.exprs = [e.copy() for e in self.exprs]
- for e in self.exprs:
- e.leave_whitespace(recursive)
- return self
-
- def ignore_whitespace(self, recursive: bool = True) -> ParserElement:
- """
- Extends ``ignore_whitespace`` defined in base class, and also invokes ``leave_whitespace`` on
- all contained expressions.
- """
- super().ignore_whitespace(recursive)
- if recursive:
- self.exprs = [e.copy() for e in self.exprs]
- for e in self.exprs:
- e.ignore_whitespace(recursive)
- return self
-
- def ignore(self, other) -> ParserElement:
- if isinstance(other, Suppress):
- if other not in self.ignoreExprs:
- super().ignore(other)
- for e in self.exprs:
- e.ignore(self.ignoreExprs[-1])
- else:
- super().ignore(other)
- for e in self.exprs:
- e.ignore(self.ignoreExprs[-1])
- return self
-
- def _generateDefaultName(self):
- return "{}:({})".format(self.__class__.__name__, str(self.exprs))
-
- def streamline(self) -> ParserElement:
- if self.streamlined:
- return self
-
- super().streamline()
-
- for e in self.exprs:
- e.streamline()
-
- # collapse nested :class:`And`'s of the form ``And(And(And(a, b), c), d)`` to ``And(a, b, c, d)``
- # but only if there are no parse actions or resultsNames on the nested And's
- # (likewise for :class:`Or`'s and :class:`MatchFirst`'s)
- if len(self.exprs) == 2:
- other = self.exprs[0]
- if (
- isinstance(other, self.__class__)
- and not other.parseAction
- and other.resultsName is None
- and not other.debug
- ):
- self.exprs = other.exprs[:] + [self.exprs[1]]
- self._defaultName = None
- self.mayReturnEmpty |= other.mayReturnEmpty
- self.mayIndexError |= other.mayIndexError
-
- other = self.exprs[-1]
- if (
- isinstance(other, self.__class__)
- and not other.parseAction
- and other.resultsName is None
- and not other.debug
- ):
- self.exprs = self.exprs[:-1] + other.exprs[:]
- self._defaultName = None
- self.mayReturnEmpty |= other.mayReturnEmpty
- self.mayIndexError |= other.mayIndexError
-
- self.errmsg = "Expected " + str(self)
-
- return self
-
- def validate(self, validateTrace=None) -> None:
- tmp = (validateTrace if validateTrace is not None else [])[:] + [self]
- for e in self.exprs:
- e.validate(tmp)
- self._checkRecursion([])
-
- def copy(self) -> ParserElement:
- ret = super().copy()
- ret.exprs = [e.copy() for e in self.exprs]
- return ret
-
- def _setResultsName(self, name, listAllMatches=False):
- if (
- __diag__.warn_ungrouped_named_tokens_in_collection
- and Diagnostics.warn_ungrouped_named_tokens_in_collection
- not in self.suppress_warnings_
- ):
- for e in self.exprs:
- if (
- isinstance(e, ParserElement)
- and e.resultsName
- and Diagnostics.warn_ungrouped_named_tokens_in_collection
- not in e.suppress_warnings_
- ):
- warnings.warn(
- "{}: setting results name {!r} on {} expression "
- "collides with {!r} on contained expression".format(
- "warn_ungrouped_named_tokens_in_collection",
- name,
- type(self).__name__,
- e.resultsName,
- ),
- stacklevel=3,
- )
-
- return super()._setResultsName(name, listAllMatches)
-
- ignoreWhitespace = ignore_whitespace
- leaveWhitespace = leave_whitespace
-
-
-class And(ParseExpression):
- """
- Requires all given :class:`ParseExpression` s to be found in the given order.
- Expressions may be separated by whitespace.
- May be constructed using the ``'+'`` operator.
- May also be constructed using the ``'-'`` operator, which will
- suppress backtracking.
-
- Example::
-
- integer = Word(nums)
- name_expr = Word(alphas)[1, ...]
-
- expr = And([integer("id"), name_expr("name"), integer("age")])
- # more easily written as:
- expr = integer("id") + name_expr("name") + integer("age")
- """
-
- class _ErrorStop(Empty):
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- self.leave_whitespace()
-
- def _generateDefaultName(self):
- return "-"
-
- def __init__(
- self, exprs_arg: typing.Iterable[ParserElement], savelist: bool = True
- ):
- exprs: List[ParserElement] = list(exprs_arg)
- if exprs and Ellipsis in exprs:
- tmp = []
- for i, expr in enumerate(exprs):
- if expr is Ellipsis:
- if i < len(exprs) - 1:
- skipto_arg: ParserElement = (Empty() + exprs[i + 1]).exprs[-1]
- tmp.append(SkipTo(skipto_arg)("_skipped*"))
- else:
- raise Exception(
- "cannot construct And with sequence ending in ..."
- )
- else:
- tmp.append(expr)
- exprs[:] = tmp
- super().__init__(exprs, savelist)
- if self.exprs:
- self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
- if not isinstance(self.exprs[0], White):
- self.set_whitespace_chars(
- self.exprs[0].whiteChars,
- copy_defaults=self.exprs[0].copyDefaultWhiteChars,
- )
- self.skipWhitespace = self.exprs[0].skipWhitespace
- else:
- self.skipWhitespace = False
- else:
- self.mayReturnEmpty = True
- self.callPreparse = True
-
- def streamline(self) -> ParserElement:
- # collapse any _PendingSkip's
- if self.exprs:
- if any(
- isinstance(e, ParseExpression)
- and e.exprs
- and isinstance(e.exprs[-1], _PendingSkip)
- for e in self.exprs[:-1]
- ):
- for i, e in enumerate(self.exprs[:-1]):
- if e is None:
- continue
- if (
- isinstance(e, ParseExpression)
- and e.exprs
- and isinstance(e.exprs[-1], _PendingSkip)
- ):
- e.exprs[-1] = e.exprs[-1] + self.exprs[i + 1]
- self.exprs[i + 1] = None
- self.exprs = [e for e in self.exprs if e is not None]
-
- super().streamline()
-
- # link any IndentedBlocks to the prior expression
- for prev, cur in zip(self.exprs, self.exprs[1:]):
- # traverse cur or any first embedded expr of cur looking for an IndentedBlock
- # (but watch out for recursive grammar)
- seen = set()
- while cur:
- if id(cur) in seen:
- break
- seen.add(id(cur))
- if isinstance(cur, IndentedBlock):
- prev.add_parse_action(
- lambda s, l, t, cur_=cur: setattr(
- cur_, "parent_anchor", col(l, s)
- )
- )
- break
- subs = cur.recurse()
- cur = next(iter(subs), None)
-
- self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
- return self
-
- def parseImpl(self, instring, loc, doActions=True):
- # pass False as callPreParse arg to _parse for first element, since we already
- # pre-parsed the string as part of our And pre-parsing
- loc, resultlist = self.exprs[0]._parse(
- instring, loc, doActions, callPreParse=False
- )
- errorStop = False
- for e in self.exprs[1:]:
- # if isinstance(e, And._ErrorStop):
- if type(e) is And._ErrorStop:
- errorStop = True
- continue
- if errorStop:
- try:
- loc, exprtokens = e._parse(instring, loc, doActions)
- except ParseSyntaxException:
- raise
- except ParseBaseException as pe:
- pe.__traceback__ = None
- raise ParseSyntaxException._from_exception(pe)
- except IndexError:
- raise ParseSyntaxException(
- instring, len(instring), self.errmsg, self
- )
- else:
- loc, exprtokens = e._parse(instring, loc, doActions)
- if exprtokens or exprtokens.haskeys():
- resultlist += exprtokens
- return loc, resultlist
-
- def __iadd__(self, other):
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- return self.append(other) # And([self, other])
-
- def _checkRecursion(self, parseElementList):
- subRecCheckList = parseElementList[:] + [self]
- for e in self.exprs:
- e._checkRecursion(subRecCheckList)
- if not e.mayReturnEmpty:
- break
-
- def _generateDefaultName(self):
- inner = " ".join(str(e) for e in self.exprs)
- # strip off redundant inner {}'s
- while len(inner) > 1 and inner[0 :: len(inner) - 1] == "{}":
- inner = inner[1:-1]
- return "{" + inner + "}"
-
-
-class Or(ParseExpression):
- """Requires that at least one :class:`ParseExpression` is found. If
- two expressions match, the expression that matches the longest
- string will be used. May be constructed using the ``'^'``
- operator.
-
- Example::
-
- # construct Or using '^' operator
-
- number = Word(nums) ^ Combine(Word(nums) + '.' + Word(nums))
- print(number.search_string("123 3.1416 789"))
-
- prints::
-
- [['123'], ['3.1416'], ['789']]
- """
-
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
- super().__init__(exprs, savelist)
- if self.exprs:
- self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- self.skipWhitespace = all(e.skipWhitespace for e in self.exprs)
- else:
- self.mayReturnEmpty = True
-
- def streamline(self) -> ParserElement:
- super().streamline()
- if self.exprs:
- self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- self.saveAsList = any(e.saveAsList for e in self.exprs)
- self.skipWhitespace = all(
- e.skipWhitespace and not isinstance(e, White) for e in self.exprs
- )
- else:
- self.saveAsList = False
- return self
-
- def parseImpl(self, instring, loc, doActions=True):
- maxExcLoc = -1
- maxException = None
- matches = []
- fatals = []
- if all(e.callPreparse for e in self.exprs):
- loc = self.preParse(instring, loc)
- for e in self.exprs:
- try:
- loc2 = e.try_parse(instring, loc, raise_fatal=True)
- except ParseFatalException as pfe:
- pfe.__traceback__ = None
- pfe.parserElement = e
- fatals.append(pfe)
- maxException = None
- maxExcLoc = -1
- except ParseException as err:
- if not fatals:
- err.__traceback__ = None
- if err.loc > maxExcLoc:
- maxException = err
- maxExcLoc = err.loc
- except IndexError:
- if len(instring) > maxExcLoc:
- maxException = ParseException(
- instring, len(instring), e.errmsg, self
- )
- maxExcLoc = len(instring)
- else:
- # save match among all matches, to retry longest to shortest
- matches.append((loc2, e))
-
- if matches:
- # re-evaluate all matches in descending order of length of match, in case attached actions
- # might change whether or how much they match of the input.
- matches.sort(key=itemgetter(0), reverse=True)
-
- if not doActions:
- # no further conditions or parse actions to change the selection of
- # alternative, so the first match will be the best match
- best_expr = matches[0][1]
- return best_expr._parse(instring, loc, doActions)
-
- longest = -1, None
- for loc1, expr1 in matches:
- if loc1 <= longest[0]:
- # already have a longer match than this one will deliver, we are done
- return longest
-
- try:
- loc2, toks = expr1._parse(instring, loc, doActions)
- except ParseException as err:
- err.__traceback__ = None
- if err.loc > maxExcLoc:
- maxException = err
- maxExcLoc = err.loc
- else:
- if loc2 >= loc1:
- return loc2, toks
- # didn't match as much as before
- elif loc2 > longest[0]:
- longest = loc2, toks
-
- if longest != (-1, None):
- return longest
-
- if fatals:
- if len(fatals) > 1:
- fatals.sort(key=lambda e: -e.loc)
- if fatals[0].loc == fatals[1].loc:
- fatals.sort(key=lambda e: (-e.loc, -len(str(e.parserElement))))
- max_fatal = fatals[0]
- raise max_fatal
-
- if maxException is not None:
- maxException.msg = self.errmsg
- raise maxException
- else:
- raise ParseException(
- instring, loc, "no defined alternatives to match", self
- )
-
- def __ixor__(self, other):
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- return self.append(other) # Or([self, other])
-
- def _generateDefaultName(self):
- return "{" + " ^ ".join(str(e) for e in self.exprs) + "}"
-
- def _setResultsName(self, name, listAllMatches=False):
- if (
- __diag__.warn_multiple_tokens_in_named_alternation
- and Diagnostics.warn_multiple_tokens_in_named_alternation
- not in self.suppress_warnings_
- ):
- if any(
- isinstance(e, And)
- and Diagnostics.warn_multiple_tokens_in_named_alternation
- not in e.suppress_warnings_
- for e in self.exprs
- ):
- warnings.warn(
- "{}: setting results name {!r} on {} expression "
- "will return a list of all parsed tokens in an And alternative, "
- "in prior versions only the first token was returned; enclose "
- "contained argument in Group".format(
- "warn_multiple_tokens_in_named_alternation",
- name,
- type(self).__name__,
- ),
- stacklevel=3,
- )
-
- return super()._setResultsName(name, listAllMatches)
-
-
-class MatchFirst(ParseExpression):
- """Requires that at least one :class:`ParseExpression` is found. If
- more than one expression matches, the first one listed is the one that will
- match. May be constructed using the ``'|'`` operator.
-
- Example::
-
- # construct MatchFirst using '|' operator
-
- # watch the order of expressions to match
- number = Word(nums) | Combine(Word(nums) + '.' + Word(nums))
- print(number.search_string("123 3.1416 789")) # Fail! -> [['123'], ['3'], ['1416'], ['789']]
-
- # put more selective expression first
- number = Combine(Word(nums) + '.' + Word(nums)) | Word(nums)
- print(number.search_string("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']]
- """
-
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
- super().__init__(exprs, savelist)
- if self.exprs:
- self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- self.skipWhitespace = all(e.skipWhitespace for e in self.exprs)
- else:
- self.mayReturnEmpty = True
-
- def streamline(self) -> ParserElement:
- if self.streamlined:
- return self
-
- super().streamline()
- if self.exprs:
- self.saveAsList = any(e.saveAsList for e in self.exprs)
- self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- self.skipWhitespace = all(
- e.skipWhitespace and not isinstance(e, White) for e in self.exprs
- )
- else:
- self.saveAsList = False
- self.mayReturnEmpty = True
- return self
-
- def parseImpl(self, instring, loc, doActions=True):
- maxExcLoc = -1
- maxException = None
-
- for e in self.exprs:
- try:
- return e._parse(
- instring,
- loc,
- doActions,
- )
- except ParseFatalException as pfe:
- pfe.__traceback__ = None
- pfe.parserElement = e
- raise
- except ParseException as err:
- if err.loc > maxExcLoc:
- maxException = err
- maxExcLoc = err.loc
- except IndexError:
- if len(instring) > maxExcLoc:
- maxException = ParseException(
- instring, len(instring), e.errmsg, self
- )
- maxExcLoc = len(instring)
-
- if maxException is not None:
- maxException.msg = self.errmsg
- raise maxException
- else:
- raise ParseException(
- instring, loc, "no defined alternatives to match", self
- )
-
- def __ior__(self, other):
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- return self.append(other) # MatchFirst([self, other])
-
- def _generateDefaultName(self):
- return "{" + " | ".join(str(e) for e in self.exprs) + "}"
-
- def _setResultsName(self, name, listAllMatches=False):
- if (
- __diag__.warn_multiple_tokens_in_named_alternation
- and Diagnostics.warn_multiple_tokens_in_named_alternation
- not in self.suppress_warnings_
- ):
- if any(
- isinstance(e, And)
- and Diagnostics.warn_multiple_tokens_in_named_alternation
- not in e.suppress_warnings_
- for e in self.exprs
- ):
- warnings.warn(
- "{}: setting results name {!r} on {} expression "
- "will return a list of all parsed tokens in an And alternative, "
- "in prior versions only the first token was returned; enclose "
- "contained argument in Group".format(
- "warn_multiple_tokens_in_named_alternation",
- name,
- type(self).__name__,
- ),
- stacklevel=3,
- )
-
- return super()._setResultsName(name, listAllMatches)
-
-
-class Each(ParseExpression):
- """Requires all given :class:`ParseExpression` s to be found, but in
- any order. Expressions may be separated by whitespace.
-
- May be constructed using the ``'&'`` operator.
-
- Example::
-
- color = one_of("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN")
- shape_type = one_of("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON")
- integer = Word(nums)
- shape_attr = "shape:" + shape_type("shape")
- posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn")
- color_attr = "color:" + color("color")
- size_attr = "size:" + integer("size")
-
- # use Each (using operator '&') to accept attributes in any order
- # (shape and posn are required, color and size are optional)
- shape_spec = shape_attr & posn_attr & Opt(color_attr) & Opt(size_attr)
-
- shape_spec.run_tests('''
- shape: SQUARE color: BLACK posn: 100, 120
- shape: CIRCLE size: 50 color: BLUE posn: 50,80
- color:GREEN size:20 shape:TRIANGLE posn:20,40
- '''
- )
-
- prints::
-
- shape: SQUARE color: BLACK posn: 100, 120
- ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']]
- - color: BLACK
- - posn: ['100', ',', '120']
- - x: 100
- - y: 120
- - shape: SQUARE
-
-
- shape: CIRCLE size: 50 color: BLUE posn: 50,80
- ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']]
- - color: BLUE
- - posn: ['50', ',', '80']
- - x: 50
- - y: 80
- - shape: CIRCLE
- - size: 50
-
-
- color: GREEN size: 20 shape: TRIANGLE posn: 20,40
- ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']]
- - color: GREEN
- - posn: ['20', ',', '40']
- - x: 20
- - y: 40
- - shape: TRIANGLE
- - size: 20
- """
-
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = True):
- super().__init__(exprs, savelist)
- if self.exprs:
- self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
- else:
- self.mayReturnEmpty = True
- self.skipWhitespace = True
- self.initExprGroups = True
- self.saveAsList = True
-
- def streamline(self) -> ParserElement:
- super().streamline()
- if self.exprs:
- self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
- else:
- self.mayReturnEmpty = True
- return self
-
- def parseImpl(self, instring, loc, doActions=True):
- if self.initExprGroups:
- self.opt1map = dict(
- (id(e.expr), e) for e in self.exprs if isinstance(e, Opt)
- )
- opt1 = [e.expr for e in self.exprs if isinstance(e, Opt)]
- opt2 = [
- e
- for e in self.exprs
- if e.mayReturnEmpty and not isinstance(e, (Opt, Regex, ZeroOrMore))
- ]
- self.optionals = opt1 + opt2
- self.multioptionals = [
- e.expr.set_results_name(e.resultsName, list_all_matches=True)
- for e in self.exprs
- if isinstance(e, _MultipleMatch)
- ]
- self.multirequired = [
- e.expr.set_results_name(e.resultsName, list_all_matches=True)
- for e in self.exprs
- if isinstance(e, OneOrMore)
- ]
- self.required = [
- e for e in self.exprs if not isinstance(e, (Opt, ZeroOrMore, OneOrMore))
- ]
- self.required += self.multirequired
- self.initExprGroups = False
-
- tmpLoc = loc
- tmpReqd = self.required[:]
- tmpOpt = self.optionals[:]
- multis = self.multioptionals[:]
- matchOrder = []
-
- keepMatching = True
- failed = []
- fatals = []
- while keepMatching:
- tmpExprs = tmpReqd + tmpOpt + multis
- failed.clear()
- fatals.clear()
- for e in tmpExprs:
- try:
- tmpLoc = e.try_parse(instring, tmpLoc, raise_fatal=True)
- except ParseFatalException as pfe:
- pfe.__traceback__ = None
- pfe.parserElement = e
- fatals.append(pfe)
- failed.append(e)
- except ParseException:
- failed.append(e)
- else:
- matchOrder.append(self.opt1map.get(id(e), e))
- if e in tmpReqd:
- tmpReqd.remove(e)
- elif e in tmpOpt:
- tmpOpt.remove(e)
- if len(failed) == len(tmpExprs):
- keepMatching = False
-
- # look for any ParseFatalExceptions
- if fatals:
- if len(fatals) > 1:
- fatals.sort(key=lambda e: -e.loc)
- if fatals[0].loc == fatals[1].loc:
- fatals.sort(key=lambda e: (-e.loc, -len(str(e.parserElement))))
- max_fatal = fatals[0]
- raise max_fatal
-
- if tmpReqd:
- missing = ", ".join([str(e) for e in tmpReqd])
- raise ParseException(
- instring,
- loc,
- "Missing one or more required elements ({})".format(missing),
- )
-
- # add any unmatched Opts, in case they have default values defined
- matchOrder += [e for e in self.exprs if isinstance(e, Opt) and e.expr in tmpOpt]
-
- total_results = ParseResults([])
- for e in matchOrder:
- loc, results = e._parse(instring, loc, doActions)
- total_results += results
-
- return loc, total_results
-
- def _generateDefaultName(self):
- return "{" + " & ".join(str(e) for e in self.exprs) + "}"
-
-
-class ParseElementEnhance(ParserElement):
- """Abstract subclass of :class:`ParserElement`, for combining and
- post-processing parsed tokens.
- """
-
- def __init__(self, expr: Union[ParserElement, str], savelist: bool = False):
- super().__init__(savelist)
- if isinstance(expr, str_type):
- if issubclass(self._literalStringClass, Token):
- expr = self._literalStringClass(expr)
- elif issubclass(type(self), self._literalStringClass):
- expr = Literal(expr)
- else:
- expr = self._literalStringClass(Literal(expr))
- self.expr = expr
- if expr is not None:
- self.mayIndexError = expr.mayIndexError
- self.mayReturnEmpty = expr.mayReturnEmpty
- self.set_whitespace_chars(
- expr.whiteChars, copy_defaults=expr.copyDefaultWhiteChars
- )
- self.skipWhitespace = expr.skipWhitespace
- self.saveAsList = expr.saveAsList
- self.callPreparse = expr.callPreparse
- self.ignoreExprs.extend(expr.ignoreExprs)
-
- def recurse(self) -> Sequence[ParserElement]:
- return [self.expr] if self.expr is not None else []
-
- def parseImpl(self, instring, loc, doActions=True):
- if self.expr is not None:
- return self.expr._parse(instring, loc, doActions, callPreParse=False)
- else:
- raise ParseException(instring, loc, "No expression defined", self)
-
- def leave_whitespace(self, recursive: bool = True) -> ParserElement:
- super().leave_whitespace(recursive)
-
- if recursive:
- self.expr = self.expr.copy()
- if self.expr is not None:
- self.expr.leave_whitespace(recursive)
- return self
-
- def ignore_whitespace(self, recursive: bool = True) -> ParserElement:
- super().ignore_whitespace(recursive)
-
- if recursive:
- self.expr = self.expr.copy()
- if self.expr is not None:
- self.expr.ignore_whitespace(recursive)
- return self
-
- def ignore(self, other) -> ParserElement:
- if isinstance(other, Suppress):
- if other not in self.ignoreExprs:
- super().ignore(other)
- if self.expr is not None:
- self.expr.ignore(self.ignoreExprs[-1])
- else:
- super().ignore(other)
- if self.expr is not None:
- self.expr.ignore(self.ignoreExprs[-1])
- return self
-
- def streamline(self) -> ParserElement:
- super().streamline()
- if self.expr is not None:
- self.expr.streamline()
- return self
-
- def _checkRecursion(self, parseElementList):
- if self in parseElementList:
- raise RecursiveGrammarException(parseElementList + [self])
- subRecCheckList = parseElementList[:] + [self]
- if self.expr is not None:
- self.expr._checkRecursion(subRecCheckList)
-
- def validate(self, validateTrace=None) -> None:
- if validateTrace is None:
- validateTrace = []
- tmp = validateTrace[:] + [self]
- if self.expr is not None:
- self.expr.validate(tmp)
- self._checkRecursion([])
-
- def _generateDefaultName(self):
- return "{}:({})".format(self.__class__.__name__, str(self.expr))
-
- ignoreWhitespace = ignore_whitespace
- leaveWhitespace = leave_whitespace
-
-
-class IndentedBlock(ParseElementEnhance):
- """
- Expression to match one or more expressions at a given indentation level.
- Useful for parsing text where structure is implied by indentation (like Python source code).
- """
-
- class _Indent(Empty):
- def __init__(self, ref_col: int):
- super().__init__()
- self.errmsg = "expected indent at column {}".format(ref_col)
- self.add_condition(lambda s, l, t: col(l, s) == ref_col)
-
- class _IndentGreater(Empty):
- def __init__(self, ref_col: int):
- super().__init__()
- self.errmsg = "expected indent at column greater than {}".format(ref_col)
- self.add_condition(lambda s, l, t: col(l, s) > ref_col)
-
- def __init__(
- self, expr: ParserElement, *, recursive: bool = False, grouped: bool = True
- ):
- super().__init__(expr, savelist=True)
- # if recursive:
- # raise NotImplementedError("IndentedBlock with recursive is not implemented")
- self._recursive = recursive
- self._grouped = grouped
- self.parent_anchor = 1
-
- def parseImpl(self, instring, loc, doActions=True):
- # advance parse position to non-whitespace by using an Empty()
- # this should be the column to be used for all subsequent indented lines
- anchor_loc = Empty().preParse(instring, loc)
-
- # see if self.expr matches at the current location - if not it will raise an exception
- # and no further work is necessary
- self.expr.try_parse(instring, anchor_loc, doActions)
-
- indent_col = col(anchor_loc, instring)
- peer_detect_expr = self._Indent(indent_col)
-
- inner_expr = Empty() + peer_detect_expr + self.expr
- if self._recursive:
- sub_indent = self._IndentGreater(indent_col)
- nested_block = IndentedBlock(
- self.expr, recursive=self._recursive, grouped=self._grouped
- )
- nested_block.set_debug(self.debug)
- nested_block.parent_anchor = indent_col
- inner_expr += Opt(sub_indent + nested_block)
-
- inner_expr.set_name(f"inner {hex(id(inner_expr))[-4:].upper()}@{indent_col}")
- block = OneOrMore(inner_expr)
-
- trailing_undent = self._Indent(self.parent_anchor) | StringEnd()
-
- if self._grouped:
- wrapper = Group
- else:
- wrapper = lambda expr: expr
- return (wrapper(block) + Optional(trailing_undent)).parseImpl(
- instring, anchor_loc, doActions
- )
-
-
-class AtStringStart(ParseElementEnhance):
- """Matches if expression matches at the beginning of the parse
- string::
-
- AtStringStart(Word(nums)).parse_string("123")
- # prints ["123"]
-
- AtStringStart(Word(nums)).parse_string(" 123")
- # raises ParseException
- """
-
- def __init__(self, expr: Union[ParserElement, str]):
- super().__init__(expr)
- self.callPreparse = False
-
- def parseImpl(self, instring, loc, doActions=True):
- if loc != 0:
- raise ParseException(instring, loc, "not found at string start")
- return super().parseImpl(instring, loc, doActions)
-
-
-class AtLineStart(ParseElementEnhance):
- r"""Matches if an expression matches at the beginning of a line within
- the parse string
-
- Example::
-
- test = '''\
- AAA this line
- AAA and this line
- AAA but not this one
- B AAA and definitely not this one
- '''
-
- for t in (AtLineStart('AAA') + restOfLine).search_string(test):
- print(t)
-
- prints::
-
- ['AAA', ' this line']
- ['AAA', ' and this line']
-
- """
-
- def __init__(self, expr: Union[ParserElement, str]):
- super().__init__(expr)
- self.callPreparse = False
-
- def parseImpl(self, instring, loc, doActions=True):
- if col(loc, instring) != 1:
- raise ParseException(instring, loc, "not found at line start")
- return super().parseImpl(instring, loc, doActions)
-
-
-class FollowedBy(ParseElementEnhance):
- """Lookahead matching of the given parse expression.
- ``FollowedBy`` does *not* advance the parsing position within
- the input string, it only verifies that the specified parse
- expression matches at the current position. ``FollowedBy``
- always returns a null token list. If any results names are defined
- in the lookahead expression, those *will* be returned for access by
- name.
-
- Example::
-
- # use FollowedBy to match a label only if it is followed by a ':'
- data_word = Word(alphas)
- label = data_word + FollowedBy(':')
- attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
-
- attr_expr[1, ...].parse_string("shape: SQUARE color: BLACK posn: upper left").pprint()
-
- prints::
-
- [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']]
- """
-
- def __init__(self, expr: Union[ParserElement, str]):
- super().__init__(expr)
- self.mayReturnEmpty = True
-
- def parseImpl(self, instring, loc, doActions=True):
- # by using self._expr.parse and deleting the contents of the returned ParseResults list
- # we keep any named results that were defined in the FollowedBy expression
- _, ret = self.expr._parse(instring, loc, doActions=doActions)
- del ret[:]
-
- return loc, ret
-
-
-class PrecededBy(ParseElementEnhance):
- """Lookbehind matching of the given parse expression.
- ``PrecededBy`` does not advance the parsing position within the
- input string, it only verifies that the specified parse expression
- matches prior to the current position. ``PrecededBy`` always
- returns a null token list, but if a results name is defined on the
- given expression, it is returned.
-
- Parameters:
-
- - expr - expression that must match prior to the current parse
- location
- - retreat - (default= ``None``) - (int) maximum number of characters
- to lookbehind prior to the current parse location
-
- If the lookbehind expression is a string, :class:`Literal`,
- :class:`Keyword`, or a :class:`Word` or :class:`CharsNotIn`
- with a specified exact or maximum length, then the retreat
- parameter is not required. Otherwise, retreat must be specified to
- give a maximum number of characters to look back from
- the current parse position for a lookbehind match.
-
- Example::
-
- # VB-style variable names with type prefixes
- int_var = PrecededBy("#") + pyparsing_common.identifier
- str_var = PrecededBy("$") + pyparsing_common.identifier
-
- """
-
- def __init__(
- self, expr: Union[ParserElement, str], retreat: typing.Optional[int] = None
- ):
- super().__init__(expr)
- self.expr = self.expr().leave_whitespace()
- self.mayReturnEmpty = True
- self.mayIndexError = False
- self.exact = False
- if isinstance(expr, str_type):
- retreat = len(expr)
- self.exact = True
- elif isinstance(expr, (Literal, Keyword)):
- retreat = expr.matchLen
- self.exact = True
- elif isinstance(expr, (Word, CharsNotIn)) and expr.maxLen != _MAX_INT:
- retreat = expr.maxLen
- self.exact = True
- elif isinstance(expr, PositionToken):
- retreat = 0
- self.exact = True
- self.retreat = retreat
- self.errmsg = "not preceded by " + str(expr)
- self.skipWhitespace = False
- self.parseAction.append(lambda s, l, t: t.__delitem__(slice(None, None)))
-
- def parseImpl(self, instring, loc=0, doActions=True):
- if self.exact:
- if loc < self.retreat:
- raise ParseException(instring, loc, self.errmsg)
- start = loc - self.retreat
- _, ret = self.expr._parse(instring, start)
- else:
- # retreat specified a maximum lookbehind window, iterate
- test_expr = self.expr + StringEnd()
- instring_slice = instring[max(0, loc - self.retreat) : loc]
- last_expr = ParseException(instring, loc, self.errmsg)
- for offset in range(1, min(loc, self.retreat + 1) + 1):
- try:
- # print('trying', offset, instring_slice, repr(instring_slice[loc - offset:]))
- _, ret = test_expr._parse(
- instring_slice, len(instring_slice) - offset
- )
- except ParseBaseException as pbe:
- last_expr = pbe
- else:
- break
- else:
- raise last_expr
- return loc, ret
-
-
-class Located(ParseElementEnhance):
- """
- Decorates a returned token with its starting and ending
- locations in the input string.
-
- This helper adds the following results names:
-
- - ``locn_start`` - location where matched expression begins
- - ``locn_end`` - location where matched expression ends
- - ``value`` - the actual parsed results
-
- Be careful if the input text contains ```` characters, you
- may want to call :class:`ParserElement.parse_with_tabs`
-
- Example::
-
- wd = Word(alphas)
- for match in Located(wd).search_string("ljsdf123lksdjjf123lkkjj1222"):
- print(match)
-
- prints::
-
- [0, ['ljsdf'], 5]
- [8, ['lksdjjf'], 15]
- [18, ['lkkjj'], 23]
-
- """
-
- def parseImpl(self, instring, loc, doActions=True):
- start = loc
- loc, tokens = self.expr._parse(instring, start, doActions, callPreParse=False)
- ret_tokens = ParseResults([start, tokens, loc])
- ret_tokens["locn_start"] = start
- ret_tokens["value"] = tokens
- ret_tokens["locn_end"] = loc
- if self.resultsName:
- # must return as a list, so that the name will be attached to the complete group
- return loc, [ret_tokens]
- else:
- return loc, ret_tokens
-
-
-class NotAny(ParseElementEnhance):
- """
- Lookahead to disallow matching with the given parse expression.
- ``NotAny`` does *not* advance the parsing position within the
- input string, it only verifies that the specified parse expression
- does *not* match at the current position. Also, ``NotAny`` does
- *not* skip over leading whitespace. ``NotAny`` always returns
- a null token list. May be constructed using the ``'~'`` operator.
-
- Example::
-
- AND, OR, NOT = map(CaselessKeyword, "AND OR NOT".split())
-
- # take care not to mistake keywords for identifiers
- ident = ~(AND | OR | NOT) + Word(alphas)
- boolean_term = Opt(NOT) + ident
-
- # very crude boolean expression - to support parenthesis groups and
- # operation hierarchy, use infix_notation
- boolean_expr = boolean_term + ((AND | OR) + boolean_term)[...]
-
- # integers that are followed by "." are actually floats
- integer = Word(nums) + ~Char(".")
- """
-
- def __init__(self, expr: Union[ParserElement, str]):
- super().__init__(expr)
- # do NOT use self.leave_whitespace(), don't want to propagate to exprs
- # self.leave_whitespace()
- self.skipWhitespace = False
-
- self.mayReturnEmpty = True
- self.errmsg = "Found unwanted token, " + str(self.expr)
-
- def parseImpl(self, instring, loc, doActions=True):
- if self.expr.can_parse_next(instring, loc):
- raise ParseException(instring, loc, self.errmsg, self)
- return loc, []
-
- def _generateDefaultName(self):
- return "~{" + str(self.expr) + "}"
-
-
-class _MultipleMatch(ParseElementEnhance):
- def __init__(
- self,
- expr: ParserElement,
- stop_on: typing.Optional[Union[ParserElement, str]] = None,
- *,
- stopOn: typing.Optional[Union[ParserElement, str]] = None,
- ):
- super().__init__(expr)
- stopOn = stopOn or stop_on
- self.saveAsList = True
- ender = stopOn
- if isinstance(ender, str_type):
- ender = self._literalStringClass(ender)
- self.stopOn(ender)
-
- def stopOn(self, ender) -> ParserElement:
- if isinstance(ender, str_type):
- ender = self._literalStringClass(ender)
- self.not_ender = ~ender if ender is not None else None
- return self
-
- def parseImpl(self, instring, loc, doActions=True):
- self_expr_parse = self.expr._parse
- self_skip_ignorables = self._skipIgnorables
- check_ender = self.not_ender is not None
- if check_ender:
- try_not_ender = self.not_ender.tryParse
-
- # must be at least one (but first see if we are the stopOn sentinel;
- # if so, fail)
- if check_ender:
- try_not_ender(instring, loc)
- loc, tokens = self_expr_parse(instring, loc, doActions)
- try:
- hasIgnoreExprs = not not self.ignoreExprs
- while 1:
- if check_ender:
- try_not_ender(instring, loc)
- if hasIgnoreExprs:
- preloc = self_skip_ignorables(instring, loc)
- else:
- preloc = loc
- loc, tmptokens = self_expr_parse(instring, preloc, doActions)
- if tmptokens or tmptokens.haskeys():
- tokens += tmptokens
- except (ParseException, IndexError):
- pass
-
- return loc, tokens
-
- def _setResultsName(self, name, listAllMatches=False):
- if (
- __diag__.warn_ungrouped_named_tokens_in_collection
- and Diagnostics.warn_ungrouped_named_tokens_in_collection
- not in self.suppress_warnings_
- ):
- for e in [self.expr] + self.expr.recurse():
- if (
- isinstance(e, ParserElement)
- and e.resultsName
- and Diagnostics.warn_ungrouped_named_tokens_in_collection
- not in e.suppress_warnings_
- ):
- warnings.warn(
- "{}: setting results name {!r} on {} expression "
- "collides with {!r} on contained expression".format(
- "warn_ungrouped_named_tokens_in_collection",
- name,
- type(self).__name__,
- e.resultsName,
- ),
- stacklevel=3,
- )
-
- return super()._setResultsName(name, listAllMatches)
-
-
-class OneOrMore(_MultipleMatch):
- """
- Repetition of one or more of the given expression.
-
- Parameters:
- - expr - expression that must match one or more times
- - stop_on - (default= ``None``) - expression for a terminating sentinel
- (only required if the sentinel would ordinarily match the repetition
- expression)
-
- Example::
-
- data_word = Word(alphas)
- label = data_word + FollowedBy(':')
- attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).set_parse_action(' '.join))
-
- text = "shape: SQUARE posn: upper left color: BLACK"
- attr_expr[1, ...].parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']]
-
- # use stop_on attribute for OneOrMore to avoid reading label string as part of the data
- attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- OneOrMore(attr_expr).parse_string(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']]
-
- # could also be written as
- (attr_expr * (1,)).parse_string(text).pprint()
- """
-
- def _generateDefaultName(self):
- return "{" + str(self.expr) + "}..."
-
-
-class ZeroOrMore(_MultipleMatch):
- """
- Optional repetition of zero or more of the given expression.
-
- Parameters:
- - ``expr`` - expression that must match zero or more times
- - ``stop_on`` - expression for a terminating sentinel
- (only required if the sentinel would ordinarily match the repetition
- expression) - (default= ``None``)
-
- Example: similar to :class:`OneOrMore`
- """
-
- def __init__(
- self,
- expr: ParserElement,
- stop_on: typing.Optional[Union[ParserElement, str]] = None,
- *,
- stopOn: typing.Optional[Union[ParserElement, str]] = None,
- ):
- super().__init__(expr, stopOn=stopOn or stop_on)
- self.mayReturnEmpty = True
-
- def parseImpl(self, instring, loc, doActions=True):
- try:
- return super().parseImpl(instring, loc, doActions)
- except (ParseException, IndexError):
- return loc, ParseResults([], name=self.resultsName)
-
- def _generateDefaultName(self):
- return "[" + str(self.expr) + "]..."
-
-
-class _NullToken:
- def __bool__(self):
- return False
-
- def __str__(self):
- return ""
-
-
-class Opt(ParseElementEnhance):
- """
- Optional matching of the given expression.
-
- Parameters:
- - ``expr`` - expression that must match zero or more times
- - ``default`` (optional) - value to be returned if the optional expression is not found.
-
- Example::
-
- # US postal code can be a 5-digit zip, plus optional 4-digit qualifier
- zip = Combine(Word(nums, exact=5) + Opt('-' + Word(nums, exact=4)))
- zip.run_tests('''
- # traditional ZIP code
- 12345
-
- # ZIP+4 form
- 12101-0001
-
- # invalid ZIP
- 98765-
- ''')
-
- prints::
-
- # traditional ZIP code
- 12345
- ['12345']
-
- # ZIP+4 form
- 12101-0001
- ['12101-0001']
-
- # invalid ZIP
- 98765-
- ^
- FAIL: Expected end of text (at char 5), (line:1, col:6)
- """
-
- __optionalNotMatched = _NullToken()
-
- def __init__(
- self, expr: Union[ParserElement, str], default: Any = __optionalNotMatched
- ):
- super().__init__(expr, savelist=False)
- self.saveAsList = self.expr.saveAsList
- self.defaultValue = default
- self.mayReturnEmpty = True
-
- def parseImpl(self, instring, loc, doActions=True):
- self_expr = self.expr
- try:
- loc, tokens = self_expr._parse(instring, loc, doActions, callPreParse=False)
- except (ParseException, IndexError):
- default_value = self.defaultValue
- if default_value is not self.__optionalNotMatched:
- if self_expr.resultsName:
- tokens = ParseResults([default_value])
- tokens[self_expr.resultsName] = default_value
- else:
- tokens = [default_value]
- else:
- tokens = []
- return loc, tokens
-
- def _generateDefaultName(self):
- inner = str(self.expr)
- # strip off redundant inner {}'s
- while len(inner) > 1 and inner[0 :: len(inner) - 1] == "{}":
- inner = inner[1:-1]
- return "[" + inner + "]"
-
-
-Optional = Opt
-
-
-class SkipTo(ParseElementEnhance):
- """
- Token for skipping over all undefined text until the matched
- expression is found.
-
- Parameters:
- - ``expr`` - target expression marking the end of the data to be skipped
- - ``include`` - if ``True``, the target expression is also parsed
- (the skipped text and target expression are returned as a 2-element
- list) (default= ``False``).
- - ``ignore`` - (default= ``None``) used to define grammars (typically quoted strings and
- comments) that might contain false matches to the target expression
- - ``fail_on`` - (default= ``None``) define expressions that are not allowed to be
- included in the skipped test; if found before the target expression is found,
- the :class:`SkipTo` is not a match
-
- Example::
-
- report = '''
- Outstanding Issues Report - 1 Jan 2000
-
- # | Severity | Description | Days Open
- -----+----------+-------------------------------------------+-----------
- 101 | Critical | Intermittent system crash | 6
- 94 | Cosmetic | Spelling error on Login ('log|n') | 14
- 79 | Minor | System slow when running too many reports | 47
- '''
- integer = Word(nums)
- SEP = Suppress('|')
- # use SkipTo to simply match everything up until the next SEP
- # - ignore quoted strings, so that a '|' character inside a quoted string does not match
- # - parse action will call token.strip() for each matched token, i.e., the description body
- string_data = SkipTo(SEP, ignore=quoted_string)
- string_data.set_parse_action(token_map(str.strip))
- ticket_expr = (integer("issue_num") + SEP
- + string_data("sev") + SEP
- + string_data("desc") + SEP
- + integer("days_open"))
-
- for tkt in ticket_expr.search_string(report):
- print tkt.dump()
-
- prints::
-
- ['101', 'Critical', 'Intermittent system crash', '6']
- - days_open: '6'
- - desc: 'Intermittent system crash'
- - issue_num: '101'
- - sev: 'Critical'
- ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14']
- - days_open: '14'
- - desc: "Spelling error on Login ('log|n')"
- - issue_num: '94'
- - sev: 'Cosmetic'
- ['79', 'Minor', 'System slow when running too many reports', '47']
- - days_open: '47'
- - desc: 'System slow when running too many reports'
- - issue_num: '79'
- - sev: 'Minor'
- """
-
- def __init__(
- self,
- other: Union[ParserElement, str],
- include: bool = False,
- ignore: bool = None,
- fail_on: typing.Optional[Union[ParserElement, str]] = None,
- *,
- failOn: Union[ParserElement, str] = None,
- ):
- super().__init__(other)
- failOn = failOn or fail_on
- self.ignoreExpr = ignore
- self.mayReturnEmpty = True
- self.mayIndexError = False
- self.includeMatch = include
- self.saveAsList = False
- if isinstance(failOn, str_type):
- self.failOn = self._literalStringClass(failOn)
- else:
- self.failOn = failOn
- self.errmsg = "No match found for " + str(self.expr)
-
- def parseImpl(self, instring, loc, doActions=True):
- startloc = loc
- instrlen = len(instring)
- self_expr_parse = self.expr._parse
- self_failOn_canParseNext = (
- self.failOn.canParseNext if self.failOn is not None else None
- )
- self_ignoreExpr_tryParse = (
- self.ignoreExpr.tryParse if self.ignoreExpr is not None else None
- )
-
- tmploc = loc
- while tmploc <= instrlen:
- if self_failOn_canParseNext is not None:
- # break if failOn expression matches
- if self_failOn_canParseNext(instring, tmploc):
- break
-
- if self_ignoreExpr_tryParse is not None:
- # advance past ignore expressions
- while 1:
- try:
- tmploc = self_ignoreExpr_tryParse(instring, tmploc)
- except ParseBaseException:
- break
-
- try:
- self_expr_parse(instring, tmploc, doActions=False, callPreParse=False)
- except (ParseException, IndexError):
- # no match, advance loc in string
- tmploc += 1
- else:
- # matched skipto expr, done
- break
-
- else:
- # ran off the end of the input string without matching skipto expr, fail
- raise ParseException(instring, loc, self.errmsg, self)
-
- # build up return values
- loc = tmploc
- skiptext = instring[startloc:loc]
- skipresult = ParseResults(skiptext)
-
- if self.includeMatch:
- loc, mat = self_expr_parse(instring, loc, doActions, callPreParse=False)
- skipresult += mat
-
- return loc, skipresult
-
-
-class Forward(ParseElementEnhance):
- """
- Forward declaration of an expression to be defined later -
- used for recursive grammars, such as algebraic infix notation.
- When the expression is known, it is assigned to the ``Forward``
- variable using the ``'<<'`` operator.
-
- Note: take care when assigning to ``Forward`` not to overlook
- precedence of operators.
-
- Specifically, ``'|'`` has a lower precedence than ``'<<'``, so that::
-
- fwd_expr << a | b | c
-
- will actually be evaluated as::
-
- (fwd_expr << a) | b | c
-
- thereby leaving b and c out as parseable alternatives. It is recommended that you
- explicitly group the values inserted into the ``Forward``::
-
- fwd_expr << (a | b | c)
-
- Converting to use the ``'<<='`` operator instead will avoid this problem.
-
- See :class:`ParseResults.pprint` for an example of a recursive
- parser created using ``Forward``.
- """
-
- def __init__(self, other: typing.Optional[Union[ParserElement, str]] = None):
- self.caller_frame = traceback.extract_stack(limit=2)[0]
- super().__init__(other, savelist=False)
- self.lshift_line = None
-
- def __lshift__(self, other):
- if hasattr(self, "caller_frame"):
- del self.caller_frame
- if isinstance(other, str_type):
- other = self._literalStringClass(other)
- self.expr = other
- self.mayIndexError = self.expr.mayIndexError
- self.mayReturnEmpty = self.expr.mayReturnEmpty
- self.set_whitespace_chars(
- self.expr.whiteChars, copy_defaults=self.expr.copyDefaultWhiteChars
- )
- self.skipWhitespace = self.expr.skipWhitespace
- self.saveAsList = self.expr.saveAsList
- self.ignoreExprs.extend(self.expr.ignoreExprs)
- self.lshift_line = traceback.extract_stack(limit=2)[-2]
- return self
-
- def __ilshift__(self, other):
- return self << other
-
- def __or__(self, other):
- caller_line = traceback.extract_stack(limit=2)[-2]
- if (
- __diag__.warn_on_match_first_with_lshift_operator
- and caller_line == self.lshift_line
- and Diagnostics.warn_on_match_first_with_lshift_operator
- not in self.suppress_warnings_
- ):
- warnings.warn(
- "using '<<' operator with '|' is probably an error, use '<<='",
- stacklevel=2,
- )
- ret = super().__or__(other)
- return ret
-
- def __del__(self):
- # see if we are getting dropped because of '=' reassignment of var instead of '<<=' or '<<'
- if (
- self.expr is None
- and __diag__.warn_on_assignment_to_Forward
- and Diagnostics.warn_on_assignment_to_Forward not in self.suppress_warnings_
- ):
- warnings.warn_explicit(
- "Forward defined here but no expression attached later using '<<=' or '<<'",
- UserWarning,
- filename=self.caller_frame.filename,
- lineno=self.caller_frame.lineno,
- )
-
- def parseImpl(self, instring, loc, doActions=True):
- if (
- self.expr is None
- and __diag__.warn_on_parse_using_empty_Forward
- and Diagnostics.warn_on_parse_using_empty_Forward
- not in self.suppress_warnings_
- ):
- # walk stack until parse_string, scan_string, search_string, or transform_string is found
- parse_fns = [
- "parse_string",
- "scan_string",
- "search_string",
- "transform_string",
- ]
- tb = traceback.extract_stack(limit=200)
- for i, frm in enumerate(reversed(tb), start=1):
- if frm.name in parse_fns:
- stacklevel = i + 1
- break
- else:
- stacklevel = 2
- warnings.warn(
- "Forward expression was never assigned a value, will not parse any input",
- stacklevel=stacklevel,
- )
- if not ParserElement._left_recursion_enabled:
- return super().parseImpl(instring, loc, doActions)
- # ## Bounded Recursion algorithm ##
- # Recursion only needs to be processed at ``Forward`` elements, since they are
- # the only ones that can actually refer to themselves. The general idea is
- # to handle recursion stepwise: We start at no recursion, then recurse once,
- # recurse twice, ..., until more recursion offers no benefit (we hit the bound).
- #
- # The "trick" here is that each ``Forward`` gets evaluated in two contexts
- # - to *match* a specific recursion level, and
- # - to *search* the bounded recursion level
- # and the two run concurrently. The *search* must *match* each recursion level
- # to find the best possible match. This is handled by a memo table, which
- # provides the previous match to the next level match attempt.
- #
- # See also "Left Recursion in Parsing Expression Grammars", Medeiros et al.
- #
- # There is a complication since we not only *parse* but also *transform* via
- # actions: We do not want to run the actions too often while expanding. Thus,
- # we expand using `doActions=False` and only run `doActions=True` if the next
- # recursion level is acceptable.
- with ParserElement.recursion_lock:
- memo = ParserElement.recursion_memos
- try:
- # we are parsing at a specific recursion expansion - use it as-is
- prev_loc, prev_result = memo[loc, self, doActions]
- if isinstance(prev_result, Exception):
- raise prev_result
- return prev_loc, prev_result.copy()
- except KeyError:
- act_key = (loc, self, True)
- peek_key = (loc, self, False)
- # we are searching for the best recursion expansion - keep on improving
- # both `doActions` cases must be tracked separately here!
- prev_loc, prev_peek = memo[peek_key] = (
- loc - 1,
- ParseException(
- instring, loc, "Forward recursion without base case", self
- ),
- )
- if doActions:
- memo[act_key] = memo[peek_key]
- while True:
- try:
- new_loc, new_peek = super().parseImpl(instring, loc, False)
- except ParseException:
- # we failed before getting any match – do not hide the error
- if isinstance(prev_peek, Exception):
- raise
- new_loc, new_peek = prev_loc, prev_peek
- # the match did not get better: we are done
- if new_loc <= prev_loc:
- if doActions:
- # replace the match for doActions=False as well,
- # in case the action did backtrack
- prev_loc, prev_result = memo[peek_key] = memo[act_key]
- del memo[peek_key], memo[act_key]
- return prev_loc, prev_result.copy()
- del memo[peek_key]
- return prev_loc, prev_peek.copy()
- # the match did get better: see if we can improve further
- else:
- if doActions:
- try:
- memo[act_key] = super().parseImpl(instring, loc, True)
- except ParseException as e:
- memo[peek_key] = memo[act_key] = (new_loc, e)
- raise
- prev_loc, prev_peek = memo[peek_key] = new_loc, new_peek
-
- def leave_whitespace(self, recursive: bool = True) -> ParserElement:
- self.skipWhitespace = False
- return self
-
- def ignore_whitespace(self, recursive: bool = True) -> ParserElement:
- self.skipWhitespace = True
- return self
-
- def streamline(self) -> ParserElement:
- if not self.streamlined:
- self.streamlined = True
- if self.expr is not None:
- self.expr.streamline()
- return self
-
- def validate(self, validateTrace=None) -> None:
- if validateTrace is None:
- validateTrace = []
-
- if self not in validateTrace:
- tmp = validateTrace[:] + [self]
- if self.expr is not None:
- self.expr.validate(tmp)
- self._checkRecursion([])
-
- def _generateDefaultName(self):
- # Avoid infinite recursion by setting a temporary _defaultName
- self._defaultName = ": ..."
-
- # Use the string representation of main expression.
- retString = "..."
- try:
- if self.expr is not None:
- retString = str(self.expr)[:1000]
- else:
- retString = "None"
- finally:
- return self.__class__.__name__ + ": " + retString
-
- def copy(self) -> ParserElement:
- if self.expr is not None:
- return super().copy()
- else:
- ret = Forward()
- ret <<= self
- return ret
-
- def _setResultsName(self, name, list_all_matches=False):
- if (
- __diag__.warn_name_set_on_empty_Forward
- and Diagnostics.warn_name_set_on_empty_Forward
- not in self.suppress_warnings_
- ):
- if self.expr is None:
- warnings.warn(
- "{}: setting results name {!r} on {} expression "
- "that has no contained expression".format(
- "warn_name_set_on_empty_Forward", name, type(self).__name__
- ),
- stacklevel=3,
- )
-
- return super()._setResultsName(name, list_all_matches)
-
- ignoreWhitespace = ignore_whitespace
- leaveWhitespace = leave_whitespace
-
-
-class TokenConverter(ParseElementEnhance):
- """
- Abstract subclass of :class:`ParseExpression`, for converting parsed results.
- """
-
- def __init__(self, expr: Union[ParserElement, str], savelist=False):
- super().__init__(expr) # , savelist)
- self.saveAsList = False
-
-
-class Combine(TokenConverter):
- """Converter to concatenate all matching tokens to a single string.
- By default, the matching patterns must also be contiguous in the
- input string; this can be disabled by specifying
- ``'adjacent=False'`` in the constructor.
-
- Example::
-
- real = Word(nums) + '.' + Word(nums)
- print(real.parse_string('3.1416')) # -> ['3', '.', '1416']
- # will also erroneously match the following
- print(real.parse_string('3. 1416')) # -> ['3', '.', '1416']
-
- real = Combine(Word(nums) + '.' + Word(nums))
- print(real.parse_string('3.1416')) # -> ['3.1416']
- # no match when there are internal spaces
- print(real.parse_string('3. 1416')) # -> Exception: Expected W:(0123...)
- """
-
- def __init__(
- self,
- expr: ParserElement,
- join_string: str = "",
- adjacent: bool = True,
- *,
- joinString: typing.Optional[str] = None,
- ):
- super().__init__(expr)
- joinString = joinString if joinString is not None else join_string
- # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself
- if adjacent:
- self.leave_whitespace()
- self.adjacent = adjacent
- self.skipWhitespace = True
- self.joinString = joinString
- self.callPreparse = True
-
- def ignore(self, other) -> ParserElement:
- if self.adjacent:
- ParserElement.ignore(self, other)
- else:
- super().ignore(other)
- return self
-
- def postParse(self, instring, loc, tokenlist):
- retToks = tokenlist.copy()
- del retToks[:]
- retToks += ParseResults(
- ["".join(tokenlist._asStringList(self.joinString))], modal=self.modalResults
- )
-
- if self.resultsName and retToks.haskeys():
- return [retToks]
- else:
- return retToks
-
-
-class Group(TokenConverter):
- """Converter to return the matched tokens as a list - useful for
- returning tokens of :class:`ZeroOrMore` and :class:`OneOrMore` expressions.
-
- The optional ``aslist`` argument when set to True will return the
- parsed tokens as a Python list instead of a pyparsing ParseResults.
-
- Example::
-
- ident = Word(alphas)
- num = Word(nums)
- term = ident | num
- func = ident + Opt(delimited_list(term))
- print(func.parse_string("fn a, b, 100"))
- # -> ['fn', 'a', 'b', '100']
-
- func = ident + Group(Opt(delimited_list(term)))
- print(func.parse_string("fn a, b, 100"))
- # -> ['fn', ['a', 'b', '100']]
- """
-
- def __init__(self, expr: ParserElement, aslist: bool = False):
- super().__init__(expr)
- self.saveAsList = True
- self._asPythonList = aslist
-
- def postParse(self, instring, loc, tokenlist):
- if self._asPythonList:
- return ParseResults.List(
- tokenlist.asList()
- if isinstance(tokenlist, ParseResults)
- else list(tokenlist)
- )
- else:
- return [tokenlist]
-
-
-class Dict(TokenConverter):
- """Converter to return a repetitive expression as a list, but also
- as a dictionary. Each element can also be referenced using the first
- token in the expression as its key. Useful for tabular report
- scraping when the first column can be used as a item key.
-
- The optional ``asdict`` argument when set to True will return the
- parsed tokens as a Python dict instead of a pyparsing ParseResults.
-
- Example::
-
- data_word = Word(alphas)
- label = data_word + FollowedBy(':')
-
- text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
- attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
-
- # print attributes as plain groups
- print(attr_expr[1, ...].parse_string(text).dump())
-
- # instead of OneOrMore(expr), parse using Dict(Group(expr)[1, ...]) - Dict will auto-assign names
- result = Dict(Group(attr_expr)[1, ...]).parse_string(text)
- print(result.dump())
-
- # access named fields as dict entries, or output as dict
- print(result['shape'])
- print(result.as_dict())
-
- prints::
-
- ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap']
- [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
- - color: 'light blue'
- - posn: 'upper left'
- - shape: 'SQUARE'
- - texture: 'burlap'
- SQUARE
- {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'}
-
- See more examples at :class:`ParseResults` of accessing fields by results name.
- """
-
- def __init__(self, expr: ParserElement, asdict: bool = False):
- super().__init__(expr)
- self.saveAsList = True
- self._asPythonDict = asdict
-
- def postParse(self, instring, loc, tokenlist):
- for i, tok in enumerate(tokenlist):
- if len(tok) == 0:
- continue
-
- ikey = tok[0]
- if isinstance(ikey, int):
- ikey = str(ikey).strip()
-
- if len(tok) == 1:
- tokenlist[ikey] = _ParseResultsWithOffset("", i)
-
- elif len(tok) == 2 and not isinstance(tok[1], ParseResults):
- tokenlist[ikey] = _ParseResultsWithOffset(tok[1], i)
-
- else:
- try:
- dictvalue = tok.copy() # ParseResults(i)
- except Exception:
- exc = TypeError(
- "could not extract dict values from parsed results"
- " - Dict expression must contain Grouped expressions"
- )
- raise exc from None
-
- del dictvalue[0]
-
- if len(dictvalue) != 1 or (
- isinstance(dictvalue, ParseResults) and dictvalue.haskeys()
- ):
- tokenlist[ikey] = _ParseResultsWithOffset(dictvalue, i)
- else:
- tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0], i)
-
- if self._asPythonDict:
- return [tokenlist.as_dict()] if self.resultsName else tokenlist.as_dict()
- else:
- return [tokenlist] if self.resultsName else tokenlist
-
-
-class Suppress(TokenConverter):
- """Converter for ignoring the results of a parsed expression.
-
- Example::
-
- source = "a, b, c,d"
- wd = Word(alphas)
- wd_list1 = wd + (',' + wd)[...]
- print(wd_list1.parse_string(source))
-
- # often, delimiters that are useful during parsing are just in the
- # way afterward - use Suppress to keep them out of the parsed output
- wd_list2 = wd + (Suppress(',') + wd)[...]
- print(wd_list2.parse_string(source))
-
- # Skipped text (using '...') can be suppressed as well
- source = "lead in START relevant text END trailing text"
- start_marker = Keyword("START")
- end_marker = Keyword("END")
- find_body = Suppress(...) + start_marker + ... + end_marker
- print(find_body.parse_string(source)
-
- prints::
-
- ['a', ',', 'b', ',', 'c', ',', 'd']
- ['a', 'b', 'c', 'd']
- ['START', 'relevant text ', 'END']
-
- (See also :class:`delimited_list`.)
- """
-
- def __init__(self, expr: Union[ParserElement, str], savelist: bool = False):
- if expr is ...:
- expr = _PendingSkip(NoMatch())
- super().__init__(expr)
-
- def __add__(self, other) -> "ParserElement":
- if isinstance(self.expr, _PendingSkip):
- return Suppress(SkipTo(other)) + other
- else:
- return super().__add__(other)
-
- def __sub__(self, other) -> "ParserElement":
- if isinstance(self.expr, _PendingSkip):
- return Suppress(SkipTo(other)) - other
- else:
- return super().__sub__(other)
-
- def postParse(self, instring, loc, tokenlist):
- return []
-
- def suppress(self) -> ParserElement:
- return self
-
-
-def trace_parse_action(f: ParseAction) -> ParseAction:
- """Decorator for debugging parse actions.
-
- When the parse action is called, this decorator will print
- ``">> entering method-name(line:, , )"``.
- When the parse action completes, the decorator will print
- ``"<<"`` followed by the returned value, or any exception that the parse action raised.
-
- Example::
-
- wd = Word(alphas)
-
- @trace_parse_action
- def remove_duplicate_chars(tokens):
- return ''.join(sorted(set(''.join(tokens))))
-
- wds = wd[1, ...].set_parse_action(remove_duplicate_chars)
- print(wds.parse_string("slkdjs sld sldd sdlf sdljf"))
-
- prints::
-
- >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {}))
- < 3:
- thisFunc = paArgs[0].__class__.__name__ + "." + thisFunc
- sys.stderr.write(
- ">>entering {}(line: {!r}, {}, {!r})\n".format(thisFunc, line(l, s), l, t)
- )
- try:
- ret = f(*paArgs)
- except Exception as exc:
- sys.stderr.write("< str:
- r"""Helper to easily define string ranges for use in :class:`Word`
- construction. Borrows syntax from regexp ``'[]'`` string range
- definitions::
-
- srange("[0-9]") -> "0123456789"
- srange("[a-z]") -> "abcdefghijklmnopqrstuvwxyz"
- srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"
-
- The input string must be enclosed in []'s, and the returned string
- is the expanded character set joined into a single string. The
- values enclosed in the []'s may be:
-
- - a single character
- - an escaped character with a leading backslash (such as ``\-``
- or ``\]``)
- - an escaped hex character with a leading ``'\x'``
- (``\x21``, which is a ``'!'`` character) (``\0x##``
- is also supported for backwards compatibility)
- - an escaped octal character with a leading ``'\0'``
- (``\041``, which is a ``'!'`` character)
- - a range of any of the above, separated by a dash (``'a-z'``,
- etc.)
- - any combination of the above (``'aeiouy'``,
- ``'a-zA-Z0-9_$'``, etc.)
- """
- _expanded = (
- lambda p: p
- if not isinstance(p, ParseResults)
- else "".join(chr(c) for c in range(ord(p[0]), ord(p[1]) + 1))
- )
- try:
- return "".join(_expanded(part) for part in _reBracketExpr.parse_string(s).body)
- except Exception:
- return ""
-
-
-def token_map(func, *args) -> ParseAction:
- """Helper to define a parse action by mapping a function to all
- elements of a :class:`ParseResults` list. If any additional args are passed,
- they are forwarded to the given function as additional arguments
- after the token, as in
- ``hex_integer = Word(hexnums).set_parse_action(token_map(int, 16))``,
- which will convert the parsed data to an integer using base 16.
-
- Example (compare the last to example in :class:`ParserElement.transform_string`::
-
- hex_ints = Word(hexnums)[1, ...].set_parse_action(token_map(int, 16))
- hex_ints.run_tests('''
- 00 11 22 aa FF 0a 0d 1a
- ''')
-
- upperword = Word(alphas).set_parse_action(token_map(str.upper))
- upperword[1, ...].run_tests('''
- my kingdom for a horse
- ''')
-
- wd = Word(alphas).set_parse_action(token_map(str.title))
- wd[1, ...].set_parse_action(' '.join).run_tests('''
- now is the winter of our discontent made glorious summer by this sun of york
- ''')
-
- prints::
-
- 00 11 22 aa FF 0a 0d 1a
- [0, 17, 34, 170, 255, 10, 13, 26]
-
- my kingdom for a horse
- ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE']
-
- now is the winter of our discontent made glorious summer by this sun of york
- ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York']
- """
-
- def pa(s, l, t):
- return [func(tokn, *args) for tokn in t]
-
- func_name = getattr(func, "__name__", getattr(func, "__class__").__name__)
- pa.__name__ = func_name
-
- return pa
-
-
-def autoname_elements() -> None:
- """
- Utility to simplify mass-naming of parser elements, for
- generating railroad diagram with named subdiagrams.
- """
- for name, var in sys._getframe().f_back.f_locals.items():
- if isinstance(var, ParserElement) and not var.customName:
- var.set_name(name)
-
-
-dbl_quoted_string = Combine(
- Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*') + '"'
-).set_name("string enclosed in double quotes")
-
-sgl_quoted_string = Combine(
- Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*") + "'"
-).set_name("string enclosed in single quotes")
-
-quoted_string = Combine(
- Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*') + '"'
- | Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*") + "'"
-).set_name("quotedString using single or double quotes")
-
-unicode_string = Combine("u" + quoted_string.copy()).set_name("unicode string literal")
-
-
-alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]")
-punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]")
-
-# build list of built-in expressions, for future reference if a global default value
-# gets updated
-_builtin_exprs: List[ParserElement] = [
- v for v in vars().values() if isinstance(v, ParserElement)
-]
-
-# backward compatibility names
-tokenMap = token_map
-conditionAsParseAction = condition_as_parse_action
-nullDebugAction = null_debug_action
-sglQuotedString = sgl_quoted_string
-dblQuotedString = dbl_quoted_string
-quotedString = quoted_string
-unicodeString = unicode_string
-lineStart = line_start
-lineEnd = line_end
-stringStart = string_start
-stringEnd = string_end
-traceParseAction = trace_parse_action
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/version.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/version.py
deleted file mode 100644
index a406a301446b1c1079137f1834caa4448807a693..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/version.py
+++ /dev/null
@@ -1,358 +0,0 @@
-#
-# distutils/version.py
-#
-# Implements multiple version numbering conventions for the
-# Python Module Distribution Utilities.
-#
-# $Id$
-#
-
-"""Provides classes to represent module version numbers (one class for
-each style of version numbering). There are currently two such classes
-implemented: StrictVersion and LooseVersion.
-
-Every version number class implements the following interface:
- * the 'parse' method takes a string and parses it to some internal
- representation; if the string is an invalid version number,
- 'parse' raises a ValueError exception
- * the class constructor takes an optional string argument which,
- if supplied, is passed to 'parse'
- * __str__ reconstructs the string that was passed to 'parse' (or
- an equivalent string -- ie. one that will generate an equivalent
- version number instance)
- * __repr__ generates Python code to recreate the version number instance
- * _cmp compares the current instance with either another instance
- of the same class or a string (which will be parsed to an instance
- of the same class, thus must follow the same rules)
-"""
-
-import re
-import warnings
-import contextlib
-
-
-@contextlib.contextmanager
-def suppress_known_deprecation():
- with warnings.catch_warnings(record=True) as ctx:
- warnings.filterwarnings(
- action='default',
- category=DeprecationWarning,
- message="distutils Version classes are deprecated.",
- )
- yield ctx
-
-
-class Version:
- """Abstract base class for version numbering classes. Just provides
- constructor (__init__) and reproducer (__repr__), because those
- seem to be the same for all version numbering classes; and route
- rich comparisons to _cmp.
- """
-
- def __init__(self, vstring=None):
- if vstring:
- self.parse(vstring)
- warnings.warn(
- "distutils Version classes are deprecated. "
- "Use packaging.version instead.",
- DeprecationWarning,
- stacklevel=2,
- )
-
- def __repr__(self):
- return "%s ('%s')" % (self.__class__.__name__, str(self))
-
- def __eq__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c == 0
-
- def __lt__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c < 0
-
- def __le__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c <= 0
-
- def __gt__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c > 0
-
- def __ge__(self, other):
- c = self._cmp(other)
- if c is NotImplemented:
- return c
- return c >= 0
-
-
-# Interface for version-number classes -- must be implemented
-# by the following classes (the concrete ones -- Version should
-# be treated as an abstract class).
-# __init__ (string) - create and take same action as 'parse'
-# (string parameter is optional)
-# parse (string) - convert a string representation to whatever
-# internal representation is appropriate for
-# this style of version numbering
-# __str__ (self) - convert back to a string; should be very similar
-# (if not identical to) the string supplied to parse
-# __repr__ (self) - generate Python code to recreate
-# the instance
-# _cmp (self, other) - compare two version numbers ('other' may
-# be an unparsed version string, or another
-# instance of your version class)
-
-
-class StrictVersion(Version):
-
- """Version numbering for anal retentives and software idealists.
- Implements the standard interface for version number classes as
- described above. A version number consists of two or three
- dot-separated numeric components, with an optional "pre-release" tag
- on the end. The pre-release tag consists of the letter 'a' or 'b'
- followed by a number. If the numeric components of two version
- numbers are equal, then one with a pre-release tag will always
- be deemed earlier (lesser) than one without.
-
- The following are valid version numbers (shown in the order that
- would be obtained by sorting according to the supplied cmp function):
-
- 0.4 0.4.0 (these two are equivalent)
- 0.4.1
- 0.5a1
- 0.5b3
- 0.5
- 0.9.6
- 1.0
- 1.0.4a3
- 1.0.4b1
- 1.0.4
-
- The following are examples of invalid version numbers:
-
- 1
- 2.7.2.2
- 1.3.a4
- 1.3pl1
- 1.3c4
-
- The rationale for this version numbering system will be explained
- in the distutils documentation.
- """
-
- version_re = re.compile(
- r'^(\d+) \. (\d+) (\. (\d+))? ([ab](\d+))?$', re.VERBOSE | re.ASCII
- )
-
- def parse(self, vstring):
- match = self.version_re.match(vstring)
- if not match:
- raise ValueError("invalid version number '%s'" % vstring)
-
- (major, minor, patch, prerelease, prerelease_num) = match.group(1, 2, 4, 5, 6)
-
- if patch:
- self.version = tuple(map(int, [major, minor, patch]))
- else:
- self.version = tuple(map(int, [major, minor])) + (0,)
-
- if prerelease:
- self.prerelease = (prerelease[0], int(prerelease_num))
- else:
- self.prerelease = None
-
- def __str__(self):
-
- if self.version[2] == 0:
- vstring = '.'.join(map(str, self.version[0:2]))
- else:
- vstring = '.'.join(map(str, self.version))
-
- if self.prerelease:
- vstring = vstring + self.prerelease[0] + str(self.prerelease[1])
-
- return vstring
-
- def _cmp(self, other):
- if isinstance(other, str):
- with suppress_known_deprecation():
- other = StrictVersion(other)
- elif not isinstance(other, StrictVersion):
- return NotImplemented
-
- if self.version != other.version:
- # numeric versions don't match
- # prerelease stuff doesn't matter
- if self.version < other.version:
- return -1
- else:
- return 1
-
- # have to compare prerelease
- # case 1: neither has prerelease; they're equal
- # case 2: self has prerelease, other doesn't; other is greater
- # case 3: self doesn't have prerelease, other does: self is greater
- # case 4: both have prerelease: must compare them!
-
- if not self.prerelease and not other.prerelease:
- return 0
- elif self.prerelease and not other.prerelease:
- return -1
- elif not self.prerelease and other.prerelease:
- return 1
- elif self.prerelease and other.prerelease:
- if self.prerelease == other.prerelease:
- return 0
- elif self.prerelease < other.prerelease:
- return -1
- else:
- return 1
- else:
- assert False, "never get here"
-
-
-# end class StrictVersion
-
-
-# The rules according to Greg Stein:
-# 1) a version number has 1 or more numbers separated by a period or by
-# sequences of letters. If only periods, then these are compared
-# left-to-right to determine an ordering.
-# 2) sequences of letters are part of the tuple for comparison and are
-# compared lexicographically
-# 3) recognize the numeric components may have leading zeroes
-#
-# The LooseVersion class below implements these rules: a version number
-# string is split up into a tuple of integer and string components, and
-# comparison is a simple tuple comparison. This means that version
-# numbers behave in a predictable and obvious way, but a way that might
-# not necessarily be how people *want* version numbers to behave. There
-# wouldn't be a problem if people could stick to purely numeric version
-# numbers: just split on period and compare the numbers as tuples.
-# However, people insist on putting letters into their version numbers;
-# the most common purpose seems to be:
-# - indicating a "pre-release" version
-# ('alpha', 'beta', 'a', 'b', 'pre', 'p')
-# - indicating a post-release patch ('p', 'pl', 'patch')
-# but of course this can't cover all version number schemes, and there's
-# no way to know what a programmer means without asking him.
-#
-# The problem is what to do with letters (and other non-numeric
-# characters) in a version number. The current implementation does the
-# obvious and predictable thing: keep them as strings and compare
-# lexically within a tuple comparison. This has the desired effect if
-# an appended letter sequence implies something "post-release":
-# eg. "0.99" < "0.99pl14" < "1.0", and "5.001" < "5.001m" < "5.002".
-#
-# However, if letters in a version number imply a pre-release version,
-# the "obvious" thing isn't correct. Eg. you would expect that
-# "1.5.1" < "1.5.2a2" < "1.5.2", but under the tuple/lexical comparison
-# implemented here, this just isn't so.
-#
-# Two possible solutions come to mind. The first is to tie the
-# comparison algorithm to a particular set of semantic rules, as has
-# been done in the StrictVersion class above. This works great as long
-# as everyone can go along with bondage and discipline. Hopefully a
-# (large) subset of Python module programmers will agree that the
-# particular flavour of bondage and discipline provided by StrictVersion
-# provides enough benefit to be worth using, and will submit their
-# version numbering scheme to its domination. The free-thinking
-# anarchists in the lot will never give in, though, and something needs
-# to be done to accommodate them.
-#
-# Perhaps a "moderately strict" version class could be implemented that
-# lets almost anything slide (syntactically), and makes some heuristic
-# assumptions about non-digits in version number strings. This could
-# sink into special-case-hell, though; if I was as talented and
-# idiosyncratic as Larry Wall, I'd go ahead and implement a class that
-# somehow knows that "1.2.1" < "1.2.2a2" < "1.2.2" < "1.2.2pl3", and is
-# just as happy dealing with things like "2g6" and "1.13++". I don't
-# think I'm smart enough to do it right though.
-#
-# In any case, I've coded the test suite for this module (see
-# ../test/test_version.py) specifically to fail on things like comparing
-# "1.2a2" and "1.2". That's not because the *code* is doing anything
-# wrong, it's because the simple, obvious design doesn't match my
-# complicated, hairy expectations for real-world version numbers. It
-# would be a snap to fix the test suite to say, "Yep, LooseVersion does
-# the Right Thing" (ie. the code matches the conception). But I'd rather
-# have a conception that matches common notions about version numbers.
-
-
-class LooseVersion(Version):
-
- """Version numbering for anarchists and software realists.
- Implements the standard interface for version number classes as
- described above. A version number consists of a series of numbers,
- separated by either periods or strings of letters. When comparing
- version numbers, the numeric components will be compared
- numerically, and the alphabetic components lexically. The following
- are all valid version numbers, in no particular order:
-
- 1.5.1
- 1.5.2b2
- 161
- 3.10a
- 8.02
- 3.4j
- 1996.07.12
- 3.2.pl0
- 3.1.1.6
- 2g6
- 11g
- 0.960923
- 2.2beta29
- 1.13++
- 5.5.kw
- 2.0b1pl0
-
- In fact, there is no such thing as an invalid version number under
- this scheme; the rules for comparison are simple and predictable,
- but may not always give the results you want (for some definition
- of "want").
- """
-
- component_re = re.compile(r'(\d+ | [a-z]+ | \.)', re.VERBOSE)
-
- def parse(self, vstring):
- # I've given up on thinking I can reconstruct the version string
- # from the parsed tuple -- so I just store the string here for
- # use by __str__
- self.vstring = vstring
- components = [x for x in self.component_re.split(vstring) if x and x != '.']
- for i, obj in enumerate(components):
- try:
- components[i] = int(obj)
- except ValueError:
- pass
-
- self.version = components
-
- def __str__(self):
- return self.vstring
-
- def __repr__(self):
- return "LooseVersion ('%s')" % str(self)
-
- def _cmp(self, other):
- if isinstance(other, str):
- other = LooseVersion(other)
- elif not isinstance(other, LooseVersion):
- return NotImplemented
-
- if self.version == other.version:
- return 0
- if self.version < other.version:
- return -1
- if self.version > other.version:
- return 1
-
-
-# end class LooseVersion
diff --git a/spaces/toloka/open-llm-leaderboard/static/js/787.c4e7f8f9.chunk.js b/spaces/toloka/open-llm-leaderboard/static/js/787.c4e7f8f9.chunk.js
deleted file mode 100644
index 7664e5721f86c326866b93cf75e0fa02125ca81c..0000000000000000000000000000000000000000
--- a/spaces/toloka/open-llm-leaderboard/static/js/787.c4e7f8f9.chunk.js
+++ /dev/null
@@ -1,2 +0,0 @@
-"use strict";(self.webpackChunkclient=self.webpackChunkclient||[]).push([[787],{787:function(e,t,n){n.r(t),n.d(t,{getCLS:function(){return y},getFCP:function(){return g},getFID:function(){return C},getLCP:function(){return P},getTTFB:function(){return D}});var i,r,a,o,u=function(e,t){return{name:e,value:void 0===t?-1:t,delta:0,entries:[],id:"v2-".concat(Date.now(),"-").concat(Math.floor(8999999999999*Math.random())+1e12)}},c=function(e,t){try{if(PerformanceObserver.supportedEntryTypes.includes(e)){if("first-input"===e&&!("PerformanceEventTiming"in self))return;var n=new PerformanceObserver((function(e){return e.getEntries().map(t)}));return n.observe({type:e,buffered:!0}),n}}catch(e){}},f=function(e,t){var n=function n(i){"pagehide"!==i.type&&"hidden"!==document.visibilityState||(e(i),t&&(removeEventListener("visibilitychange",n,!0),removeEventListener("pagehide",n,!0)))};addEventListener("visibilitychange",n,!0),addEventListener("pagehide",n,!0)},s=function(e){addEventListener("pageshow",(function(t){t.persisted&&e(t)}),!0)},m=function(e,t,n){var i;return function(r){t.value>=0&&(r||n)&&(t.delta=t.value-(i||0),(t.delta||void 0===i)&&(i=t.value,e(t)))}},v=-1,p=function(){return"hidden"===document.visibilityState?0:1/0},d=function(){f((function(e){var t=e.timeStamp;v=t}),!0)},l=function(){return v<0&&(v=p(),d(),s((function(){setTimeout((function(){v=p(),d()}),0)}))),{get firstHiddenTime(){return v}}},g=function(e,t){var n,i=l(),r=u("FCP"),a=function(e){"first-contentful-paint"===e.name&&(f&&f.disconnect(),e.startTime-1&&e(t)},r=u("CLS",0),a=0,o=[],v=function(e){if(!e.hadRecentInput){var t=o[0],i=o[o.length-1];a&&e.startTime-i.startTime<1e3&&e.startTime-t.startTime<5e3?(a+=e.value,o.push(e)):(a=e.value,o=[e]),a>r.value&&(r.value=a,r.entries=o,n())}},p=c("layout-shift",v);p&&(n=m(i,r,t),f((function(){p.takeRecords().map(v),n(!0)})),s((function(){a=0,T=-1,r=u("CLS",0),n=m(i,r,t)})))},E={passive:!0,capture:!0},w=new Date,L=function(e,t){i||(i=t,r=e,a=new Date,F(removeEventListener),S())},S=function(){if(r>=0&&r1e12?new Date:performance.now())-e.timeStamp;"pointerdown"==e.type?function(e,t){var n=function(){L(e,t),r()},i=function(){r()},r=function(){removeEventListener("pointerup",n,E),removeEventListener("pointercancel",i,E)};addEventListener("pointerup",n,E),addEventListener("pointercancel",i,E)}(t,e):L(t,e)}},F=function(e){["mousedown","keydown","touchstart","pointerdown"].forEach((function(t){return e(t,b,E)}))},C=function(e,t){var n,a=l(),v=u("FID"),p=function(e){e.startTimeperformance.now())return;n.entries=[t],e(n)}catch(e){}},"complete"===document.readyState?setTimeout(t,0):addEventListener("load",(function(){return setTimeout(t,0)}))}}}]);
-//# sourceMappingURL=787.c4e7f8f9.chunk.js.map
\ No newline at end of file
diff --git a/spaces/tomofi/MMOCR/mmocr/models/kie/extractors/sdmgr.py b/spaces/tomofi/MMOCR/mmocr/models/kie/extractors/sdmgr.py
deleted file mode 100644
index 9fa08cccc9a4ae893cad2dd8d4e3408ecc1d2b29..0000000000000000000000000000000000000000
--- a/spaces/tomofi/MMOCR/mmocr/models/kie/extractors/sdmgr.py
+++ /dev/null
@@ -1,166 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import warnings
-
-import mmcv
-from mmdet.core import bbox2roi
-from torch import nn
-from torch.nn import functional as F
-
-from mmocr.core import imshow_edge, imshow_node
-from mmocr.models.builder import DETECTORS, build_roi_extractor
-from mmocr.models.common.detectors import SingleStageDetector
-from mmocr.utils import list_from_file
-
-
-@DETECTORS.register_module()
-class SDMGR(SingleStageDetector):
- """The implementation of the paper: Spatial Dual-Modality Graph Reasoning
- for Key Information Extraction. https://arxiv.org/abs/2103.14470.
-
- Args:
- visual_modality (bool): Whether use the visual modality.
- class_list (None | str): Mapping file of class index to
- class name. If None, class index will be shown in
- `show_results`, else class name.
- """
-
- def __init__(self,
- backbone,
- neck=None,
- bbox_head=None,
- extractor=dict(
- type='mmdet.SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7),
- featmap_strides=[1]),
- visual_modality=False,
- train_cfg=None,
- test_cfg=None,
- class_list=None,
- init_cfg=None,
- openset=False):
- super().__init__(
- backbone, neck, bbox_head, train_cfg, test_cfg, init_cfg=init_cfg)
- self.visual_modality = visual_modality
- if visual_modality:
- self.extractor = build_roi_extractor({
- **extractor, 'out_channels':
- self.backbone.base_channels
- })
- self.maxpool = nn.MaxPool2d(extractor['roi_layer']['output_size'])
- else:
- self.extractor = None
- self.class_list = class_list
- self.openset = openset
-
- def forward_train(self, img, img_metas, relations, texts, gt_bboxes,
- gt_labels):
- """
- Args:
- img (tensor): Input images of shape (N, C, H, W).
- Typically these should be mean centered and std scaled.
- img_metas (list[dict]): A list of image info dict where each dict
- contains: 'img_shape', 'scale_factor', 'flip', and may also
- contain 'filename', 'ori_shape', 'pad_shape', and
- 'img_norm_cfg'. For details of the values of these keys,
- please see :class:`mmdet.datasets.pipelines.Collect`.
- relations (list[tensor]): Relations between bboxes.
- texts (list[tensor]): Texts in bboxes.
- gt_bboxes (list[tensor]): Each item is the truth boxes for each
- image in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[tensor]): Class indices corresponding to each box.
-
- Returns:
- dict[str, tensor]: A dictionary of loss components.
- """
- x = self.extract_feat(img, gt_bboxes)
- node_preds, edge_preds = self.bbox_head.forward(relations, texts, x)
- return self.bbox_head.loss(node_preds, edge_preds, gt_labels)
-
- def forward_test(self,
- img,
- img_metas,
- relations,
- texts,
- gt_bboxes,
- rescale=False):
- x = self.extract_feat(img, gt_bboxes)
- node_preds, edge_preds = self.bbox_head.forward(relations, texts, x)
- return [
- dict(
- img_metas=img_metas,
- nodes=F.softmax(node_preds, -1),
- edges=F.softmax(edge_preds, -1))
- ]
-
- def extract_feat(self, img, gt_bboxes):
- if self.visual_modality:
- x = super().extract_feat(img)[-1]
- feats = self.maxpool(self.extractor([x], bbox2roi(gt_bboxes)))
- return feats.view(feats.size(0), -1)
- return None
-
- def show_result(self,
- img,
- result,
- boxes,
- win_name='',
- show=False,
- wait_time=0,
- out_file=None,
- **kwargs):
- """Draw `result` on `img`.
-
- Args:
- img (str or tensor): The image to be displayed.
- result (dict): The results to draw on `img`.
- boxes (list): Bbox of img.
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- Default: 0.
- show (bool): Whether to show the image.
- Default: False.
- out_file (str or None): The output filename.
- Default: None.
-
- Returns:
- img (tensor): Only if not `show` or `out_file`.
- """
- img = mmcv.imread(img)
- img = img.copy()
-
- idx_to_cls = {}
- if self.class_list is not None:
- for line in list_from_file(self.class_list):
- class_idx, class_label = line.strip().split()
- idx_to_cls[class_idx] = class_label
-
- # if out_file specified, do not show image in window
- if out_file is not None:
- show = False
-
- if self.openset:
- img = imshow_edge(
- img,
- result,
- boxes,
- show=show,
- win_name=win_name,
- wait_time=wait_time,
- out_file=out_file)
- else:
- img = imshow_node(
- img,
- result,
- boxes,
- idx_to_cls=idx_to_cls,
- show=show,
- win_name=win_name,
- wait_time=wait_time,
- out_file=out_file)
-
- if not (show or out_file):
- warnings.warn('show==False and out_file is not specified, only '
- 'result image will be returned')
- return img
-
- return img
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py
deleted file mode 100644
index 9c65305baa16eb4e940d236cf45122b46b942ea9..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolo/yolov3_d53_mstrain-608_273e_coco.py
+++ /dev/null
@@ -1,124 +0,0 @@
-_base_ = '../_base_/default_runtime.py'
-# model settings
-model = dict(
- type='YOLOV3',
- pretrained='open-mmlab://darknet53',
- backbone=dict(type='Darknet', depth=53, out_indices=(3, 4, 5)),
- neck=dict(
- type='YOLOV3Neck',
- num_scales=3,
- in_channels=[1024, 512, 256],
- out_channels=[512, 256, 128]),
- bbox_head=dict(
- type='YOLOV3Head',
- num_classes=80,
- in_channels=[512, 256, 128],
- out_channels=[1024, 512, 256],
- anchor_generator=dict(
- type='YOLOAnchorGenerator',
- base_sizes=[[(116, 90), (156, 198), (373, 326)],
- [(30, 61), (62, 45), (59, 119)],
- [(10, 13), (16, 30), (33, 23)]],
- strides=[32, 16, 8]),
- bbox_coder=dict(type='YOLOBBoxCoder'),
- featmap_strides=[32, 16, 8],
- loss_cls=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0,
- reduction='sum'),
- loss_conf=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=1.0,
- reduction='sum'),
- loss_xy=dict(
- type='CrossEntropyLoss',
- use_sigmoid=True,
- loss_weight=2.0,
- reduction='sum'),
- loss_wh=dict(type='MSELoss', loss_weight=2.0, reduction='sum')),
- # training and testing settings
- train_cfg=dict(
- assigner=dict(
- type='GridAssigner',
- pos_iou_thr=0.5,
- neg_iou_thr=0.5,
- min_pos_iou=0)),
- test_cfg=dict(
- nms_pre=1000,
- min_bbox_size=0,
- score_thr=0.05,
- conf_thr=0.005,
- nms=dict(type='nms', iou_threshold=0.45),
- max_per_img=100))
-# dataset settings
-dataset_type = 'CocoDataset'
-data_root = 'data/coco/'
-img_norm_cfg = dict(mean=[0, 0, 0], std=[255., 255., 255.], to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile', to_float32=True),
- dict(type='LoadAnnotations', with_bbox=True),
- dict(type='PhotoMetricDistortion'),
- dict(
- type='Expand',
- mean=img_norm_cfg['mean'],
- to_rgb=img_norm_cfg['to_rgb'],
- ratio_range=(1, 2)),
- dict(
- type='MinIoURandomCrop',
- min_ious=(0.4, 0.5, 0.6, 0.7, 0.8, 0.9),
- min_crop_size=0.3),
- dict(type='Resize', img_scale=[(320, 320), (608, 608)], keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(608, 608),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img'])
- ])
-]
-data = dict(
- samples_per_gpu=8,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_train2017.json',
- img_prefix=data_root + 'train2017/',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- ann_file=data_root + 'annotations/instances_val2017.json',
- img_prefix=data_root + 'val2017/',
- pipeline=test_pipeline))
-# optimizer
-optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0005)
-optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=2000, # same as burn-in in darknet
- warmup_ratio=0.1,
- step=[218, 246])
-# runtime settings
-runner = dict(type='EpochBasedRunner', max_epochs=273)
-evaluation = dict(interval=1, metric=['bbox'])
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/post_processing/merge_augs.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/post_processing/merge_augs.py
deleted file mode 100644
index f16cdde3d97234335a5accade6ebf80d4da55f02..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/post_processing/merge_augs.py
+++ /dev/null
@@ -1,122 +0,0 @@
-"""
-This code is based on the following file:
-https://github.com/tztztztztz/eqlv2/blob/master/mmdet/core/post_processing/merge_augs.py
-"""
-
-import numpy as np
-import torch
-from mmcv.ops import nms
-
-from ..bbox import bbox_mapping_back
-
-
-def merge_aug_proposals(aug_proposals, img_metas, rpn_test_cfg):
- """Merge augmented proposals (multiscale, flip, etc.)
-
- Args:
- aug_proposals (list[Tensor]): proposals from different testing
- schemes, shape (n, 5). Note that they are not rescaled to the
- original image size.
-
- img_metas (list[dict]): list of image info dict where each dict has:
- 'img_shape', 'scale_factor', 'flip', and may also contain
- 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
- For details on the values of these keys see
- `mmdet/datasets/pipelines/formatting.py:Collect`.
-
- rpn_test_cfg (dict): rpn test config.
-
- Returns:
- Tensor: shape (n, 4), proposals corresponding to original image scale.
- """
- recovered_proposals = []
- for proposals, img_info in zip(aug_proposals, img_metas):
- img_shape = img_info['img_shape']
- scale_factor = img_info['scale_factor']
- flip = img_info['flip']
- flip_direction = img_info['flip_direction']
- _proposals = proposals.clone()
- _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape,
- scale_factor, flip,
- flip_direction)
- recovered_proposals.append(_proposals)
- aug_proposals = torch.cat(recovered_proposals, dim=0)
- merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(),
- aug_proposals[:, -1].contiguous(),
- rpn_test_cfg.nms_thr)
- scores = merged_proposals[:, 4]
- _, order = scores.sort(0, descending=True)
- num = min(rpn_test_cfg.max_num, merged_proposals.shape[0])
- order = order[:num]
- merged_proposals = merged_proposals[order, :]
- return merged_proposals
-
-
-def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg):
- """Merge augmented detection bboxes and scores.
-
- Args:
- aug_bboxes (list[Tensor]): shape (n, 4*#class)
- aug_scores (list[Tensor] or None): shape (n, #class)
- img_shapes (list[Tensor]): shape (3, ).
- rcnn_test_cfg (dict): rcnn test config.
-
- Returns:
- tuple: (bboxes, scores)
- """
- recovered_bboxes = []
- for bboxes, img_info in zip(aug_bboxes, img_metas):
- img_shape = img_info[0]['img_shape']
- scale_factor = img_info[0]['scale_factor']
- flip = img_info[0]['flip']
- flip_direction = img_info[0]['flip_direction']
- bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip,
- flip_direction)
- recovered_bboxes.append(bboxes)
- bboxes = torch.stack(recovered_bboxes).mean(dim=0)
- if aug_scores is None:
- return bboxes
- else:
- scores = torch.stack(aug_scores).mean(dim=0)
- return bboxes, scores
-
-
-def merge_aug_scores(aug_scores):
- """Merge augmented bbox scores."""
- if isinstance(aug_scores[0], torch.Tensor):
- return torch.mean(torch.stack(aug_scores), dim=0)
- else:
- return np.mean(aug_scores, axis=0)
-
-
-def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None):
- """Merge augmented mask prediction.
-
- Args:
- aug_masks (list[ndarray]): shape (n, #class, h, w)
- img_shapes (list[ndarray]): shape (3, ).
- rcnn_test_cfg (dict): rcnn test config.
-
- Returns:
- tuple: (bboxes, scores)
- """
- recovered_masks = []
- for mask, img_info in zip(aug_masks, img_metas):
- flip = img_info[0]['flip']
- flip_direction = img_info[0]['flip_direction']
- if flip:
- if flip_direction == 'horizontal':
- mask = mask[:, :, :, ::-1]
- elif flip_direction == 'vertical':
- mask = mask[:, :, ::-1, :]
- else:
- raise ValueError(
- f"Invalid flipping direction '{flip_direction}'")
- recovered_masks.append(mask)
-
- if weights is None:
- merged_masks = np.mean(recovered_masks, axis=0)
- else:
- merged_masks = np.average(
- np.array(recovered_masks), axis=0, weights=np.array(weights))
- return merged_masks
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py
deleted file mode 100644
index e58cae1b4c8cd587e73c08582e333ea21054534a..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_heads/coarse_mask_head.py
+++ /dev/null
@@ -1,99 +0,0 @@
-from mmcv.cnn import ConvModule, Linear
-from mmcv.runner import ModuleList, auto_fp16
-
-from mmdet.models.builder import HEADS
-from .fcn_mask_head import FCNMaskHead
-
-
-@HEADS.register_module()
-class CoarseMaskHead(FCNMaskHead):
- """Coarse mask head used in PointRend.
-
- Compared with standard ``FCNMaskHead``, ``CoarseMaskHead`` will downsample
- the input feature map instead of upsample it.
-
- Args:
- num_convs (int): Number of conv layers in the head. Default: 0.
- num_fcs (int): Number of fc layers in the head. Default: 2.
- fc_out_channels (int): Number of output channels of fc layer.
- Default: 1024.
- downsample_factor (int): The factor that feature map is downsampled by.
- Default: 2.
- init_cfg (dict or list[dict], optional): Initialization config dict.
- """
-
- def __init__(self,
- num_convs=0,
- num_fcs=2,
- fc_out_channels=1024,
- downsample_factor=2,
- init_cfg=dict(
- type='Xavier',
- override=[
- dict(name='fcs'),
- dict(type='Constant', val=0.001, name='fc_logits')
- ]),
- *arg,
- **kwarg):
- super(CoarseMaskHead, self).__init__(
- *arg,
- num_convs=num_convs,
- upsample_cfg=dict(type=None),
- init_cfg=None,
- **kwarg)
- self.init_cfg = init_cfg
- self.num_fcs = num_fcs
- assert self.num_fcs > 0
- self.fc_out_channels = fc_out_channels
- self.downsample_factor = downsample_factor
- assert self.downsample_factor >= 1
- # remove conv_logit
- delattr(self, 'conv_logits')
-
- if downsample_factor > 1:
- downsample_in_channels = (
- self.conv_out_channels
- if self.num_convs > 0 else self.in_channels)
- self.downsample_conv = ConvModule(
- downsample_in_channels,
- self.conv_out_channels,
- kernel_size=downsample_factor,
- stride=downsample_factor,
- padding=0,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg)
- else:
- self.downsample_conv = None
-
- self.output_size = (self.roi_feat_size[0] // downsample_factor,
- self.roi_feat_size[1] // downsample_factor)
- self.output_area = self.output_size[0] * self.output_size[1]
-
- last_layer_dim = self.conv_out_channels * self.output_area
-
- self.fcs = ModuleList()
- for i in range(num_fcs):
- fc_in_channels = (
- last_layer_dim if i == 0 else self.fc_out_channels)
- self.fcs.append(Linear(fc_in_channels, self.fc_out_channels))
- last_layer_dim = self.fc_out_channels
- output_channels = self.num_classes * self.output_area
- self.fc_logits = Linear(last_layer_dim, output_channels)
-
- def init_weights(self):
- super(FCNMaskHead, self).init_weights()
-
- @auto_fp16()
- def forward(self, x):
- for conv in self.convs:
- x = conv(x)
-
- if self.downsample_conv is not None:
- x = self.downsample_conv(x)
-
- x = x.flatten(1)
- for fc in self.fcs:
- x = self.relu(fc(x))
- mask_pred = self.fc_logits(x).view(
- x.size(0), self.num_classes, *self.output_size)
- return mask_pred
diff --git a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/typing.py b/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/typing.py
deleted file mode 100644
index 4abe384f07c4b90504e47291674905f85a5b8f52..0000000000000000000000000000000000000000
--- a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/typing.py
+++ /dev/null
@@ -1,18 +0,0 @@
-from typing import Literal, Dict, Any
-import gradio
-
-Component = gradio.File or gradio.Image or gradio.Video or gradio.Slider
-ComponentName = Literal\
-[
- 'source_file',
- 'target_file',
- 'preview_frame_slider',
- 'face_recognition_dropdown',
- 'reference_face_position_gallery',
- 'reference_face_distance_slider',
- 'face_analyser_direction_dropdown',
- 'face_analyser_age_dropdown',
- 'face_analyser_gender_dropdown',
- 'frame_processors_checkbox_group'
-]
-Update = Dict[Any, Any]
diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/image_degradation/utils_image.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/image_degradation/utils_image.py
deleted file mode 100644
index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000
--- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/image_degradation/utils_image.py
+++ /dev/null
@@ -1,916 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-
-'''
-# --------------------------------------------
-# Kai Zhang (github: https://github.com/cszn)
-# 03/Mar/2019
-# --------------------------------------------
-# https://github.com/twhui/SRGAN-pyTorch
-# https://github.com/xinntao/BasicSR
-# --------------------------------------------
-'''
-
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
-
-
-def is_image_file(filename):
- return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
- return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
- plt.figure(figsize=figsize)
- plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
- if title:
- plt.title(title)
- if cbar:
- plt.colorbar()
- plt.show()
-
-
-def surf(Z, cmap='rainbow', figsize=None):
- plt.figure(figsize=figsize)
- ax3 = plt.axes(projection='3d')
-
- w, h = Z.shape[:2]
- xx = np.arange(0,w,1)
- yy = np.arange(0,h,1)
- X, Y = np.meshgrid(xx, yy)
- ax3.plot_surface(X,Y,Z,cmap=cmap)
- #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
- plt.show()
-
-
-'''
-# --------------------------------------------
-# get image pathes
-# --------------------------------------------
-'''
-
-
-def get_image_paths(dataroot):
- paths = None # return None if dataroot is None
- if dataroot is not None:
- paths = sorted(_get_paths_from_images(dataroot))
- return paths
-
-
-def _get_paths_from_images(path):
- assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
- images = []
- for dirpath, _, fnames in sorted(os.walk(path)):
- for fname in sorted(fnames):
- if is_image_file(fname):
- img_path = os.path.join(dirpath, fname)
- images.append(img_path)
- assert images, '{:s} has no valid image file'.format(path)
- return images
-
-
-'''
-# --------------------------------------------
-# split large images into small images
-# --------------------------------------------
-'''
-
-
-def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
- w, h = img.shape[:2]
- patches = []
- if w > p_max and h > p_max:
- w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int))
- h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int))
- w1.append(w-p_size)
- h1.append(h-p_size)
-# print(w1)
-# print(h1)
- for i in w1:
- for j in h1:
- patches.append(img[i:i+p_size, j:j+p_size,:])
- else:
- patches.append(img)
-
- return patches
-
-
-def imssave(imgs, img_path):
- """
- imgs: list, N images of size WxHxC
- """
- img_name, ext = os.path.splitext(os.path.basename(img_path))
-
- for i, img in enumerate(imgs):
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
- cv2.imwrite(new_path, img)
-
-
-def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
- """
- split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size),
- and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max)
- will be splitted.
- Args:
- original_dataroot:
- taget_dataroot:
- p_size: size of small images
- p_overlap: patch size in training is a good choice
- p_max: images with smaller size than (p_max)x(p_max) keep unchanged.
- """
- paths = get_image_paths(original_dataroot)
- for img_path in paths:
- # img_name, ext = os.path.splitext(os.path.basename(img_path))
- img = imread_uint(img_path, n_channels=n_channels)
- patches = patches_from_image(img, p_size, p_overlap, p_max)
- imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path)))
- #if original_dataroot == taget_dataroot:
- #del img_path
-
-'''
-# --------------------------------------------
-# makedir
-# --------------------------------------------
-'''
-
-
-def mkdir(path):
- if not os.path.exists(path):
- os.makedirs(path)
-
-
-def mkdirs(paths):
- if isinstance(paths, str):
- mkdir(paths)
- else:
- for path in paths:
- mkdir(path)
-
-
-def mkdir_and_rename(path):
- if os.path.exists(path):
- new_name = path + '_archived_' + get_timestamp()
- print('Path already exists. Rename it to [{:s}]'.format(new_name))
- os.rename(path, new_name)
- os.makedirs(path)
-
-
-'''
-# --------------------------------------------
-# read image from path
-# opencv is fast, but read BGR numpy image
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# get uint8 image of size HxWxn_channles (RGB)
-# --------------------------------------------
-def imread_uint(path, n_channels=3):
- # input: path
- # output: HxWx3(RGB or GGG), or HxWx1 (G)
- if n_channels == 1:
- img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE
- img = np.expand_dims(img, axis=2) # HxWx1
- elif n_channels == 3:
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G
- if img.ndim == 2:
- img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG
- else:
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB
- return img
-
-
-# --------------------------------------------
-# matlab's imwrite
-# --------------------------------------------
-def imsave(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-def imwrite(img, img_path):
- img = np.squeeze(img)
- if img.ndim == 3:
- img = img[:, :, [2, 1, 0]]
- cv2.imwrite(img_path, img)
-
-
-
-# --------------------------------------------
-# get single image of size HxWxn_channles (BGR)
-# --------------------------------------------
-def read_img(path):
- # read image by cv2
- # return: Numpy float32, HWC, BGR, [0,1]
- img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE
- img = img.astype(np.float32) / 255.
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- # some images have 4 channels
- if img.shape[2] > 3:
- img = img[:, :, :3]
- return img
-
-
-'''
-# --------------------------------------------
-# image format conversion
-# --------------------------------------------
-# numpy(single) <---> numpy(unit)
-# numpy(single) <---> tensor
-# numpy(unit) <---> tensor
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# numpy(single) [0, 1] <---> numpy(unit)
-# --------------------------------------------
-
-
-def uint2single(img):
-
- return np.float32(img/255.)
-
-
-def single2uint(img):
-
- return np.uint8((img.clip(0, 1)*255.).round())
-
-
-def uint162single(img):
-
- return np.float32(img/65535.)
-
-
-def single2uint16(img):
-
- return np.uint16((img.clip(0, 1)*65535.).round())
-
-
-# --------------------------------------------
-# numpy(unit) (HxWxC or HxW) <---> tensor
-# --------------------------------------------
-
-
-# convert uint to 4-dimensional torch tensor
-def uint2tensor4(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)
-
-
-# convert uint to 3-dimensional torch tensor
-def uint2tensor3(img):
- if img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)
-
-
-# convert 2/3/4-dimensional torch tensor to uint
-def tensor2uint(img):
- img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- return np.uint8((img*255.0).round())
-
-
-# --------------------------------------------
-# numpy(single) (HxWxC) <---> tensor
-# --------------------------------------------
-
-
-# convert single (HxWxC) to 3-dimensional torch tensor
-def single2tensor3(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()
-
-
-# convert single (HxWxC) to 4-dimensional torch tensor
-def single2tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)
-
-
-# convert torch tensor to single
-def tensor2single(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
-
- return img
-
-# convert torch tensor to single
-def tensor2single3(img):
- img = img.data.squeeze().float().cpu().numpy()
- if img.ndim == 3:
- img = np.transpose(img, (1, 2, 0))
- elif img.ndim == 2:
- img = np.expand_dims(img, axis=2)
- return img
-
-
-def single2tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)
-
-
-def single32tensor5(img):
- return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)
-
-
-def single42tensor4(img):
- return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()
-
-
-# from skimage.io import imread, imsave
-def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
- '''
- Converts a torch Tensor into an image Numpy array of BGR channel order
- Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
- Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
- '''
- tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp
- tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1]
- n_dim = tensor.dim()
- if n_dim == 4:
- n_img = len(tensor)
- img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 3:
- img_np = tensor.numpy()
- img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR
- elif n_dim == 2:
- img_np = tensor.numpy()
- else:
- raise TypeError(
- 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
- if out_type == np.uint8:
- img_np = (img_np * 255.0).round()
- # Important. Unlike matlab, numpy.unit8() WILL NOT round by default.
- return img_np.astype(out_type)
-
-
-'''
-# --------------------------------------------
-# Augmentation, flipe and/or rotate
-# --------------------------------------------
-# The following two are enough.
-# (1) augmet_img: numpy image of WxHxC or WxH
-# (2) augment_img_tensor4: tensor image 1xCxWxH
-# --------------------------------------------
-'''
-
-
-def augment_img(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return np.flipud(np.rot90(img))
- elif mode == 2:
- return np.flipud(img)
- elif mode == 3:
- return np.rot90(img, k=3)
- elif mode == 4:
- return np.flipud(np.rot90(img, k=2))
- elif mode == 5:
- return np.rot90(img)
- elif mode == 6:
- return np.rot90(img, k=2)
- elif mode == 7:
- return np.flipud(np.rot90(img, k=3))
-
-
-def augment_img_tensor4(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- if mode == 0:
- return img
- elif mode == 1:
- return img.rot90(1, [2, 3]).flip([2])
- elif mode == 2:
- return img.flip([2])
- elif mode == 3:
- return img.rot90(3, [2, 3])
- elif mode == 4:
- return img.rot90(2, [2, 3]).flip([2])
- elif mode == 5:
- return img.rot90(1, [2, 3])
- elif mode == 6:
- return img.rot90(2, [2, 3])
- elif mode == 7:
- return img.rot90(3, [2, 3]).flip([2])
-
-
-def augment_img_tensor(img, mode=0):
- '''Kai Zhang (github: https://github.com/cszn)
- '''
- img_size = img.size()
- img_np = img.data.cpu().numpy()
- if len(img_size) == 3:
- img_np = np.transpose(img_np, (1, 2, 0))
- elif len(img_size) == 4:
- img_np = np.transpose(img_np, (2, 3, 1, 0))
- img_np = augment_img(img_np, mode=mode)
- img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
- if len(img_size) == 3:
- img_tensor = img_tensor.permute(2, 0, 1)
- elif len(img_size) == 4:
- img_tensor = img_tensor.permute(3, 2, 0, 1)
-
- return img_tensor.type_as(img)
-
-
-def augment_img_np3(img, mode=0):
- if mode == 0:
- return img
- elif mode == 1:
- return img.transpose(1, 0, 2)
- elif mode == 2:
- return img[::-1, :, :]
- elif mode == 3:
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 4:
- return img[:, ::-1, :]
- elif mode == 5:
- img = img[:, ::-1, :]
- img = img.transpose(1, 0, 2)
- return img
- elif mode == 6:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- return img
- elif mode == 7:
- img = img[:, ::-1, :]
- img = img[::-1, :, :]
- img = img.transpose(1, 0, 2)
- return img
-
-
-def augment_imgs(img_list, hflip=True, rot=True):
- # horizontal flip OR rotate
- hflip = hflip and random.random() < 0.5
- vflip = rot and random.random() < 0.5
- rot90 = rot and random.random() < 0.5
-
- def _augment(img):
- if hflip:
- img = img[:, ::-1, :]
- if vflip:
- img = img[::-1, :, :]
- if rot90:
- img = img.transpose(1, 0, 2)
- return img
-
- return [_augment(img) for img in img_list]
-
-
-'''
-# --------------------------------------------
-# modcrop and shave
-# --------------------------------------------
-'''
-
-
-def modcrop(img_in, scale):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- if img.ndim == 2:
- H, W = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r]
- elif img.ndim == 3:
- H, W, C = img.shape
- H_r, W_r = H % scale, W % scale
- img = img[:H - H_r, :W - W_r, :]
- else:
- raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
- return img
-
-
-def shave(img_in, border=0):
- # img_in: Numpy, HWC or HW
- img = np.copy(img_in)
- h, w = img.shape[:2]
- img = img[border:h-border, border:w-border]
- return img
-
-
-'''
-# --------------------------------------------
-# image processing process on numpy image
-# channel_convert(in_c, tar_type, img_list):
-# rgb2ycbcr(img, only_y=True):
-# bgr2ycbcr(img, only_y=True):
-# ycbcr2rgb(img):
-# --------------------------------------------
-'''
-
-
-def rgb2ycbcr(img, only_y=True):
- '''same as matlab rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
- [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def ycbcr2rgb(img):
- '''same as matlab ycbcr2rgb
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
- [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def bgr2ycbcr(img, only_y=True):
- '''bgr version of rgb2ycbcr
- only_y: only return Y channel
- Input:
- uint8, [0, 255]
- float, [0, 1]
- '''
- in_img_type = img.dtype
- img.astype(np.float32)
- if in_img_type != np.uint8:
- img *= 255.
- # convert
- if only_y:
- rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
- else:
- rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
- [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
- if in_img_type == np.uint8:
- rlt = rlt.round()
- else:
- rlt /= 255.
- return rlt.astype(in_img_type)
-
-
-def channel_convert(in_c, tar_type, img_list):
- # conversion among BGR, gray and y
- if in_c == 3 and tar_type == 'gray': # BGR to gray
- gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in gray_list]
- elif in_c == 3 and tar_type == 'y': # BGR to y
- y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
- return [np.expand_dims(img, axis=2) for img in y_list]
- elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR
- return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
- else:
- return img_list
-
-
-'''
-# --------------------------------------------
-# metric, PSNR and SSIM
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# PSNR
-# --------------------------------------------
-def calculate_psnr(img1, img2, border=0):
- # img1 and img2 have range [0, 255]
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- mse = np.mean((img1 - img2)**2)
- if mse == 0:
- return float('inf')
- return 20 * math.log10(255.0 / math.sqrt(mse))
-
-
-# --------------------------------------------
-# SSIM
-# --------------------------------------------
-def calculate_ssim(img1, img2, border=0):
- '''calculate SSIM
- the same outputs as MATLAB's
- img1, img2: [0, 255]
- '''
- #img1 = img1.squeeze()
- #img2 = img2.squeeze()
- if not img1.shape == img2.shape:
- raise ValueError('Input images must have the same dimensions.')
- h, w = img1.shape[:2]
- img1 = img1[border:h-border, border:w-border]
- img2 = img2[border:h-border, border:w-border]
-
- if img1.ndim == 2:
- return ssim(img1, img2)
- elif img1.ndim == 3:
- if img1.shape[2] == 3:
- ssims = []
- for i in range(3):
- ssims.append(ssim(img1[:,:,i], img2[:,:,i]))
- return np.array(ssims).mean()
- elif img1.shape[2] == 1:
- return ssim(np.squeeze(img1), np.squeeze(img2))
- else:
- raise ValueError('Wrong input image dimensions.')
-
-
-def ssim(img1, img2):
- C1 = (0.01 * 255)**2
- C2 = (0.03 * 255)**2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1**2
- mu2_sq = mu2**2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
- (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-'''
-# --------------------------------------------
-# matlab's bicubic imresize (numpy and torch) [0, 1]
-# --------------------------------------------
-'''
-
-
-# matlab 'imresize' function, now only support 'bicubic'
-def cubic(x):
- absx = torch.abs(x)
- absx2 = absx**2
- absx3 = absx**3
- return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
- (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))
-
-
-def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
- if (scale < 1) and (antialiasing):
- # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width
- kernel_width = kernel_width / scale
-
- # Output-space coordinates
- x = torch.linspace(1, out_length, out_length)
-
- # Input-space coordinates. Calculate the inverse mapping such that 0.5
- # in output space maps to 0.5 in input space, and 0.5+scale in output
- # space maps to 1.5 in input space.
- u = x / scale + 0.5 * (1 - 1 / scale)
-
- # What is the left-most pixel that can be involved in the computation?
- left = torch.floor(u - kernel_width / 2)
-
- # What is the maximum number of pixels that can be involved in the
- # computation? Note: it's OK to use an extra pixel here; if the
- # corresponding weights are all zero, it will be eliminated at the end
- # of this function.
- P = math.ceil(kernel_width) + 2
-
- # The indices of the input pixels involved in computing the k-th output
- # pixel are in row k of the indices matrix.
- indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
- 1, P).expand(out_length, P)
-
- # The weights used to compute the k-th output pixel are in row k of the
- # weights matrix.
- distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
- # apply cubic kernel
- if (scale < 1) and (antialiasing):
- weights = scale * cubic(distance_to_center * scale)
- else:
- weights = cubic(distance_to_center)
- # Normalize the weights matrix so that each row sums to 1.
- weights_sum = torch.sum(weights, 1).view(out_length, 1)
- weights = weights / weights_sum.expand(out_length, P)
-
- # If a column in weights is all zero, get rid of it. only consider the first and last column.
- weights_zero_tmp = torch.sum((weights == 0), 0)
- if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 1, P - 2)
- weights = weights.narrow(1, 1, P - 2)
- if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
- indices = indices.narrow(1, 0, P - 2)
- weights = weights.narrow(1, 0, P - 2)
- weights = weights.contiguous()
- indices = indices.contiguous()
- sym_len_s = -indices.min() + 1
- sym_len_e = indices.max() - in_length
- indices = indices + sym_len_s - 1
- return weights, indices, int(sym_len_s), int(sym_len_e)
-
-
-# --------------------------------------------
-# imresize for tensor image [0, 1]
-# --------------------------------------------
-def imresize(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: pytorch tensor, CHW or HW [0,1]
- # output: CHW or HW [0,1] w/o round
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(0)
- in_C, in_H, in_W = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
- img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:, :sym_len_Hs, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[:, -sym_len_He:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(in_C, out_H, in_W)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
- out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :, :sym_len_Ws]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, :, -sym_len_We:]
- inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(2, inv_idx)
- out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(in_C, out_H, out_W)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
- return out_2
-
-
-# --------------------------------------------
-# imresize for numpy image [0, 1]
-# --------------------------------------------
-def imresize_np(img, scale, antialiasing=True):
- # Now the scale should be the same for H and W
- # input: img: Numpy, HWC or HW [0,1]
- # output: HWC or HW [0,1] w/o round
- img = torch.from_numpy(img)
- need_squeeze = True if img.dim() == 2 else False
- if need_squeeze:
- img.unsqueeze_(2)
-
- in_H, in_W, in_C = img.size()
- out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
- kernel_width = 4
- kernel = 'cubic'
-
- # Return the desired dimension order for performing the resize. The
- # strategy is to perform the resize first along the dimension with the
- # smallest scale factor.
- # Now we do not support this.
-
- # get weights and indices
- weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
- in_H, out_H, scale, kernel, kernel_width, antialiasing)
- weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
- in_W, out_W, scale, kernel, kernel_width, antialiasing)
- # process H dimension
- # symmetric copying
- img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
- img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
-
- sym_patch = img[:sym_len_Hs, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
-
- sym_patch = img[-sym_len_He:, :, :]
- inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(0, inv_idx)
- img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
-
- out_1 = torch.FloatTensor(out_H, in_W, in_C)
- kernel_width = weights_H.size(1)
- for i in range(out_H):
- idx = int(indices_H[i][0])
- for j in range(out_C):
- out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
-
- # process W dimension
- # symmetric copying
- out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
- out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
-
- sym_patch = out_1[:, :sym_len_Ws, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
-
- sym_patch = out_1[:, -sym_len_We:, :]
- inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
- sym_patch_inv = sym_patch.index_select(1, inv_idx)
- out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
-
- out_2 = torch.FloatTensor(out_H, out_W, in_C)
- kernel_width = weights_W.size(1)
- for i in range(out_W):
- idx = int(indices_W[i][0])
- for j in range(out_C):
- out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
- if need_squeeze:
- out_2.squeeze_()
-
- return out_2.numpy()
-
-
-if __name__ == '__main__':
- print('---')
-# img = imread_uint('test.bmp', 3)
-# img = uint2single(img)
-# img_bicubic = imresize_np(img, 1/4)
\ No newline at end of file
diff --git a/spaces/triggah61/chingu-music/tests/modules/test_codebooks_patterns.py b/spaces/triggah61/chingu-music/tests/modules/test_codebooks_patterns.py
deleted file mode 100644
index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000
--- a/spaces/triggah61/chingu-music/tests/modules/test_codebooks_patterns.py
+++ /dev/null
@@ -1,246 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import pytest
-import torch
-
-from audiocraft.modules.codebooks_patterns import (
- DelayedPatternProvider,
- ParallelPatternProvider,
- Pattern,
- UnrolledPatternProvider,
-)
-
-
-class TestParallelPatternProvider:
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [0, 1, 16, 100])
- def test_get_pattern(self, n_q: int, timesteps: int):
- provider = ParallelPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- # + 1 to account for 1st step
- assert len(pattern.layout) == timesteps + 1
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- def test_pattern_content(self, n_q: int, timesteps: int):
- provider = ParallelPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- for s, v in enumerate(pattern.layout):
- for i, code in enumerate(v):
- assert i == code.q
- assert code.t == s - 1 # account for the 1st empty step
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- def test_pattern_max_delay(self, n_q: int, timesteps: int):
- provider = ParallelPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- assert pattern.max_delay == 0
- assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay
-
-
-class TestDelayedPatternProvider:
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [0, 1, 16, 100])
- def test_get_pattern(self, n_q: int, timesteps: int):
- delays = [
- list(range(n_q)),
- [0] + [1] * (n_q - 1),
- [0] + [4] * (n_q - 1),
- ]
- for delay in delays:
- provider = DelayedPatternProvider(n_q, delay)
- pattern = provider.get_pattern(timesteps)
- # + 1 to account for 1st step
- assert len(pattern.layout) == timesteps + max(delay) + 1
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- def test_pattern_content(self, n_q: int, timesteps: int):
- provider = DelayedPatternProvider(n_q)
- pattern = provider.get_pattern(timesteps)
- for s, v in enumerate(pattern.layout):
- for i, code in enumerate(v):
- assert i == code.q
- assert code.t == max(0, s - code.q - 1)
-
- @pytest.mark.parametrize("timesteps", [8, 16, 100])
- @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]])
- def test_pattern_max_delay(self, timesteps: int, delay: list):
- provider = DelayedPatternProvider(len(delay), delay)
- pattern = provider.get_pattern(timesteps)
- assert pattern.max_delay == max(delay)
- assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay
-
-
-class TestUnrolledPatternProvider:
-
- @pytest.mark.parametrize("timesteps", [0, 1, 16])
- @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]])
- @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]])
- def test_get_pattern(self, timesteps: int, flattening: list, delays: list):
- n_q = len(flattening)
- max_delay = max(delays)
- provider = UnrolledPatternProvider(n_q, flattening, delays)
- pattern = provider.get_pattern(timesteps)
- assert len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay
-
- @pytest.mark.parametrize("timesteps", [0, 1, 16])
- @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]])
- @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]])
- def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list):
- n_q = len(flattening)
- max_delay = max(delays)
- provider = UnrolledPatternProvider(n_q, flattening, delays)
- pattern = provider.get_pattern(timesteps)
- assert pattern.max_delay == max_delay
-
-
-class TestPattern:
-
- def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int):
- """Reference method to build the sequence from the pattern without using fancy scatter."""
- bs, n_q, T = z.shape
- z = z.cpu().numpy()
- assert n_q == pattern.n_q
- assert T <= pattern.timesteps
- inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy()
- inp[:] = special_token
- for s, v in enumerate(pattern.layout):
- for (t, q) in v:
- if t < T:
- inp[:, q, s] = z[:, q, t]
- return torch.from_numpy(inp)
-
- def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int):
- """Reference method to revert the sequence from the pattern without using fancy scatter."""
- z = z.cpu().numpy()
- bs, n_q, S = z.shape
- assert pattern.n_q == n_q
- inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy()
- inp[:] = special_token
- for s, v in enumerate(pattern.layout):
- for (t, q) in v:
- if t < pattern.timesteps:
- inp[:, q, t] = z[:, q, s]
- return torch.from_numpy(inp)
-
- def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float):
- """Reference method to revert the logits from the pattern without using fancy scatter."""
- z = z.cpu().numpy()
- bs, card, n_q, S = z.shape
- assert pattern.n_q == n_q
- ref_layout = pattern.layout
- inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy()
- inp[:] = special_token
- for s, v in enumerate(ref_layout[1:]):
- if s < S:
- for (t, q) in v:
- if t < pattern.timesteps:
- inp[:, :, q, t] = z[:, :, q, s]
- return torch.from_numpy(inp)
-
- def _get_pattern_providers(self, n_q: int):
- pattern_provider_1 = ParallelPatternProvider(n_q)
- pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q)))
- pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1))
- pattern_provider_4 = UnrolledPatternProvider(
- n_q, flattening=list(range(n_q)), delays=[0] * n_q
- )
- pattern_provider_5 = UnrolledPatternProvider(
- n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q
- )
- pattern_provider_6 = UnrolledPatternProvider(
- n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1)
- )
- return [
- pattern_provider_1,
- pattern_provider_2,
- pattern_provider_3,
- pattern_provider_4,
- pattern_provider_5,
- pattern_provider_6,
- ]
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [16, 72])
- def test_build_pattern_sequence(self, n_q: int, timesteps: int):
- bs = 2
- card = 256
- special_token = card
-
- pattern_providers = self._get_pattern_providers(n_q)
- for pattern_provider in pattern_providers:
- pattern = pattern_provider.get_pattern(timesteps)
- # we can correctly build the sequence from the pattern
- z = torch.randint(0, card, (bs, n_q, timesteps))
- ref_res = self.ref_build_pattern_sequence(z, pattern, special_token)
- res, indexes, mask = pattern.build_pattern_sequence(z, special_token)
- assert (res == ref_res).float().mean() == 1.0
-
- # expected assertion fails on the number of timesteps
- invalid_timesteps = [timesteps + 1]
- if pattern.num_sequence_steps != pattern.timesteps:
- invalid_timesteps.append(pattern.num_sequence_steps)
- for i_timesteps in invalid_timesteps:
- z2 = torch.randint(0, card, (bs, n_q, i_timesteps))
- with pytest.raises(AssertionError):
- pattern.build_pattern_sequence(z2, special_token)
-
- # expected assertion fails on the number of codebooks
- invalid_qs = [0, n_q - 1, n_q + 1]
- for i_q in invalid_qs:
- z3 = torch.randint(0, card, (bs, i_q, timesteps))
- with pytest.raises(AssertionError):
- pattern.build_pattern_sequence(z3, special_token)
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [16, 72])
- def test_revert_pattern_sequence(self, n_q: int, timesteps: int):
- bs = 2
- card = 256
- special_token = card
-
- pattern_providers = self._get_pattern_providers(n_q)
- for pattern_provider in pattern_providers:
- pattern = pattern_provider.get_pattern(timesteps)
- # this works assuming previous tests are successful
- z = torch.randint(0, card, (bs, n_q, timesteps))
- s = self.ref_build_pattern_sequence(z, pattern, special_token)
- ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token)
- # ensure our reference script retrieve the original sequence
- assert z.shape == ref_out.shape
- assert (z == ref_out).float().mean() == 1.0
- # now we can test the scatter version
- out, indexes, mask = pattern.revert_pattern_sequence(s, special_token)
- assert out.shape == ref_out.shape
- assert (out == ref_out).float().mean() == 1.0
-
- @pytest.mark.parametrize("n_q", [1, 4, 32])
- @pytest.mark.parametrize("timesteps", [16, 72])
- @pytest.mark.parametrize("card", [1, 2, 256, 1024])
- def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int):
- bs = 2
- special_token = card
- logits_special_token = float('nan')
-
- pattern_providers = self._get_pattern_providers(n_q)
- for pattern_provider in pattern_providers:
- pattern = pattern_provider.get_pattern(timesteps)
- # this works assuming previous tests are successful
- z = torch.randint(0, card, (bs, n_q, timesteps))
- s = self.ref_build_pattern_sequence(z, pattern, special_token)
- logits = torch.randn((bs, card, n_q, s.shape[-1]))
- ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token)
- # ensure our reference script retrieve the original sequence
- assert ref_out.shape == torch.Size([bs, card, n_q, timesteps])
- # now we can test the scatter version
- out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token)
- assert out.shape == ref_out.shape
- assert (out == ref_out).float().mean() == 1.0
diff --git a/spaces/trttung1610/musicgen/audiocraft/optim/polynomial_decay_lr_scheduler.py b/spaces/trttung1610/musicgen/audiocraft/optim/polynomial_decay_lr_scheduler.py
deleted file mode 100644
index c5ea30b094538269dbb0055ab3163f84d1cf6e90..0000000000000000000000000000000000000000
--- a/spaces/trttung1610/musicgen/audiocraft/optim/polynomial_decay_lr_scheduler.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from torch.optim import Optimizer
-from torch.optim.lr_scheduler import _LRScheduler
-
-
-class PolynomialDecayLRScheduler(_LRScheduler):
- """Polynomial decay LR scheduler.
-
- Args:
- optimizer (Optimizer): Torch optimizer.
- warmup_steps (int): Number of warmup steps.
- total_steps (int): Total number of steps.
- end_lr (float): Final learning rate to achieve over total number of steps.
- zero_lr_warmup_steps (int): Number of steps with a learning rate of value 0.
- power (float): Decay exponent.
- """
- def __init__(self, optimizer: Optimizer, warmup_steps: int, total_steps: int,
- end_lr: float = 0., zero_lr_warmup_steps: int = 0, power: float = 1.):
- self.warmup_steps = warmup_steps
- self.total_steps = total_steps
- self.end_lr = end_lr
- self.zero_lr_warmup_steps = zero_lr_warmup_steps
- self.power = power
- super().__init__(optimizer)
-
- def _get_sched_lr(self, lr: float, step: int):
- if self.zero_lr_warmup_steps > 0 and step <= self.zero_lr_warmup_steps:
- lr = 0
- elif self.warmup_steps > 0 and step <= self.warmup_steps + self.zero_lr_warmup_steps:
- lr_ratio = (step - self.zero_lr_warmup_steps) / float(self.warmup_steps)
- lr = lr_ratio * lr
- elif step >= self.total_steps:
- lr = self.end_lr
- else:
- total_warmup_steps = self.warmup_steps + self.zero_lr_warmup_steps
- lr_range = lr - self.end_lr
- pct_remaining = 1 - (step - total_warmup_steps) / (self.total_steps - total_warmup_steps)
- lr = lr_range * pct_remaining ** self.power + self.end_lr
- return lr
-
- def get_lr(self):
- return [self._get_sched_lr(base_lr, self.last_epoch) for base_lr in self.base_lrs]
diff --git a/spaces/tskolm/YouTube_comments_generation/README.md b/spaces/tskolm/YouTube_comments_generation/README.md
deleted file mode 100644
index 7bccf3304a52abd62fcb2a630d91b3ba5f67d39e..0000000000000000000000000000000000000000
--- a/spaces/tskolm/YouTube_comments_generation/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: YouTube_comments_generation
-emoji: 📚
-colorFrom: green
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.2.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/twigs/simplifier/app.py b/spaces/twigs/simplifier/app.py
deleted file mode 100644
index 553b123fadd4aadc3c564ced7d54d20cae888c91..0000000000000000000000000000000000000000
--- a/spaces/twigs/simplifier/app.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import streamlit as st
-from transformers import AutoTokenizer, AutoModelForSequenceClassification, BartTokenizer, BartForConditionalGeneration, pipeline
-import numpy as np
-import torch
-import re
-from textstat import textstat
-
-
-MAX_LEN = 256
-NUM_BEAMS = 4
-EARLY_STOPPING = True
-N_OUT = 4
-
-
-cwi_tok = AutoTokenizer.from_pretrained('twigs/cwi-regressor')
-cwi_model = AutoModelForSequenceClassification.from_pretrained(
- 'twigs/cwi-regressor')
-simpl_tok = BartTokenizer.from_pretrained('twigs/bart-text2text-simplifier')
-simpl_model = BartForConditionalGeneration.from_pretrained(
- 'twigs/bart-text2text-simplifier')
-cwi_pipe = pipeline('text-classification', model=cwi_model,
- tokenizer=cwi_tok, function_to_apply='none')
-fill_pipe = pipeline('fill-mask', top_k=1)
-
-
-def id_replace_complex(s, threshold=0.2):
-
- # get all tokens
- tokens = re.compile('\w+').findall(s)
- cands = [f"{t}. {s}" for t in tokens]
- # get complex tokens
- # if score >= threshold select tokens[idx]
- compl_tok = [tokens[idx] for idx, x in enumerate(
- cwi_pipe(cands)) if x['score'] >= threshold]
-
- masked = [s[:s.index(t)] + '' + s[s.index(t)+len(t):] for t in compl_tok]
- cands = fill_pipe(masked)
- # structure is different in 1 vs n complex words
- replacements = [el['token_str'] if type(
- el) == dict else el[0]['token_str'] for el in cands]
- # some tokens get prefixed with space
- replacements = [tok if tok.find(' ') == -1 else tok[1:]
- for tok in replacements]
-
- for i, el in enumerate(compl_tok):
- idx = s.index(el)
- s = s[:idx] + replacements[i] + s[idx+len(el):]
-
- return s, compl_tok, replacements
-
-def generate_candidate_text(s, model, tokenizer, tokenized=False):
-
-
- out = simpl_tok([s], max_length=256, padding="max_length", truncation=True,
- return_tensors='pt') if not tokenized else s
-
- generated_ids = model.generate(
- input_ids=out['input_ids'],
- attention_mask=out['attention_mask'],
- use_cache=True,
- decoder_start_token_id=simpl_model.config.pad_token_id,
- num_beams=NUM_BEAMS,
- max_length=MAX_LEN,
- early_stopping=EARLY_STOPPING,
- num_return_sequences=N_OUT
- )
-
- return [tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)[
- 1:] for ids in generated_ids]
-
-
-def rank_candidate_text(sentences):
- fkgl_scores = [textstat.flesch_kincaid_grade(s) for s in sentences]
- return sentences[np.argmin(fkgl_scores)]
-
-
-def full_pipeline(source, simpl_model, simpl_tok, tokens, lexical=False):
-
- modified, complex_words, replacements = id_replace_complex(source, threshold=0.2) if lexical else (source, None, None)
- cands = generate_candidate_text(tokens+modified, simpl_model, simpl_tok)
- output = rank_candidate_text(cands)
- return output, complex_words, replacements
-
-def main():
-
- aug_tok = ['c_', 'lev_', 'dep_', 'rank_', 'rat_', 'n_syl_']
- base_tokens = ['CharRatio', 'LevSim', 'DependencyTreeDepth',
- 'WordComplexity', 'WordRatio', 'NumberOfSyllables']
-
- default_values = [0.8, 0.6, 0.9, 0.8, 0.9, 1.9]
- user_values = default_values
- tok_values = dict((t, default_values[idx]) for idx, t in enumerate(base_tokens))
-
- example_sentences = ["A matchbook is a small cardboard folder (matchcover) enclosing a quantity of matches and having a coarse striking surface on the exterior.",
- "If there are no strong land use controls, buildings are built along a bypass, converting it into an ordinary town road, and the bypass may eventually become as congested as the local streets it was intended to avoid.",
- "Plot Captain Caleb Holt (Kirk Cameron) is a firefighter in Albany, Georgia and firmly keeps the cardinal rule of all firemen, \"Never leave your partner behind\".",
- "Britpop emerged from the British independent music scene of the early 1990s and was characterised by bands influenced by British guitar pop music of the 1960s and 1970s."]
-
-
- st.title("Make it Simple")
-
- with st.expander("Example sentences"):
- for s in example_sentences:
- st.code(body=s)
-
-
- with st.form(key="simplify"):
- input_sentence = st.text_area("Original sentence")
-
- lexical = st.checkbox("Identify and replace complex words", value=True)
-
- tok = st.multiselect(
- label="Tokens to augment the sentence", options=base_tokens, default=base_tokens)
- if (tok):
- st.text("Select the desired intensity")
- for idx, t in enumerate(tok):
- user_values[idx] = st.slider(
- t, min_value=0., max_value=1., value=tok_values[t], step=0.1, key=t)
-
- submit = st.form_submit_button("Process")
- if (submit):
-
- tokens = " ".join([t+str(v) for t, v in zip(aug_tok, user_values)]) + " "
- output, words, replacements = full_pipeline(input_sentence, simpl_model, simpl_tok, tokens, lexical)
-
-
- c1, c2, c3 = st.columns([1,1,2])
-
- with c1:
- st.markdown("#### Words identified as complex")
- if words:
- for w in words:
- st.markdown(f"* {w}")
-
- else:
- st.markdown("None :smile:")
-
- with c2:
- st.markdown("#### Their mask-predicted replacement")
- if replacements:
- for w in replacements:
- st.markdown(f"* {w}")
-
- else:
- st.markdown("None :smile:")
-
- with c3:
- st.markdown(f"#### Original Sentence:\n > {input_sentence}")
- st.markdown(f"#### Output Sentence:\n > {output}")
-
-
-if __name__ == '__main__':
- main()
diff --git a/spaces/ucalyptus/PTI/models/__init__.py b/spaces/ucalyptus/PTI/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ulysses115/diffsvc_test/modules/parallel_wavegan/losses/__init__.py b/spaces/ulysses115/diffsvc_test/modules/parallel_wavegan/losses/__init__.py
deleted file mode 100644
index b03080a907cb5cb4b316ceb74866ddbc406b33bf..0000000000000000000000000000000000000000
--- a/spaces/ulysses115/diffsvc_test/modules/parallel_wavegan/losses/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .stft_loss import * # NOQA
diff --git a/spaces/unco3892/real_estate_ie/README.md b/spaces/unco3892/real_estate_ie/README.md
deleted file mode 100644
index daaca67e53c3cb6fdbbd3b62f171e4638b9e1a3e..0000000000000000000000000000000000000000
--- a/spaces/unco3892/real_estate_ie/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Real_estate_ie
-emoji: 🏡
-colorFrom: indigo
-colorTo: blue
-sdk: gradio
-sdk_version: 3.0.11
-app_file: app.py
-pinned: true
----
\ No newline at end of file
diff --git a/spaces/upstage/open-ko-llm-leaderboard/src/assets/hardcoded_evals.py b/spaces/upstage/open-ko-llm-leaderboard/src/assets/hardcoded_evals.py
deleted file mode 100644
index 34de91df9bbf32b2a582df182e075a27d7a071a4..0000000000000000000000000000000000000000
--- a/spaces/upstage/open-ko-llm-leaderboard/src/assets/hardcoded_evals.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from src.display_models.utils import AutoEvalColumn, model_hyperlink
-
-baseline = {
- AutoEvalColumn.model.name: "
-
-
-👋 Hello from the [Ultralytics](https://ultralytics.com/) Team! We've been working hard these last few months to
-launch [Ultralytics HUB](https://bit.ly/ultralytics_hub), a new web tool for training and deploying all your YOLOv5 and YOLOv8 🚀
-models from one spot!
-
-## Introduction
-
-HUB is designed to be user-friendly and intuitive, with a drag-and-drop interface that allows users to
-easily upload their data and train new models quickly. It offers a range of pre-trained models and
-templates to choose from, making it easy for users to get started with training their own models. Once a model is
-trained, it can be easily deployed and used for real-time object detection, instance segmentation and classification tasks.
-
-We hope that the resources here will help you get the most out of HUB. Please browse the HUB Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!
-
-- [**Quickstart**](./quickstart.md). Start training and deploying YOLO models with HUB in seconds.
-- [**Datasets: Preparing and Uploading**](./datasets.md). Learn how to prepare and upload your datasets to HUB in YOLO format.
-- [**Projects: Creating and Managing**](./projects.md). Group your models into projects for improved organization.
-- [**Models: Training and Exporting**](./models.md). Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
-- [**Integrations: Options**](./integrations.md). Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
-- [**Ultralytics HUB App**](./app/index.md). Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
- * [**iOS**](./app/ios.md). Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
- * [**Android**](./app/android.md). Explore TFLite acceleration on mobile devices.
-- [**Inference API**](./inference_api.md). Understand how to use the Inference API for running your trained models in the cloud to generate predictions.
\ No newline at end of file
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/engine/predictor.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/engine/predictor.md
deleted file mode 100644
index f4aed3c06653fa3188a5222357fe36c76fbde1f2..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/yolo/engine/predictor.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-description: '"The BasePredictor class in Ultralytics YOLO Engine predicts object detection in images and videos. Learn to implement YOLO with ease."'
-keywords: Ultralytics, YOLO, BasePredictor, Object Detection, Computer Vision, Fast Model, Insights
----
-
-## BasePredictor
----
-### ::: ultralytics.yolo.engine.predictor.BasePredictor
-
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/callbacks/base.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/callbacks/base.py
deleted file mode 100644
index 0b1734798ffb760f92c9ea098e996d4637b082ea..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/ultralytics/yolo/utils/callbacks/base.py
+++ /dev/null
@@ -1,212 +0,0 @@
-# Ultralytics YOLO 🚀, AGPL-3.0 license
-"""
-Base callbacks
-"""
-
-from collections import defaultdict
-from copy import deepcopy
-
-# Trainer callbacks ----------------------------------------------------------------------------------------------------
-
-
-def on_pretrain_routine_start(trainer):
- """Called before the pretraining routine starts."""
- pass
-
-
-def on_pretrain_routine_end(trainer):
- """Called after the pretraining routine ends."""
- pass
-
-
-def on_train_start(trainer):
- """Called when the training starts."""
- pass
-
-
-def on_train_epoch_start(trainer):
- """Called at the start of each training epoch."""
- pass
-
-
-def on_train_batch_start(trainer):
- """Called at the start of each training batch."""
- pass
-
-
-def optimizer_step(trainer):
- """Called when the optimizer takes a step."""
- pass
-
-
-def on_before_zero_grad(trainer):
- """Called before the gradients are set to zero."""
- pass
-
-
-def on_train_batch_end(trainer):
- """Called at the end of each training batch."""
- pass
-
-
-def on_train_epoch_end(trainer):
- """Called at the end of each training epoch."""
- pass
-
-
-def on_fit_epoch_end(trainer):
- """Called at the end of each fit epoch (train + val)."""
- pass
-
-
-def on_model_save(trainer):
- """Called when the model is saved."""
- pass
-
-
-def on_train_end(trainer):
- """Called when the training ends."""
- pass
-
-
-def on_params_update(trainer):
- """Called when the model parameters are updated."""
- pass
-
-
-def teardown(trainer):
- """Called during the teardown of the training process."""
- pass
-
-
-# Validator callbacks --------------------------------------------------------------------------------------------------
-
-
-def on_val_start(validator):
- """Called when the validation starts."""
- pass
-
-
-def on_val_batch_start(validator):
- """Called at the start of each validation batch."""
- pass
-
-
-def on_val_batch_end(validator):
- """Called at the end of each validation batch."""
- pass
-
-
-def on_val_end(validator):
- """Called when the validation ends."""
- pass
-
-
-# Predictor callbacks --------------------------------------------------------------------------------------------------
-
-
-def on_predict_start(predictor):
- """Called when the prediction starts."""
- pass
-
-
-def on_predict_batch_start(predictor):
- """Called at the start of each prediction batch."""
- pass
-
-
-def on_predict_batch_end(predictor):
- """Called at the end of each prediction batch."""
- pass
-
-
-def on_predict_postprocess_end(predictor):
- """Called after the post-processing of the prediction ends."""
- pass
-
-
-def on_predict_end(predictor):
- """Called when the prediction ends."""
- pass
-
-
-# Exporter callbacks ---------------------------------------------------------------------------------------------------
-
-
-def on_export_start(exporter):
- """Called when the model export starts."""
- pass
-
-
-def on_export_end(exporter):
- """Called when the model export ends."""
- pass
-
-
-default_callbacks = {
- # Run in trainer
- 'on_pretrain_routine_start': [on_pretrain_routine_start],
- 'on_pretrain_routine_end': [on_pretrain_routine_end],
- 'on_train_start': [on_train_start],
- 'on_train_epoch_start': [on_train_epoch_start],
- 'on_train_batch_start': [on_train_batch_start],
- 'optimizer_step': [optimizer_step],
- 'on_before_zero_grad': [on_before_zero_grad],
- 'on_train_batch_end': [on_train_batch_end],
- 'on_train_epoch_end': [on_train_epoch_end],
- 'on_fit_epoch_end': [on_fit_epoch_end], # fit = train + val
- 'on_model_save': [on_model_save],
- 'on_train_end': [on_train_end],
- 'on_params_update': [on_params_update],
- 'teardown': [teardown],
-
- # Run in validator
- 'on_val_start': [on_val_start],
- 'on_val_batch_start': [on_val_batch_start],
- 'on_val_batch_end': [on_val_batch_end],
- 'on_val_end': [on_val_end],
-
- # Run in predictor
- 'on_predict_start': [on_predict_start],
- 'on_predict_batch_start': [on_predict_batch_start],
- 'on_predict_postprocess_end': [on_predict_postprocess_end],
- 'on_predict_batch_end': [on_predict_batch_end],
- 'on_predict_end': [on_predict_end],
-
- # Run in exporter
- 'on_export_start': [on_export_start],
- 'on_export_end': [on_export_end]}
-
-
-def get_default_callbacks():
- """
- Return a copy of the default_callbacks dictionary with lists as default values.
-
- Returns:
- (defaultdict): A defaultdict with keys from default_callbacks and empty lists as default values.
- """
- return defaultdict(list, deepcopy(default_callbacks))
-
-
-def add_integration_callbacks(instance):
- """
- Add integration callbacks from various sources to the instance's callbacks.
-
- Args:
- instance (Trainer, Predictor, Validator, Exporter): An object with a 'callbacks' attribute that is a dictionary
- of callback lists.
- """
- from .clearml import callbacks as clearml_cb
- from .comet import callbacks as comet_cb
- from .dvc import callbacks as dvc_cb
- from .hub import callbacks as hub_cb
- from .mlflow import callbacks as mlflow_cb
- from .neptune import callbacks as neptune_cb
- from .raytune import callbacks as tune_cb
- from .tensorboard import callbacks as tensorboard_cb
- from .wb import callbacks as wb_cb
-
- for x in clearml_cb, comet_cb, hub_cb, mlflow_cb, neptune_cb, tune_cb, tensorboard_cb, wb_cb, dvc_cb:
- for k, v in x.items():
- if v not in instance.callbacks[k]: # prevent duplicate callbacks addition
- instance.callbacks[k].append(v) # callback[name].append(func)
diff --git a/spaces/vesteinn/Bird-Classifier-CLIP-NABirds/app.py b/spaces/vesteinn/Bird-Classifier-CLIP-NABirds/app.py
deleted file mode 100644
index 2857549b1318fa56f7918541a64c216fc281d31a..0000000000000000000000000000000000000000
--- a/spaces/vesteinn/Bird-Classifier-CLIP-NABirds/app.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import torch
-import numpy as np
-import pandas as pd
-import gradio as gr
-from io import BytesIO
-from PIL import Image as PILIMAGE
-#from IPython.display import Image
-#from IPython.core.display import HTML
-from transformers import CLIPProcessor, CLIPModel, CLIPTokenizer
-import os
-
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model = CLIPModel.from_pretrained("vesteinn/clip-nabirds").to(device)
-processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
-tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
-
-
-def load_class_names(dataset_path=''):
- names = {}
- with open(os.path.join(dataset_path, 'classes.txt')) as f:
- for line in f:
- pieces = line.strip().split()
- class_id = pieces[0]
- names[class_id] = ' '.join(pieces[1:])
-
- return names
-
-
-def get_labels():
- labels = []
- class_names = load_class_names(".")
- for _, name in class_names.items():
- labels.append(f"This is a photo of {name}.")
- return labels
-
-
-def encode_text(text):
- with torch.no_grad():
- inputs = tokenizer([text], padding=True, return_tensors="pt")
- text_encoded = model.get_text_features(**inputs).detach().numpy()
- return text_encoded
-
-
-ALL_LABELS = get_labels()
-try:
- LABEL_FEATURES = np.load("label_features.np")
-except:
- LABEL_FEATURES = []
- for label in ALL_LABELS:
- LABEL_FEATURES.append(encode_text(label))
- LABEL_FEATURES = np.vstack(LABEL_FEATURES)
- np.save(open("label_features.np", "wb"), LABEL_FEATURES)
-
-
-def encode_image(image):
- image = PILIMAGE.fromarray(image.astype('uint8'), 'RGB')
- with torch.no_grad():
- photo_preprocessed = processor(text=None, images=image, return_tensors="pt", padding=True)["pixel_values"]
- search_photo_feature = model.get_image_features(photo_preprocessed.to(device))
- search_photo_feature /= search_photo_feature.norm(dim=-1, keepdim=True)
- image_encoded = search_photo_feature.cpu().numpy()
- return image_encoded
-
-
-def similarity(feature, label_features):
- similarities = list((feature @ label_features.T).squeeze(0))
- return similarities
-
-
-def find_best_matches(image):
- image_features = encode_image(image)
- similarities = similarity(image_features, LABEL_FEATURES)
- best_spec = sorted(zip(similarities, range(LABEL_FEATURES.shape[0])), key=lambda x: x[0], reverse=True)
- idx = best_spec[0][1]
- label = ALL_LABELS[idx]
- return label
-
-examples=[['bj.jpg'],['duckly.jpg'],['some.jpg'],['turdus.jpg'],['seag.jpg'],['thursh.jpg'], ['woodcock.jpeg'],['dipper.jpeg']]
-
-gr.Interface(fn=find_best_matches,
- inputs=[
- gr.inputs.Image(label="Image to classify", optional=False),
- ],
- examples=examples,
- theme="grass",
- outputs=gr.outputs.Label(), enable_queue=True, title="North American Bird Classifier",
- description="This application can classify North American Birds.").launch()
-
-
-
-
diff --git a/spaces/victor/autotrain-victormautotraindreambooth-FS8JGUBRYX-2450175922/README.md b/spaces/victor/autotrain-victormautotraindreambooth-FS8JGUBRYX-2450175922/README.md
deleted file mode 100644
index d251d2110fa8a3a52a41ab4fcd02f95a9f824b7d..0000000000000000000000000000000000000000
--- a/spaces/victor/autotrain-victormautotraindreambooth-FS8JGUBRYX-2450175922/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: AutoTrain Dreambooth(autotrain-victormautotraindreambooth-FS8JGUBRYX-2450175922)
-emoji: 😻
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-tags:
- - autotrain
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/vietvd/image-enhance/utils/util_calculate_psnr_ssim.py b/spaces/vietvd/image-enhance/utils/util_calculate_psnr_ssim.py
deleted file mode 100644
index 1a8fb27161f9c1fd3e37b14654dfe05eaadf619c..0000000000000000000000000000000000000000
--- a/spaces/vietvd/image-enhance/utils/util_calculate_psnr_ssim.py
+++ /dev/null
@@ -1,346 +0,0 @@
-import cv2
-import numpy as np
-import torch
-
-
-def calculate_psnr(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate PSNR (Peak Signal-to-Noise Ratio).
-
- Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the PSNR calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: psnr result.
- """
-
- assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- mse = np.mean((img1 - img2) ** 2)
- if mse == 0:
- return float('inf')
- return 20. * np.log10(255. / np.sqrt(mse))
-
-
-def _ssim(img1, img2):
- """Calculate SSIM (structural similarity) for one channel images.
-
- It is called by func:`calculate_ssim`.
-
- Args:
- img1 (ndarray): Images with range [0, 255] with order 'HWC'.
- img2 (ndarray): Images with range [0, 255] with order 'HWC'.
-
- Returns:
- float: ssim result.
- """
-
- C1 = (0.01 * 255) ** 2
- C2 = (0.03 * 255) ** 2
-
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
- kernel = cv2.getGaussianKernel(11, 1.5)
- window = np.outer(kernel, kernel.transpose())
-
- mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
- mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
- mu1_sq = mu1 ** 2
- mu2_sq = mu2 ** 2
- mu1_mu2 = mu1 * mu2
- sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
- sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
- sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2
-
- ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
- return ssim_map.mean()
-
-
-def calculate_ssim(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate SSIM (structural similarity).
-
- Ref:
- Image quality assessment: From error visibility to structural similarity
-
- The results are the same as that of the official released MATLAB code in
- https://ece.uwaterloo.ca/~z70wang/research/ssim/.
-
- For three-channel images, SSIM is calculated for each channel and then
- averaged.
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the SSIM calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: ssim result.
- """
-
- assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- ssims = []
- for i in range(img1.shape[2]):
- ssims.append(_ssim(img1[..., i], img2[..., i]))
- return np.array(ssims).mean()
-
-
-def _blocking_effect_factor(im):
- block_size = 8
-
- block_horizontal_positions = torch.arange(7, im.shape[3] - 1, 8)
- block_vertical_positions = torch.arange(7, im.shape[2] - 1, 8)
-
- horizontal_block_difference = (
- (im[:, :, :, block_horizontal_positions] - im[:, :, :, block_horizontal_positions + 1]) ** 2).sum(
- 3).sum(2).sum(1)
- vertical_block_difference = (
- (im[:, :, block_vertical_positions, :] - im[:, :, block_vertical_positions + 1, :]) ** 2).sum(3).sum(
- 2).sum(1)
-
- nonblock_horizontal_positions = np.setdiff1d(torch.arange(0, im.shape[3] - 1), block_horizontal_positions)
- nonblock_vertical_positions = np.setdiff1d(torch.arange(0, im.shape[2] - 1), block_vertical_positions)
-
- horizontal_nonblock_difference = (
- (im[:, :, :, nonblock_horizontal_positions] - im[:, :, :, nonblock_horizontal_positions + 1]) ** 2).sum(
- 3).sum(2).sum(1)
- vertical_nonblock_difference = (
- (im[:, :, nonblock_vertical_positions, :] - im[:, :, nonblock_vertical_positions + 1, :]) ** 2).sum(
- 3).sum(2).sum(1)
-
- n_boundary_horiz = im.shape[2] * (im.shape[3] // block_size - 1)
- n_boundary_vert = im.shape[3] * (im.shape[2] // block_size - 1)
- boundary_difference = (horizontal_block_difference + vertical_block_difference) / (
- n_boundary_horiz + n_boundary_vert)
-
- n_nonboundary_horiz = im.shape[2] * (im.shape[3] - 1) - n_boundary_horiz
- n_nonboundary_vert = im.shape[3] * (im.shape[2] - 1) - n_boundary_vert
- nonboundary_difference = (horizontal_nonblock_difference + vertical_nonblock_difference) / (
- n_nonboundary_horiz + n_nonboundary_vert)
-
- scaler = np.log2(block_size) / np.log2(min([im.shape[2], im.shape[3]]))
- bef = scaler * (boundary_difference - nonboundary_difference)
-
- bef[boundary_difference <= nonboundary_difference] = 0
- return bef
-
-
-def calculate_psnrb(img1, img2, crop_border, input_order='HWC', test_y_channel=False):
- """Calculate PSNR-B (Peak Signal-to-Noise Ratio).
-
- Ref: Quality assessment of deblocked images, for JPEG image deblocking evaluation
- # https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
-
- Args:
- img1 (ndarray): Images with range [0, 255].
- img2 (ndarray): Images with range [0, 255].
- crop_border (int): Cropped pixels in each edge of an image. These
- pixels are not involved in the PSNR calculation.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- Default: 'HWC'.
- test_y_channel (bool): Test on Y channel of YCbCr. Default: False.
-
- Returns:
- float: psnr result.
- """
-
- assert img1.shape == img2.shape, (f'Image shapes are differnet: {img1.shape}, {img2.shape}.')
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' '"HWC" and "CHW"')
- img1 = reorder_image(img1, input_order=input_order)
- img2 = reorder_image(img2, input_order=input_order)
- img1 = img1.astype(np.float64)
- img2 = img2.astype(np.float64)
-
- if crop_border != 0:
- img1 = img1[crop_border:-crop_border, crop_border:-crop_border, ...]
- img2 = img2[crop_border:-crop_border, crop_border:-crop_border, ...]
-
- if test_y_channel:
- img1 = to_y_channel(img1)
- img2 = to_y_channel(img2)
-
- # follow https://gitlab.com/Queuecumber/quantization-guided-ac/-/blob/master/metrics/psnrb.py
- img1 = torch.from_numpy(img1).permute(2, 0, 1).unsqueeze(0) / 255.
- img2 = torch.from_numpy(img2).permute(2, 0, 1).unsqueeze(0) / 255.
-
- total = 0
- for c in range(img1.shape[1]):
- mse = torch.nn.functional.mse_loss(img1[:, c:c + 1, :, :], img2[:, c:c + 1, :, :], reduction='none')
- bef = _blocking_effect_factor(img1[:, c:c + 1, :, :])
-
- mse = mse.view(mse.shape[0], -1).mean(1)
- total += 10 * torch.log10(1 / (mse + bef))
-
- return float(total) / img1.shape[1]
-
-
-def reorder_image(img, input_order='HWC'):
- """Reorder images to 'HWC' order.
-
- If the input_order is (h, w), return (h, w, 1);
- If the input_order is (c, h, w), return (h, w, c);
- If the input_order is (h, w, c), return as it is.
-
- Args:
- img (ndarray): Input image.
- input_order (str): Whether the input order is 'HWC' or 'CHW'.
- If the input image shape is (h, w), input_order will not have
- effects. Default: 'HWC'.
-
- Returns:
- ndarray: reordered image.
- """
-
- if input_order not in ['HWC', 'CHW']:
- raise ValueError(f'Wrong input_order {input_order}. Supported input_orders are ' "'HWC' and 'CHW'")
- if len(img.shape) == 2:
- img = img[..., None]
- if input_order == 'CHW':
- img = img.transpose(1, 2, 0)
- return img
-
-
-def to_y_channel(img):
- """Change to Y channel of YCbCr.
-
- Args:
- img (ndarray): Images with range [0, 255].
-
- Returns:
- (ndarray): Images with range [0, 255] (float type) without round.
- """
- img = img.astype(np.float32) / 255.
- if img.ndim == 3 and img.shape[2] == 3:
- img = bgr2ycbcr(img, y_only=True)
- img = img[..., None]
- return img * 255.
-
-
-def _convert_input_type_range(img):
- """Convert the type and range of the input image.
-
- It converts the input image to np.float32 type and range of [0, 1].
- It is mainly used for pre-processing the input image in colorspace
- convertion functions such as rgb2ycbcr and ycbcr2rgb.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
-
- Returns:
- (ndarray): The converted image with type of np.float32 and range of
- [0, 1].
- """
- img_type = img.dtype
- img = img.astype(np.float32)
- if img_type == np.float32:
- pass
- elif img_type == np.uint8:
- img /= 255.
- else:
- raise TypeError('The img type should be np.float32 or np.uint8, ' f'but got {img_type}')
- return img
-
-
-def _convert_output_type_range(img, dst_type):
- """Convert the type and range of the image according to dst_type.
-
- It converts the image to desired type and range. If `dst_type` is np.uint8,
- images will be converted to np.uint8 type with range [0, 255]. If
- `dst_type` is np.float32, it converts the image to np.float32 type with
- range [0, 1].
- It is mainly used for post-processing images in colorspace convertion
- functions such as rgb2ycbcr and ycbcr2rgb.
-
- Args:
- img (ndarray): The image to be converted with np.float32 type and
- range [0, 255].
- dst_type (np.uint8 | np.float32): If dst_type is np.uint8, it
- converts the image to np.uint8 type with range [0, 255]. If
- dst_type is np.float32, it converts the image to np.float32 type
- with range [0, 1].
-
- Returns:
- (ndarray): The converted image with desired type and range.
- """
- if dst_type not in (np.uint8, np.float32):
- raise TypeError('The dst_type should be np.float32 or np.uint8, ' f'but got {dst_type}')
- if dst_type == np.uint8:
- img = img.round()
- else:
- img /= 255.
- return img.astype(dst_type)
-
-
-def bgr2ycbcr(img, y_only=False):
- """Convert a BGR image to YCbCr image.
-
- The bgr version of rgb2ycbcr.
- It implements the ITU-R BT.601 conversion for standard-definition
- television. See more details in
- https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion.
-
- It differs from a similar function in cv2.cvtColor: `BGR <-> YCrCb`.
- In OpenCV, it implements a JPEG conversion. See more details in
- https://en.wikipedia.org/wiki/YCbCr#JPEG_conversion.
-
- Args:
- img (ndarray): The input image. It accepts:
- 1. np.uint8 type with range [0, 255];
- 2. np.float32 type with range [0, 1].
- y_only (bool): Whether to only return Y channel. Default: False.
-
- Returns:
- ndarray: The converted YCbCr image. The output image has the same type
- and range as input image.
- """
- img_type = img.dtype
- img = _convert_input_type_range(img)
- if y_only:
- out_img = np.dot(img, [24.966, 128.553, 65.481]) + 16.0
- else:
- out_img = np.matmul(
- img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], [65.481, -37.797, 112.0]]) + [16, 128, 128]
- out_img = _convert_output_type_range(out_img, img_type)
- return out_img
diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/backbones/__init__.py b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/backbones/__init__.py
deleted file mode 100644
index 55bd4c5d1889a1a998b52eb56793bbc1eef1b691..0000000000000000000000000000000000000000
--- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/backbones/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from .iresnet import iresnet18, iresnet34, iresnet50, iresnet100, iresnet200
-from .mobilefacenet import get_mbf
-
-
-def get_model(name, **kwargs):
- # resnet
- if name == "r18":
- return iresnet18(False, **kwargs)
- elif name == "r34":
- return iresnet34(False, **kwargs)
- elif name == "r50":
- return iresnet50(False, **kwargs)
- elif name == "r100":
- return iresnet100(False, **kwargs)
- elif name == "r200":
- return iresnet200(False, **kwargs)
- elif name == "r2060":
- from .iresnet2060 import iresnet2060
- return iresnet2060(False, **kwargs)
- elif name == "mbf":
- fp16 = kwargs.get("fp16", False)
- num_features = kwargs.get("num_features", 512)
- return get_mbf(fp16=fp16, num_features=num_features)
- else:
- raise ValueError()
\ No newline at end of file
diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r18.py b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r18.py
deleted file mode 100644
index 7a8db34cd547e8e667103c93585296e47a894e97..0000000000000000000000000000000000000000
--- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/configs/glint360k_r18.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "cosface"
-config.network = "r18"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/glint360k"
-config.num_classes = 360232
-config.num_image = 17091657
-config.num_epoch = 20
-config.warmup_epoch = -1
-config.decay_epoch = [8, 12, 15, 18]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/questions.py b/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/questions.py
deleted file mode 100644
index 0858c0c4d3cd5838567177f2da05a48c288f79c5..0000000000000000000000000000000000000000
--- a/spaces/vitaliykinakh/Galaxy_Zoo_Generation/src/app/questions.py
+++ /dev/null
@@ -1,36 +0,0 @@
-"""
-This file contains question and options for the manual labeling
-"""
-
-q1 = 'Is the object a smooth galaxy, a galaxy with features/disk or a star?'
-q1_options = ['Smooth', 'Features or disk', 'Star or artifact']
-
-q2 = 'Is it edge-on? '
-q2_options = ['Yes', 'No']
-
-q3 = 'Is there a bar?'
-q3_options = ['Yes', 'No']
-
-q4 = 'Is there a spiral pattern?'
-q4_options = ['Yes', 'No']
-
-q5 = 'How prominent is the central bulge?'
-q5_options = ['No bulge', 'Just noticeable', 'Obvious', 'Dominant']
-
-q6 = 'Is there anything "odd" about the galaxy?'
-q6_options = ['Yes', 'No']
-
-q7 = 'How round is the smooth galaxy?'
-q7_options = ['Completely round', 'In between', 'Cigar-shaped']
-
-q8 = 'What is the odd feature?'
-q8_options = ['Ring', 'Lens or are', 'Disturbed', 'Irregular', 'Other', 'Merger', 'Dust lane']
-
-q9 = 'What shape is the bulge in the edge-on galaxy?'
-q9_options = ['Rounded', 'Boxy', 'No bulge']
-
-q10 = 'How tightly wound are the spiral arms?'
-q10_options = ['Tight', 'Medium', 'Loose']
-
-q11 = 'How many spiral arms are there?'
-q11_options = ['1', '2', '3', '4', 'more than four', 'can`t tell']
diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/fileio/io.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/fileio/io.py
deleted file mode 100644
index aaefde58aa3ea5b58f86249ce7e1c40c186eb8dd..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/fileio/io.py
+++ /dev/null
@@ -1,151 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from io import BytesIO, StringIO
-from pathlib import Path
-
-from ..utils import is_list_of, is_str
-from .file_client import FileClient
-from .handlers import BaseFileHandler, JsonHandler, PickleHandler, YamlHandler
-
-file_handlers = {
- 'json': JsonHandler(),
- 'yaml': YamlHandler(),
- 'yml': YamlHandler(),
- 'pickle': PickleHandler(),
- 'pkl': PickleHandler()
-}
-
-
-def load(file, file_format=None, file_client_args=None, **kwargs):
- """Load data from json/yaml/pickle files.
-
- This method provides a unified api for loading data from serialized files.
-
- Note:
- In v1.3.16 and later, ``load`` supports loading data from serialized
- files those can be storaged in different backends.
-
- Args:
- file (str or :obj:`Path` or file-like object): Filename or a file-like
- object.
- file_format (str, optional): If not specified, the file format will be
- inferred from the file extension, otherwise use the specified one.
- Currently supported formats include "json", "yaml/yml" and
- "pickle/pkl".
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> load('/path/of/your/file') # file is storaged in disk
- >>> load('https://path/of/your/file') # file is storaged in Internet
- >>> load('s3://path/of/your/file') # file is storaged in petrel
-
- Returns:
- The content from the file.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None and is_str(file):
- file_format = file.split('.')[-1]
- if file_format not in file_handlers:
- raise TypeError(f'Unsupported format: {file_format}')
-
- handler = file_handlers[file_format]
- if is_str(file):
- file_client = FileClient.infer_client(file_client_args, file)
- if handler.str_like:
- with StringIO(file_client.get_text(file)) as f:
- obj = handler.load_from_fileobj(f, **kwargs)
- else:
- with BytesIO(file_client.get(file)) as f:
- obj = handler.load_from_fileobj(f, **kwargs)
- elif hasattr(file, 'read'):
- obj = handler.load_from_fileobj(file, **kwargs)
- else:
- raise TypeError('"file" must be a filepath str or a file-object')
- return obj
-
-
-def dump(obj, file=None, file_format=None, file_client_args=None, **kwargs):
- """Dump data to json/yaml/pickle strings or files.
-
- This method provides a unified api for dumping data as strings or to files,
- and also supports custom arguments for each file format.
-
- Note:
- In v1.3.16 and later, ``dump`` supports dumping data as strings or to
- files which is saved to different backends.
-
- Args:
- obj (any): The python object to be dumped.
- file (str or :obj:`Path` or file-like object, optional): If not
- specified, then the object is dumped to a str, otherwise to a file
- specified by the filename or file-like object.
- file_format (str, optional): Same as :func:`load`.
- file_client_args (dict, optional): Arguments to instantiate a
- FileClient. See :class:`mmcv.fileio.FileClient` for details.
- Default: None.
-
- Examples:
- >>> dump('hello world', '/path/of/your/file') # disk
- >>> dump('hello world', 's3://path/of/your/file') # ceph or petrel
-
- Returns:
- bool: True for success, False otherwise.
- """
- if isinstance(file, Path):
- file = str(file)
- if file_format is None:
- if is_str(file):
- file_format = file.split('.')[-1]
- elif file is None:
- raise ValueError(
- 'file_format must be specified since file is None')
- if file_format not in file_handlers:
- raise TypeError(f'Unsupported format: {file_format}')
-
- handler = file_handlers[file_format]
- if file is None:
- return handler.dump_to_str(obj, **kwargs)
- elif is_str(file):
- file_client = FileClient.infer_client(file_client_args, file)
- if handler.str_like:
- with StringIO() as f:
- handler.dump_to_fileobj(obj, f, **kwargs)
- file_client.put_text(f.getvalue(), file)
- else:
- with BytesIO() as f:
- handler.dump_to_fileobj(obj, f, **kwargs)
- file_client.put(f.getvalue(), file)
- elif hasattr(file, 'write'):
- handler.dump_to_fileobj(obj, file, **kwargs)
- else:
- raise TypeError('"file" must be a filename str or a file-object')
-
-
-def _register_handler(handler, file_formats):
- """Register a handler for some file extensions.
-
- Args:
- handler (:obj:`BaseFileHandler`): Handler to be registered.
- file_formats (str or list[str]): File formats to be handled by this
- handler.
- """
- if not isinstance(handler, BaseFileHandler):
- raise TypeError(
- f'handler must be a child of BaseFileHandler, not {type(handler)}')
- if isinstance(file_formats, str):
- file_formats = [file_formats]
- if not is_list_of(file_formats, str):
- raise TypeError('file_formats must be a str or a list of str')
- for ext in file_formats:
- file_handlers[ext] = handler
-
-
-def register_handler(file_formats, **kwargs):
-
- def wrap(cls):
- _register_handler(cls(**kwargs), file_formats)
- return cls
-
- return wrap
diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py
deleted file mode 100644
index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000
--- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class NeptuneLoggerHook(LoggerHook):
- """Class to log metrics to NeptuneAI.
-
- It requires `neptune-client` to be installed.
-
- Args:
- init_kwargs (dict): a dict contains the initialization keys as below:
- - project (str): Name of a project in a form of
- namespace/project_name. If None, the value of
- NEPTUNE_PROJECT environment variable will be taken.
- - api_token (str): User’s API token.
- If None, the value of NEPTUNE_API_TOKEN environment
- variable will be taken. Note: It is strongly recommended
- to use NEPTUNE_API_TOKEN environment variable rather than
- placing your API token in plain text in your source code.
- - name (str, optional, default is 'Untitled'): Editable name of
- the run. Name is displayed in the run's Details and in
- Runs table as a column.
- Check https://docs.neptune.ai/api-reference/neptune#init for
- more init arguments.
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of last iterations in each epoch
- if less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging
- by_epoch (bool): Whether EpochBasedRunner is used.
-
- .. _NeptuneAI:
- https://docs.neptune.ai/you-should-know/logging-metadata
- """
-
- def __init__(self,
- init_kwargs=None,
- interval=10,
- ignore_last=True,
- reset_flag=True,
- with_step=True,
- by_epoch=True):
-
- super(NeptuneLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.import_neptune()
- self.init_kwargs = init_kwargs
- self.with_step = with_step
-
- def import_neptune(self):
- try:
- import neptune.new as neptune
- except ImportError:
- raise ImportError(
- 'Please run "pip install neptune-client" to install neptune')
- self.neptune = neptune
- self.run = None
-
- @master_only
- def before_run(self, runner):
- if self.init_kwargs:
- self.run = self.neptune.init(**self.init_kwargs)
- else:
- self.run = self.neptune.init()
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner)
- if tags:
- for tag_name, tag_value in tags.items():
- if self.with_step:
- self.run[tag_name].log(
- tag_value, step=self.get_iter(runner))
- else:
- tags['global_step'] = self.get_iter(runner)
- self.run[tag_name].log(tags)
-
- @master_only
- def after_run(self, runner):
- self.run.stop()
diff --git a/spaces/wangguanlin/vits_Kazari/attentions.py b/spaces/wangguanlin/vits_Kazari/attentions.py
deleted file mode 100644
index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000
--- a/spaces/wangguanlin/vits_Kazari/attentions.py
+++ /dev/null
@@ -1,300 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
- # Concat extra elements so to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
- # padd along column
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
- # add 0's in the beginning that will skew the elements after reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/wanghuoto/gogoai/src/components/chat-message.tsx b/spaces/wanghuoto/gogoai/src/components/chat-message.tsx
deleted file mode 100644
index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000
--- a/spaces/wanghuoto/gogoai/src/components/chat-message.tsx
+++ /dev/null
@@ -1,93 +0,0 @@
-import remarkGfm from 'remark-gfm'
-import remarkMath from 'remark-math'
-import supersub from 'remark-supersub'
-import remarkBreaks from 'remark-breaks'
-import { cn } from '@/lib/utils'
-import { CodeBlock } from '@/components/ui/codeblock'
-import { MemoizedReactMarkdown } from '@/components/markdown'
-import { LearnMore } from './learn-more'
-import { ChatMessageModel } from '@/lib/bots/bing/types'
-import { useEffect } from 'react'
-import { TurnCounter } from './turn-counter'
-
-export interface ChatMessageProps {
- message: ChatMessageModel
-}
-
-export function ChatMessage({ message, ...props }: ChatMessageProps) {
- useEffect(() => {
- if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) {
- window.scrollBy(0, 200)
- }
- }, [message.text])
-
- return message.text ? (
-
- Streaming / Unlimited conversations / Save history / Preset prompts / Chat with files / Web search
- LaTeX rendering / Table rendering / Code highlighting
- Auto dark mode / Adaptive web interface / WeChat-like theme
- Multi-parameters tuning / Multi-API-Key support / Multi-user support
- Compatible with GPT-4 / Local deployment for LLMs
-
-
-## Usage Tips
-
-- To better control the ChatGPT, use System Prompt.
-- To use a Prompt Template, select the Prompt Template Collection file first, and then choose certain prompt from the drop-down menu.
-- To try again if the response is unsatisfactory, use `🔄 Regenerate` button.
-- To start a new line in the input box, press Shift + Enter keys.
-- To quickly switch between input history, press ↑ and ↓ key in the input box.
-- To deploy the program onto a server, change the last line of the program to `demo.launch(server_name="0.0.0.0", server_port=)`.
-- To get a public shared link, change the last line of the program to `demo.launch(share=True)`. Please be noted that the program must be running in order to be accessed via a public link.
-- To use it in Hugging Face Spaces: It is recommended to **Duplicate Space** and run the program in your own Space for a faster and more secure experience.
-
-## Installation
-
-```shell
-git clone https://github.com/GaiZhenbiao/ChuanhuChatGPT.git
-cd ChuanhuChatGPT
-pip install -r requirements.txt
-```
-
-Then make a copy of `config_example.json`, rename it to `config.json`, and then fill in your API-Key and other settings in the file.
-
-```shell
-python ChuanhuChatbot.py
-```
-
-A browser window will open and you will be able to chat with ChatGPT.
-
-> **Note**
->
-> Please check our [wiki page](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用教程) for detailed instructions.
-
-## Troubleshooting
-
-When you encounter problems, you should try manually pulling the latest changes of this project first. The steps are as follows:
-
-1. Download the latest code archive by clicking on `Download ZIP` on the webpage, or
- ```shell
- git pull https://github.com/GaiZhenbiao/ChuanhuChatGPT.git main -f
- ```
-2. Try installing the dependencies again (as this project may have introduced new dependencies)
- ```
- pip install -r requirements.txt
- ```
-3. Update Gradio
- ```
- pip install gradio --upgrade --force-reinstall
- ```
-
-Generally, you can solve most problems by following these steps.
-
-If the problem still exists, please refer to this page: [Frequently Asked Questions (FAQ)](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/常见问题)
-
-This page lists almost all the possible problems and solutions. Please read it carefully.
-
-## More Information
-
-More information could be found in our [wiki](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki):
-
-- [How to contribute a translation](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/Localization)
-- [How to make a contribution](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/贡献指南)
-- [How to cite the project](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可#如何引用该项目)
-- [Project changelog](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/更新日志)
-- [Project license](https://github.com/GaiZhenbiao/ChuanhuChatGPT/wiki/使用许可)
-
-## Starchart
-
-[](https://star-history.com/#GaiZhenbiao/ChuanhuChatGPT&Date)
-
-## Contributors
-
-
-
-
-
-## Sponsor
-
-🐯 If you find this project helpful, feel free to buy me a coke or a cup of coffee~
-
-
-
-
diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/ArrangeView/ArrangeViewCanvas/Lines.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/ArrangeView/ArrangeViewCanvas/Lines.tsx
deleted file mode 100644
index b2c05ac07d95f12fbc2b3cc416a44e90cce610f7..0000000000000000000000000000000000000000
--- a/spaces/yderre-aubay/midi-player-demo/src/main/components/ArrangeView/ArrangeViewCanvas/Lines.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-import { Rectangles } from "@ryohey/webgl-react"
-import Color from "color"
-import { observer } from "mobx-react-lite"
-import { FC, useMemo } from "react"
-import { IRect } from "../../../../common/geometry"
-import { colorToVec4 } from "../../../gl/color"
-import { useStores } from "../../../hooks/useStores"
-import { useTheme } from "../../../hooks/useTheme"
-
-export const Lines: FC<{ width: number; zIndex: number }> = observer(
- ({ width, zIndex }) => {
- const {
- song: { tracks },
- arrangeViewStore: { trackHeight },
- } = useStores()
- const theme = useTheme()
-
- const hline = (y: number): IRect => ({
- x: 0,
- y,
- width,
- height: 1,
- })
-
- const rects = useMemo(
- () => tracks.map((_, i) => trackHeight * (i + 1) - 1).map(hline),
- [tracks, width, trackHeight],
- )
-
- const color = colorToVec4(Color(theme.dividerColor))
-
- return
- },
-)
diff --git a/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py b/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py
deleted file mode 100644
index 02593556d88a90232bbe55a062875f4af4520621..0000000000000000000000000000000000000000
--- a/spaces/ygangang/CodeFormer/CodeFormer/facelib/detection/retinaface/retinaface.py
+++ /dev/null
@@ -1,370 +0,0 @@
-import cv2
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from PIL import Image
-from torchvision.models._utils import IntermediateLayerGetter as IntermediateLayerGetter
-
-from facelib.detection.align_trans import get_reference_facial_points, warp_and_crop_face
-from facelib.detection.retinaface.retinaface_net import FPN, SSH, MobileNetV1, make_bbox_head, make_class_head, make_landmark_head
-from facelib.detection.retinaface.retinaface_utils import (PriorBox, batched_decode, batched_decode_landm, decode, decode_landm,
- py_cpu_nms)
-
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-
-
-def generate_config(network_name):
-
- cfg_mnet = {
- 'name': 'mobilenet0.25',
- 'min_sizes': [[16, 32], [64, 128], [256, 512]],
- 'steps': [8, 16, 32],
- 'variance': [0.1, 0.2],
- 'clip': False,
- 'loc_weight': 2.0,
- 'gpu_train': True,
- 'batch_size': 32,
- 'ngpu': 1,
- 'epoch': 250,
- 'decay1': 190,
- 'decay2': 220,
- 'image_size': 640,
- 'return_layers': {
- 'stage1': 1,
- 'stage2': 2,
- 'stage3': 3
- },
- 'in_channel': 32,
- 'out_channel': 64
- }
-
- cfg_re50 = {
- 'name': 'Resnet50',
- 'min_sizes': [[16, 32], [64, 128], [256, 512]],
- 'steps': [8, 16, 32],
- 'variance': [0.1, 0.2],
- 'clip': False,
- 'loc_weight': 2.0,
- 'gpu_train': True,
- 'batch_size': 24,
- 'ngpu': 4,
- 'epoch': 100,
- 'decay1': 70,
- 'decay2': 90,
- 'image_size': 840,
- 'return_layers': {
- 'layer2': 1,
- 'layer3': 2,
- 'layer4': 3
- },
- 'in_channel': 256,
- 'out_channel': 256
- }
-
- if network_name == 'mobile0.25':
- return cfg_mnet
- elif network_name == 'resnet50':
- return cfg_re50
- else:
- raise NotImplementedError(f'network_name={network_name}')
-
-
-class RetinaFace(nn.Module):
-
- def __init__(self, network_name='resnet50', half=False, phase='test'):
- super(RetinaFace, self).__init__()
- self.half_inference = half
- cfg = generate_config(network_name)
- self.backbone = cfg['name']
-
- self.model_name = f'retinaface_{network_name}'
- self.cfg = cfg
- self.phase = phase
- self.target_size, self.max_size = 1600, 2150
- self.resize, self.scale, self.scale1 = 1., None, None
- self.mean_tensor = torch.tensor([[[[104.]], [[117.]], [[123.]]]]).to(device)
- self.reference = get_reference_facial_points(default_square=True)
- # Build network.
- backbone = None
- if cfg['name'] == 'mobilenet0.25':
- backbone = MobileNetV1()
- self.body = IntermediateLayerGetter(backbone, cfg['return_layers'])
- elif cfg['name'] == 'Resnet50':
- import torchvision.models as models
- backbone = models.resnet50(pretrained=False)
- self.body = IntermediateLayerGetter(backbone, cfg['return_layers'])
-
- in_channels_stage2 = cfg['in_channel']
- in_channels_list = [
- in_channels_stage2 * 2,
- in_channels_stage2 * 4,
- in_channels_stage2 * 8,
- ]
-
- out_channels = cfg['out_channel']
- self.fpn = FPN(in_channels_list, out_channels)
- self.ssh1 = SSH(out_channels, out_channels)
- self.ssh2 = SSH(out_channels, out_channels)
- self.ssh3 = SSH(out_channels, out_channels)
-
- self.ClassHead = make_class_head(fpn_num=3, inchannels=cfg['out_channel'])
- self.BboxHead = make_bbox_head(fpn_num=3, inchannels=cfg['out_channel'])
- self.LandmarkHead = make_landmark_head(fpn_num=3, inchannels=cfg['out_channel'])
-
- self.to(device)
- self.eval()
- if self.half_inference:
- self.half()
-
- def forward(self, inputs):
- out = self.body(inputs)
-
- if self.backbone == 'mobilenet0.25' or self.backbone == 'Resnet50':
- out = list(out.values())
- # FPN
- fpn = self.fpn(out)
-
- # SSH
- feature1 = self.ssh1(fpn[0])
- feature2 = self.ssh2(fpn[1])
- feature3 = self.ssh3(fpn[2])
- features = [feature1, feature2, feature3]
-
- bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1)
- classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)], dim=1)
- tmp = [self.LandmarkHead[i](feature) for i, feature in enumerate(features)]
- ldm_regressions = (torch.cat(tmp, dim=1))
-
- if self.phase == 'train':
- output = (bbox_regressions, classifications, ldm_regressions)
- else:
- output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions)
- return output
-
- def __detect_faces(self, inputs):
- # get scale
- height, width = inputs.shape[2:]
- self.scale = torch.tensor([width, height, width, height], dtype=torch.float32).to(device)
- tmp = [width, height, width, height, width, height, width, height, width, height]
- self.scale1 = torch.tensor(tmp, dtype=torch.float32).to(device)
-
- # forawrd
- inputs = inputs.to(device)
- if self.half_inference:
- inputs = inputs.half()
- loc, conf, landmarks = self(inputs)
-
- # get priorbox
- priorbox = PriorBox(self.cfg, image_size=inputs.shape[2:])
- priors = priorbox.forward().to(device)
-
- return loc, conf, landmarks, priors
-
- # single image detection
- def transform(self, image, use_origin_size):
- # convert to opencv format
- if isinstance(image, Image.Image):
- image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
- image = image.astype(np.float32)
-
- # testing scale
- im_size_min = np.min(image.shape[0:2])
- im_size_max = np.max(image.shape[0:2])
- resize = float(self.target_size) / float(im_size_min)
-
- # prevent bigger axis from being more than max_size
- if np.round(resize * im_size_max) > self.max_size:
- resize = float(self.max_size) / float(im_size_max)
- resize = 1 if use_origin_size else resize
-
- # resize
- if resize != 1:
- image = cv2.resize(image, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)
-
- # convert to torch.tensor format
- # image -= (104, 117, 123)
- image = image.transpose(2, 0, 1)
- image = torch.from_numpy(image).unsqueeze(0)
-
- return image, resize
-
- def detect_faces(
- self,
- image,
- conf_threshold=0.8,
- nms_threshold=0.4,
- use_origin_size=True,
- ):
- """
- Params:
- imgs: BGR image
- """
- image, self.resize = self.transform(image, use_origin_size)
- image = image.to(device)
- if self.half_inference:
- image = image.half()
- image = image - self.mean_tensor
-
- loc, conf, landmarks, priors = self.__detect_faces(image)
-
- boxes = decode(loc.data.squeeze(0), priors.data, self.cfg['variance'])
- boxes = boxes * self.scale / self.resize
- boxes = boxes.cpu().numpy()
-
- scores = conf.squeeze(0).data.cpu().numpy()[:, 1]
-
- landmarks = decode_landm(landmarks.squeeze(0), priors, self.cfg['variance'])
- landmarks = landmarks * self.scale1 / self.resize
- landmarks = landmarks.cpu().numpy()
-
- # ignore low scores
- inds = np.where(scores > conf_threshold)[0]
- boxes, landmarks, scores = boxes[inds], landmarks[inds], scores[inds]
-
- # sort
- order = scores.argsort()[::-1]
- boxes, landmarks, scores = boxes[order], landmarks[order], scores[order]
-
- # do NMS
- bounding_boxes = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False)
- keep = py_cpu_nms(bounding_boxes, nms_threshold)
- bounding_boxes, landmarks = bounding_boxes[keep, :], landmarks[keep]
- # self.t['forward_pass'].toc()
- # print(self.t['forward_pass'].average_time)
- # import sys
- # sys.stdout.flush()
- return np.concatenate((bounding_boxes, landmarks), axis=1)
-
- def __align_multi(self, image, boxes, landmarks, limit=None):
-
- if len(boxes) < 1:
- return [], []
-
- if limit:
- boxes = boxes[:limit]
- landmarks = landmarks[:limit]
-
- faces = []
- for landmark in landmarks:
- facial5points = [[landmark[2 * j], landmark[2 * j + 1]] for j in range(5)]
-
- warped_face = warp_and_crop_face(np.array(image), facial5points, self.reference, crop_size=(112, 112))
- faces.append(warped_face)
-
- return np.concatenate((boxes, landmarks), axis=1), faces
-
- def align_multi(self, img, conf_threshold=0.8, limit=None):
-
- rlt = self.detect_faces(img, conf_threshold=conf_threshold)
- boxes, landmarks = rlt[:, 0:5], rlt[:, 5:]
-
- return self.__align_multi(img, boxes, landmarks, limit)
-
- # batched detection
- def batched_transform(self, frames, use_origin_size):
- """
- Arguments:
- frames: a list of PIL.Image, or torch.Tensor(shape=[n, h, w, c],
- type=np.float32, BGR format).
- use_origin_size: whether to use origin size.
- """
- from_PIL = True if isinstance(frames[0], Image.Image) else False
-
- # convert to opencv format
- if from_PIL:
- frames = [cv2.cvtColor(np.asarray(frame), cv2.COLOR_RGB2BGR) for frame in frames]
- frames = np.asarray(frames, dtype=np.float32)
-
- # testing scale
- im_size_min = np.min(frames[0].shape[0:2])
- im_size_max = np.max(frames[0].shape[0:2])
- resize = float(self.target_size) / float(im_size_min)
-
- # prevent bigger axis from being more than max_size
- if np.round(resize * im_size_max) > self.max_size:
- resize = float(self.max_size) / float(im_size_max)
- resize = 1 if use_origin_size else resize
-
- # resize
- if resize != 1:
- if not from_PIL:
- frames = F.interpolate(frames, scale_factor=resize)
- else:
- frames = [
- cv2.resize(frame, None, None, fx=resize, fy=resize, interpolation=cv2.INTER_LINEAR)
- for frame in frames
- ]
-
- # convert to torch.tensor format
- if not from_PIL:
- frames = frames.transpose(1, 2).transpose(1, 3).contiguous()
- else:
- frames = frames.transpose((0, 3, 1, 2))
- frames = torch.from_numpy(frames)
-
- return frames, resize
-
- def batched_detect_faces(self, frames, conf_threshold=0.8, nms_threshold=0.4, use_origin_size=True):
- """
- Arguments:
- frames: a list of PIL.Image, or np.array(shape=[n, h, w, c],
- type=np.uint8, BGR format).
- conf_threshold: confidence threshold.
- nms_threshold: nms threshold.
- use_origin_size: whether to use origin size.
- Returns:
- final_bounding_boxes: list of np.array ([n_boxes, 5],
- type=np.float32).
- final_landmarks: list of np.array ([n_boxes, 10], type=np.float32).
- """
- # self.t['forward_pass'].tic()
- frames, self.resize = self.batched_transform(frames, use_origin_size)
- frames = frames.to(device)
- frames = frames - self.mean_tensor
-
- b_loc, b_conf, b_landmarks, priors = self.__detect_faces(frames)
-
- final_bounding_boxes, final_landmarks = [], []
-
- # decode
- priors = priors.unsqueeze(0)
- b_loc = batched_decode(b_loc, priors, self.cfg['variance']) * self.scale / self.resize
- b_landmarks = batched_decode_landm(b_landmarks, priors, self.cfg['variance']) * self.scale1 / self.resize
- b_conf = b_conf[:, :, 1]
-
- # index for selection
- b_indice = b_conf > conf_threshold
-
- # concat
- b_loc_and_conf = torch.cat((b_loc, b_conf.unsqueeze(-1)), dim=2).float()
-
- for pred, landm, inds in zip(b_loc_and_conf, b_landmarks, b_indice):
-
- # ignore low scores
- pred, landm = pred[inds, :], landm[inds, :]
- if pred.shape[0] == 0:
- final_bounding_boxes.append(np.array([], dtype=np.float32))
- final_landmarks.append(np.array([], dtype=np.float32))
- continue
-
- # sort
- # order = score.argsort(descending=True)
- # box, landm, score = box[order], landm[order], score[order]
-
- # to CPU
- bounding_boxes, landm = pred.cpu().numpy(), landm.cpu().numpy()
-
- # NMS
- keep = py_cpu_nms(bounding_boxes, nms_threshold)
- bounding_boxes, landmarks = bounding_boxes[keep, :], landm[keep]
-
- # append
- final_bounding_boxes.append(bounding_boxes)
- final_landmarks.append(landmarks)
- # self.t['forward_pass'].toc(average=True)
- # self.batch_time += self.t['forward_pass'].diff
- # self.total_frame += len(frames)
- # print(self.batch_time / self.total_frame)
-
- return final_bounding_boxes, final_landmarks
diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/data/config.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/data/config.py
deleted file mode 100644
index e57cdc530e3d78c4aa6310985c90c5ee125f8f01..0000000000000000000000000000000000000000
--- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/data/config.py
+++ /dev/null
@@ -1,42 +0,0 @@
-# config.py
-
-cfg_mnet = {
- 'name': 'mobilenet0.25',
- 'min_sizes': [[16, 32], [64, 128], [256, 512]],
- 'steps': [8, 16, 32],
- 'variance': [0.1, 0.2],
- 'clip': False,
- 'loc_weight': 2.0,
- 'gpu_train': True,
- 'batch_size': 32,
- 'ngpu': 1,
- 'epoch': 250,
- 'decay1': 190,
- 'decay2': 220,
- 'image_size': 640,
- 'pretrain': False,
- 'return_layers': {'stage1': 1, 'stage2': 2, 'stage3': 3},
- 'in_channel': 32,
- 'out_channel': 64
-}
-
-cfg_re50 = {
- 'name': 'Resnet50',
- 'min_sizes': [[16, 32], [64, 128], [256, 512]],
- 'steps': [8, 16, 32],
- 'variance': [0.1, 0.2],
- 'clip': False,
- 'loc_weight': 2.0,
- 'gpu_train': True,
- 'batch_size': 24,
- 'ngpu': 4,
- 'epoch': 100,
- 'decay1': 70,
- 'decay2': 90,
- 'image_size': 840,
- 'pretrain': False,
- 'return_layers': {'layer2': 1, 'layer3': 2, 'layer4': 3},
- 'in_channel': 256,
- 'out_channel': 256
-}
-
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/debug_utils.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/debug_utils.py
deleted file mode 100644
index dbceb1d849076999c6821556accaea05e53a9ff9..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/debug_utils.py
+++ /dev/null
@@ -1,346 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import collections
-
-from .utils import ExplicitEnum, is_torch_available, logging
-
-
-if is_torch_available():
- import torch
-
-
-logger = logging.get_logger(__name__)
-
-
-class DebugUnderflowOverflow:
- """
- This debug class helps detect and understand where the model starts getting very large or very small, and more
- importantly `nan` or `inf` weight and activation elements.
-
- There are 2 working modes:
-
- 1. Underflow/overflow detection (default)
- 2. Specific batch absolute min/max tracing without detection
-
- Mode 1: Underflow/overflow detection
-
- To activate the underflow/overflow detection, initialize the object with the model :
-
- ```python
- debug_overflow = DebugUnderflowOverflow(model)
- ```
-
- then run the training as normal and if `nan` or `inf` gets detected in at least one of the weight, input or output
- elements this module will throw an exception and will print `max_frames_to_save` frames that lead to this event,
- each frame reporting
-
- 1. the fully qualified module name plus the class name whose `forward` was run
- 2. the absolute min and max value of all elements for each module weights, and the inputs and output
-
- For example, here is the header and the last few frames in detection report for `google/mt5-small` run in fp16
- mixed precision :
-
- ```
- Detected inf/nan during batch_number=0
- Last 21 forward frames:
- abs min abs max metadata
- [...]
- encoder.block.2.layer.1.DenseReluDense.wi_0 Linear
- 2.17e-07 4.50e+00 weight
- 1.79e-06 4.65e+00 input[0]
- 2.68e-06 3.70e+01 output
- encoder.block.2.layer.1.DenseReluDense.wi_1 Linear
- 8.08e-07 2.66e+01 weight
- 1.79e-06 4.65e+00 input[0]
- 1.27e-04 2.37e+02 output
- encoder.block.2.layer.1.DenseReluDense.wo Linear
- 1.01e-06 6.44e+00 weight
- 0.00e+00 9.74e+03 input[0]
- 3.18e-04 6.27e+04 output
- encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense
- 1.79e-06 4.65e+00 input[0]
- 3.18e-04 6.27e+04 output
- encoder.block.2.layer.1.dropout Dropout
- 3.18e-04 6.27e+04 input[0]
- 0.00e+00 inf output
- ```
-
- You can see here, that `T5DenseGatedGeluDense.forward` resulted in output activations, whose absolute max value was
- around 62.7K, which is very close to fp16's top limit of 64K. In the next frame we have `Dropout` which
- renormalizes the weights, after it zeroed some of the elements, which pushes the absolute max value to more than
- 64K, and we get an overlow.
-
- As you can see it's the previous frames that we need to look into when the numbers start going into very large for
- fp16 numbers.
-
- The tracking is done in a forward hook, which gets invoked immediately after `forward` has completed.
-
- By default the last 21 frames are printed. You can change the default to adjust for your needs. For example :
-
- ```python
- debug_overflow = DebugUnderflowOverflow(model, max_frames_to_save=100)
- ```
-
- To validate that you have set up this debugging feature correctly, and you intend to use it in a training that
- may take hours to complete, first run it with normal tracing enabled for one of a few batches as explained in
- the next section.
-
-
- Mode 2. Specific batch absolute min/max tracing without detection
-
- The second work mode is per-batch tracing with the underflow/overflow detection feature turned off.
-
- Let's say you want to watch the absolute min and max values for all the ingredients of each `forward` call of a
- given batch, and only do that for batches 1 and 3. Then you instantiate this class as :
-
- ```python
- debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3])
- ```
-
- And now full batches 1 and 3 will be traced using the same format as explained above. Batches are 0-indexed.
-
- This is helpful if you know that the program starts misbehaving after a certain batch number, so you can
- fast-forward right to that area.
-
-
- Early stopping:
-
- You can also specify the batch number after which to stop the training, with :
-
- ```python
- debug_overflow = DebugUnderflowOverflow(model, trace_batch_nums=[1, 3], abort_after_batch_num=3)
- ```
-
- This feature is mainly useful in the tracing mode, but you can use it for any mode.
-
-
- **Performance**:
-
- As this module measures absolute `min`/``max` of each weight of the model on every forward it'll slow the training
- down. Therefore remember to turn it off once the debugging needs have been met.
-
- Args:
- model (`nn.Module`):
- The model to debug.
- max_frames_to_save (`int`, *optional*, defaults to 21):
- How many frames back to record
- trace_batch_nums(`List[int]`, *optional*, defaults to `[]`):
- Which batch numbers to trace (turns detection off)
- abort_after_batch_num (`int``, *optional*):
- Whether to abort after a certain batch number has finished
- """
-
- def __init__(self, model, max_frames_to_save=21, trace_batch_nums=[], abort_after_batch_num=None):
- self.model = model
- self.trace_batch_nums = trace_batch_nums
- self.abort_after_batch_num = abort_after_batch_num
-
- # keep a LIFO buffer of frames to dump as soon as inf/nan is encountered to give context to the problem emergence
- self.frames = collections.deque([], max_frames_to_save)
- self.frame = []
- self.batch_number = 0
- self.total_calls = 0
- self.detected_overflow = False
- self.prefix = " "
-
- self.analyse_model()
-
- self.register_forward_hook()
-
- def save_frame(self, frame=None):
- if frame is not None:
- self.expand_frame(frame)
- self.frames.append("\n".join(self.frame))
- self.frame = [] # start a new frame
-
- def expand_frame(self, line):
- self.frame.append(line)
-
- def trace_frames(self):
- print("\n".join(self.frames))
- self.frames = []
-
- def reset_saved_frames(self):
- self.frames = []
-
- def dump_saved_frames(self):
- print(f"\nDetected inf/nan during batch_number={self.batch_number}")
- print(f"Last {len(self.frames)} forward frames:")
- print(f"{'abs min':8} {'abs max':8} metadata")
- print("\n".join(self.frames))
- print("\n\n")
- self.frames = []
-
- def analyse_model(self):
- # extract the fully qualified module names, to be able to report at run time. e.g.:
- # encoder.block.2.layer.0.SelfAttention.o
- #
- # for shared weights only the first shared module name will be registered
- self.module_names = {m: name for name, m in self.model.named_modules()}
- # self.longest_module_name = max(len(v) for v in self.module_names.values())
-
- def analyse_variable(self, var, ctx):
- if torch.is_tensor(var):
- self.expand_frame(get_abs_min_max(var, ctx))
- if detect_overflow(var, ctx):
- self.detected_overflow = True
- elif var is None:
- self.expand_frame(f"{'None':>17} {ctx}")
- else:
- self.expand_frame(f"{'not a tensor':>17} {ctx}")
-
- def batch_start_frame(self):
- self.expand_frame(f"\n\n{self.prefix} *** Starting batch number={self.batch_number} ***")
- self.expand_frame(f"{'abs min':8} {'abs max':8} metadata")
-
- def batch_end_frame(self):
- self.expand_frame(f"{self.prefix} *** Finished batch number={self.batch_number-1} ***\n\n")
-
- def create_frame(self, module, input, output):
- self.expand_frame(f"{self.prefix} {self.module_names[module]} {module.__class__.__name__}")
-
- # params
- for name, p in module.named_parameters(recurse=False):
- self.analyse_variable(p, name)
-
- # inputs
- if isinstance(input, tuple):
- for i, x in enumerate(input):
- self.analyse_variable(x, f"input[{i}]")
- else:
- self.analyse_variable(input, "input")
-
- # outputs
- if isinstance(output, tuple):
- for i, x in enumerate(output):
- # possibly a tuple of tuples
- if isinstance(x, tuple):
- for j, y in enumerate(x):
- self.analyse_variable(y, f"output[{i}][{j}]")
- else:
- self.analyse_variable(x, f"output[{i}]")
- else:
- self.analyse_variable(output, "output")
-
- self.save_frame()
-
- def register_forward_hook(self):
- self.model.apply(self._register_forward_hook)
-
- def _register_forward_hook(self, module):
- module.register_forward_hook(self.forward_hook)
-
- def forward_hook(self, module, input, output):
- # - input is a tuple of packed inputs (could be non-Tensors)
- # - output could be a Tensor or a tuple of Tensors and non-Tensors
-
- last_frame_of_batch = False
-
- trace_mode = True if self.batch_number in self.trace_batch_nums else False
- if trace_mode:
- self.reset_saved_frames()
-
- if self.total_calls == 0:
- self.batch_start_frame()
- self.total_calls += 1
-
- # count batch numbers - the very first forward hook of the batch will be called when the
- # batch completes - i.e. it gets called very last - we know this batch has finished
- if module == self.model:
- self.batch_number += 1
- last_frame_of_batch = True
-
- self.create_frame(module, input, output)
-
- # if last_frame_of_batch:
- # self.batch_end_frame()
-
- if trace_mode:
- self.trace_frames()
-
- if last_frame_of_batch:
- self.batch_start_frame()
-
- if self.detected_overflow and not trace_mode:
- self.dump_saved_frames()
-
- # now we can abort, as it's pointless to continue running
- raise ValueError(
- "DebugUnderflowOverflow: inf/nan detected, aborting as there is no point running further. "
- "Please scroll up above this traceback to see the activation values prior to this event."
- )
-
- # abort after certain batch if requested to do so
- if self.abort_after_batch_num is not None and self.batch_number > self.abort_after_batch_num:
- raise ValueError(
- f"DebugUnderflowOverflow: aborting after {self.batch_number} batches due to"
- f" `abort_after_batch_num={self.abort_after_batch_num}` arg"
- )
-
-
-def get_abs_min_max(var, ctx):
- abs_var = var.abs()
- return f"{abs_var.min():8.2e} {abs_var.max():8.2e} {ctx}"
-
-
-def detect_overflow(var, ctx):
- """
- Report whether the tensor contains any `nan` or `inf` entries.
-
- This is useful for detecting overflows/underflows and best to call right after the function that did some math that
- modified the tensor in question.
-
- This function contains a few other helper features that you can enable and tweak directly if you want to track
- various other things.
-
- Args:
- var: the tensor variable to check
- ctx: the message to print as a context
-
- Return:
- `True` if `inf` or `nan` was detected, `False` otherwise
- """
- detected = False
- if torch.isnan(var).any().item():
- detected = True
- print(f"{ctx} has nans")
- if torch.isinf(var).any().item():
- detected = True
- print(f"{ctx} has infs")
-
- # if needed to monitor large elements can enable the following
- if 0: # and detected:
- n100 = var[torch.ge(var.abs(), 100)]
- if n100.numel() > 0:
- print(f"{ctx}: n100={n100.numel()}")
- n1000 = var[torch.ge(var.abs(), 1000)]
- if n1000.numel() > 0:
- print(f"{ctx}: n1000={n1000.numel()}")
- n10000 = var[torch.ge(var.abs(), 10000)]
- if n10000.numel() > 0:
- print(f"{ctx}: n10000={n10000.numel()}")
-
- if 0:
- print(f"min={var.min():9.2e} max={var.max():9.2e}")
-
- if 0:
- print(f"min={var.min():9.2e} max={var.max():9.2e} var={var.var():9.2e} mean={var.mean():9.2e} ({ctx})")
-
- return detected
-
-
-class DebugOption(ExplicitEnum):
- UNDERFLOW_OVERFLOW = "underflow_overflow"
- TPU_METRICS_DEBUG = "tpu_metrics_debug"
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/encoder_decoder/modeling_encoder_decoder.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/encoder_decoder/modeling_encoder_decoder.py
deleted file mode 100644
index 3548e48c595a4a653034f4ef3b12dee1dcd78b40..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/encoder_decoder/modeling_encoder_decoder.py
+++ /dev/null
@@ -1,692 +0,0 @@
-# coding=utf-8
-# Copyright 2018 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Classes to support Encoder-Decoder architectures"""
-
-
-import gc
-import inspect
-import os
-import tempfile
-import warnings
-from typing import Optional, Tuple, Union
-
-import torch
-from torch import nn
-from torch.nn import CrossEntropyLoss
-
-from ...configuration_utils import PretrainedConfig
-from ...modeling_outputs import BaseModelOutput, Seq2SeqLMOutput
-from ...modeling_utils import PreTrainedModel
-from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
-from ..auto.configuration_auto import AutoConfig
-from ..auto.modeling_auto import AutoModel, AutoModelForCausalLM
-from .configuration_encoder_decoder import EncoderDecoderConfig
-
-
-logger = logging.get_logger(__name__)
-
-_CONFIG_FOR_DOC = "EncoderDecoderConfig"
-
-DEPRECATION_WARNING = (
- "Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the"
- " encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if"
- " fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the"
- " labels, no need to pass them yourself anymore."
-)
-
-ENCODER_DECODER_START_DOCSTRING = r"""
- This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the
- encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via
- [`~AutoModel.from_pretrained`] function and the decoder is loaded via [`~AutoModelForCausalLM.from_pretrained`]
- function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream
- generative task, like summarization.
-
- The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation
- tasks was shown in [Leveraging Pre-trained Checkpoints for Sequence Generation
- Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi
- Zhou, Wei Li, Peter J. Liu.
-
- After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models
- (see the examples for more information).
-
- This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
- library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads
- etc.)
-
- This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
- Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage
- and behavior.
-
- Parameters:
- config ([`EncoderDecoderConfig`]): Model configuration class with all the parameters of the model.
- Initializing with a config file does not load the weights associated with the model, only the
- configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights.
-"""
-
-ENCODER_DECODER_INPUTS_DOCSTRING = r"""
- Args:
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
- Indices of input sequence tokens in the vocabulary.
-
- Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
- attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
-
- - 1 for tokens that are **not masked**,
- - 0 for tokens that are **masked**.
-
- [What are attention masks?](../glossary#attention-mask)
- decoder_input_ids (`torch.LongTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
- Indices of decoder input sequence tokens in the vocabulary.
-
- Indices can be obtained using [`PreTrainedTokenizer`]. See [`PreTrainedTokenizer.encode`] and
- [`PreTrainedTokenizer.__call__`] for details.
-
- [What are input IDs?](../glossary#input-ids)
-
- If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see
- `past_key_values`).
-
- For training, `decoder_input_ids` are automatically created by the model by shifting the `labels` to the
- right, replacing -100 by the `pad_token_id` and prepending them with the `decoder_start_token_id`.
- decoder_attention_mask (`torch.BoolTensor` of shape `(batch_size, target_sequence_length)`, *optional*):
- Default behavior: generate a tensor that ignores pad tokens in `decoder_input_ids`. Causal mask will also
- be used by default.
- encoder_outputs (`tuple(torch.FloatTensor)`, *optional*):
- This tuple must consist of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`)
- `last_hidden_state` (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) is a tensor
- of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the
- decoder.
- past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
-
- If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that
- don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
- `decoder_input_ids` of shape `(batch_size, sequence_length)`.
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
- is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
- model's internal embedding lookup matrix.
- decoder_inputs_embeds (`torch.FloatTensor` of shape `(batch_size, target_sequence_length, hidden_size)`, *optional*):
- Optionally, instead of passing `decoder_input_ids` you can choose to directly pass an embedded
- representation. This is useful if you want more control over how to convert `decoder_input_ids` indices
- into associated vectors than the model's internal embedding lookup matrix.
- labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
- Labels for computing the masked language modeling loss for the decoder. Indices should be in `[-100, 0,
- ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored
- (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
- use_cache (`bool`, *optional*):
- If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
- `past_key_values`).
- output_attentions (`bool`, *optional*):
- Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
- tensors for more detail.
- output_hidden_states (`bool`, *optional*):
- Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
- more detail.
- return_dict (`bool`, *optional*):
- If set to `True`, the model will return a [`~utils.Seq2SeqLMOutput`] instead of a plain tuple.
- kwargs (*optional*): Remaining dictionary of keyword arguments. Keyword arguments come in two flavors:
-
- - Without a prefix which will be input as `**encoder_kwargs` for the encoder forward function.
- - With a *decoder_* prefix which will be input as `**decoder_kwargs` for the decoder forward function.
-"""
-
-
-def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
- """
- Shift input ids one token to the right.
- """
- shifted_input_ids = input_ids.new_zeros(input_ids.shape)
- shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
- if decoder_start_token_id is None:
- raise ValueError("Make sure to set the decoder_start_token_id attribute of the model's configuration.")
- shifted_input_ids[:, 0] = decoder_start_token_id
-
- if pad_token_id is None:
- raise ValueError("Make sure to set the pad_token_id attribute of the model's configuration.")
- # replace possible -100 values in labels by `pad_token_id`
- shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
-
- return shifted_input_ids
-
-
-@add_start_docstrings(ENCODER_DECODER_START_DOCSTRING)
-class EncoderDecoderModel(PreTrainedModel):
- r"""
- [`EncoderDecoderModel`] is a generic model class that will be instantiated as a transformer architecture with one
- of the base model classes of the library as encoder and another one as decoder when created with the
- :meth*~transformers.AutoModel.from_pretrained* class method for the encoder and
- :meth*~transformers.AutoModelForCausalLM.from_pretrained* class method for the decoder.
- """
- config_class = EncoderDecoderConfig
- base_model_prefix = "encoder_decoder"
- main_input_name = "input_ids"
- supports_gradient_checkpointing = True
-
- def __init__(
- self,
- config: Optional[PretrainedConfig] = None,
- encoder: Optional[PreTrainedModel] = None,
- decoder: Optional[PreTrainedModel] = None,
- ):
- if config is None and (encoder is None or decoder is None):
- raise ValueError("Either a configuration or an encoder and a decoder has to be provided.")
- if config is None:
- config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder.config, decoder.config)
- else:
- if not isinstance(config, self.config_class):
- raise ValueError(f"Config: {config} has to be of type {self.config_class}")
-
- if config.decoder.cross_attention_hidden_size is not None:
- if config.decoder.cross_attention_hidden_size != config.encoder.hidden_size:
- raise ValueError(
- "If `cross_attention_hidden_size` is specified in the decoder's configuration, it has to be equal"
- f" to the encoder's `hidden_size`. Got {config.decoder.cross_attention_hidden_size} for"
- f" `config.decoder.cross_attention_hidden_size` and {config.encoder.hidden_size} for"
- " `config.encoder.hidden_size`."
- )
-
- # initialize with config
- super().__init__(config)
-
- if encoder is None:
- from ..auto.modeling_auto import AutoModel
-
- encoder = AutoModel.from_config(config.encoder)
-
- if decoder is None:
- from ..auto.modeling_auto import AutoModelForCausalLM
-
- decoder = AutoModelForCausalLM.from_config(config.decoder)
-
- self.encoder = encoder
- self.decoder = decoder
-
- if self.encoder.config.to_dict() != self.config.encoder.to_dict():
- logger.warning(
- f"Config of the encoder: {self.encoder.__class__} is overwritten by shared encoder config:"
- f" {self.config.encoder}"
- )
- if self.decoder.config.to_dict() != self.config.decoder.to_dict():
- logger.warning(
- f"Config of the decoder: {self.decoder.__class__} is overwritten by shared decoder config:"
- f" {self.config.decoder}"
- )
-
- # make sure that the individual model's config refers to the shared config
- # so that the updates to the config will be synced
- self.encoder.config = self.config.encoder
- self.decoder.config = self.config.decoder
-
- # encoder outputs might need to be projected to different dimension for decoder
- if (
- self.encoder.config.hidden_size != self.decoder.config.hidden_size
- and self.decoder.config.cross_attention_hidden_size is None
- ):
- self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size)
-
- if self.encoder.get_output_embeddings() is not None:
- raise ValueError(
- f"The encoder {self.encoder} should not have a LM Head. Please use a model without LM Head"
- )
-
- decoder_signature = set(inspect.signature(self.decoder.forward).parameters.keys())
- if "encoder_hidden_states" not in decoder_signature:
- raise ValueError(
- "The selected decoder is not prepared for the encoder hidden states to be passed. Please see the "
- "following discussion on GitHub: https://github.com/huggingface/transformers/issues/23350"
- )
-
- # tie encoder, decoder weights if config set accordingly
- self.tie_weights()
-
- def tie_weights(self):
- # tie encoder & decoder if needed
- if self.config.tie_encoder_decoder:
- # tie encoder and decoder base model
- decoder_base_model_prefix = self.decoder.base_model_prefix
- self._tie_encoder_decoder_weights(
- self.encoder, self.decoder._modules[decoder_base_model_prefix], self.decoder.base_model_prefix
- )
-
- def _set_gradient_checkpointing(self, module, value=False):
- # call both encoder and decoder function on gradient checkpointing
- self.encoder._set_gradient_checkpointing(module, value=value)
- self.decoder._set_gradient_checkpointing(module, value=value)
-
- def get_encoder(self):
- return self.encoder
-
- def get_decoder(self):
- return self.decoder
-
- def get_input_embeddings(self):
- return self.encoder.get_input_embeddings()
-
- def get_output_embeddings(self):
- return self.decoder.get_output_embeddings()
-
- def set_output_embeddings(self, new_embeddings):
- return self.decoder.set_output_embeddings(new_embeddings)
-
- @classmethod
- def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs):
- r"""
- Example:
-
- ```python
- >>> from transformers import EncoderDecoderModel
-
- >>> model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
- ```"""
-
- from_tf = kwargs.pop("from_tf", False)
- if from_tf:
- from transformers import TFEncoderDecoderModel
-
- # a workaround to load from tensorflow checkpoint
- # Using `_tf_model` won't work, because the weight names in the encoder/decoder of `_tf_model` get
- # extended before saving those components. For example, The name of `_tf_model.encoder.vit` is
- # `[top model name]/encoder/vit`, but the name of `tf_model.encoder.vit` is `[top model name]/vit`. The
- # [top model name] is handled (stripped) by the conversion method, and the former case gets extra `encoder`,
- # which should not occur when we want to save the components alone.
- # There was a (very) ugly potential fix, which wasn't integrated to `transformers`: see
- # https://github.com/huggingface/transformers/pull/13222/commits/dbb3c9de76eee235791d2064094654637c99f36d#r697304245
- # (the change in `src/transformers/modeling_tf_utils.py`)
- _tf_model = TFEncoderDecoderModel.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
- config = _tf_model.config
-
- # Using `tf_model` instead
- encoder = _tf_model.encoder.__class__(_tf_model.config.encoder)
- decoder = _tf_model.decoder.__class__(_tf_model.config.decoder)
- # Make sure models are built
- encoder(encoder.dummy_inputs)
- decoder(decoder.dummy_inputs)
-
- # Get the variable correspondence between `_tf_model` and `encoder` and `decoder`
- encoder_variables = {}
- for v in encoder.trainable_variables + encoder.non_trainable_variables:
- encoder_variables["/".join(v.name.split("/")[1:])] = v
- decoder_variables = {}
- for v in decoder.trainable_variables + decoder.non_trainable_variables:
- decoder_variables["/".join(v.name.split("/")[1:])] = v
-
- _encoder_variables = {}
- for v in _tf_model.encoder.trainable_variables + _tf_model.encoder.non_trainable_variables:
- _encoder_variables["/".join(v.name.split("/")[2:])] = v
- _decoder_variables = {}
- for v in _tf_model.decoder.trainable_variables + _tf_model.decoder.non_trainable_variables:
- _decoder_variables["/".join(v.name.split("/")[2:])] = v
-
- # assign weight values to `encoder` and `decoder` from `_tf_model`
- for name, v in encoder_variables.items():
- v.assign(_encoder_variables[name])
- for name, v in decoder_variables.items():
- v.assign(_decoder_variables[name])
-
- tf_model = TFEncoderDecoderModel(encoder=encoder, decoder=decoder)
-
- # Deal with `enc_to_dec_proj`
- if hasattr(_tf_model, "enc_to_dec_proj"):
- tf_model(tf_model.dummy_inputs)
- tf_model.enc_to_dec_proj.kernel.assign(_tf_model.enc_to_dec_proj.kernel)
- tf_model.enc_to_dec_proj.bias.assign(_tf_model.enc_to_dec_proj.bias)
-
- with tempfile.TemporaryDirectory() as tmpdirname:
- encoder_dir = os.path.join(tmpdirname, "encoder")
- decoder_dir = os.path.join(tmpdirname, "decoder")
- tf_model.encoder.save_pretrained(encoder_dir)
- tf_model.decoder.save_pretrained(decoder_dir)
-
- if hasattr(tf_model, "enc_to_dec_proj"):
- enc_to_dec_proj_weight = torch.transpose(
- torch.from_numpy(tf_model.enc_to_dec_proj.kernel.numpy()), 1, 0
- )
- enc_to_dec_proj_bias = torch.from_numpy(tf_model.enc_to_dec_proj.bias.numpy())
-
- del _tf_model
- del tf_model
- gc.collect()
-
- model = EncoderDecoderModel.from_encoder_decoder_pretrained(
- encoder_dir, decoder_dir, encoder_from_tf=True, decoder_from_tf=True
- )
- # This is only for copying some specific attributes of this particular model.
- model.config = config
-
- if hasattr(model, "enc_to_dec_proj"):
- model.enc_to_dec_proj.weight.data = enc_to_dec_proj_weight
- model.enc_to_dec_proj.bias.data = enc_to_dec_proj_bias
-
- return model
-
- # At the moment fast initialization is not supported for composite models
- if kwargs.get("_fast_init", False):
- logger.warning(
- "Fast initialization is currently not supported for EncoderDecoderModel. "
- "Falling back to slow initialization..."
- )
- kwargs["_fast_init"] = False
-
- return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
-
- @classmethod
- def from_encoder_decoder_pretrained(
- cls,
- encoder_pretrained_model_name_or_path: str = None,
- decoder_pretrained_model_name_or_path: str = None,
- *model_args,
- **kwargs,
- ) -> PreTrainedModel:
- r"""
- Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model
- checkpoints.
-
-
- The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated). To train
- the model, you need to first set it back in training mode with `model.train()`.
-
- Params:
- encoder_pretrained_model_name_or_path (`str`, *optional*):
- Information necessary to initiate the encoder. Can be either:
-
- - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
- user or organization name, like `dbmdz/bert-base-german-cased`.
- - A path to a *directory* containing model weights saved using
- [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
- - A path or url to a *tensorflow index checkpoint file* (e.g, `./tf_model/model.ckpt.index`). In
- this case, `from_tf` should be set to `True` and a configuration object should be provided as
- `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a
- PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
- decoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
- Information necessary to initiate the decoder. Can be either:
-
- - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
- Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a
- user or organization name, like `dbmdz/bert-base-german-cased`.
- - A path to a *directory* containing model weights saved using
- [`~PreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
- - A path or url to a *tensorflow index checkpoint file* (e.g, `./tf_model/model.ckpt.index`). In
- this case, `from_tf` should be set to `True` and a configuration object should be provided as
- `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a
- PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
- model_args (remaining positional arguments, *optional*):
- All remaining positional arguments will be passed to the underlying model's `__init__` method.
-
- kwargs (remaining dictionary of keyword arguments, *optional*):
- Can be used to update the configuration object (after it being loaded) and initiate the model (e.g.,
- `output_attentions=True`).
-
- - To update the encoder configuration, use the prefix *encoder_* for each configuration parameter.
- - To update the decoder configuration, use the prefix *decoder_* for each configuration parameter.
- - To update the parent model configuration, do not use a prefix for each configuration parameter.
-
- Behaves differently depending on whether a `config` is provided or automatically loaded.
-
- Example:
-
- ```python
- >>> from transformers import EncoderDecoderModel
-
- >>> # initialize a bert2bert from two pretrained BERT models. Note that the cross-attention layers will be randomly initialized
- >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
- >>> # saving model after fine-tuning
- >>> model.save_pretrained("./bert2bert")
- >>> # load fine-tuned model
- >>> model = EncoderDecoderModel.from_pretrained("./bert2bert")
- ```"""
-
- kwargs_encoder = {
- argument[len("encoder_") :]: value for argument, value in kwargs.items() if argument.startswith("encoder_")
- }
-
- kwargs_decoder = {
- argument[len("decoder_") :]: value for argument, value in kwargs.items() if argument.startswith("decoder_")
- }
-
- # remove encoder, decoder kwargs from kwargs
- for key in kwargs_encoder.keys():
- del kwargs["encoder_" + key]
- for key in kwargs_decoder.keys():
- del kwargs["decoder_" + key]
-
- # Load and initialize the encoder and decoder
- # The distinction between encoder and decoder at the model level is made
- # by the value of the flag `is_decoder` that we need to set correctly.
- encoder = kwargs_encoder.pop("model", None)
- if encoder is None:
- if encoder_pretrained_model_name_or_path is None:
- raise ValueError(
- "If `encoder_model` is not defined as an argument, a `encoder_pretrained_model_name_or_path` has "
- "to be defined."
- )
-
- if "config" not in kwargs_encoder:
- encoder_config, kwargs_encoder = AutoConfig.from_pretrained(
- encoder_pretrained_model_name_or_path, **kwargs_encoder, return_unused_kwargs=True
- )
-
- if encoder_config.is_decoder is True or encoder_config.add_cross_attention is True:
- logger.info(
- f"Initializing {encoder_pretrained_model_name_or_path} as a encoder model "
- "from a decoder model. Cross-attention and casual mask are disabled."
- )
- encoder_config.is_decoder = False
- encoder_config.add_cross_attention = False
-
- kwargs_encoder["config"] = encoder_config
-
- encoder = AutoModel.from_pretrained(encoder_pretrained_model_name_or_path, *model_args, **kwargs_encoder)
-
- decoder = kwargs_decoder.pop("model", None)
- if decoder is None:
- if decoder_pretrained_model_name_or_path is None:
- raise ValueError(
- "If `decoder_model` is not defined as an argument, a `decoder_pretrained_model_name_or_path` has "
- "to be defined."
- )
-
- if "config" not in kwargs_decoder:
- decoder_config, kwargs_decoder = AutoConfig.from_pretrained(
- decoder_pretrained_model_name_or_path, **kwargs_decoder, return_unused_kwargs=True
- )
-
- if decoder_config.is_decoder is False or decoder_config.add_cross_attention is False:
- logger.info(
- f"Initializing {decoder_pretrained_model_name_or_path} as a decoder model. Cross attention"
- f" layers are added to {decoder_pretrained_model_name_or_path} and randomly initialized if"
- f" {decoder_pretrained_model_name_or_path}'s architecture allows for cross attention layers."
- )
- decoder_config.is_decoder = True
- decoder_config.add_cross_attention = True
-
- kwargs_decoder["config"] = decoder_config
-
- if kwargs_decoder["config"].is_decoder is False or kwargs_decoder["config"].add_cross_attention is False:
- logger.warning(
- f"Decoder model {decoder_pretrained_model_name_or_path} is not initialized as a decoder. "
- f"In order to initialize {decoder_pretrained_model_name_or_path} as a decoder, "
- "make sure that the attributes `is_decoder` and `add_cross_attention` of `decoder_config` "
- "passed to `.from_encoder_decoder_pretrained(...)` are set to `True` or do not pass a "
- "`decoder_config` to `.from_encoder_decoder_pretrained(...)`"
- )
-
- decoder = AutoModelForCausalLM.from_pretrained(decoder_pretrained_model_name_or_path, **kwargs_decoder)
-
- # instantiate config with corresponding kwargs
- config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder.config, decoder.config, **kwargs)
- return cls(encoder=encoder, decoder=decoder, config=config)
-
- @add_start_docstrings_to_model_forward(ENCODER_DECODER_INPUTS_DOCSTRING)
- @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC)
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.BoolTensor] = None,
- encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None,
- past_key_values: Tuple[Tuple[torch.FloatTensor]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- **kwargs,
- ) -> Union[Tuple, Seq2SeqLMOutput]:
- r"""
- Returns:
-
- Examples:
-
- ```python
- >>> from transformers import EncoderDecoderModel, BertTokenizer
- >>> import torch
-
- >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
- >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained(
- ... "bert-base-uncased", "bert-base-uncased"
- ... ) # initialize Bert2Bert from pre-trained checkpoints
-
- >>> # training
- >>> model.config.decoder_start_token_id = tokenizer.cls_token_id
- >>> model.config.pad_token_id = tokenizer.pad_token_id
- >>> model.config.vocab_size = model.config.decoder.vocab_size
-
- >>> input_ids = tokenizer("This is a really long text", return_tensors="pt").input_ids
- >>> labels = tokenizer("This is the corresponding summary", return_tensors="pt").input_ids
- >>> outputs = model(input_ids=input_ids, labels=labels)
- >>> loss, logits = outputs.loss, outputs.logits
-
- >>> # save and load from pretrained
- >>> model.save_pretrained("bert2bert")
- >>> model = EncoderDecoderModel.from_pretrained("bert2bert")
-
- >>> # generation
- >>> generated = model.generate(input_ids)
- ```"""
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- kwargs_encoder = {argument: value for argument, value in kwargs.items() if not argument.startswith("decoder_")}
-
- kwargs_decoder = {
- argument[len("decoder_") :]: value for argument, value in kwargs.items() if argument.startswith("decoder_")
- }
-
- if encoder_outputs is None:
- encoder_outputs = self.encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- **kwargs_encoder,
- )
- elif isinstance(encoder_outputs, tuple):
- encoder_outputs = BaseModelOutput(*encoder_outputs)
-
- encoder_hidden_states = encoder_outputs[0]
-
- # optionally project encoder_hidden_states
- if (
- self.encoder.config.hidden_size != self.decoder.config.hidden_size
- and self.decoder.config.cross_attention_hidden_size is None
- ):
- encoder_hidden_states = self.enc_to_dec_proj(encoder_hidden_states)
-
- if (labels is not None) and (decoder_input_ids is None and decoder_inputs_embeds is None):
- decoder_input_ids = shift_tokens_right(
- labels, self.config.pad_token_id, self.config.decoder_start_token_id
- )
-
- # Decode
- decoder_outputs = self.decoder(
- input_ids=decoder_input_ids,
- attention_mask=decoder_attention_mask,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=attention_mask,
- inputs_embeds=decoder_inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- use_cache=use_cache,
- past_key_values=past_key_values,
- return_dict=return_dict,
- **kwargs_decoder,
- )
-
- # Compute loss independent from decoder (as some shift the logits inside them)
- loss = None
- if labels is not None:
- warnings.warn(DEPRECATION_WARNING, FutureWarning)
- logits = decoder_outputs.logits if return_dict else decoder_outputs[0]
- loss_fct = CrossEntropyLoss()
- loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1))
-
- if not return_dict:
- if loss is not None:
- return (loss,) + decoder_outputs + encoder_outputs
- else:
- return decoder_outputs + encoder_outputs
-
- return Seq2SeqLMOutput(
- loss=loss,
- logits=decoder_outputs.logits,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
- def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
- return shift_tokens_right(labels, self.config.pad_token_id, self.config.decoder_start_token_id)
-
- def prepare_inputs_for_generation(
- self, input_ids, past_key_values=None, attention_mask=None, use_cache=None, encoder_outputs=None, **kwargs
- ):
- decoder_inputs = self.decoder.prepare_inputs_for_generation(input_ids, past_key_values=past_key_values)
- decoder_attention_mask = decoder_inputs["attention_mask"] if "attention_mask" in decoder_inputs else None
- input_dict = {
- "attention_mask": attention_mask,
- "decoder_attention_mask": decoder_attention_mask,
- "decoder_input_ids": decoder_inputs["input_ids"],
- "encoder_outputs": encoder_outputs,
- "past_key_values": decoder_inputs["past_key_values"],
- "use_cache": use_cache,
- }
- return input_dict
-
- def resize_token_embeddings(self, *args, **kwargs):
- raise NotImplementedError(
- "Resizing the embedding layers via the EncoderDecoderModel directly is not supported. Please use the"
- " respective methods of the wrapped objects (model.encoder.resize_token_embeddings(...) or"
- " model.decoder.resize_token_embeddings(...))"
- )
-
- def _reorder_cache(self, past_key_values, beam_idx):
- # apply decoder cache reordering here
- return self.decoder._reorder_cache(past_key_values, beam_idx)
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilenet_v1/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilenet_v1/__init__.py
deleted file mode 100644
index dec8eeec2de5663c3fe092b12fdc1a48fde3bd48..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/mobilenet_v1/__init__.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import TYPE_CHECKING
-
-from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available, is_vision_available
-
-
-_import_structure = {
- "configuration_mobilenet_v1": [
- "MOBILENET_V1_PRETRAINED_CONFIG_ARCHIVE_MAP",
- "MobileNetV1Config",
- "MobileNetV1OnnxConfig",
- ],
-}
-
-try:
- if not is_vision_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["feature_extraction_mobilenet_v1"] = ["MobileNetV1FeatureExtractor"]
- _import_structure["image_processing_mobilenet_v1"] = ["MobileNetV1ImageProcessor"]
-
-try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["modeling_mobilenet_v1"] = [
- "MOBILENET_V1_PRETRAINED_MODEL_ARCHIVE_LIST",
- "MobileNetV1ForImageClassification",
- "MobileNetV1Model",
- "MobileNetV1PreTrainedModel",
- "load_tf_weights_in_mobilenet_v1",
- ]
-
-
-if TYPE_CHECKING:
- from .configuration_mobilenet_v1 import (
- MOBILENET_V1_PRETRAINED_CONFIG_ARCHIVE_MAP,
- MobileNetV1Config,
- MobileNetV1OnnxConfig,
- )
-
- try:
- if not is_vision_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .feature_extraction_mobilenet_v1 import MobileNetV1FeatureExtractor
- from .image_processing_mobilenet_v1 import MobileNetV1ImageProcessor
-
- try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .modeling_mobilenet_v1 import (
- MOBILENET_V1_PRETRAINED_MODEL_ARCHIVE_LIST,
- MobileNetV1ForImageClassification,
- MobileNetV1Model,
- MobileNetV1PreTrainedModel,
- load_tf_weights_in_mobilenet_v1,
- )
-
-
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/inference/infer_tool_grad.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/inference/infer_tool_grad.py
deleted file mode 100644
index 561c22c55e4f0527d038bbce3cef317393ded542..0000000000000000000000000000000000000000
--- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/inference/infer_tool_grad.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import hashlib
-import json
-import logging
-import os
-import time
-from pathlib import Path
-import io
-import librosa
-import maad
-import numpy as np
-from inference import slicer
-import parselmouth
-import soundfile
-import torch
-import torchaudio
-
-from hubert import hubert_model
-import utils
-from models import SynthesizerTrn
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-def resize2d_f0(x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)),
- source)
- res = np.nan_to_num(target)
- return res
-
-def get_f0(x, p_len,f0_up_key=0):
-
- time_step = 160 / 16000 * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-
- f0 = parselmouth.Sound(x, 16000).to_pitch_ac(
- time_step=time_step / 1000, voicing_threshold=0.6,
- pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
-
- pad_size=(p_len - len(f0) + 1) // 2
- if(pad_size>0 or p_len - len(f0) - pad_size>0):
- f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant')
-
- f0 *= pow(2, f0_up_key / 12)
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
- f0_coarse = np.rint(f0_mel).astype(np.int)
- return f0_coarse, f0
-
-def clean_pitch(input_pitch):
- num_nan = np.sum(input_pitch == 1)
- if num_nan / len(input_pitch) > 0.9:
- input_pitch[input_pitch != 1] = 1
- return input_pitch
-
-
-def plt_pitch(input_pitch):
- input_pitch = input_pitch.astype(float)
- input_pitch[input_pitch == 1] = np.nan
- return input_pitch
-
-
-def f0_to_pitch(ff):
- f0_pitch = 69 + 12 * np.log2(ff / 440)
- return f0_pitch
-
-
-def fill_a_to_b(a, b):
- if len(a) < len(b):
- for _ in range(0, len(b) - len(a)):
- a.append(a[0])
-
-
-def mkdir(paths: list):
- for path in paths:
- if not os.path.exists(path):
- os.mkdir(path)
-
-
-class VitsSvc(object):
- def __init__(self):
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.SVCVITS = None
- self.hps = None
- self.speakers = None
- self.hubert_soft = utils.get_hubert_model()
-
- def set_device(self, device):
- self.device = torch.device(device)
- self.hubert_soft.to(self.device)
- if self.SVCVITS != None:
- self.SVCVITS.to(self.device)
-
- def loadCheckpoint(self, path):
- self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json")
- self.SVCVITS = SynthesizerTrn(
- self.hps.data.filter_length // 2 + 1,
- self.hps.train.segment_size // self.hps.data.hop_length,
- **self.hps.model)
- _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None)
- _ = self.SVCVITS.eval().to(self.device)
- self.speakers = self.hps.spk
-
- def get_units(self, source, sr):
- source = source.unsqueeze(0).to(self.device)
- with torch.inference_mode():
- units = self.hubert_soft.units(source)
- return units
-
-
- def get_unit_pitch(self, in_path, tran):
- source, sr = torchaudio.load(in_path)
- source = torchaudio.functional.resample(source, sr, 16000)
- if len(source.shape) == 2 and source.shape[1] >= 2:
- source = torch.mean(source, dim=0).unsqueeze(0)
- soft = self.get_units(source, sr).squeeze(0).cpu().numpy()
- f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran)
- return soft, f0
-
- def infer(self, speaker_id, tran, raw_path):
- speaker_id = self.speakers[speaker_id]
- sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0)
- soft, pitch = self.get_unit_pitch(raw_path, tran)
- f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device)
- stn_tst = torch.FloatTensor(soft)
- with torch.no_grad():
- x_tst = stn_tst.unsqueeze(0).to(self.device)
- x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2)
- audio,_ = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float()
- return audio, audio.shape[-1]
-
- def inference(self,srcaudio,chara,tran,slice_db):
- sampling_rate, audio = srcaudio
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- soundfile.write("tmpwav.wav", audio, 16000, format="wav")
- chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db)
- audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks)
- audio = []
- for (slice_tag, data) in audio_data:
- length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate))
- raw_path = io.BytesIO()
- soundfile.write(raw_path, data, audio_sr, format="wav")
- raw_path.seek(0)
- if slice_tag:
- _audio = np.zeros(length)
- else:
- out_audio, out_sr = self.infer(chara, tran, raw_path)
- _audio = out_audio.cpu().numpy()
- audio.extend(list(_audio))
- audio = (np.array(audio) * 32768.0).astype('int16')
- return (self.hps.data.sampling_rate,audio)
diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/filter-value.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/filter-value.js
deleted file mode 100644
index 98e5f612282da6170f2aee28eb2d480980de806d..0000000000000000000000000000000000000000
--- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/filter-value.js
+++ /dev/null
@@ -1,14 +0,0 @@
-let Value = require('../value')
-
-class FilterValue extends Value {
- constructor(name, prefixes) {
- super(name, prefixes)
- if (name === 'filter-function') {
- this.name = 'filter'
- }
- }
-}
-
-FilterValue.names = ['filter', 'filter-function']
-
-module.exports = FilterValue
diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/placeholder-shown.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/placeholder-shown.js
deleted file mode 100644
index 8bb1cc8e789d47705ddfa205f62c14c8bb333dfd..0000000000000000000000000000000000000000
--- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/placeholder-shown.js
+++ /dev/null
@@ -1,17 +0,0 @@
-let Selector = require('../selector')
-
-class PlaceholderShown extends Selector {
- /**
- * Return different selectors depend on prefix
- */
- prefixed(prefix) {
- if (prefix === '-ms-') {
- return ':-ms-input-placeholder'
- }
- return `:${prefix}placeholder-shown`
- }
-}
-
-PlaceholderShown.names = [':placeholder-shown']
-
-module.exports = PlaceholderShown
diff --git a/spaces/zanyPhi/cats_vs_dogs/app.py b/spaces/zanyPhi/cats_vs_dogs/app.py
deleted file mode 100644
index 3183f7c81b767052a9f3891ffc2bab25335b8cef..0000000000000000000000000000000000000000
--- a/spaces/zanyPhi/cats_vs_dogs/app.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import numpy as np
-import tensorflow as tf
-from PIL import Image
-import gradio as gr
-
-def load_model(model_path):
- model = tf.keras.models.load_model(model_path)
- return model
-
-def classify_image(image):
- # Convert the image to a Pillow image object
- img = Image.fromarray(image.astype('uint8'), 'RGB')
- # Resize the image
- img = img.resize((160, 160))
- # Convert the image to a numpy array
- img_array = tf.keras.preprocessing.image.img_to_array(img)
- # Expand dimensions to match input target shape
- img_array = np.expand_dims(img_array, axis=0)
- # Create the model object
- model = load_model("softmax_2units")
- # Make prediction
- pred = model.predict(x=img_array)
- # Get the predicted class
- pred_class = np.argmax(pred)
-
- if pred_class == 0:
- return "Cat"
- if pred_class == 1:
- return "Dog"
-
-examples = ['cat.4.jpg', 'cat.64.jpg', 'dog.4.jpg', 'dog.45.jpg']
-description = "Upload a picture. Click submit"
-
-interface = gr.Interface(fn=classify_image,
- inputs=gr.Image(shape=(200, 200)),
- outputs=gr.Text(),
- examples=examples,
- description=description,
- flagging_options=['Correct', 'Wrong'])
-
-
-interface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/zss2341/chatgpt_with_email_password_logging/app.py b/spaces/zss2341/chatgpt_with_email_password_logging/app.py
deleted file mode 100644
index f10fa06f1512ae481ebf2c84185a641a12bb4418..0000000000000000000000000000000000000000
--- a/spaces/zss2341/chatgpt_with_email_password_logging/app.py
+++ /dev/null
@@ -1,52 +0,0 @@
-import gradio as gr
-from revChatGPT.V1 import Chatbot
-
-email = None
-password = None
-access_token = None
-session_token = None
-
-def configure_chatbot(email, password):
- config = {}
- if password:
- config.update({"email": email,
- "password": password})
-
- global chatbot
- chatbot = Chatbot(config=config)
-
-def ask_bot(prompt):
- message = ""
- for data in chatbot.ask(prompt):
- message = data["message"]
-# message = message.replace("\n", " ")
- return message
-
-def chatgpt_clone(inputs, history):
- history = history or []
- output = ask_bot(inputs)
- history.append((inputs, output))
- return history, history
-
-with gr.Blocks() as demo:
- gr.Markdown("""