diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Discover the Secrets of Microbiology with Pelczar Ebook PDF Free 330 A User-Friendly and In-Depth Book.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Discover the Secrets of Microbiology with Pelczar Ebook PDF Free 330 A User-Friendly and In-Depth Book.md deleted file mode 100644 index 33dc3b1000acba2498ad592402bd04a4cfc97a64..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Discover the Secrets of Microbiology with Pelczar Ebook PDF Free 330 A User-Friendly and In-Depth Book.md +++ /dev/null @@ -1,166 +0,0 @@ - -
If you are a student, teacher, researcher, or practitioner of microbiology, you might have heard of Pelczar Microbiology. It is one of the most popular and widely used textbooks in the field of microbiology. But what is Pelczar Microbiology exactly? How can you download it for free as a PDF file? And how can you use it effectively for your studies or work? In this article, we will answer all these questions and more. We will provide you with a comprehensive guide on Pelczar Microbiology Ebook PDF Free 330, covering its history, features, benefits, contents, structure, download process, usage methods, examples, and applications. By the end of this article, you will have a clear understanding of what Pelczar Microbiology is and how you can get the most out of it.
-Pelczar Microbiology is a textbook of microbiology written by Michael J. Pelczar Jr., E.C.S. Chan, and Noel R. Krieg. It was first published in 1958 and has since been revised and updated several times. The latest edition is the seventh edition, which was published in 2010. It is also known as Pelczar Microbiology Ebook PDF Free 330 because it is available online as a free PDF file with 330 pages.
-Download File ✔ https://byltly.com/2uKyij
Pelczar Microbiology was originally written by Michael J. Pelczar Jr., who was a professor of microbiology at the University of Maryland. He was also the director of the Institute for Molecular and Cell Biology at the University of Maryland Biotechnology Institute. He had a distinguished career as a microbiologist, author, editor, and administrator. He wrote several books and articles on microbiology, biotechnology, biochemistry, genetics, immunology, and molecular biology. He also served as the president of the American Society for Microbiology and the International Union of Microbiological Societies.
-Pelczar Jr. collaborated with E.C.S. Chan and Noel R. Krieg to write the first edition of Pelczar Microbiology in 1958. E.C.S. Chan was a professor of microbiology at the National University of Singapore. He was also the dean of the Faculty of Science and the vice-chancellor of the university. He had a prominent role in developing microbiology education and research in Singapore and Southeast Asia. He wrote several books and articles on microbiology, biotechnology, environmental science, food science, and public health. He also served as the president of the Singapore National Academy of Science and the Federation of Asian Scientific Academies and Societies.
Noel R. Krieg was a professor emeritus of microbiology at Virginia Tech. He was also the director emeritus of the Virginia Agricultural Experiment Station. He had an illustrious career as a microbiologist, author, editor, and consultant. He wrote several books and articles on microbiology, soil science, ecology, taxonomy, phylogeny, evolution, and biotechnology. He also served as the president of the Society for Industrial Microbiology and Biotechnology and the Bergey's Manual Trust.
-The three authors aimed to provide a comprehensive and up-to-date introduction to microbiology for undergraduate students in biology, agriculture, medicine, dentistry, pharmacy, veterinary science, engineering, and other related fields. They also intended to make the book useful for graduate students, teachers, researchers, and practitioners who need a reference or a review of microbiology. They covered various aspects of microbiology, such as its history, scope, methods, principles, diversity, structure, function, metabolism, genetics, evolution, ecology, interactions, applications, and challenges. They also included numerous examples, illustrations, tables, diagrams, exercises, questions, and references to enhance the learning and understanding of microbiology.
-Pelczar Microbiology has many features and benefits that make it one of the best textbooks in microbiology. Some of these features and benefits are:
-These are some of the main features and benefits of Pelczar Microbiology that make it a valuable and enjoyable resource for anyone who wants to learn more about microbiology.
-Pelczar Microbiology consists of 24 chapters that are divided into six parts. Each part covers a major theme or area of microbiology. The parts and chapters are as follows:
| Part | Title | Chapters |
| --- | --- | --- |
| I | Introduction to Microbiology | 1. The Scope and History of Microbiology |

The structure of each chapter is similar. It starts with an introduction that gives an overview of the topic and its importance. It then presents the main points and subpoints of the topic in a clear and concise way. It ends with a summary that highlights the key takeaways and a list of exercises, questions, and references that help the reader to review and reinforce their learning.
-If you want to download Pelczar Microbiology Ebook PDF Free 330, you need to follow some steps and tips that will help you to get it easily and safely.
-Downloading Pelczar Microbiology Ebook PDF Free 330 has some advantages and disadvantages that you should consider before doing it.
-The advantages are:
-The disadvantages are:
-Therefore, you should weigh the pros and cons carefully before deciding to download Pelczar Microbiology Ebook PDF Free 330.
-To download Pelczar Microbiology Ebook PDF Free 330, you need to follow these steps and tips:
-These are some of the steps and tips that will help you to download Pelczar Microbiology Ebook PDF Free 330 successfully.
-If you are looking for some of the best sources and websites to download Pelczar Microbiology Ebook PDF Free 330, here are some suggestions:
-These are some of the best sources and websites that offer Pelczar Microbiology Ebook PDF Free 330 for free.
-Before using Pelczar Microbiology Ebook PDF Free 330, you need to make sure that you have some prerequisites and requirements that will enable you to use it properly. These are:
-These are some of the prerequisites and requirements that you need to have before using Pelczar Microbiology Ebook PDF Free 330.
-When using Pelczar Microbiology Ebook PDF Free 330, you need to apply some methods and strategies that will help you to use it effectively. These are:
-These are some of the methods and strategies that will help you to use Pelczar Microbiology Ebook PDF Free 330 effectively.
-To illustrate how you can use Pelczar Microbiology Ebook PDF Free 330 effectively, here are some examples and applications of using it for different purposes:
-These are some of the examples and applications of using Pelczar Microbiology Ebook PDF Free 330 effectively for different purposes.
-Pelczar Microbiology Ebook PDF Free 330 is a comprehensive and up-to-date textbook of microbiology that covers many aspects of the field, such as its history, scope, methods, principles, diversity, structure, function, metabolism, genetics, evolution, ecology, interactions, applications, and challenges. It is written by Michael J. Pelczar Jr., E.C.S. Chan, and Noel R. Krieg, who are renowned experts in microbiology, and it is available online as a free PDF file with 330 pages. It has many features and benefits, such as being clear, concise, informative, interesting, visual, logical, interactive, and actionable, and it is useful for anyone who wants to learn more about microbiology, including students, teachers, researchers, and practitioners. To download it, you need to find a reliable source website, follow some steps and tips, and weigh the advantages and disadvantages. To use it effectively, you need to meet some prerequisites and requirements, apply some methods and strategies, and use it for different purposes, such as a textbook, reference, review, source, resource, or guide. By using it effectively, you will gain more knowledge, understanding, appreciation, and application of microbiology.
-Here are some frequently asked questions about Pelczar Microbiology Ebook PDF Free 330:
-Pelczar Microbiology Ebook PDF Free 330 is different from other microbiology textbooks in several ways: it is more comprehensive and up-to-date, and it is clear, concise, informative, interesting, visual, logical, interactive, actionable, and available online for free.
-The authors of Pelczar Microbiology Ebook PDF Free 330 are Michael J. Pelczar Jr., E.C.S. Chan, and Noel R. Krieg, who are renowned experts in microbiology. They have written several books and articles on microbiology, biotechnology, biochemistry, genetics, immunology, and molecular biology, and they have served as presidents, directors, deans, professors, editors, and consultants in various institutions, organizations, and societies related to microbiology.
-Pelczar Microbiology Ebook PDF Free 330 has 330 pages. It consists of 24 chapters that are divided into six parts, each covering a major theme or area of microbiology. Each chapter starts with an introduction and ends with a summary, exercises, questions, and references.
-You can download Pelczar Microbiology Ebook PDF Free 330 by finding a reliable source website that offers it, clicking on the download button, link, or icon, following the instructions, choosing a suitable format, size, and location for your ebook file, and avoiding any pop-ups, ads, or redirects that might appear during the download process.
-You can use Pelczar Microbiology Ebook PDF Free 330 effectively by setting a clear goal and purpose for using it, planning a schedule and budget, selecting the relevant parts and chapters, and reading and studying them in a systematic and active way, using techniques and tools such as skimming, scanning, highlighting, annotating, summarizing, paraphrasing, outlining, mapping, questioning, answering, comparing, contrasting, applying, analyzing, evaluating, and synthesizing. You can use it for different purposes, such as a textbook, reference, review, source, resource, or guide.
-If you are looking for a powerful plugin that can help you with various tasks related to advertisement design, such as nesting, cutting, measuring, and creating effects, you might want to try eCut for CorelDRAW. eCut is a versatile tool that supports all full versions of CorelDRAW since X3 and has more than 40 different functions. In this article, we will show you how to download and install eCut for CorelDRAW in a few simple steps.
- -The first thing you need to do is to download the eCut for CorelDRAW installer from the official website or from the eCut software website. You can choose to download the full version or the demo version. The full version requires an activation key that you can purchase online, while the demo version allows you to try all the functions for 4 days without any restrictions. The file size is about 43 MB and it is compatible with Windows XP, Vista, 7, 8, and 10.
-DOWNLOAD ★ https://byltly.com/2uKyOU
After downloading the installer, you need to close all CorelDRAW applications before running the setup file. Then, follow the instructions on the screen and click Next until the installation is complete. The setup file will automatically detect your CorelDRAW version and install the appropriate plugin.
- -Once the installation is done, you need to import the eCut toolbar into CorelDRAW. To do this, open CorelDRAW and go to Tools > Options > Customization > Commands. Then, click on Import and browse to the folder where you installed eCut (usually C:\Program Files\eCut). Select the file named ecut.cui and click Open. You should see a new toolbar named eCut appear on your CorelDRAW workspace.
- -The last step is to activate eCut for CorelDRAW. If you have purchased an activation key, you can enter it in the eCut dialog box that pops up when you launch CorelDRAW. If you want to use the demo version, you need to have a good internet connection and make sure that CorelDRAW is excluded from all antivirus and firewall programs. Then, click on Start Test Period and enjoy using eCut for 4 days.
- -eCut for CorelDRAW is a useful plugin that can enhance your productivity and creativity when working with advertisement design projects. It has many features that can help you with nesting, cutting, measuring, and creating effects. To download and install eCut for CorelDRAW, you just need to follow these four steps: download the installer, run the setup file, import the toolbar into CorelDRAW, and activate eCut. We hope this article was helpful and informative. If you have any questions or feedback, please feel free to contact us.
In the free energy formula of Eq (7.36), nonideality is expressed by the general term g^ex, the excess free energy. The simplifications used in the prior analyses of ideal melting and phase separation, namely neglecting s^ex and confining h^ex to the regular-solution model, are not valid for most binary systems. In order to construct phase diagrams by the common-tangent technique, more elaborate solution models are needed to relate free energy to composition for all likely phases. Figure 8.9 shows plots of g vs x_B for the three phases at six temperatures, with T6 the highest and T1 the lowest. In the six graphs, the curves for each phase keep approximately the same shape but shift relative to one another. (Fig. 8.9: Free energy–composition curves for an A-B binary system with two solid phases (α and β) and a liquid phase. From Ref. 1.)
-As in Eq (7.21) for the enthalpy, the molar Gibbs free energy of a solution (g) can be written in terms of pure-component contributions (g_A and g_B) and an excess value (g^ex). However, an important contribution needs to be added: the ideal entropy of mixing, which has no counterpart in the enthalpy. For a binary solution, the terms contributing to g are collected in the sketch below.
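The equation itself is elided in the extracted text. A standard form for the molar Gibbs free energy of a binary solution, consistent with the surrounding discussion (pure-component terms, an ideal entropy-of-mixing term, and an excess term), is:

```latex
g = x_A\,g_A + x_B\,g_B + RT\,(x_A \ln x_A + x_B \ln x_B) + g^{\mathrm{ex}}
```

Here the RT(x_A ln x_A + x_B ln x_B) term is the ideal (entropy-of-mixing) contribution; g^ex carries all of the nonideality.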
Even though the thermochemical database need contain only ΔG° (or, equivalently, ΔH° and ΔS°), the number of reactions that would have to be included in such a compilation is intractably large. The key to reducing data requirements to manageable size is to provide the standard free energy changes of forming the individual molecular species from their constituent elements. Particular reactions are constructed from these so-called formation reactions. For molecular compounds containing two or more elements, the basic information is the free energy change for the reaction by which the compound is created from its constituent elements, the latter in their normal state at the particular temperature. These reaction free energy changes are called standard free energies of formation of the compound. For example, the methane combustion reaction of Eq (9.1) involves one elemental compound (O2) and three molecular compounds (CH4, CO2, and H2O).
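As an illustration of the bookkeeping described above (the numerical values are not given in the text, so only the structure is shown), the methane combustion reaction CH4 + 2 O2 → CO2 + 2 H2O assembles from formation free energies as:

```latex
\Delta G^{\circ}_{\mathrm{rxn}}
  = \Delta G^{\circ}_{f}(\mathrm{CO_2}) + 2\,\Delta G^{\circ}_{f}(\mathrm{H_2O})
  - \Delta G^{\circ}_{f}(\mathrm{CH_4}) - 2\,\Delta G^{\circ}_{f}(\mathrm{O_2})
```

with ΔG°_f(O2) = 0, because O2 is an element in its normal state at the temperature of interest.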
-Another notable solid-solid equilibrium is the graphite-to-diamond transition in the element carbon. Graphite is fairly common in the earth's crust, but the rarity of diamond is the origin of its value. Under normal terrestrial conditions (300 K, 1 atm) the two forms of carbon are not in equilibrium, and so, thermodynamically speaking, only one form should exist. The stable form is the one with the lowest Gibbs free energy. At 300 K, the enthalpy difference between diamond and graphite is Δh_d-g ≈ 1900 J/mol, with diamond less stable than graphite in this regard. Being a highly ordered structure, diamond has a molar entropy lower than that of graphite, and Δs_d-g ≈ −3.3 J/mol-K (see Fig. 3.6). This difference also favors the stability of graphite. The combination of the enthalpy and entropy effects produces a positive free-energy difference (a worked estimate is given below). Since the phase with the lowest free energy (graphite) is stable, diamond is a metastable phase.
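The free-energy difference itself is lost in the extracted text; using the standard relation Δg = Δh − TΔs with the numbers quoted in the paragraph, the estimate at 300 K would be:

```latex
\Delta g_{d\text{-}g} = \Delta h_{d\text{-}g} - T\,\Delta s_{d\text{-}g}
  \approx 1900\ \mathrm{J/mol} - (300\ \mathrm{K})(-3.3\ \mathrm{J\,mol^{-1}\,K^{-1}})
  \approx 2.9\ \mathrm{kJ/mol} > 0
```

so diamond sits roughly 2.9 kJ/mol above graphite at room temperature and pressure, which is why graphite is the stable phase.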
-The definition has been chosen so that the activity tends to unity for pure i; that is, μ_i → g_i, the molar free energy of pure i. Activity varies monotonically with concentration. Therefore, when component i approaches infinite dilution, a_i → 0 and μ_i → −∞. This inconvenient behavior of the chemical potential at zero concentration is avoided by using the activity in practical thermodynamic calculations involving species in solution. Another reason for the choice of the mathematical form of the relation between μ_i and a_i embodied in Eq (7.29) is that the activity is directly measurable as the ratio of the equilibrium pressure exerted by a component in solution to the vapor pressure of the pure substance. This important connection is discussed in Chap. 8. Problem 7.7 shows how this equation can be used to assess the validity of formulas for h^ex. In an equally important application, the above equation can be integrated to give the Gibbs free energy analog of Eq (7.19) for the enthalpy.
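Eq (7.29) itself does not survive in the extracted text; the relation between chemical potential and activity that the paragraph describes is the standard one:

```latex
\mu_i = g_i + RT \ln a_i
```

which indeed gives μ_i → g_i as a_i → 1 (pure i) and μ_i → −∞ as a_i → 0 (infinite dilution).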
-The equilibrium criterion of minimum Gibbs free energy (Sect. 1.11) can be applied to any of the phase transitions described in the previous section. At fixed pressure and temperature, let the system contain n_I moles of phase I and n_II moles of phase II, with molar Gibbs free energies of g_I and g_II, respectively. The total Gibbs free energy of the two-phase mixture is G = n_I g_I + n_II g_II; minimizing it yields g_I = g_II, which is an expression of chemical equilibrium. It complements the conditions of thermal equilibrium (T_I = T_II) and mechanical equilibrium (p_I = p_II). Since the Gibbs free energy is defined by g = h − Ts, another form of Eq (5.2) follows by substitution, as sketched below.
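A minimal reconstruction of the elided steps, using only the definitions in the paragraph above (closed system, fixed T and p, so dn_II = −dn_I):

```latex
G = n_I\,g_I + n_{II}\,g_{II}, \qquad
dG = (g_I - g_{II})\,dn_I = 0 \;\Rightarrow\; g_I = g_{II}
\;\Longleftrightarrow\; h_I - T s_I = h_{II} - T s_{II}
```

The last form is presumably the "other form of Eq (5.2)" referred to above; it is equivalent to Δh = TΔs at the transition temperature.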
-To Matthews that Tesla had entrusted two of his greatest inventions prior to his death - the Tesla interplanetary communications set and the Tesla anti-war device. Tesla also left special instructions to Otis T. Carr of Baltimore, who used this information to develop free-energy devices capable of 'powering anything from a hearing aid to a spaceship.' (73) Tesla's technology, through Carr's free-energy device, will revolutionize the world. Instead of purchasing power from the large corporations, which is then delivered to our homes via wires and cables, the new technology consists of nothing more than a small antenna that will be attached to the roof of every building
-As in any system constrained to constant temperature and pressure, the equilibrium of a chemical reaction is attained when the free energy is a minimum. Specifically, this means that dG = 0, where the differential of G is taken with respect to the composition of the mixture. In order to convert this criterion to an equation relating the equilibrium concentrations of the reactants and products, the chemical potentials are the essential intermediaries. At equilibrium, Eq (7.27) provides the equation sketched below.
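The equation from Eq (7.27) is missing from the extracted text; in terms of chemical potentials, the usual statement of reaction equilibrium for a generic reaction aA + bB ⇌ cC + dD is:

```latex
\sum_i \nu_i\,\mu_i = 0
\quad\Longrightarrow\quad
c\,\mu_C + d\,\mu_D - a\,\mu_A - b\,\mu_B = 0
```

where ν_i are the stoichiometric coefficients (positive for products, negative for reactants). Combining this with μ_i = g_i + RT ln a_i leads to the familiar law of mass action.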
-Irrespective of the complexity of the nonideal behavior of the phases involved, the phase diagram can always be constructed if the free energy vs composition curves for each phase can be drawn. The link between the two graphical representations is the common-tangent rule. Because of the wide variations in the shapes of free-energy curves, the types of phase diagrams deduced from them reach zoological proportions. In this section, a common variety called the eutectic phase diagram is developed by the graphical method.
-The structure of a phase diagram is determined by the condition of chemical equilibrium. As shown in Sect. 8.2, this condition can be expressed in one of two ways: either the total free energy of the system (Eq (8.1)) is minimized, or the chemical potentials of each component (Eq (8.2)) in coexisting phases are equated. The choice of the manner of expressing equilibrium is a matter of convenience and varies with the particular application. Free-energy minimization is usually used with the graphical method, and chemical-potential equality is the method of choice for the analytic approach.
-The chemical potential is directly related to the Gibbs free energy of a system. For a one-component system, the chemical potential is identical to the molar Gibbs free energy of the pure substance. In solutions or mixtures, the chemical potential is simply another name for the partial molar Gibbs free energy. The discussion in Sect. 7.3, in which enthalpy was used to illustrate partial molar and excess properties, applies to the Gibbs free energy; one need only replace h everywhere by g. The reason that the partial molar Gibbs free energy (ḡ_i) is accorded the special name chemical potential is not only to shorten a cumbersome five-word designation. More important is the role of the chemical potential in phase equilibria and chemical equilibria when the restraints are constant temperature and pressure. Instead of the symbol ḡ_i, the chemical potential is designated by μ_i. The connection between the Gibbs free energy of a system at fixed T and p and the equilibrium state is shown in Fig. 1.18.
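In symbols, the definition discussed above (the equation itself is not preserved in the extracted text; this is the standard partial-molar form):

```latex
\mu_i \equiv \bar{g}_i = \left(\frac{\partial G}{\partial n_i}\right)_{T,\,p,\,n_{j\neq i}}
```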
-By the time I arrived at Princeton in the fall of 1962, I was thoroughly pumped up to join the quest for controlled fusion at PPL. My under-grad senior thesis on an obscure plasma instability led to working in Jim Drummond's Plasma Physics Group at the Boeing Scientific Research Laboratories in my hometown, Seattle, during the year following graduation. In fact, this millennial dream of realizing a virtually limitless source of pollution-free energy was largely my motivation for applying to grad school, and only to Princeton.
-That an under-ice ocean exists on Europa is remarkable. It is especially remarkable when it is realized that Jupiter sits well outside of the habitable zone (defined in Chapter 5; see Figure 5.9) and given that the surface temperature of the moon is not much greater than 100 K. How, indeed, can this ocean exist? There is not enough solar energy to warm Europa above the freezing point of water, and the moon is so small that it should have cooled off relatively rapidly after formation. The possibility of terraforming Europa and, indeed, the other Galilean moons has been discussed by numerous researchers, but in all cases, bar the stellifying-of-Jupiter option, the biggest hurdle to overcome is that of supplying enough surface heat.
-The term on the left is Δμ, the chemical-potential difference of overall reaction (10.21). It is the aqueous equivalent of the free-energy difference ΔG used in describing nonaqueous cells. The electric potential difference on the right is the cell EMF, ε, so the equation relating the two is as reconstructed below.
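The equation itself is lost in the extracted text; the standard connection between the reaction free-energy (chemical-potential) difference and the cell EMF, which is what the sentence above is building toward, is:

```latex
\Delta\mu = -\,n F \varepsilon \qquad (\text{equivalently } \Delta G = -\,n F \varepsilon)
```

where n is the number of electrons transferred in the overall cell reaction and F ≈ 96,485 C/mol is Faraday's constant.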
-Using propellant to generate electricity is not an efficient way to power a satellite that could use free solar energy instead (by means of solar cells). Nevertheless, electrodynamic tether power generation could be useful for generating short bursts of electrical energy, for instance when needed for high-energy but short-duration experiments involving powerful lidars (instruments similar to radar but using laser light instead of short-wavelength radio waves).
Do you want to learn English but don't know where to start? Are you looking for an effective and interesting way to improve your knowledge and skills? Then you should try reading books in English. It is not only an enjoyable but also a useful activity that will help you develop in different aspects of the language. In this article, we will explain why reading books in English is useful for beginners, how to choose suitable books for reading in English, and where to download English books for beginners for free.
-Download >>>>> https://urlin.us/2uSSCl
Reading books in English has many advantages for those who are just starting to learn the language. Here are some of them:
-When you read a book in English, you inevitably encounter new words and expressions that you can memorize and use in your own speech. You also see how these words and expressions are used in context, which helps you understand their meaning and correct usage. Reading books in English also helps you absorb grammatical rules and structures that you can apply in your writing and speaking. You learn to build sentences, keep tenses consistent, and use modal verbs, articles, prepositions, and other elements of the language.
-Reading books in English also trains your reading and comprehension skills. You learn to scan a text, identify the main idea, find key words, draw conclusions and make predictions, and analyze and critique information. You also develop the ability to guess the meaning of unknown words from context, use synonyms and antonyms, and recognize phrasal verbs and idioms. All of these skills will serve you not only when reading books, but also when watching films, listening to podcasts, communicating with native speakers, and taking international exams.
-Finally, reading books in English gives you the opportunity to immerse yourself in the culture and mindset of the countries where the language is spoken. You can learn many interesting facts about the history, geography, politics, economy, education, science, art, sport, religion, traditions, and customs of these countries. You can also get a feel for the atmosphere, humor, emotions, and character of the people who live there. This will help you better understand and respect different cultures and peoples, broaden your horizons, and enrich your experience.
-To read books in English with pleasure and benefit, it is important to choose books that match your level of the language, your interests, and your goals. Here are a few tips on how to do that:
Before downloading English books for beginners for free, you should determine your level of the language. This will help you pick books that are neither too easy nor too difficult for you. There are different systems for determining language level, such as CEFR (Common European Framework of Reference for Languages), IELTS (International English Language Testing System), TOEFL (Test of English as a Foreign Language), and others. You can take an online test or assess your own knowledge against criteria such as vocabulary, grammar, reading, writing, speaking, and listening. Depending on your level, you can choose books of different difficulty, length, and genre.
-To read books in English with pleasure and motivation, it is important to choose books that interest you and match your goals. For example, if you are learning English for work or study, you can choose books in your specialty or profession. If you are learning English for travel or communication, you can choose books about the countries, cultures, people, or situations that attract you. If you are learning English for entertainment or self-development, you can choose books related to your hobbies, passions, tastes, or values. The main thing is that the books are interesting and useful to you.
-To read books in English effectively and comfortably, it is helpful to look for books that come with additional resources, such as audio, translation, and exercises. Audio helps you hear the pronunciation of words and the intonation of sentences, and trains your listening comprehension. A translation helps you understand the meaning of words and expressions that you don't know or aren't sure about. Exercises help you check your understanding of the text and consolidate new knowledge and skills. You can find books with audio, translation, and exercises on specialized websites or apps for learning English.
-If you have decided to read books in English, you need to know where you can download English books for beginners for free. There are many websites and resources that offer free books in English for different levels and purposes. Here are some of them:
-English 4U is a website for learning English online. On this site you can download two English grammar books with exercises: English Grammar in Use and Essential Grammar in Use. These books are suitable for beginner and intermediate levels and contain many examples, explanations, and tests on different grammar topics. You can download the books in PDF or DOC format, as well as audio files in MP3 format. You can download the books at this link: .
-Grammar Teacher is a website for learning English grammar online. On this site you can download the textbook English Grammar Secrets, which consists of 16 chapters devoted to different aspects of grammar. Each chapter contains a theory section, examples, exercises, and tests. You can download the textbook in PDF or DOC format, as well as audio files in MP3 format. You can download the textbook at this link: .
-Oxford Guide to English Grammar is a complete reference guide to English grammar suitable for all levels. The book covers all the main and supplementary grammar topics, such as parts of speech, sentences, tenses, modality, the passive voice, conditional sentences, and others. It contains many examples, tables, diagrams, and rules. You can download the book in PDF format at this link: .
-Englishpage Online English Grammar Book is an interactive English grammar book available online. It consists of 10 sections devoted to different grammar topics, such as articles, adverbs, adjectives, pronouns, phrasal verbs, and others. Each section contains a theory part, examples, and tasks. You can read the book online or download it in PDF format at this link: .
-Aldebaran is an electronic library that contains more than 100,000 books in Russian and English on various topics and in various genres. On this site you can find books on literature, history, philosophy, psychology, economics, law, medicine, and other fields of knowledge. You can also find fiction by different authors and movements. You can read books online or download them in FB2, RTF, TXT, DOC, and other formats. You can visit the site at this link: .
-English Online Club is a website for learning English online. On this site you can find graded readers by level, from beginner to advanced. You can choose books in different genres, such as detective stories, science fiction, adventure, romance, and others. You can also find classics of English and American literature, such as Shakespeare, Dickens, Twain, Hemingway, and others. You can read the books online or download them in PDF or DOC format. You can visit the site at this link: .
-Reading books in English is one of the best ways for beginners to learn English. It helps you improve your vocabulary and grammar, develop reading and comprehension skills, and immerse yourself in the culture and mindset of English-speaking countries. To read books in English with pleasure and benefit, it is important to choose books that match your language level, interests, and goals. You can also look for books that come with additional resources, such as audio, translation, and exercises. On the internet you can find many websites and resources that let you download English books for beginners for free. We hope this article has helped you with that, and that you will enjoy reading books in English.
-Here are some frequently asked questions about downloading English books for beginners for free:
-There is no single answer to this question, since the choice of book depends on your language level, interests, and goals. In general, however, it is recommended to choose books that are neither too difficult nor too easy for you, that have an interesting plot or useful information, and that come with additional resources such as audio, translation, and exercises. You can start with grammar or vocabulary books and then move on to fiction or popular-science works.
-It depends on your free time, motivation, and goals. In general, however, it is recommended to read books in English regularly and systematically, if possible every day or several times a week. You can devote as much time to reading as is comfortable for you, but not less than 15-20 minutes per session. You can read books in English at any time and place convenient for you, for example at home, on public transport, in a park, or in a cafe.
-To check your understanding of books in English, you can use different methods, such as:
-To raise your level of reading books in English, you can follow these tips:
-In addition to the websites and resources we mentioned above, you can use these resources for reading books in English:
-We hope our article has helped you learn more about how and where to download English books for beginners for free. We wish you good luck and enjoyment in learning English through reading books in English!
If you are a fan of anime fighting games, you might have heard of Bleach vs Naruto 3.3 Mod, a popular flash game that features characters from two of the most famous shonen anime series, Bleach and Naruto. This game allows you to choose from over 40 heroes, each with their own unique style and technique, and battle against your friends or the computer in various modes and arenas.
-DOWNLOAD • https://jinyurl.com/2uNTnQ
While this game is originally designed for web browsers, you might wonder if you can play it on your PC as well. The answer is yes, you can! Playing Bleach vs Naruto 3.3 Mod on PC has many advantages, such as better graphics, smoother performance, larger screen, and more comfortable controls. In this article, we will show you how to download and play Bleach vs Naruto 3.3 Mod on PC using two different methods. We will also give you some tips and tricks for playing the game, as well as a comparison of Bleach and Naruto characters.
-There are two ways you can download and play Bleach vs Naruto 3.3 Mod on PC: using an emulator or using an APK file. An emulator is software that mimics the Android operating system on your PC, allowing you to run Android apps and games. An APK file is a package file that contains all the data and code of an Android app or game. Here are the steps for each option:
-Bluestacks is one of the most popular and reliable Android emulators for PC. It lets you access the Google Play Store and download, install, and play Android apps and games on your PC. Here is how to use Bluestacks to play Bleach vs Naruto 3.3 Mod on PC:
-If you don't want to use an emulator, you can also download and play Bleach vs Naruto 3.3 Mod on PC using an APK file. An APK file is a package file that contains all the data and code of an Android app or game. You can download an APK file from various sources online, but make sure you choose a reliable and safe one. Here is how to use an APK file to play Bleach vs Naruto 3.3 Mod on PC:
Now that you have downloaded and installed Bleach vs Naruto 3.3 Mod on your PC, you are ready to play it. But before you jump into action, here are some tips and tricks for playing the game that will help you improve your skills and enjoy it more.
-Bleach vs Naruto 3.3 Mod is a fun and challenging fighting game that requires quick reflexes, strategic thinking, and mastery of different skills and combos. Here are some tips and tricks for playing the game:
-Bleach vs Naruto 3.3 Mod is a game that brings together two of the most popular anime series, Bleach and Naruto. Both series have a large and diverse cast of characters, each with their own personality, backstory, and abilities. If you are curious about how these characters compare to each other, here is a table that shows some of their strengths and weaknesses, as well as their similarities and differences.
| Character | Strengths | Weaknesses | Similarities | Differences |
| --- | --- | --- | --- | --- |
| Naruto | Has a powerful nine-tailed fox spirit inside him; can use various types of jutsu, such as shadow clones, rasengan, and sage mode; has a strong will and determination to protect his friends and achieve his goals | Can be reckless and impulsive; can be easily angered and lose control of his fox spirit; can be naive and gullible | Both are orange-haired protagonists who aspire to become the strongest in their world; both have a rival who is more talented and has a dark past; both have a mentor who is eccentric and powerful | Naruto is a ninja who lives in a world where people use chakra to perform jutsu; he is an orphan who grew up without parents; he is cheerful and optimistic despite his hardships |
| Ichigo | Has a high level of spiritual energy that allows him to see and fight spirits; can use various forms of power, such as shinigami, hollow, and fullbring; has a strong sense of justice and responsibility to protect his family and friends | Can be stubborn and prideful; can be overconfident and underestimate his enemies; can be reluctant to accept help from others | Both are orange-haired protagonists who aspire to become the strongest in their world; both have a rival who is more talented and has a dark past; both have a mentor who is eccentric and powerful | Ichigo is a human who lives in a world where people have souls that can become shinigami or hollows; he has a loving family who supports him; he is cynical and sarcastic due to his experiences |
| Sasuke | Has a rare bloodline trait that gives him the sharingan eye; can use various types of jutsu, such as fire, lightning, and genjutsu; has high intelligence and analytical skills | Can be cold and aloof; can be consumed by hatred and revenge; can be arrogant and dismissive of others | Both are dark-haired rivals of the protagonists who have a tragic past; both have a powerful eye technique that can manipulate reality; both have a brother who is influential and mysterious | Sasuke is a ninja who lives in a world where people use chakra to perform jutsu; his clan was massacred by his brother when he was young; he is ambitious and driven to surpass his brother |
| Aizen | Has a genius-level intellect and a vast knowledge of spiritual matters; can use various forms of power, such as shinigami, hollow, and hogyoku; has masterful control of his spiritual energy and his zanpakuto | Can be manipulative and deceptive; can be overconfident and underestimate his enemies; can be cruel and ruthless | Both are dark-haired rivals of the protagonists who have a tragic past; both have a powerful eye technique that can manipulate reality; both have a brother who is influential and mysterious | Aizen is a shinigami who lives in a world where people have souls that can become shinigami or hollows; his past is shrouded in mystery and he has no known family; he is treacherous and scheming to overthrow the Soul Society |
Bleach vs Naruto 3.3 Mod is an amazing game that lets you enjoy the best of both anime worlds. You can choose from over 40 characters from Bleach and Naruto, each with their own skills and combos, and fight against your friends or the computer in various modes and arenas. You can also download and play the game on your PC using an emulator or an APK file, which will give you better graphics, smoother performance, larger screen, and more comfortable controls. If you are looking for a fun and challenging fighting game that features your favorite anime characters, you should definitely try Bleach vs Naruto 3.3 Mod on PC.
-Here are some of the most frequently asked questions about Bleach vs Naruto 3.3 Mod on PC:
-If you want to know more about the hardware components of your PC, such as your processor, motherboard, RAM, and graphics card, then you might want to download CPU Z apk for PC. CPU Z is a freeware system information utility that can show you detailed data about your hardware and help you test your system's performance and stability. In this article, we will explain what CPU Z is, how to download and install it on your PC, how to use it to monitor your hardware, what are the benefits of using it, and what are some alternatives to it.
-Download → https://jinyurl.com/2uNPIw
CPU Z is a freeware system information utility that was developed by the French company CPUID. It was originally designed for overclockers who wanted to check their CPU frequencies and voltages, but it has since evolved into a comprehensive tool that can provide information on various aspects of your hardware.
-CPU Z can gather information on some of the main devices of your system, such as:
-CPU Z can also measure the real-time frequency of each core of your processor and the memory frequency.
-CPU Z can help you understand the specifications and capabilities of your hardware components. For example, you can find out:
-CPU Z can also show you the logo and the codename of your processor and graphics card, as well as the manufacturer and the model of your motherboard.
-CPU Z can also help you measure and compare the performance of your hardware components. For example, you can use CPU Z to:
CPU Z can also detect any overclocking or underclocking of your processor and graphics card, and show you the current and the maximum frequency of each core.
-If you want to use CPU Z on your PC, you will need to download and install an Android emulator first. An Android emulator is software that creates a virtual Android environment on your PC, allowing you to run Android apps and games on your computer. There are many Android emulators available for PC, such as BlueStacks, NoxPlayer, LDPlayer, MEmu, etc. You can choose the one that suits your preferences and system requirements.
-Once you have installed an Android emulator on your PC, you will need to download the CPU Z apk file. An apk file is a package file that contains the installation files and data of an Android app. You can download CPU Z apk from various sources online, such as APKPure, APKMirror, Uptodown, etc. Make sure you download the latest version of CPU Z apk from a reliable and safe source.
-After downloading the CPU Z apk file, you will need to run it on your Android emulator. Depending on the emulator you are using, there are different ways to do this. For example, you can:
-The emulator will then install CPU Z apk on your PC and create a shortcut icon on your desktop or home screen.
To summarize, you can follow these steps to download and install CPU Z apk for PC:
-Once you have installed CPU Z on your PC, you can use it to monitor your hardware and test your system's performance and stability. CPU Z has a simple and user-friendly interface that consists of several tabs that display different information about your hardware. You can also access some features and options from the menu bar or the toolbar.
-CPU Z has six main tabs that show you various data about your hardware components. These tabs are:
-CPU Z can also help you evaluate and compare the performance of your processor using the benchmark and stress test features. You can access these features from the menu bar or the toolbar. The benchmark feature allows you to run a single-thread or a multi-thread test that measures the processing speed of your CPU in millions of instructions per second (MIPS). The stress test feature allows you to run a heavy load on your CPU for a specified duration and monitor its temperature and stability. You can also compare your results with other processors online or offline.
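CPU Z's own benchmark is a built-in feature of the program, but the single-thread versus multi-thread idea it measures can be illustrated with a few lines of Python. The sketch below is only an analogue under that assumption (the workload function and the timing scheme are made up for illustration), not CPU Z's actual algorithm:

```python
# A rough, self-contained analogue of a single-core vs. all-core CPU benchmark.
# It runs the same total amount of work on one core, then split across all cores,
# and reports the observed scaling.
import time
from multiprocessing import Pool, cpu_count


def workload(n: int) -> int:
    """A fixed amount of integer arithmetic standing in for a benchmark kernel."""
    total = 0
    for i in range(n):
        total += i * i
    return total


if __name__ == "__main__":
    n = 2_000_000
    cores = cpu_count()

    # Single-core run: all of the work on one core.
    start = time.perf_counter()
    workload(n * cores)
    single = time.perf_counter() - start

    # All-core run: the same total amount of work split across every core.
    start = time.perf_counter()
    with Pool(cores) as pool:
        pool.map(workload, [n] * cores)
    multi = time.perf_counter() - start

    print(f"cores detected:    {cores}")
    print(f"single-core time:  {single:.2f} s")
    print(f"all-core time:     {multi:.2f} s")
    print(f"observed speed-up: {single / multi:.2f}x")
```

For a purely CPU-bound workload the observed speed-up approaches the core count, which is the same scaling a multi-thread benchmark score reflects relative to the single-thread score.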
-CPU Z can also help you create and share an online report of your hardware data using the validation feature. You can access this feature from the menu bar or the toolbar. The validation feature allows you to submit your hardware data to the CPUID website and generate a unique URL that you can share with others. You can also view and download your validation report as a PDF file. You can also browse other validation reports from other users and compare your hardware with theirs.
-CPU Z is a useful tool for anyone who wants to monitor their hardware and improve their system performance. Some of the benefits of using CPU Z are:
-CPU Z can help you optimize your system settings and performance by providing you with accurate and detailed information about your hardware components. You can use this information to adjust your BIOS settings, overclock or underclock your processor and graphics card, enable or disable certain features, update your drivers, etc. You can also use CPU Z to test and verify the effects of your changes on your system's performance and stability.
-CPU Z can help you troubleshoot any hardware issues or errors by providing you with real-time data about your hardware components. You can use this data to diagnose any problems or anomalies in your system, such as high temperature, low voltage, incorrect frequency, faulty memory, etc. You can also use CPU Z to check if your hardware components are compatible with each other and with your system requirements.
-CPU Z can help you compare your hardware with other systems and devices by providing you with benchmark and validation features. You can use these features to measure and compare the performance of your processor with other processors online or offline. You can also use these features to create and share an online report of your hardware data with others. You can also browse other validation reports from other users and compare your hardware with theirs.
-There are many other system information utilities and benchmark tools available for PC that can provide you with similar or different information and options than CPU Z. Some of the most popular alternatives to CPU Z are:
-HWiNFO is a freeware system information utility that can provide you with comprehensive information about your hardware components, such as processor, motherboard, memory, disk drives, network adapters, sensors, etc. It can also monitor your system's health, performance, and power consumption in real-time. It can also generate reports, logs, graphs, alerts, etc.
-Speccy is a freeware system information utility that can provide you with concise information about your hardware components, such as processor, motherboard, memory, graphics card, storage devices, optical drives, audio devices, network devices, etc. It can also show you the temperature of each component and the operating system details. It can also generate reports in various formats.
-AIDA64 is a shareware system information utility that can provide you with detailed information about your hardware components, such as processor, motherboard, memory, graphics card, storage devices, network devices, sensors, etc. It can also monitor your system's health, performance, and power consumption in real-time. It can also run various tests and benchmarks to measure your system's capabilities and stability. It can also generate reports in various formats.
-GPU-Z is a freeware system information utility that can provide you with specific information about your graphics card or integrated graphics processor (IGP), such as the model, lithography, specific GPU chip equipped, memory frequency, memory technology, bus width, etc. It can also monitor your graphics card's temperature, fan speed, voltage, load, etc. It can also run a render test to verify the PCI-Express lane configuration.
-CPU Z is a useful tool for anyone who wants to monitor their hardware and improve their system performance. You can download and install CPU Z apk for PC using an Android emulator and follow the instructions in this guide. You can also check out other similar tools that can provide you with more information and options.
-CPU Z is safe to use as long as you download it from a reliable and safe source online. CPU Z does not contain any malware or spyware and does not modify any system files or settings. However, you should always be careful when downloading and installing any software on your PC and scan it with an antivirus program before running it.
-CPU Z supports Windows 10 as well as Windows XP, Vista, 7, 8, and 8.1. CPU Z also supports 32-bit and 64-bit versions of Windows. However, some features of CPU Z may not work properly on some versions of Windows or some hardware configurations.
-CPU Z does not damage your hardware by itself. CPU Z only reads and displays the information about your hardware components and does not change or modify any of them. However, if you use CPU Z to overclock or underclock your processor or graphics card, you may risk damaging your hardware if you do not know what you are doing or if you exceed the safe limits of your hardware.
-CPU Z is generally accurate and reliable in providing information about your hardware components. However, CPU Z may not be able to detect or display some information correctly depending on the type and model of your hardware components or the version of your BIOS or drivers. CPU Z may also show some discrepancies or errors in some cases due to rounding or measurement errors.
-CPU Z is updated regularly by the developers to support new hardware components and fix any bugs or issues. You can check the latest version of CPU Z on the official website of CPUID or on the sources where you downloaded it from. You can also enable the automatic update feature in CPU Z to get notified when a new version is available.
If you are a fan of Zambian music, you have probably heard of the song "Adam Na Eva" by Adam Na Eva featuring 4 Na 5. This catchy tune has been making waves in the local music scene since its release in March 2023. It is a fusion of kalindula, a traditional Zambian genre, and afrobeat, a modern African genre. The song showcases the talents and creativity of four young Zambian artists who have collaborated to produce a hit that appeals to a wide audience.
-Adam Na Eva are a Zambian duo composed of Adam Mwale and Eva Chanda. They met in 2020 at a talent show in Lusaka, where they discovered their mutual passion for music. They decided to form a group and named themselves after the biblical characters Adam and Eve. Their musical style is influenced by various genres such as gospel, reggae, dancehall, and hip hop. They have released several singles such as "Nimwe", "Mwana Wanga", and "Ndiwe". They have also performed at various events and festivals across Zambia.
4 Na 5 are another Zambian duo composed of Y-Celeb and Chanda Na Kay. They also met in 2020 at a studio in Ndola, where they were recording their solo projects. They realized that they had a similar vibe and decided to work together as a team. They named themselves 4 Na 5 because they believe that they are more than just two people; they are a movement that represents the youth and their aspirations. Their musical style is mainly afrobeat with elements of rap and comedy. They have released several singles such as "Iyee", "Aboloka", and "Ka Boom". They have also gained popularity on social media platforms such as YouTube and TikTok.
-The song "Adam Na Eva" is a love song that expresses the feelings of two couples who are deeply in love. The chorus goes like this:
-Adam na Eva, we are meant to be
-You are my rib, I am your tree
-Adam na Eva, we are one in love
-You are my queen, I am your dove
The verses describe the qualities and actions that make the couples happy and loyal to each other. For example, Adam Na Eva sing:
-You make me smile when I am sad
-You make me calm when I am mad
-You make me strong when I am weak
-You make me whole when I am bleak
And 4 Na 5 sing:
-I will never cheat on you
-I will never lie to you
-I will never hurt you
-I will never leave you
The song also uses some metaphors and references to Zambian culture and history. For example, Adam Na Eva compare their love to the Zambezi River, which is the longest river in Zambia and a symbol of life and prosperity. They also mention the names of some famous Zambian leaders and heroes, such as Kenneth Kaunda, Levy Mwanawasa, and Simon Mwansa Kapwepwe, to show their respect and admiration.
The song "Adam Na Eva" has been received very well by the Zambian music fans and critics. It has become one of the most popular songs in the country, topping the charts on various platforms such as Mvesesani, ZedBeats, and Zambian Music Blog. It has also received millions of views and likes on YouTube and TikTok, where many users have created videos dancing and lip-syncing to the song. The song has also been played on several radio stations and TV channels, such as ZNBC, QFM, and Diamond TV. The song has also received positive feedback from other Zambian artists and celebrities, such as Macky 2, Chef 187, Slapdee, and Pompi, who have praised the song for its quality, originality, and message.
-The success of the song "Adam Na Eva" reflects the growth and potential of the Zambian music industry, which has been producing more diverse and innovative music in recent years. However, there are also some challenges and opportunities that face the industry and its artists. Some of the challenges include:
-Some of the opportunities include:
-Now that you know more about the song "Adam Na Eva" and its artists, you might be wondering how to download it as an mp3 file. There are different ways to do this, depending on your preferences and resources. However, before you proceed, you should be aware of some legal and ethical issues involved in downloading music. Downloading music without paying for it or without the permission of the artists or the owners of the rights is considered piracy, which is illegal and punishable by law. It also deprives the artists of their deserved income and recognition, which can affect their livelihood and career. Therefore, you should always respect the rights of the artists and support them by buying their music from official sources or streaming it from licensed platforms.
-One of the best ways to download the song "Adam Na Eva" as an mp3 file is to buy it from official sources, such as iTunes or Google Play Music. These are online stores that sell digital music legally and securely. You can access them from your computer or mobile device, and pay with your credit card or other methods. The advantages of buying music from official sources are:
-The disadvantages of buying music from official sources are:
-To buy the song "Adam Na Eva" from iTunes, follow these steps:
-The price of the song on iTunes is $0.99 USD as of June 2023.
-To buy the song "Adam Na Eva" from Google Play Music, follow these steps:
-The price of the song on Google Play Music is $0.99 USD as of June 2023.
Another way to download the song "Adam Na Eva" as an mp3 file is to convert YouTube videos into mp3 files. YouTube is a popular online platform that hosts millions of videos, including music videos. You can access YouTube from your computer or mobile device, and watch videos for free. However, YouTube does not allow you to download videos or audio directly from its site, for legal and ethical reasons. Therefore, you need to use third-party tools or software that can extract the audio from YouTube videos and save them as mp3 files. The advantages of downloading music from YouTube are:
-The disadvantages of downloading music from YouTube are:
-To download the song "Adam Na Eva" from YouTube, follow these steps:
-A third way to download the song "Adam Na Eva" as an mp3 file is to find and download free mp3 files from websites that offer royalty-free or Creative Commons music. Royalty-free music is music that you can use for personal or commercial purposes without paying any royalties or fees to the artists or the owners of the rights. Creative Commons music is music that the artists have licensed under certain conditions, such as attribution, non-commercial use, or share-alike. You can access these websites from your computer or mobile device, and download music for free or for a small donation. The advantages of downloading music from other websites are:
-The disadvantages of downloading music from other websites are:
-To download the song "Adam Na Eva" from other websites, follow these steps:
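As a minimal illustration of the final download step, the sketch below streams a file to disk with the Python requests library. The URL is a placeholder rather than a real source for "Adam Na Eva", and it only makes sense for files you are actually licensed to download (royalty-free or Creative Commons, as described above).

```python
# Minimal sketch: saving a legitimately licensed mp3 from a direct URL.
# The URL below is a placeholder, not a real download link for the song.
import requests

def download_mp3(url: str, out_path: str) -> None:
    # Stream the response so large files are not held in memory all at once
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    print(f"Saved {out_path}")

if __name__ == "__main__":
    download_mp3("https://example.com/creative-commons/adam-na-eva.mp3",
                 "adam_na_eva.mp3")
```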
-To help you decide which method to use for downloading the song "Adam Na Eva" as an mp3 file, here is a table that compares the pros and cons of each method, based on criteria such as quality, cost, convenience, and legality.
-| Method | Quality | Cost | Convenience | Legality |
-|---|---|---|---|---|
-| Buying music from official sources | High | Medium to high | Easy to moderate | Legal and ethical |
-| Downloading music from YouTube | Low to medium | Free | Moderate to hard | Illegal and unethical |
-| Downloading music from other websites | Low to high | Free to low | Moderate to hard | Legal and ethical (with conditions) |
In conclusion, the song "Adam Na Eva" by Adam Na Eva featuring 4 Na 5 is a Zambian music hit that has been making waves in the local and regional scene. It is a fusion of kalindula and afrobeat genres that showcases the talents and creativity of four young Zambian artists. The song is a love song that expresses the feelings of two couples who are deeply in love. The song has been received very well by the Zambian music fans and critics, who have praised it for its quality, originality, and message. The song also reflects the growth and potential of the Zambian music industry, which faces some challenges and opportunities in terms of funding, recognition, regulation, digitalization, collaboration, and exposure.
-If you want to download the song "Adam Na Eva" as an mp3 file, you have three options: buying it from official sources such as iTunes or Google Play Music; converting it from YouTube videos using online tools or software; or finding it on other websites that offer royalty-free or Creative Commons music. Each option has its pros and cons in terms of quality, cost, convenience, and legality. You should weigh these factors carefully before choosing your preferred option. We recommend buying the song from official sources if you can afford it, as this is the best way to support the artists and respect their rights. If you cannot or do not want to buy the song, use the other options with caution and discretion, and always give credit to the original sources.
-Here are some frequently asked questions and answers related to the topic of the article:
-Some other popular songs by 4 Na 5 are:
-Score Bleu = "+str(int(round(corpus_bleu(s_trad,[s_trad_ref]).score,0)))+"%
", \ - unsafe_allow_html=True) - -@st.cache_data -def find_lang_label(lang_sel): - global lang_tgt, label_lang - return label_lang[lang_tgt.index(lang_sel)] - -def run(): - - global n1, df_data_src, df_data_tgt, translation_model, placeholder, model_speech - global df_data_en, df_data_fr, lang_classifier, translation_en_fr, translation_fr_en - global lang_tgt, label_lang - - st.title(title) - # - st.write("## **Explications :**\n") - - st.markdown( - """ - Enfin, nous avons réalisé une traduction :red[**Seq2Seq**] ("Sequence-to-Sequence") avec des :red[**réseaux neuronaux**]. - La traduction Seq2Seq est une méthode d'apprentissage automatique qui permet de traduire des séquences de texte d'une langue à une autre en utilisant - un :red[**encodeur**] pour capturer le sens du texte source, un :red[**décodeur**] pour générer la traduction, et un :red[**vecteur de contexte**] pour relier les deux parties du modèle. - """ - ) - - lang_tgt = ['en','fr','ab','aa','af','ak','sq','de','am','en','ar','an','hy','as','av','ae','ay','az','ba','bm','eu','bn','bi','be','bh','my','bs','br','bg','ks','ca','ch','ny','zh','si','ko','kw','co','ht','cr','hr','da','dz','gd','es','eo','et','ee','fo','fj','fi','fr','fy','gl','cy','lg','ka','el','kl','gn','gu','ha','he','hz','hi','ho','hu','io','ig','id','ia','iu','ik','ga','is','it','ja','jv','kn','kr','kk','km','kg','ki','rw','ky','rn','kv','kj','ku','lo','la','lv','li','ln','lt','lu','lb','mk','ms','ml','dv','mg','mt','gv','mi','mr','mh','mo','mn','na','nv','ng','nl','ne','no','nb','nn','nr','ie','oc','oj','or','om','os','ug','ur','uz','ps','pi','pa','fa','ff','pl','pt','qu','rm','ro','ru','se','sm','sg','sa','sc','sr','sh','sn','nd','sd','sk','sl','so','st','su','sv','sw','ss','tg','tl','ty','ta','tt','cs','ce','cv','te','th','bo','ti','to','ts','tn','tr','tk','tw','uk','ve','vi','cu','vo','wa','wo','xh','ii','yi','yo','za','zu'] - label_lang = 
['Anglais','Français','Abkhaze','Afar','Afrikaans','Akan','Albanais','Allemand','Amharique','Anglais','Arabe','Aragonais','Arménien','Assamais','Avar','Avestique','Aymara','Azéri','Bachkir','Bambara','Basque','Bengali','Bichelamar','Biélorusse','Bihari','Birman','Bosnien','Breton','Bulgare','Cachemiri','Catalan','Chamorro','Chichewa','Chinois','Cingalais','Coréen','Cornique','Corse','Créolehaïtien','Cri','Croate','Danois','Dzongkha','Écossais','Espagnol','Espéranto','Estonien','Ewe','Féroïen','Fidjien','Finnois','Français','Frisonoccidental','Galicien','Gallois','Ganda','Géorgien','Grecmoderne','Groenlandais','Guarani','Gujarati','Haoussa','Hébreu','Héréro','Hindi','Hirimotu','Hongrois','Ido','Igbo','Indonésien','Interlingua','Inuktitut','Inupiak','Irlandais','Islandais','Italien','Japonais','Javanais','Kannada','Kanouri','Kazakh','Khmer','Kikongo','Kikuyu','Kinyarwanda','Kirghiz','Kirundi','Komi','Kuanyama','Kurde','Lao','Latin','Letton','Limbourgeois','Lingala','Lituanien','Luba','Luxembourgeois','Macédonien','Malais','Malayalam','Maldivien','Malgache','Maltais','Mannois','MaorideNouvelle-Zélande','Marathi','Marshallais','Moldave','Mongol','Nauruan','Navajo','Ndonga','Néerlandais','Népalais','Norvégien','Norvégienbokmål','Norvégiennynorsk','Nrebele','Occidental','Occitan','Ojibwé','Oriya','Oromo','Ossète','Ouïghour','Ourdou','Ouzbek','Pachto','Pali','Pendjabi','Persan','Peul','Polonais','Portugais','Quechua','Romanche','Roumain','Russe','SameduNord','Samoan','Sango','Sanskrit','Sarde','Serbe','Serbo-croate','Shona','Sindebele','Sindhi','Slovaque','Slovène','Somali','SothoduSud','Soundanais','Suédois','Swahili','Swati','Tadjik','Tagalog','Tahitien','Tamoul','Tatar','Tchèque','Tchétchène','Tchouvache','Télougou','Thaï','Tibétain','Tigrigna','Tongien','Tsonga','Tswana','Turc','Turkmène','Twi','Ukrainien','Venda','Vietnamien','Vieux-slave','Volapük','Wallon','Wolof','Xhosa','Yi','Yiddish','Yoruba','Zhuang','Zoulou'] - lang_src = {'ar': 'arabic', 'bg': 'bulgarian', 'de': 'german', 'el':'modern greek', 'en': 'english', 'es': 'spanish', 'fr': 'french', \ - 'hi': 'hindi', 'it': 'italian', 'ja': 'japanese', 'nl': 'dutch', 'pl': 'polish', 'pt': 'portuguese', 'ru': 'russian', 'sw': 'swahili', \ - 'th': 'thai', 'tr': 'turkish', 'ur': 'urdu', 'vi': 'vietnamese', 'zh': 'chinese'} - st.write("## **Paramètres :**\n") - - st.write("#### Choisissez le type de traduction:") - # tab1, tab2, tab3 = st.tabs(["small vocab avec Keras et un GRU","Phrases à saisir", "Phrases à dicter"]) - chosen_id = tab_bar(data=[ - TabBarItemData(id="tab1", title="small vocab", description="avec Keras et un GRU"), - TabBarItemData(id="tab2", title="small vocab", description="avec Keras et un Transformer"), - TabBarItemData(id="tab3", title="Phrase personnelle", description="à saisir"), - TabBarItemData(id="tab4", title="Phrase personnelle", description="à dicter")], - default="tab1") - - if (chosen_id == "tab1") or (chosen_id == "tab2") : - TabContainerHolder = st.container() - Sens = TabContainerHolder.radio('Sens de la traduction:',('Anglais -> Français','Français -> Anglais'), horizontal=True) - Lang = ('en_fr' if Sens=='Anglais -> Français' else 'fr_en') - - if (Lang=='en_fr'): - df_data_src = df_data_en - df_data_tgt = df_data_fr - if (chosen_id == "tab1"): - translation_model = rnn_en_fr - else: - translation_model = transformer_en_fr - else: - df_data_src = df_data_fr - df_data_tgt = df_data_en - if (chosen_id == "tab1"): - translation_model = rnn_fr_en - else: - translation_model = transformer_fr_en - - st.write("= 
self._len: - raise IndexError("index out of range") - - def __del__(self): - if self.data_file: - self.data_file.close() - - @lru_cache(maxsize=8) - def __getitem__(self, i) -> torch.Tensor: - if not self.data_file: - self.read_data(self.path) - self.check_index(i) - tensor_size = self.sizes[self.dim_offsets[i] : self.dim_offsets[i + 1]] - a = np.empty(tensor_size, dtype=self.dtype) - self.data_file.seek(self.data_offsets[i] * self.element_size) - self.data_file.readinto(a) - item = torch.from_numpy(a).long() - if self.fix_lua_indexing: - item -= 1 # subtract 1 for 0-based indexing - return item - - def __len__(self): - return self._len - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - @staticmethod - def exists(path): - return PathManager.exists(index_file_path(path)) and PathManager.exists( - data_file_path(path) - ) - - @property - def supports_prefetch(self): - return False # avoid prefetching to save memory - - -class IndexedCachedDataset(IndexedDataset): - def __init__(self, path, fix_lua_indexing=False): - super().__init__(path, fix_lua_indexing=fix_lua_indexing) - self.cache = None - self.cache_index = {} - - @property - def supports_prefetch(self): - return True - - def prefetch(self, indices): - if all(i in self.cache_index for i in indices): - return - if not self.data_file: - self.read_data(self.path) - indices = sorted(set(indices)) - total_size = 0 - for i in indices: - total_size += self.data_offsets[i + 1] - self.data_offsets[i] - self.cache = np.empty(total_size, dtype=self.dtype) - ptx = 0 - self.cache_index.clear() - for i in indices: - self.cache_index[i] = ptx - size = self.data_offsets[i + 1] - self.data_offsets[i] - a = self.cache[ptx : ptx + size] - self.data_file.seek(self.data_offsets[i] * self.element_size) - self.data_file.readinto(a) - ptx += size - if self.data_file: - # close and delete data file after prefetch so we can pickle - self.data_file.close() - self.data_file = None - - @lru_cache(maxsize=8) - def __getitem__(self, i): - self.check_index(i) - tensor_size = self.sizes[self.dim_offsets[i] : self.dim_offsets[i + 1]] - a = np.empty(tensor_size, dtype=self.dtype) - ptx = self.cache_index[i] - np.copyto(a, self.cache[ptx : ptx + a.size]) - item = torch.from_numpy(a).long() - if self.fix_lua_indexing: - item -= 1 # subtract 1 for 0-based indexing - return item - - -class IndexedRawTextDataset(FairseqDataset): - """Takes a text file as input and binarizes it in memory at instantiation. 
- Original lines are also kept in memory""" - - def __init__(self, path, dictionary, append_eos=True, reverse_order=False): - self.tokens_list = [] - self.lines = [] - self.sizes = [] - self.append_eos = append_eos - self.reverse_order = reverse_order - self.read_data(path, dictionary) - self.size = len(self.tokens_list) - - def read_data(self, path, dictionary): - with open(path, "r", encoding="utf-8") as f: - for line in f: - self.lines.append(line.strip("\n")) - tokens = dictionary.encode_line( - line, - add_if_not_exist=False, - append_eos=self.append_eos, - reverse_order=self.reverse_order, - ).long() - self.tokens_list.append(tokens) - self.sizes.append(len(tokens)) - self.sizes = np.array(self.sizes) - - def check_index(self, i): - if i < 0 or i >= self.size: - raise IndexError("index out of range") - - @lru_cache(maxsize=8) - def __getitem__(self, i): - self.check_index(i) - return self.tokens_list[i] - - def get_original_text(self, i): - self.check_index(i) - return self.lines[i] - - def __del__(self): - pass - - def __len__(self): - return self.size - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - @staticmethod - def exists(path): - return PathManager.exists(path) - - -class IndexedDatasetBuilder: - element_sizes = { - np.uint8: 1, - np.int8: 1, - np.int16: 2, - np.int32: 4, - np.int64: 8, - np.float: 4, - np.double: 8, - } - - def __init__(self, out_file, dtype=np.int32): - self.out_file = open(out_file, "wb") - self.dtype = dtype - self.data_offsets = [0] - self.dim_offsets = [0] - self.sizes = [] - self.element_size = self.element_sizes[self.dtype] - - def add_item(self, tensor): - # +1 for Lua compatibility - bytes = self.out_file.write(np.array(tensor.numpy() + 1, dtype=self.dtype)) - self.data_offsets.append(self.data_offsets[-1] + bytes / self.element_size) - for s in tensor.size(): - self.sizes.append(s) - self.dim_offsets.append(self.dim_offsets[-1] + len(tensor.size())) - - def merge_file_(self, another_file): - index = IndexedDataset(another_file) - assert index.dtype == self.dtype - - begin = self.data_offsets[-1] - for offset in index.data_offsets[1:]: - self.data_offsets.append(begin + offset) - self.sizes.extend(index.sizes) - begin = self.dim_offsets[-1] - for dim_offset in index.dim_offsets[1:]: - self.dim_offsets.append(begin + dim_offset) - - with open(data_file_path(another_file), "rb") as f: - while True: - data = f.read(1024) - if data: - self.out_file.write(data) - else: - break - - def finalize(self, index_file): - self.out_file.close() - index = open(index_file, "wb") - index.write(b"TNTIDX\x00\x00") - index.write(struct.pack("str: - local_index_path = PathManager.get_local_path(index_file_path(path)) - local_data_path = PathManager.get_local_path(data_file_path(path)) - - assert local_index_path.endswith(".idx") and local_data_path.endswith(".bin"), ( - "PathManager.get_local_path does not return files with expected patterns: " - f"{local_index_path} and {local_data_path}" - ) - - local_path = local_data_path[:-4] # stripping surfix ".bin" - assert local_path == local_index_path[:-4] # stripping surfix ".idx" - return local_path - - -class MMapIndexedDatasetBuilder: - def __init__(self, out_file, dtype=np.int64): - self._data_file = open(out_file, "wb") - self._dtype = dtype - self._sizes = [] - - def add_item(self, tensor): - np_array = np.array(tensor.numpy(), dtype=self._dtype) - self._data_file.write(np_array.tobytes(order="C")) - self._sizes.append(np_array.size) - - def 
merge_file_(self, another_file): - # Concatenate index - index = MMapIndexedDataset.Index(index_file_path(another_file)) - assert index.dtype == self._dtype - - for size in index.sizes: - self._sizes.append(size) - - # Concatenate data - with open(data_file_path(another_file), "rb") as f: - shutil.copyfileobj(f, self._data_file) - - def finalize(self, index_file): - self._data_file.close() - - with MMapIndexedDataset.Index.writer(index_file, self._dtype) as index: - index.write(self._sizes) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/cider/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/cider/__init__.py deleted file mode 100644 index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/utils/cider/pyciderevalcap/cider/__init__.py +++ /dev/null @@ -1 +0,0 @@ -__author__ = 'tylin' diff --git a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py b/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py deleted file mode 100644 index 9754b40821b519aeee669973156d970b18ef6f3b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/indic_nlp_library/indicnlp/transliterate/unicode_transliterate.py +++ /dev/null @@ -1,347 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -#Program for text written in one Indic script to another based on Unicode mappings. -# -# @author Anoop Kunchukuttan -# - -import sys, string, itertools, re, os -from collections import defaultdict - -from indicnlp import common -from indicnlp import langinfo -from indicnlp.script import indic_scripts as isc -from indicnlp.transliterate.sinhala_transliterator import SinhalaDevanagariTransliterator as sdt -import pandas as pd - -OFFSET_TO_ITRANS={} -ITRANS_TO_OFFSET=defaultdict(list) - -DUPLICATE_ITRANS_REPRESENTATIONS={} - - -def init(): - """ - To be called by library loader, do not call it in your program - """ - - ### Load the ITRANS-script offset map. 
The map was initially generated using the snippet below (uses the old itrans transliterator) - ### The map is modified as needed to accomodate extensions and corrections to the mappings - # - # base=0x900 - # l=[] - # for i in range(0,0x80): - # c=chr(base+i) - # itrans=ItransTransliterator.to_itrans(c,'hi') - # l.append((hex(i),c,itrans)) - # print(l) - # - # pd.DataFrame(l,columns=['offset_hex','devnag_char','itrans']).to_csv('offset_itrans_map.csv',index=False,encoding='utf-8') - - itrans_map_fname=os.path.join(common.get_resources_path(),'transliterate','offset_itrans_map.csv') - #itrans_map_fname=r'D:\src\python_sandbox\src\offset_itrans_map.csv' - itrans_df=pd.read_csv(itrans_map_fname,encoding='utf-8') - - global OFFSET_TO_ITRANS, ITRANS_TO_OFFSET, DUPLICATE_ITRANS_REPRESENTATIONS - - for r in itrans_df.iterrows(): - itrans=r[1]['itrans'] - o=int(r[1]['offset_hex'],base=16) - - OFFSET_TO_ITRANS[o]=itrans - - if langinfo.is_consonant_offset(o): - ### for consonants, strip the schwa - add halant offset - ITRANS_TO_OFFSET[itrans[:-1]].extend([o,0x4d]) - else: - ### the append assumes that the maatra always comes after independent vowel in the df - ITRANS_TO_OFFSET[itrans].append(o) - - - DUPLICATE_ITRANS_REPRESENTATIONS = { - 'A': 'aa', - 'I': 'ii', - 'U': 'uu', - 'RRi': 'R^i', - 'RRI': 'R^I', - 'LLi': 'L^i', - 'LLI': 'L^I', - 'L': 'ld', - 'w': 'v', - 'x': 'kSh', - 'gj': 'j~n', - 'dny': 'j~n', - '.n': '.m', - 'M': '.m', - 'OM': 'AUM' - } - -class UnicodeIndicTransliterator(object): - """ - Base class for rule-based transliteration among Indian languages. - - Script pair specific transliterators should derive from this class and override the transliterate() method. - They can call the super class 'transliterate()' method to avail of the common transliteration - """ - - @staticmethod - def _correct_tamil_mapping(offset): - # handle missing unaspirated and voiced plosives in Tamil script - # replace by unvoiced, unaspirated plosives - - # for first 4 consonant rows of varnamala - # exception: ja has a mapping in Tamil - if offset>=0x15 and offset<=0x28 and \ - offset!=0x1c and \ - not ( (offset-0x15)%5==0 or (offset-0x15)%5==4 ) : - subst_char=(offset-0x15)//5 - offset=0x15+5*subst_char - - # for 5th consonant row of varnamala - if offset in [ 0x2b, 0x2c, 0x2d]: - offset=0x2a - - # 'sh' becomes 'Sh' - if offset==0x36: - offset=0x37 - - return offset - - @staticmethod - def transliterate(text,lang1_code,lang2_code): - """ - convert the source language script (lang1) to target language script (lang2) - - text: text to transliterate - lang1_code: language 1 code - lang1_code: language 2 code - """ - if lang1_code in langinfo.SCRIPT_RANGES and lang2_code in langinfo.SCRIPT_RANGES: - - # if Sinhala is source, do a mapping to Devanagari first - if lang1_code=='si': - text=sdt.sinhala_to_devanagari(text) - lang1_code='hi' - - # if Sinhala is target, make Devanagiri the intermediate target - org_lang2_code='' - if lang2_code=='si': - lang2_code='hi' - org_lang2_code='si' - - trans_lit_text=[] - for c in text: - newc=c - offset=ord(c)-langinfo.SCRIPT_RANGES[lang1_code][0] - if offset >=langinfo.COORDINATED_RANGE_START_INCLUSIVE and offset <= langinfo.COORDINATED_RANGE_END_INCLUSIVE and c!='\u0964' and c!='\u0965': - if lang2_code=='ta': - # tamil exceptions - offset=UnicodeIndicTransliterator._correct_tamil_mapping(offset) - newc=chr(langinfo.SCRIPT_RANGES[lang2_code][0]+offset) - - trans_lit_text.append(newc) - - # if Sinhala is source, do a mapping to Devanagari first - if org_lang2_code=='si': 
- return sdt.devanagari_to_sinhala(''.join(trans_lit_text)) - - return ''.join(trans_lit_text) - else: - return text - -class ItransTransliterator(object): - """ - Transliterator between Indian scripts and ITRANS - """ - - @staticmethod - def to_itrans(text,lang_code): - if lang_code in langinfo.SCRIPT_RANGES: - if lang_code=='ml': - # Change from chillus characters to corresponding consonant+halant - text=text.replace('\u0d7a','\u0d23\u0d4d') - text=text.replace('\u0d7b','\u0d28\u0d4d') - text=text.replace('\u0d7c','\u0d30\u0d4d') - text=text.replace('\u0d7d','\u0d32\u0d4d') - text=text.replace('\u0d7e','\u0d33\u0d4d') - text=text.replace('\u0d7f','\u0d15\u0d4d') - - offsets = [ isc.get_offset(c,lang_code) for c in text ] - - ### naive lookup - # itrans_l = [ OFFSET_TO_ITRANS.get(o, '-' ) for o in offsets ] - itrans_l=[] - for o in offsets: - itrans=OFFSET_TO_ITRANS.get(o, chr(langinfo.SCRIPT_RANGES[lang_code][0]+o) ) - if langinfo.is_halanta_offset(o): - itrans='' - if len(itrans_l)>0: - itrans_l.pop() - elif langinfo.is_vowel_sign_offset(o) and len(itrans_l)>0: - itrans_l.pop() - itrans_l.extend(itrans) - - return ''.join(itrans_l) - - else: - return text - - @staticmethod - def from_itrans(text,lang): - """ - TODO: Document this method properly - TODO: A little hack is used to handle schwa: needs to be documented - TODO: check for robustness - """ - - MAXCODE=4 ### TODO: Needs to be fixed - - ## handle_duplicate_itrans_representations - for k, v in DUPLICATE_ITRANS_REPRESENTATIONS.items(): - if k in text: - text=text.replace(k,v) - - start=0 - match=None - solution=[] - - i=start+1 - while i<=len(text): - - itrans=text[start:i] - - # print('===') - # print('i: {}'.format(i)) - # if i0 and langinfo.is_halanta(solution[-1],lang): - offs=[offs[1]] ## dependent vowel - else: - offs=[offs[0]] ## independent vowel - - c=''.join([ langinfo.offset_to_char(x,lang) for x in offs ]) - match=(i,c) - - elif len(itrans)==1: ## unknown character - match=(i,itrans) - elif i ") - sys.exit(1) - - if sys.argv[1]=='transliterate': - - src_language=sys.argv[4] - tgt_language=sys.argv[5] - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - transliterated_line=UnicodeIndicTransliterator.transliterate(line,src_language,tgt_language) - ofile.write(transliterated_line) - - elif sys.argv[1]=='romanize': - - language=sys.argv[4] - - ### temp fix to replace anusvara with corresponding nasal - #r1_nasal=re.compile(ur'\u0902([\u0915-\u0918])') - #r2_nasal=re.compile(ur'\u0902([\u091a-\u091d])') - #r3_nasal=re.compile(ur'\u0902([\u091f-\u0922])') - #r4_nasal=re.compile(ur'\u0902([\u0924-\u0927])') - #r5_nasal=re.compile(ur'\u0902([\u092a-\u092d])') - - with open(sys.argv[2],'r', encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - ### temp fix to replace anusvara with corresponding nasal - #line=r1_nasal.sub(u'\u0919\u094D\\1',line) - #line=r2_nasal.sub(u'\u091e\u094D\\1',line) - #line=r3_nasal.sub(u'\u0923\u094D\\1',line) - #line=r4_nasal.sub(u'\u0928\u094D\\1',line) - #line=r5_nasal.sub(u'\u092e\u094D\\1',line) - - transliterated_line=ItransTransliterator.to_itrans(line,language) - - ## temp fix to replace 'ph' to 'F' to match with Urdu transliteration scheme - transliterated_line=transliterated_line.replace('ph','f') - - ofile.write(transliterated_line) - - elif sys.argv[1]=='indicize': - - language=sys.argv[4] - - with open(sys.argv[2],'r', 
encoding='utf-8') as ifile: - with open(sys.argv[3],'w', encoding='utf-8') as ofile: - for line in ifile.readlines(): - transliterated_line=ItransTransliterator.from_itrans(line,language) - ofile.write(transliterated_line) - diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/tokenize.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/tokenize.py deleted file mode 100644 index e956fc404a282f76733503301cc357e386c668c4..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/data_measurements/tokenize.py +++ /dev/null @@ -1,53 +0,0 @@ -import pandas as pd -import utils -from sklearn.feature_extraction.text import CountVectorizer - -logs = utils.prepare_logging(__file__) - -TEXT = "text" -TOKENIZED_TEXT = "tokenized_text" - - -class Tokenize: - - def __init__(self, text_dset, feature=TEXT, tok_feature=TOKENIZED_TEXT, - lowercase=True): - self.text_dset = text_dset - self.feature = feature - self.tok_feature = tok_feature - self.lowercase = lowercase - # Pattern for tokenization - self.cvec = CountVectorizer(token_pattern="(?u)\\b\\w+\\b", - lowercase=lowercase) - self.tokenized_dset = self.do_tokenization() - - def do_tokenization(self): - """ - Tokenizes a Hugging Face dataset in the self.feature field. - :return: Hugging Face Dataset with tokenized text in self.tok_feature. - """ - sent_tokenizer = self.cvec.build_tokenizer() - - def tokenize_batch(examples): - if self.lowercase: - tok_sent = { - self.tok_feature: [tuple(sent_tokenizer(text.lower())) for - text in examples[self.feature]]} - else: - tok_sent = { - self.tok_feature: [tuple(sent_tokenizer(text)) for text in - examples[self.feature]]} - return tok_sent - - tokenized_dset = self.text_dset.map( - tokenize_batch, - batched=True - ) - logs.info("Tokenized the dataset.") - return tokenized_dset - - def get(self): - return self.tokenized_dset - - def get_df(self): - return pd.DataFrame(self.tokenized_dset) diff --git a/spaces/IDEA-Research/Grounded-SAM/segment_anything/linter.sh b/spaces/IDEA-Research/Grounded-SAM/segment_anything/linter.sh deleted file mode 100644 index df2e17436d30e89ff1728109301599f425f1ad6b..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/segment_anything/linter.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -{ - black --version | grep -E "23\." > /dev/null -} || { - echo "Linter requires 'black==23.*' !" - exit 1 -} - -ISORT_VERSION=$(isort --version-number) -if [[ "$ISORT_VERSION" != 5.12* ]]; then - echo "Linter requires isort==5.12.0 !" - exit 1 -fi - -echo "Running isort ..." -isort . --atomic - -echo "Running black ..." -black -l 100 . - -echo "Running flake8 ..." -if [ -x "$(command -v flake8)" ]; then - flake8 . -else - python3 -m flake8 . -fi - -echo "Running mypy..." - -mypy --exclude 'setup.py|notebooks' . 
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/models/diffusion/ddpm.py b/spaces/Iceclear/StableSR/StableSR/ldm/models/diffusion/ddpm.py deleted file mode 100644 index 8a0c83d9904e447bfe058c22e39a292509f7020d..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,3234 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler - -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.data.transforms import paired_random_crop, triplet_random_crop -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt, random_add_speckle_noise_pt, random_add_saltpepper_noise_pt, bivariate_Gaussian -import random -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import make_ddim_timesteps -import copy -import os -import cv2 -import matplotlib.pyplot as plt -from sklearn.decomposition import PCA - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - -def torch2img(input): - input_ = input[0] - input_ = input_.permute(1,2,0) - input_ = input_.data.cpu().numpy() - input_ = (input_ + 1.0) / 2 - cv2.imwrite('./test.png', input_[:,:,::-1]*255.0) - -def cal_pca_components(input, n_components=3): - pca = PCA(n_components=n_components) - c, h, w = input.size() - pca_data = input.permute(1,2,0) - pca_data = pca_data.reshape(h*w, c) - pca_data = pca.fit_transform(pca_data.data.cpu().numpy()) - pca_data = pca_data.reshape((h, w, n_components)) - return pca_data - -def visualize_fea(save_path, fea_img): - fig = plt.figure(figsize = (fea_img.shape[1]/10, fea_img.shape[0]/10)) # Your image (W)idth and (H)eight in inches - plt.subplots_adjust(left = 0, right = 1.0, top = 1.0, bottom = 0) - im = plt.imshow(fea_img, vmin=0.0, vmax=1.0, cmap='jet', aspect='auto') # Show the image - plt.savefig(save_path) - plt.clf() - -def calc_mean_std(feat, eps=1e-5): - """Calculate mean and std for adaptive_instance_normalization. - Args: - feat (Tensor): 4D tensor. - eps (float): A small value added to the variance to avoid - divide-by-zero. Default: 1e-5. - """ - size = feat.size() - assert len(size) == 4, 'The input feature should be 4D tensor.' 
- b, c = size[:2] - feat_var = feat.view(b, c, -1).var(dim=2) + eps - feat_std = feat_var.sqrt().view(b, c, 1, 1) - feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1) - return feat_mean, feat_std - -def adaptive_instance_normalization(content_feat, style_feat): - """Adaptive instance normalization. - Adjust the reference features to have the similar color and illuminations - as those in the degradate features. - Args: - content_feat (Tensor): The reference feature. - style_feat (Tensor): The degradate features. - """ - size = content_feat.size() - style_mean, style_std = calc_mean_std(style_feat) - content_mean, content_std = calc_mean_std(content_feat) - normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(size) - return normalized_feat * style_std.expand(size) + style_mean.expand(size) - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. 
- """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim"):]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError( - f"cannot create exactly {num_timesteps} steps with an integer stride" - ) - section_counts = [int(x) for x in section_counts.split(",")] #[250,] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError( - f"cannot divide section of {size} steps into {section_count}" - ) - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - ): - super().__init__() - assert parameterization in ["eps", "x0", "v"], 'currently only supporting "eps" and "x0" and "v"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? 
- self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. 
- alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - elif self.parameterization == "v": - lvlb_weights = torch.ones_like(self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod))) - else: - raise NotImplementedError("mu not supported") - # TODO how to choose this term - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print('<<<<<<<<<<<<>>>>>>>>>>>>>>>') - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
- """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - elif self.parameterization == "v": - x_recon = self.predict_start_from_z_and_v(x, model_out, t) - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def q_sample_respace(self, x_start, t, sqrt_alphas_cumprod, sqrt_one_minus_alphas_cumprod, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(sqrt_alphas_cumprod.to(noise.device), t, x_start.shape) * x_start + - extract_into_tensor(sqrt_one_minus_alphas_cumprod.to(noise.device), t, x_start.shape) * noise) - - def get_v(self, x, noise, t): - return ( - 
extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * noise - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * x - ) - - def predict_start_from_z_and_v(self, x, v, t): - return ( - extract_into_tensor(self.sqrt_alphas_cumprod, t, x.shape) * x - - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x.shape) * v - ) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n 
b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - -class LatentDiffusion(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - *args, **kwargs): - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - # self.model.eval() - # self.model.train = disabled_train - # for param in self.model.parameters(): - # param.requires_grad = False - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - 
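- # Rough worked example (values assumed for illustration, not taken from any config here): with
- # timesteps=1000 and num_timesteps_cond=4, torch.round(torch.linspace(0, 999, 4)) gives
- # [0, 333, 666, 999], so cond_ids becomes [0, 333, 666, 999, 999, ..., 999]; every entry past
- # the first num_timesteps_cond stays at num_timesteps - 1. This schedule is only built when
- # num_timesteps_cond > 1 (see register_schedule / shorten_cond_schedule below).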
- @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. / z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert 
hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = 
fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None): - x = batch[k] - - x = F.interpolate( - x, - size=(self.image_size, - self.image_size), - mode='bicubic', - ) - - if len(x.shape) == 3: - x = x[..., None] - # x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox']: - # xc = batch[cond_key] - xc = ['']*x.size(0) - elif cond_key == 'class_label': - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - # import pudb; pudb.set_trace() - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. 
apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. 
(128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left postions of patches as conforming for the bbbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list = [{'c_crossattn': [e]} for e in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, 
axis=-1) - o = o * weighting - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - elif self.parameterization == "v": - x_recon = self.predict_start_from_z_and_v(x, model_out, t) - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
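- # Sketch of the x_recon conversion above, written out via the standard DDPM identities rather
- # than quoted from this file:
- #   "eps": x_recon = (x_t - sqrt(1 - alpha_bar_t) * eps_hat) / sqrt(alpha_bar_t)
- #   "v":   x_recon = sqrt(alpha_bar_t) * x_t - sqrt(1 - alpha_bar_t) * v_hat
- # (the latter matches predict_start_from_z_and_v above). The clamp then keeps the estimate
- # inside the assumed data range [-1, 1] before q_posterior is applied.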
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size//8, self.image_size//8) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs): - - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=False, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, **kwargs): - - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - 
return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - - # print(z.size()) - # print(x.size()) - # if self.model.conditioning_key is not None: - # if hasattr(self.cond_stage_model, "decode"): - # xc = self.cond_stage_model.decode(c) - # log["conditioning"] = xc - # elif self.cond_stage_key in ["caption"]: - # xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"]) - # log["conditioning"] = xc - # elif self.cond_stage_key == 'class_label': - # xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - # log['conditioning'] = xc - # elif isimage(xc): - # log["conditioning"] = xc - # if ismap(xc): - # log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with self.ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta, - quantize_denoised=True) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] 
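- # Shape sketch (a 64x64 latent is assumed purely for illustration): mask is then (N, 1, 64, 64)
- # with zeros over the centre square [16:48, 16:48]; during sampling, p_sample_loop keeps the
- # noised original wherever mask == 1 and regenerates the zeroed region via
- # img = img_orig * mask + (1 - mask) * img.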
- with self.ema_scope("Plotting Inpaint"): - - samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - with self.ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - # params = list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
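- # For visualization only: the lazily created random 1x1 conv ("colorize") projects an arbitrary
- # number of channels down to 3, and the min-max rescale above maps the result into [-1, 1] so it
- # can be logged like any other image tensor.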
- return x - -class LatentDiffusionSRTextWT(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config, - structcond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - unfrozen_diff=False, - random_size=False, - test_gt=False, - p2_gamma=None, - p2_k=None, - time_replace=None, - use_usm=False, - mix_ratio=0.0, - *args, **kwargs): - # put this in your init - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - self.unfrozen_diff = unfrozen_diff - self.random_size = random_size - self.test_gt = test_gt - self.time_replace = time_replace - self.use_usm = use_usm - self.mix_ratio = mix_ratio - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.instantiate_structcond_stage(structcond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - if not self.unfrozen_diff: - self.model.eval() - # self.model.train = disabled_train - for name, param in self.model.named_parameters(): - if 'spade' not in name: - param.requires_grad = False - else: - param.requires_grad = True - - print('>>>>>>>>>>>>>>>>model>>>>>>>>>>>>>>>>>>>>') - param_list = [] - for name, params in self.model.named_parameters(): - if params.requires_grad: - param_list.append(name) - print(param_list) - param_list = [] - print('>>>>>>>>>>>>>>>>>cond_stage_model>>>>>>>>>>>>>>>>>>>') - for name, params in self.cond_stage_model.named_parameters(): - if params.requires_grad: - param_list.append(name) - print(param_list) - param_list = [] - print('>>>>>>>>>>>>>>>>structcond_stage_model>>>>>>>>>>>>>>>>>>>>') - for name, params in self.structcond_stage_model.named_parameters(): - if params.requires_grad: - param_list.append(name) - print(param_list) - - # P2 weighting: https://github.com/jychoi118/P2-weighting - if p2_gamma is not None: - assert p2_k is not None - self.p2_gamma = p2_gamma - self.p2_k = p2_k - self.snr = 1.0 / (1 - self.alphas_cumprod) - 1 - else: - self.snr = None - - # Support time respacing during training - if self.time_replace is None: - self.time_replace = kwargs['timesteps'] - use_timesteps = set(space_timesteps(kwargs['timesteps'], [self.time_replace])) - last_alpha_cumprod = 1.0 - new_betas = [] - timestep_map = [] - for i, alpha_cumprod in enumerate(self.alphas_cumprod): - if i in use_timesteps: - new_betas.append(1 - 
alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - timestep_map.append(i) - new_betas = [beta.data.cpu().numpy() for beta in new_betas] - self.register_schedule(given_betas=np.array(new_betas), timesteps=len(new_betas), linear_start=kwargs['linear_start'], linear_end=kwargs['linear_end']) - self.ori_timesteps = list(use_timesteps) - self.ori_timesteps.sort() - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. / z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - # self.cond_stage_model.train = disabled_train - for name, param in self.cond_stage_model.named_parameters(): - if 'final_projector' not in name: - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - self.cond_stage_model.train() - - def instantiate_structcond_stage(self, config): - model = instantiate_from_config(config) - self.structcond_stage_model = model - self.structcond_stage_model.train() - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - 
denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - 
stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch, taken from Real-ESRGAN: - https://github.com/xinntao/Real-ESRGAN - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. - """ - # initialize - b, c, h, w = self.lq.size() - if b == self.configs.data.params.batch_size: - if not hasattr(self, 'queue_size'): - self.queue_size = self.configs.data.params.train.params.get('queue_size', b*50) - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_ptr = self.queue_ptr + b - - def randn_cropinput(self, lq, gt, base_size=[64, 128, 256, 512]): - cur_size_h = random.choice(base_size) - cur_size_w = random.choice(base_size) - init_h = lq.size(-2)//2 - init_w = lq.size(-1)//2 - lq = lq[:, :, init_h-cur_size_h//2:init_h+cur_size_h//2, init_w-cur_size_w//2:init_w+cur_size_w//2] - gt = gt[:, :, init_h-cur_size_h//2:init_h+cur_size_h//2, init_w-cur_size_w//2:init_w+cur_size_w//2] - assert lq.size(-1)>=64 - assert lq.size(-2)>=64 - return [lq, gt] - - @torch.no_grad() - def get_input(self, batch, k=None, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None, val=False, 
text_cond=[''], return_gt=False, resize_lq=True): - - """Degradation pipeline, modified from Real-ESRGAN: - https://github.com/xinntao/Real-ESRGAN - """ - - if not hasattr(self, 'jpeger'): - jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - if not hasattr(self, 'usm_sharpener'): - usm_sharpener = USMSharp().cuda() # do usm sharpening - - im_gt = batch['gt'].cuda() - if self.use_usm: - im_gt = usm_sharpener(im_gt) - im_gt = im_gt.to(memory_format=torch.contiguous_format).float() - kernel1 = batch['kernel1'].cuda() - kernel2 = batch['kernel2'].cuda() - sinc_kernel = batch['sinc_kernel'].cuda() - - ori_h, ori_w = im_gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(im_gt, kernel1) - # random resize - updown_type = random.choices( - ['up', 'down', 'keep'], - self.configs.degradation['resize_prob'], - )[0] - if updown_type == 'up': - scale = random.uniform(1, self.configs.degradation['resize_range'][1]) - elif updown_type == 'down': - scale = random.uniform(self.configs.degradation['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.configs.degradation['gray_noise_prob'] - if random.random() < self.configs.degradation['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, - sigma_range=self.configs.degradation['noise_range'], - clip=True, - rounds=False, - gray_prob=gray_noise_prob, - ) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.configs.degradation['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.configs.degradation['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if random.random() < self.configs.degradation['second_blur_prob']: - out = filter2D(out, kernel2) - # random resize - updown_type = random.choices( - ['up', 'down', 'keep'], - self.configs.degradation['resize_prob2'], - )[0] - if updown_type == 'up': - scale = random.uniform(1, self.configs.degradation['resize_range2'][1]) - elif updown_type == 'down': - scale = random.uniform(self.configs.degradation['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, - size=(int(ori_h / self.configs.sf * scale), - int(ori_w / self.configs.sf * scale)), - mode=mode, - ) - # add noise - gray_noise_prob = self.configs.degradation['gray_noise_prob2'] - if random.random() < self.configs.degradation['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, - sigma_range=self.configs.degradation['noise_range2'], - clip=True, - rounds=False, - gray_prob=gray_noise_prob, - ) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.configs.degradation['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False, - ) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. 
JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. - if random.random() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, - size=(ori_h // self.configs.sf, - ori_w // self.configs.sf), - mode=mode, - ) - out = filter2D(out, sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.configs.degradation['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.configs.degradation['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, - size=(ori_h // self.configs.sf, - ori_w // self.configs.sf), - mode=mode, - ) - out = filter2D(out, sinc_kernel) - - # clamp and round - im_lq = torch.clamp(out, 0, 1.0) - - # random crop - gt_size = self.configs.degradation['gt_size'] - im_gt, im_lq = paired_random_crop(im_gt, im_lq, gt_size, self.configs.sf) - self.lq, self.gt = im_lq, im_gt - - if resize_lq: - self.lq = F.interpolate( - self.lq, - size=(self.gt.size(-2), - self.gt.size(-1)), - mode='bicubic', - ) - - if random.random() < self.configs.degradation['no_degradation_prob'] or torch.isnan(self.lq).any(): - self.lq = self.gt - - # training pair pool - if not val and not self.random_size: - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - self.lq = self.lq*2 - 1.0 - self.gt = self.gt*2 - 1.0 - - if self.random_size: - self.lq, self.gt = self.randn_cropinput(self.lq, self.gt) - - self.lq = torch.clamp(self.lq, -1.0, 1.0) - - x = self.lq - y = self.gt - if bs is not None: - x = x[:bs] - y = y[:bs] - x = x.to(self.device) - y = y.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - encoder_posterior_y = self.encode_first_stage(y) - z_gt = self.get_first_stage_encoding(encoder_posterior_y).detach() - - xc = None - if self.use_positional_encodings: - assert NotImplementedError - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - - while len(text_cond) < z.size(0): - text_cond.append(text_cond[-1]) - if len(text_cond) > z.size(0): - text_cond = text_cond[:z.size(0)] - assert len(text_cond) == z.size(0) - - out = [z, text_cond] - out.append(z_gt) - - if return_first_stage_outputs: - xrec = self.decode_first_stage(z_gt) - out.extend([x, self.gt, xrec]) - if return_original_cond: - out.append(xc) - - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. 
reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c, gt = self.get_input(batch, self.first_stage_key) - loss = self(x, c, gt) - return loss - - def forward(self, x, c, gt, *args, **kwargs): - index = np.random.randint(0, self.num_timesteps, size=x.size(0)) - t = torch.from_numpy(index) - t = t.to(self.device).long() - - t_ori = torch.tensor([self.ori_timesteps[index_i] for index_i in index]) - t_ori = t_ori.long().to(x.device) - - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - else: - c = self.cond_stage_model(c) - if self.shorten_cond_schedule: # TODO: drop this option - print(s) - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - if self.test_gt: - struc_c = self.structcond_stage_model(gt, t_ori) - else: - struc_c = self.structcond_stage_model(x, t_ori) - return self.p_losses(gt, c, struc_c, t, t_ori, x, *args, **kwargs) - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, struct_cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not 
isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left postions of patches as conforming for the bbbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list = [{'c_crossattn': [e]} for e 
in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, axis=-1) - o = o * weighting - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - cond['struct_cond'] = struct_cond - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, struct_cond, t, t_ori, z_gt, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - - if self.mix_ratio > 0: - if random.random() < self.mix_ratio: - noise_new = default(noise, lambda: torch.randn_like(x_start)) - noise = noise_new * 0.5 + noise * 0.5 - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - - model_output = self.apply_model(x_noisy, t_ori, cond, struct_cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - elif self.parameterization == "v": - target = self.get_v(x_start, noise, t) - else: - raise NotImplementedError() - - model_output_ = model_output - - loss_simple = self.get_loss(model_output_, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - #P2 weighting - if self.snr is not None: - self.snr = self.snr.to(loss_simple.device) - weight = extract_into_tensor(1 / (self.p2_k + self.snr)**self.p2_gamma, t, target.shape) - loss_simple = weight * loss_simple - - logvar_t = self.logvar[t.cpu()].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output_, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, struct_cond, t, 
clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None, t_replace=None): - if t_replace is None: - t_in = t - else: - t_in = t_replace - model_out = self.apply_model(x, t_in, c, struct_cond, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - elif self.parameterization == "v": - x_recon = self.predict_start_from_z_and_v(x, model_out, t) - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) - if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - def p_mean_variance_canvas(self, x, c, struct_cond, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None, t_replace=None, tile_size=64, tile_overlap=32, batch_size=4, tile_weights=None): - """ - Aggregation Sampling strategy for arbitrary-size image super-resolution - """ - assert tile_weights is not None - - if t_replace is None: - t_in = t - else: - t_in = t_replace - - _, _, h, w = x.size() - - grid_rows = 0 - cur_x = 0 - while cur_x < x.size(-1): - cur_x = max(grid_rows * tile_size-tile_overlap * grid_rows, 0)+tile_size - grid_rows += 1 - - grid_cols = 0 - cur_y = 0 - while cur_y < x.size(-2): - cur_y = max(grid_cols * tile_size-tile_overlap * grid_cols, 0)+tile_size - grid_cols += 1 - - input_list = [] - cond_list = [] - noise_preds = [] - for row in range(grid_rows): - noise_preds_row = [] - for col in range(grid_cols): - if col < grid_cols-1 or row < grid_rows-1: - # extract tile from input image - ofs_x = max(row * tile_size-tile_overlap * row, 0) - ofs_y = max(col * tile_size-tile_overlap * col, 0) - # input tile area on total image - if row == grid_rows-1: - ofs_x = w - tile_size - if col == grid_cols-1: - ofs_y = h - tile_size - - input_start_x = ofs_x - input_end_x = ofs_x + tile_size - input_start_y = ofs_y - input_end_y = ofs_y + tile_size - - # print('input_start_x', input_start_x) - # print('input_end_x', input_end_x) - # print('input_start_y', input_start_y) - # print('input_end_y', input_end_y) - - # input tile dimensions - input_tile_width = input_end_x - input_start_x - input_tile_height = input_end_y - input_start_y - input_tile = x[:, :, input_start_y:input_end_y, input_start_x:input_end_x] - input_list.append(input_tile) - cond_tile = struct_cond[:, :, input_start_y:input_end_y, input_start_x:input_end_x] - cond_list.append(cond_tile) - - if len(input_list) == batch_size or col == grid_cols-1: - input_list = torch.cat(input_list, dim=0) - cond_list = torch.cat(cond_list, dim=0) - - struct_cond_input = self.structcond_stage_model(cond_list, t_in[:input_list.size(0)]) - model_out = self.apply_model(input_list, t_in[:input_list.size(0)], c[:input_list.size(0)], 
struct_cond_input, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, input_list, t[:input_list.size(0)], c[:input_list.size(0)], **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - for sample_i in range(model_out.size(0)): - noise_preds_row.append(model_out[sample_i].unsqueeze(0)) - input_list = [] - cond_list = [] - - noise_preds.append(noise_preds_row) - - # Stitch noise predictions for all tiles - noise_pred = torch.zeros(x.shape, device=x.device) - contributors = torch.zeros(x.shape, device=x.device) - # Add each tile contribution to overall latents - for row in range(grid_rows): - for col in range(grid_cols): - if col < grid_cols-1 or row < grid_rows-1: - # extract tile from input image - ofs_x = max(row * tile_size-tile_overlap * row, 0) - ofs_y = max(col * tile_size-tile_overlap * col, 0) - # input tile area on total image - if row == grid_rows-1: - ofs_x = w - tile_size - if col == grid_cols-1: - ofs_y = h - tile_size - - input_start_x = ofs_x - input_end_x = ofs_x + tile_size - input_start_y = ofs_y - input_end_y = ofs_y + tile_size - # print(noise_preds[row][col].size()) - # print(tile_weights.size()) - # print(noise_pred.size()) - noise_pred[:, :, input_start_y:input_end_y, input_start_x:input_end_x] += noise_preds[row][col] * tile_weights - contributors[:, :, input_start_y:input_end_y, input_start_x:input_end_x] += tile_weights - # Average overlapping areas with more than 1 contributor - noise_pred /= contributors - # noise_pred /= torch.sqrt(contributors) - model_out = noise_pred - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t[:model_out.size(0)], noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - elif self.parameterization == "v": - x_recon = self.predict_start_from_z_and_v(x, model_out, t[:model_out.size(0)]) - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t[:x_recon.size(0)]) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, struct_cond, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, t_replace=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, struct_cond=struct_cond, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs, t_replace=t_replace) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_canvas(self, x, c, struct_cond, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None, t_replace=None, - tile_size=64, tile_overlap=32, batch_size=4, tile_weights=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance_canvas(x=x, c=c, struct_cond=struct_cond, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs, t_replace=t_replace, - tile_size=tile_size, tile_overlap=tile_overlap, batch_size=batch_size, tile_weights=tile_weights) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t[:b] == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - 
def progressive_denoising(self, cond, struct_cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, struct_cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. 
- mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, struct_cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None, time_replace=None, adain_fea=None, interfea_path=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - batch_list = [] - for i in iterator: - if time_replace is None or time_replace == 1000: - ts = torch.full((b,), i, device=device, dtype=torch.long) - t_replace=None - else: - ts = torch.full((b,), i, device=device, dtype=torch.long) - t_replace = repeat(torch.tensor([self.ori_timesteps[i]]), '1 -> b', b=img.size(0)) - t_replace = t_replace.long().to(device) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - if t_replace is not None: - if start_T is not None: - if self.ori_timesteps[i] > start_T: - continue - struct_cond_input = self.structcond_stage_model(struct_cond, t_replace) - else: - if start_T is not None: - if i > start_T: - continue - struct_cond_input = self.structcond_stage_model(struct_cond, ts) - - if interfea_path is not None: - batch_list.append(struct_cond_input) - - img = self.p_sample(img, cond, struct_cond_input, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, t_replace=t_replace) - - if adain_fea is not None: - if i < 1: - img = adaptive_instance_normalization(img, adain_fea) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. 
- mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - if len(batch_list) > 0: - num_batch = batch_list[0]['64'].size(0) - for batch_i in range(num_batch): - batch64_list = [] - batch32_list = [] - for num_i in range(len(batch_list)): - batch64_list.append(cal_pca_components(batch_list[num_i]['64'][batch_i], 3)) - batch32_list.append(cal_pca_components(batch_list[num_i]['32'][batch_i], 3)) - batch64_list = np.array(batch64_list) - batch32_list = np.array(batch32_list) - - batch64_list = batch64_list - np.min(batch64_list) - batch64_list = batch64_list / np.max(batch64_list) - batch32_list = batch32_list - np.min(batch32_list) - batch32_list = batch32_list / np.max(batch32_list) - - total_num = batch64_list.shape[0] - - for index in range(total_num): - os.makedirs(os.path.join(interfea_path, 'fea_'+str(batch_i)+'_64'), exist_ok=True) - cur_path = os.path.join(interfea_path, 'fea_'+str(batch_i)+'_64', 'step_'+str(total_num-index)+'.png') - visualize_fea(cur_path, batch64_list[index]) - os.makedirs(os.path.join(interfea_path, 'fea_'+str(batch_i)+'_32'), exist_ok=True) - cur_path = os.path.join(interfea_path, 'fea_'+str(batch_i)+'_32', 'step_'+str(total_num-index)+'.png') - visualize_fea(cur_path, batch32_list[index]) - - if return_intermediates: - return img, intermediates - return img - - def _gaussian_weights(self, tile_width, tile_height, nbatches): - """Generates a gaussian mask of weights for tile contributions""" - from numpy import pi, exp, sqrt - import numpy as np - - latent_width = tile_width - latent_height = tile_height - - var = 0.01 - midpoint = (latent_width - 1) / 2 # -1 because index goes from 0 to latent_width - 1 - x_probs = [exp(-(x-midpoint)*(x-midpoint)/(latent_width*latent_width)/(2*var)) / sqrt(2*pi*var) for x in range(latent_width)] - midpoint = latent_height / 2 - y_probs = [exp(-(y-midpoint)*(y-midpoint)/(latent_height*latent_height)/(2*var)) / sqrt(2*pi*var) for y in range(latent_height)] - - weights = np.outer(y_probs, x_probs) - return torch.tile(torch.tensor(weights, device=self.betas.device), (nbatches, self.configs.model.params.channels, 1, 1)) - - @torch.no_grad() - def p_sample_loop_canvas(self, cond, struct_cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None, time_replace=None, adain_fea=None, interfea_path=None, tile_size=64, tile_overlap=32, batch_size=4): - - assert tile_size is not None - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = batch_size - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - tile_weights = self._gaussian_weights(tile_size, tile_size, 1) - - for i in iterator: - if time_replace is None or time_replace == 1000: - ts = torch.full((b,), i, device=device, dtype=torch.long) - t_replace=None - else: - ts = torch.full((b,), i, device=device, dtype=torch.long) - t_replace = repeat(torch.tensor([self.ori_timesteps[i]]), '1 -> 
b', b=batch_size) - t_replace = t_replace.long().to(device) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - if interfea_path is not None: - for batch_i in range(struct_cond_input['64'].size(0)): - os.makedirs(os.path.join(interfea_path, 'fea_'+str(batch_i)+'_64'), exist_ok=True) - cur_path = os.path.join(interfea_path, 'fea_'+str(batch_i)+'_64', 'step_'+str(i)+'.png') - visualize_fea(cur_path, struct_cond_input['64'][batch_i, 0]) - os.makedirs(os.path.join(interfea_path, 'fea_'+str(batch_i)+'_32'), exist_ok=True) - cur_path = os.path.join(interfea_path, 'fea_'+str(batch_i)+'_32', 'step_'+str(i)+'.png') - visualize_fea(cur_path, struct_cond_input['32'][batch_i, 0]) - - img = self.p_sample_canvas(img, cond, struct_cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, t_replace=t_replace, - tile_size=tile_size, tile_overlap=tile_overlap, batch_size=batch_size, tile_weights=tile_weights) - - if adain_fea is not None: - if i < 1: - img = adaptive_instance_normalization(img, adain_fea) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, struct_cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None, time_replace=None, adain_fea=None, interfea_path=None, start_T=None, **kwargs): - - if shape is None: - shape = (batch_size, self.channels, self.image_size//8, self.image_size//8) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - struct_cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0, time_replace=time_replace, adain_fea=adain_fea, interfea_path=interfea_path, start_T=start_T) - - @torch.no_grad() - def sample_canvas(self, cond, struct_cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None, time_replace=None, adain_fea=None, interfea_path=None, tile_size=64, tile_overlap=32, batch_size_sample=4, log_every_t=None, **kwargs): - - if shape is None: - shape = (batch_size, self.channels, self.image_size//8, self.image_size//8) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key] if not isinstance(cond[key], list) else - list(map(lambda x: x, cond[key])) for key in cond} - else: - cond = [c for c in cond] if isinstance(cond, list) else cond - return self.p_sample_loop_canvas(cond, - struct_cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0, time_replace=time_replace, adain_fea=adain_fea, interfea_path=interfea_path, tile_size=tile_size, tile_overlap=tile_overlap, batch_size=batch_size_sample, 
log_every_t=log_every_t) - - @torch.no_grad() - def sample_log(self,cond,struct_cond,batch_size,ddim, ddim_steps,**kwargs): - - if ddim: - raise NotImplementedError - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size//8, self.image_size//8) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, struct_cond=struct_cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=False, plot_denoise_rows=False, plot_progressive_rows=False, - plot_diffusion_rows=False, **kwargs): - - use_ddim = ddim_steps is not None - - log = dict() - z, c_lq, z_gt, x, gt, yrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N, val=True) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - if self.test_gt: - log["gt"] = gt - else: - log["inputs"] = x - log["reconstruction"] = gt - log["recon_lq"] = self.decode_first_stage(z) - - c = self.cond_stage_model(c_lq) - if self.test_gt: - struct_cond = z_gt - else: - struct_cond = z - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - noise = torch.randn_like(z) - ddim_sampler = DDIMSampler(self) - with self.ema_scope("Plotting"): - if self.time_replace is not None: - cur_time_step=self.time_replace - else: - cur_time_step = 1000 - - samples, z_denoise_row = self.sample(cond=c, struct_cond=struct_cond, batch_size=N, timesteps=cur_time_step, return_intermediates=True, time_replace=self.time_replace) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - with self.ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c,struct_cond=struct_cond,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta, - quantize_denoised=True, x_T=x_T) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if inpaint: - assert NotImplementedError - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] 
- with self.ema_scope("Plotting Inpaint"): - - samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - with self.ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, struct_cond=struct_cond, - shape=(self.channels, self.image_size//8, self.image_size//8), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - params = params + list(self.cond_stage_model.parameters()) - params = params + list(self.structcond_stage_model.parameters()) - if self.learn_logvar: - assert not self.learn_logvar - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
- return x - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None, struct_cond=None, seg_cond=None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - cc = torch.cat(c_crossattn, 1) - if seg_cond is None: - out = self.diffusion_model(x, t, context=cc, struct_cond=struct_cond) - else: - out = self.diffusion_model(x, t, context=cc, struct_cond=struct_cond, seg_cond=seg_cond) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - -class Layout2ImgDiffusion(LatentDiffusion): - # TODO: move all layout-specific hacks to this class - def __init__(self, cond_stage_key, *args, **kwargs): - assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"' - super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs) - - def log_images(self, batch, N=8, *args, **kwargs): - logs = super().log_images(batch=batch, N=N, *args, **kwargs) - - key = 'train' if self.training else 'validation' - dset = self.trainer.datamodule.datasets[key] - mapper = dset.conditional_builders[self.cond_stage_key] - - bbox_imgs = [] - map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno)) - for tknzd_bbox in batch[self.cond_stage_key][:N]: - bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256)) - bbox_imgs.append(bboximg) - - cond_img = torch.stack(bbox_imgs, dim=0) - logs['bbox_image'] = cond_img - return logs diff --git a/spaces/Illumotion/Koboldcpp/common/grammar-parser.h b/spaces/Illumotion/Koboldcpp/common/grammar-parser.h deleted file mode 100644 index 9037d72728a42ed772f384f3d7ddcef01d0d15f5..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/common/grammar-parser.h +++ /dev/null @@ -1,29 +0,0 @@ -// Implements a parser for an extended Backus-Naur form (BNF), producing the -// binary context-free grammar format specified by llama.h. Supports character -// ranges, grouping, and repetition operators. As an example, a grammar for -// arithmetic might look like: -// -// root ::= expr -// expr ::= term ([-+*/] term)* -// term ::= num | "(" space expr ")" space -// num ::= [0-9]+ space -// space ::= [ \t\n]* - -#pragma once -#include "llama.h" -#include -#include
Mod Frozen City is a modified version of the original game Frozen City, which is a city-building simulation game set in an ice and snow apocalypse. In this game, you have to build your own city from scratch, manage your resources, recruit survivors, upgrade your buildings, and survive the harsh weather and enemies. Mod Frozen City offers unlimited money, coins, gems, and other resources, a menu mod that allows you to customize your gameplay settings, no ads, no root required, and more variety and content than the original game.
-In this article, we have shown you how to download Mod Frozen City for your Android device, how to play it, how to uninstall it, and what the pros and cons of this modded game are. We hope you have found this article useful and informative. If you have any questions or feedback about Mod Frozen City, feel free to leave a comment below. Thank you for reading!
-Here are some frequently asked questions about Mod Frozen City:
-Mod Frozen City is safe to download and play, as long as you download it from a reliable source and follow the instructions that we have provided above. However, you should be aware of the potential risks or issues that may arise from using a modded game, such as viruses, malware, bugs, crashes, bans, etc. You should also scan the APK file with antivirus software before installing it on your device.
-Mod Frozen City is compatible with most Android devices that run on Android 4.4 or higher. However, it may not be compatible with some devices or Android versions that have different specifications or features. It may also not be compatible with other platforms, such as iOS, Windows, Mac, etc.
-Mod Frozen City requires an internet connection to download the APK file and play the game online. You cannot play Mod Frozen City offline, as it needs to connect to the server and sync your data and progress. You also need an internet connection to access some features of the game, such as trading, chatting, ranking, etc.
-Mod Frozen City allows you to play with your friends or other players online. You can join or create a clan with your friends or other players and cooperate or compete with them in various modes and events. You can also chat with them in real-time and share your experience and feedback.
-Mod Frozen City may not be updated regularly or frequently, as it is a modified version of the original game. The game may not have the latest features or bug fixes that are available in the original game. However, you can check for updates from the source that you downloaded the APK file from or from this link: Mod Frozen City Update. You can also follow us on our social media channels to get notified of any updates.
However, Neo-Soul Keys Studio 2 is available for all major platforms. So, if you're looking for electric piano sounds, Neo-Soul Keys 2 is the way to go. Consequently, Neo-Soul Keys Studio 2 is a must-have!
-Download ✒ ✒ ✒ https://cinurl.com/2uEXro
On top of that, Neo-Soul Keys Studio 2 is also available for all major platforms. So, if you're looking for electric piano sounds, Neo-Soul Keys 2 is the way to go. Consequently, Neo-Soul Keys Studio 2 is a must-have!
-The sounds of the CP70 are perfectly reproduced in each of the four stages. The Stage, Stage V1, Stage V2, and Stage V3 emulations are completely compatible with the Stage and Stage V1 of the original. Plus, the other stages, Stage V1, Stage V2, and Stage V3, are an improvement on the original version of Neo-Soul Keys. In addition, Neo-Soul Keys 3X represents an improvement on the original version of Neo-Soul Keys. Moreover, enjoy these sounds and more in Neo-Soul Keys Studio. Without such great detail, your electric piano can be very sterile, lifeless, and without personality. In fact, you can also download ProtaStructure Suite Enterprise 2018 free.
-Otherwise, do not worry: our EPs can sound as clear and clean as a buffer, but you're always welcome to dial up as much grime as you would like. See what the late, great George Duke said concerning Neo-Soul Keys and why he chose to use it live instead of a real piano. We pride ourselves on the intangibles. In addition, Neo-Soul Keys Studio includes some of the most realistic, natural, and hot-sounding electric pianos ever created. The sounds of the CP70 are perfectly reproduced in each of the four stages. The Stage, Stage V1, Stage V2, and Stage V3 emulations are completely compatible with the Stage and Stage V1 of the original. In addition, the other stages, Stage V1, Stage V2, and Stage V3, are an improvement on the original version of Neo-Soul Keys. For all these reasons, you can also download mytopc handycam pro.
Download File ––– https://cinurl.com/2uEZ66
Download Zip · https://bytlly.com/2uGiSp
DOWNLOAD ✔ https://bytlly.com/2uGlc3
DOWNLOAD > https://bytlly.com/2uGk9j
If you are a parent or a teacher who wants to teach your child or student how to read in Filipino, you might have heard of Abakada Unang Hakbang Sa Pagbasa Book. This book is a popular and effective tool for introducing young learners to the Filipino alphabet and language. But what exactly is this book, and why is it important for Filipino children? In this article, we will answer these questions and more. We will also give you some tips on how to use the book effectively, where to buy it, and what other resources you can use to enhance your child's or student's learning experience.
-Abakada Unang Hakbang Sa Pagbasa Book, which means "Abakada First Step in Reading Book" in English, is a book that teaches children how to read in Filipino using the Abakada alphabet. The Abakada alphabet is a 20-letter Filipino alphabet whose letters represent the basic sounds of the language. The book uses colorful illustrations, simple words, and fun activities to help children recognize, pronounce, and write these letters.
-DOWNLOAD ►►► https://urlcod.com/2uK8IJ
The book was first published in 1978 by Liwayway A. Arceo, a renowned Filipino writer and educator. She wrote the book as a response to the need for a standardized and systematic way of teaching Filipino to young learners. She also wanted to preserve and promote the Filipino language and culture among the new generation. The book has since been revised and updated several times to reflect the changes and developments in the Filipino language and society.
-The book has several features and benefits that make it an ideal choice for parents and teachers who want to teach their children or students how to read in Filipino. Some of these are:
-To use the book effectively, parents and teachers should follow these steps:
-Abakada Unang Hakbang Sa Pagbasa Book is not only a useful tool for teaching children how to read in Filipino but also a valuable resource for developing their cognitive, linguistic, and cultural skills. Here are some reasons why this book is important for Filipino children:
-Abakada book for beginners in reading Filipino
-How to teach Abakada to kids using Unang Hakbang Sa Pagbasa
-Abakada Unang Hakbang Sa Pagbasa review and feedback
-Where to buy Abakada Unang Hakbang Sa Pagbasa book online
-Abakada Unang Hakbang Sa Pagbasa book pdf download
-Abakada Unang Hakbang Sa Pagbasa book price and availability
-Abakada Unang Hakbang Sa Pagbasa book sample pages and contents
-Abakada Unang Hakbang Sa Pagbasa book author and publisher
-Abakada Unang Hakbang Sa Pagbasa book benefits and features
-Abakada Unang Hakbang Sa Pagbasa book vs other Filipino reading books
-Abakada Unang Hakbang Sa Pagbasa book testimonials and ratings
-Abakada Unang Hakbang Sa Pagbasa book exercises and activities
-Abakada Unang Hakbang Sa Pagbasa book level and difficulty
-Abakada Unang Hakbang Sa Pagbasa book edition and year of publication
-Abakada Unang Hakbang Sa Pagbasa book format and size
-Abakada Unang Hakbang Sa Pagbasa book illustrations and design
-Abakada Unang Hakbang Sa Pagbasa book awards and recognition
-Abakada Unang Hakbang Sa Pagbasa book curriculum and standards
-Abakada Unang Hakbang Sa Pagbasa book for homeschooling and tutoring
-Abakada Unang Hakbang Sa Pagbasa book for preschool and kindergarten
-Abakada Unang Hakbang Sa Pagbasa book for grade 1 and grade 2
-Abakada Unang Hakbang Sa Pagbasa book for special education and dyslexia
-Abakada Unang Hakbang Sa Pagbasa book for bilingual and multilingual learners
-Abakada Unang Hakbang Sa Pagbasa book for Filipino heritage and culture
-Abakada Unang Hakbang Sa Pagbasa book for fun and enjoyment
-How to use Abakada Unang Hakbang Sa Pagbasa book effectively
-How to supplement Abakada Unang Hakbang Sa Pagbasa book with other resources
-How to track progress and improvement with Abakada Unang Hakbang Sa Pagbasa book
-How to motivate and encourage children with Abakada Unang Hakbang Sa Pagbasa book
-How to customize and personalize Abakada Unang Hakbang Sa Pagbasa book for different learners
-How to pronounce and write the letters in Abakada Unang Hakbang Sa Pagbasa book
-How to learn the sounds and syllables in Abakada Unang Hakbang Sa Pagbasa book
-How to read the words and sentences in Abakada Unang Hakbang Sa Pagbasa book
-How to understand the meaning and context in Abakada Unang Hakbang Sa Pagbasa book
-How to expand vocabulary and grammar with Abakada Unang Hakbang Sa Pagbasa book
-How to develop comprehension and fluency with Abakada Unang Hakbang Sa Pagbasa book
-How to enhance creativity and imagination with Abakada Unang Hakbang Sa Pagbasa book
-How to foster critical thinking and problem-solving with Abakada Unang Hakbang Sa Pagbasa book
-How to build confidence and self-esteem with Abakada Unang Hakbang Sa Pagbasa book
-How to inspire love for reading and learning with Abakada Unang Hakbang Sa Pagbasa book
-The history and origin of the Abakada alphabet in the Philippines
-The significance and relevance of the Abakada alphabet in the Filipino language
-The challenges and opportunities of the Abakada alphabet in the modern world
-The comparison and contrast of the Abakada alphabet with other alphabets in the world
-The evolution and development of the Abakada alphabet over time
-The variations and adaptations of the Abakada alphabet in different regions and dialects in the Philippines
-The influence and impact of the Abakada alphabet on Filipino literature, art, music, culture, identity, etc.
-The future and prospects of the Abakada alphabet in the global society
Learning Filipino as a first language has many advantages for children's cognitive development. According to research, learning one's native language can enhance one's memory, creativity, problem-solving, and critical thinking skills. It can also improve one's communication, comprehension, and expression skills. Moreover, learning one's native language can foster one's sense of identity, belonging, and pride. It can also help one appreciate one's heritage, traditions, and values.
-The Philippines is a multilingual country with more than 170 languages spoken by different ethnic groups. The official languages are Filipino (based on Tagalog) and English, which are used as mediums of instruction in schools. However, many children also speak their regional languages at home, such as Cebuano, Ilocano, Hiligaynon, Bicolano, and others. This poses some challenges for bilingual education in terms of curriculum design, teacher training, material development, and assessment methods. However, it also offers some opportunities for promoting linguistic diversity, cultural awareness, and social integration. Abakada Unang Hakbang Sa Pagbasa Book can help address these challenges and opportunities by providing a common foundation for learning Filipino as a national language while also respecting and supporting other regional languages.
-Abakada Unang Hakbang Sa Pagbasa Book can also play a significant role in promoting Filipino culture and identity among children. The book introduces children to various aspects of Filipino culture such as literature, art, music, history, geography, science, religion, sports, and cuisine. It also exposes them to different values such as respect, honesty, hard work, cooperation, and patriotism. By doing so, the book helps children develop a positive attitude towards their own culture while also appreciating other cultures. It also helps them form a strong sense of identity as Filipinos who are proud of their roots but also open to new ideas.
-If you are interested in buying Abakada Unang Hakbang Sa Pagbasa Book for your child or student, you have several options available online and offline. Here are some of them:
-You can buy Abakada Unang Hakbang Sa Pagbasa Book online from various websites such as Lazada, Shopee, Amazon, and others. The price varies depending on the seller, the edition, and the condition of the book. You can also avail of discounts, free shipping, and cash on delivery options from some online sellers. You can also buy Abakada Unang Hakbang Sa Pagbasa Book offline from various bookstores such as National Bookstore, Fully Booked, Booksale, and others. The price is similar to the online price, but you might have to pay extra for transportation and handling fees. You can also check the availability and quality of the book before buying it.
-Abakada Unang Hakbang Sa Pagbasa Book has received many positive reviews and testimonials from customers and experts who have used or recommended the book. Some of these are:
---"I bought this book for my daughter who is in kindergarten. She loves it! She learned how to read in Filipino in just a few weeks. The book is very easy to follow and has many fun activities. She also enjoys the songs and stories in the book. I highly recommend this book to all parents who want to teach their children how to read in Filipino."
-- Maria, a mother from Quezon City -
--"I use this book as a reference for teaching Filipino to my students who are non-native speakers. The book is very helpful and effective in introducing the basic sounds and words of Filipino. The book also has a lot of cultural information that enriches the learning experience. My students love the book and have improved their Filipino skills a lot."
-- John, a teacher from Cebu -
--"This book is a classic and a must-have for every Filipino child. The book is not only a tool for teaching how to read in Filipino but also a treasure for preserving and promoting Filipino culture and identity. The book is well-written, well-illustrated, and well-designed. It is suitable for children of different ages and levels."
-- Dr. Jose, a linguist and educator from Manila -
Abakada Unang Hakbang Sa Pagbasa Book is not the only resource you can use to teach your child or student how to read in Filipino. There are also other alternatives and supplements that you can use to enhance your learning experience. Some of these are:
-Abakada Unang Hakbang Sa Pagbasa Book is a great resource for teaching children how to read in Filipino using the Abakada alphabet. The book has a history and purpose that make it relevant and meaningful for Filipino children. It has features and benefits that make it easy and enjoyable for parents and teachers to use. It has an importance that makes it valuable and essential for children's cognitive, linguistic, and cultural development. It has an availability and price that make it accessible and affordable for everyone. It has reviews and testimonials that make it reliable and trustworthy for customers and experts. And it has alternatives and supplements that make it flexible and adaptable for different needs and preferences.
-If you are looking for a way to teach your child or student how to read in Filipino, you should definitely consider buying Abakada Unang Hakbang Sa Pagbasa Book. It will not only help them learn how to read but also help them appreciate their own language, culture, and identity as Filipinos. So what are you waiting for? Order your copy today!
-Here are some frequently asked questions about Abakada Unang Hakbang Sa Pagbasa Book:
-If you are looking for a reliable and efficient backup solution for your data, you might want to consider CA ARCserve Backup R16 LuLZiSO.rar. This is a compressed file that contains the installation files for CA ARCserve Backup R16, a powerful software that can protect your data from any disaster. In this article, we will review the features, benefits and installation steps of CA ARCserve Backup R16 LuLZiSO.rar.
-DOWNLOAD ✓ https://urlcod.com/2uK3vL
CA ARCserve Backup R16 LuLZiSO.rar is a file that you can download from the internet for free. It contains the setup files for CA ARCserve Backup R16, a software that can backup and restore your data across different platforms and devices. CA ARCserve Backup R16 is developed by Arcserve, a leading provider of data protection solutions. The software has been tested and verified by LuLZiSO, a group of hackers who crack and distribute software for free.
-CA ARCserve Backup R16 LuLZiSO.rar has many features that make it a superior backup solution for your data. Some of the features are:
-CA ARCserve Backup R16 LuLZiSO.rar has many benefits that make it a valuable backup solution for your data. Some of the benefits are:
-To install CA ARCserve Backup R16 LuLZiSO.rar, you need to follow these steps:
-CA ARCserve Backup R16 software download
-CA ARCserve Backup R16 crack by LuLZiSO
-CA ARCserve Backup R16 full version rar file
-How to install CA ARCserve Backup R16 LuLZiSO
-CA ARCserve Backup R16 license key generator
-CA ARCserve Backup R16 torrent link
-CA ARCserve Backup R16 free trial download
-CA ARCserve Backup R16 patch by LuLZiSO
-CA ARCserve Backup R16 activation code
-CA ARCserve Backup R16 serial number
-CA ARCserve Backup R16 system requirements
-CA ARCserve Backup R16 user manual pdf
-CA ARCserve Backup R16 features and benefits
-CA ARCserve Backup R16 reviews and ratings
-CA ARCserve Backup R16 comparison with other backup software
-CA ARCserve Backup R16 technical support contact
-CA ARCserve Backup R16 online training courses
-CA ARCserve Backup R16 best practices and tips
-CA ARCserve Backup R16 troubleshooting guide
-CA ARCserve Backup R16 upgrade options
-CA ARCserve Backup R16 discount coupon code
-CA ARCserve Backup R16 alternative solutions
-CA ARCserve Backup R16 compatible devices and platforms
-CA ARCserve Backup R16 data recovery tools
-CA ARCserve Backup R16 cloud backup integration
-CA ARCserve Backup R16 encryption and security features
-CA ARCserve Backup R16 backup schedule and automation
-CA ARCserve Backup R16 restore and recovery options
-CA ARCserve Backup R16 backup verification and validation
-CA ARCserve Backup R16 backup compression and deduplication
-CA ARCserve Backup R16 backup types and modes
-CA ARCserve Backup R16 backup destination and media
-CA ARCserve Backup R16 backup retention and rotation policies
-CA ARCserve Backup R16 backup reports and alerts
-CA ARCserve Backup R16 backup performance and optimization
-CA ARCserve Backup R16 backup migration and conversion tools
-CA ARCserve Backup R16 backup replication and synchronization features
-CA ARCserve Backup R16 backup disaster recovery plan
-CA ARCserve Backup R16 backup virtualization support
-CA ARCserve Backup R16 backup bare metal restore feature
-Who is LuLZiSO and how to contact them?
-What is a rar file and how to open it?
-How to avoid malware and viruses from downloading rar files?
-How to verify the authenticity and integrity of rar files?
-How to extract rar files using WinRAR or other tools?
-How to split or join rar files using HJSplit or other tools?
-How to password protect or encrypt rar files using WinRAR or other tools?
-How to repair corrupted or damaged rar files using WinRAR or other tools?
-How to create or edit rar files using WinRAR or other tools?
-How to convert rar files to other formats using WinRAR or other tools?
CA ARCserve Backup R16 LuLZiSO.rar is not a perfect backup solution for your data. It has some drawbacks that you should be aware of before using it. Some of the drawbacks are:
-If you are looking for a different backup solution for your data, you might want to consider some alternatives to CA ARCserve Backup R16 LuLZiSO.rar. Some of the alternatives are:
-After installing CA ARCserve Backup R16 LuLZiSO.rar, you can use it to backup and restore your data easily and efficiently. Here are some steps to use the software:
-CA ARCserve Backup R16 LuLZiSO.rar is a file that contains the installation files for CA ARCserve Backup R16, a software that can backup and restore your data across different platforms and devices. It has many features and benefits that make it a superior backup solution for your data. However, it also has some drawbacks that you should be aware of before using it. It is illegal, risky, unsupported and unethical to use CA ARCserve Backup R16 LuLZiSO.rar without a valid license from Arcserve. You might want to consider some alternatives to CA ARCserve Backup R16 LuLZiSO.rar if you are looking for a different backup solution for your data. If you decide to use CA ARCserve Backup R16 LuLZiSO.rar, you can follow some steps to install and use it easily and efficiently. We hope this article has helped you understand more about CA ARCserve Backup R16 LuLZiSO.rar and how to use it for your data protection needs.
If you are a fan of car racing games, you might have heard of 07 Masin Oyunu, a popular game that lets you drive various models of VAZ cars on different tracks and terrains. In this article, we will tell you everything you need to know about this game, including how to download it for free, how to play it online or offline, how to customize and upgrade your car, and how to master the skills and tricks of the game.
-07 Masin Oyunu is a car racing game that was developed by a Turkish company called Seze Oyun. The game was released in 2019 and has gained a lot of popularity among car enthusiasts, especially in Azerbaijan and Turkey. The game features realistic graphics, physics, and sounds, as well as various modes, tracks, cars, and customization options.
-Download 🔗 https://bltlly.com/2uOra0
The gameplay of 07 Masin Oyunu is simple and fun. You can choose from different models of VAZ cars, such as VAZ 2107, VAZ 2105, VAZ 2106, VAZ 21011, VAZ 2101, and Lada Niva. You can also modify your car's appearance, color, wheels, spoilers, stickers, and more. You can then select from different tracks and terrains, such as city streets, highways, dirt roads, mountains, deserts, snowfields, etc. You can also choose from different weather conditions, such as sunny, rainy, foggy, snowy, etc.
-The game has two main modes: online multiplayer mode and offline single-player mode. In online multiplayer mode, you can join or create a server and race against other players from around the world. You can chat with them, challenge them, or join their teams. You can also participate in tournaments and events that are organized by the game developers or the community. In offline single-player mode, you can race against the computer or practice your skills in free roam mode.
-If you want to download 07 Masin Oyunu for free, you can visit the official website of the game at . There you can find all the information about the game, such as its features, screenshots, videos, reviews, updates, etc. You can also find the links to download the game for Windows PC or Android devices. The game is compatible with Windows XP/Vista/7/8/10 and Android 4.0 or higher.
-Before you download 07 Masin Oyunu, you should make sure that your device meets the minimum system requirements of the game. Here are the system requirements for Windows PC and Android devices:
| Device | Minimum System Requirements |
| --- | --- |
| Windows PC | |
| Android Device | |
Once you have downloaded the game file from the official website, you can follow these steps to install the game on your device:
-If you want to play 07 Masin Oyunu online with other players, you need to have an internet connection and a registered account on the game website. You can create an account for free by entering your username, email, and password. Once you have an account, you can log in to the game and select the online multiplayer mode. You can then join or create a server and choose a car, a track, and a weather condition. You can also chat with other players, invite them to your team, or challenge them to a race.
-The game has several servers that are hosted by the game developers or the community. You can find a list of servers on the game website or in the game menu. You can also create your own server by selecting the create server option and setting up your own rules and settings. You can then invite other players to join your server or make it public for anyone to join.
-If you want to play 07 Masin Oyunu offline without an internet connection, you can select the offline single-player mode. You can then choose a car, a track, and a weather condition. You can also adjust the difficulty level and the number of opponents. You can then start the race and compete against the computer or practice your skills in free roam mode.
-07 masin oyunu indir pulsuz
-07 masin oyunu indir komputer ucun
-07 masin oyunu indir microsoft store
-07 masin oyunu indir windows 10
-07 masin oyunu indir online
-07 masin oyunu indir apk
-07 masin oyunu indir android
-07 masin oyunu indir yukle
-07 masin oyunu indir full
-07 masin oyunu indir son versiya
-07 masin yarisi oyunu indir
-07 masin park etme oyunu indir
-07 masin surme oyunu indir
-07 masin modifiye oyunu indir
-07 masin drift oyunu indir
-07 masin simulator oyunu indir
-07 masin yarislarinin en yaxsi oyunu indir
-07 masin avtobus oyunu indir
-07 masin cip oyunu indir
-07 masin limuzin oyunu indir
-07 masin polis oyunu indir
-07 masin taxi oyunu indir
-07 masin qacis oyunu indir
-07 masin dovlet nomresi ile oyunu indir
-07 masin qarajda tamir etme oyunu indir
-07 masin reng deyisme oyunu indir
-07 masin aksesuar elave etme oyunu indir
-07 masin sekilleri yukleme oyunu indir
-07 masin sevgilisi ile gezintiye cixmaq oyunu indir
-07 masin qazanmaq ucun yaris etme oyunu indir
-07 masin ile seherde gezmek oyunu indir
-07 masin ile kendi yollarinda surmek oyunu indir
-07 masin ile daglarda macera yasamaq oyunu indir
-07 masin ile deniz kenarinda gezintiye cixmaq oyunu indir
-07 masin ile qarli yollarda surmek oyunu indir
-07 masin ile cirkli yollarda surmek oyunu indir
-07 masin ile qeyri-resmi yarislar yapmaq oyunu indir
-07 masin ile polisten qacmaq ve saklanmaq oyunu indir
-07 masin ile banka soygunlari yapmaq ve qacmaq oyunu indir
-07 masin ile mafya isleri yapmaq ve dusmenlerden qacmaq oyunu indir
-07 masinin en yeni modelini almaq ve test etmek oyunu indir
-07 masinin en guclunu almaq ve modifiye etmek oyunu indir
-07 masinin en gozelini almaq ve rengini deyismek oyunu indir
-07 masinin en sade ve ucuzunu almaq ve tamire salmaq oyunu indir
-07 masinin en tezini almaq ve rekorlar qirmaq oyunu indir
The offline single-player mode also has some options that you can access from the pause menu. You can change your car, track, or weather condition at any time during the game. You can also restart or quit the race, save or load your progress, adjust the sound and graphics settings, or view your statistics and achievements.
-One of the most fun aspects of 07 Masin Oyunu is that you can customize and upgrade your car in various ways. You can choose from different models of VAZ cars, such as VAZ 2107, VAZ 2105, VAZ 2106, VAZ 21011, VAZ 2101, and Lada Niva. Each car has its own characteristics and performance, such as speed, acceleration, handling, braking, etc.
-You can also modify your car's appearance by changing its color, wheels, spoilers, stickers, lights, mirrors, windows, etc. You can also add some accessories to your car, such as flags, horns, sirens, etc. You can access the car selection and modification features from the main menu or the pause menu of the game.
-Besides modifying your car's appearance, you can also upgrade your car's performance by changing its engine, transmission, suspension, brakes, tires, etc. You can also tune your car's settings, such as the gear ratio, the camber angle, the tire pressure, etc. You can access the car performance and tuning features from the garage menu of the game.
-By customizing and upgrading your car, you can improve its speed, acceleration, handling, braking, etc. You can also make your car more suitable for different tracks and terrains. However, you should also be careful not to overdo it or damage your car. You can check your car's condition and repair it from the garage menu of the game.
-Before you start racing in 07 Masin Oyunu, you should familiarize yourself with the basic controls and tips of the game. Here are the default controls for Windows PC and Android devices:
| Device | Default Controls |
| --- | --- |
| Windows PC | |
| Android Device | |
You can also customize your controls from the settings menu of the game. Here are some tips to help you drive better in 07 Masin Oyunu:
-If you want to master the skills and tricks of 07 Masin Oyunu, you should practice a lot and learn from your mistakes. You should also try to learn some advanced techniques and strategies that can give you an edge over your opponents. Here are some examples of advanced techniques and strategies that you can use in 07 Masin Oyunu:
-07 Masin Oyunu is a fun and exciting car racing game that you can download and play for free on your Windows PC or Android device. You can choose from different models of VAZ cars, customize and upgrade them, and race on various tracks and terrains. You can also play online with other players or offline with the computer. You can also learn and master the skills and tricks of the game, such as overtaking, nitro management, tuning, drifting, and stunts. If you are looking for a realistic, challenging, and entertaining car racing game, you should definitely try 07 Masin Oyunu.
-A: Yes, 07 Masin Oyunu is safe to download and play. The game does not contain any viruses, malware, or spyware. The game also does not require any personal information or permissions from your device. However, you should always download the game from the official website or a trusted source to avoid any problems or risks.
-A: You can update 07 Masin Oyunu to the latest version by visiting the official website of the game at . There you can find the latest updates and patches for the game. You can also check for updates from the game menu or settings. You should always update the game to enjoy the new features, improvements, and bug fixes.
-A: You can contact the developers or the community of 07 Masin Oyunu by visiting their social media pages or forums. You can find their links on the official website of the game at . There you can ask questions, give feedback, report issues, suggest ideas, or join discussions about the game.
-A: You can support 07 Masin Oyunu financially by making a donation to the developers or buying some in-game items or features. You can find their donation link or their in-game store on the official website of the game at . There you can choose how much you want to donate or what you want to buy. Your support will help them maintain and improve the game.
-A: You can share your screenshots or videos of 07 Masin Oyunu by using the built-in screenshot or video capture feature of your device or by using a third-party app or software. You can then upload your screenshots or videos to your social media pages or platforms, such as Facebook, Instagram, YouTube, etc. You can also tag the developers or the community of 07 Masin Oyunu to show them your achievements or creations.
If you are a fan of Indian cinema, you must have heard of AR Rahman, the legendary composer who has won numerous awards and accolades for his musical scores. AR Rahman, whose full name is Allah Rakha Rahman, was born as A.S. Dileep Kumar in Chennai, India, on January 6, 1967. He started his career as a keyboard player and jingle composer before making his debut as a film composer with Roja in 1992. Since then, he has composed music for over 100 films in various languages, including Tamil, Hindi, Telugu, Malayalam, English, and Chinese.
-Download File ✅ https://bltlly.com/2uOon5
AR Rahman's musical style is a fusion of Indian classical, folk, pop, rock, electronic, and world music. He is known for his innovative use of instruments, sounds, rhythms, and melodies. He is also famous for his background music (BGM), which enhances the mood and emotion of the scenes in the films. His BGMs are often catchy, memorable, and inspiring.
In this article, we will explore some of his best BGMs from the 90s and how you can download them to enjoy them anytime and anywhere.
The 90s was a golden era for AR Rahman's music. He collaborated with some of the finest directors and actors in Indian cinema and created some of his most iconic soundtracks. Here are some of his best BGMs from the 90s that you should not miss:
ar rahman 90s bgm collection free download
-ar rahman 90s bgm mp3 download
-ar rahman 90s bgm zip download
-ar rahman 90s bgm soundcloud
-ar rahman 90s bgm internet archive
-ar rahman 90s tamil bgm download
-ar rahman 90s hindi bgm download
-ar rahman 90s telugu bgm download
-ar rahman 90s malayalam bgm download
-ar rahman 90s instrumental bgm download
-ar rahman 90s romantic bgm download
-ar rahman 90s love theme bgm download
-ar rahman 90s flute bgm download
-ar rahman 90s piano bgm download
-ar rahman 90s guitar bgm download
-ar rahman 90s interlude bgm download
-ar rahman 90s roja bgm download
-ar rahman 90s gentleman bgm download
-ar rahman 90s bombay bgm download
-ar rahman 90s kadhalan bgm download
-ar rahman 90s duet bgm download
-ar rahman 90s iruvar bgm download
-ar rahman 90s minsara kanavu bgm download
-ar rahman 90s dil se bgm download
-ar rahman 90s jeans bgm download
-ar rahman 90s kadhalar dhinam bgm download
-ar rahman 90s alaipayuthey bgm download
-ar rahman 90s lagaan bgm download
-ar rahman 90s taal bgm download
-ar rahman 90s kandukondain kandukondain bgm download
-ar rahman 90s kannathil muthamittal bgm download
-ar rahman 90s saathiya bgm download
-ar rahman 90s guru bgm download
-ar rahman 90s slumdog millionaire bgm download
-ar rahman 90s jodhaa akbar bgm download
-ar rahman 90s delhi 6 bgm download
-ar rahman 90s vinnaithaandi varuvaayaa bgm download
-ar rahman 90s rockstar bgm download
-ar rahman 90s raanjhanaa bgm download
-ar rahman 90s highway bgm download
-ar rahman 90s ok kanmani bgm download
-best of ar rahman 90s bgm download
-how to download ar rahman 90s bgm
-where to find ar rahman 90s bgm
-sites to download ar rahman 90s bgm
-apps to listen to ar rahman 90s bgm
-youtube playlist of ar rahman 90s bgm
-spotify playlist of ar rahman 90s bgm
-gaana playlist of ar rahman 90s bgm
-jiosaavn playlist of ar rahman 90s bgm
AR-media Plugin for Autodesk 3ds Max is a tool that allows you to export your 3D models from 3ds Max to the AR-media Studio platform, where you can create immersive and interactive experiences for Augmented and Virtual Reality. In this article, we will show you how to install and use the plugin, as well as some examples of what you can achieve with it.
-To install the AR-media Plugin for 3ds Max, you need to download it from the Autodesk App Store[^1^]. The plugin installs itself as a script utility, meaning that it will be available through 3ds Max's Utility panel. To start the plugin:
-Download ✶ https://urlcod.com/2uHwjY
The plugin's interface will be displayed as a rollout and related toolbar.
-When you install the plugin, you will see the AR-media Plugin's toolbar which is your quick access point to the main functionalities:
-Using the toolbar you can:
-You usually have to go to your AR-media Studio account to:
-Before any export, you need to set your personal plugin token (retrieved from your personal account data) to bind the AR-media Extension to your space and to the resources available on the AR-media Studio platform. When you set a token you can check the "Remember me" option to keep the token available across sessions. Un-check it if you are working on a shared computer and do not want other users to upload models to your account, which would consume your personal storage and limits.
- -Then to actually export a new model, you need to choose a name for your model. If you want to update a previously exported model instead, you must also specify a model token. The model token is automatically generated each time you export a model and is made available to let you keep working on the very same model asset instead of creating a new asset each time you export your content (this will also help you keeping the platform usage as low as possible).
-Upon a successful export your model will appear in the AR-media Studio Asset Manager with the chosen model name.
-Note: Model names have not to be unique even though you'd better choose unique names to distinguish them later on.
-You set the plugin token, the model name (and optionally the model token) using the plugin's rollout:
-To export the whole scene click the All button, whereas if you want to export only the current selection click the Selected button. By clicking the Account button you will be able to log in to your account to retrieve the plugin token, a model token (if required), or to verify, manage and test exported models. The Toolbar button will show the plugin's toolbar in case it has been closed by the user.
-When exporting models from 3ds Max you can choose to include animations and, optionally, to bake animations if you are experiencing problems with the default export method. For this purpose you can rely on the Export Animations and Bake Animations checkboxes, respectively.
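-For users who prefer scripting the same workflow instead of clicking through the rollout, the sketch below shows how the export settings described above (plugin token, model name, optional model token, selection-only export) could be wired up from 3ds Max's Python interface. It is only an illustration: `export_selected_to_armedia` is a hypothetical helper standing in for whatever entry point the plugin actually exposes, and the token values are placeholders.

```python
# Minimal sketch (not the plugin's real API): illustrate the export workflow
# described above from 3ds Max's Python interface (pymxs). Only the selection
# handling uses the real pymxs API; the export call itself is a hypothetical
# placeholder for the plugin's actual entry point.
from pymxs import runtime as rt

PLUGIN_TOKEN = "YOUR-PLUGIN-TOKEN"  # personal token copied from your AR-media Studio account
MODEL_NAME = "living_room_v2"       # name shown later in the Asset Manager
MODEL_TOKEN = None                  # set to an existing model token to update that asset instead of creating a new one

def export_selected_to_armedia(plugin_token, model_name, model_token=None):
    """Hypothetical equivalent of pressing the 'Selected' button in the rollout."""
    selected = list(rt.selection)   # objects currently selected in the viewport
    if not selected:
        raise RuntimeError("Nothing selected: select the objects you want to export first.")
    action = "update existing asset" if model_token else "create new asset"
    print(f"Exporting {len(selected)} object(s) as '{model_name}' ({action}).")
    # ...the real plugin would upload the geometry to AR-media Studio here...

export_selected_to_armedia(PLUGIN_TOKEN, MODEL_NAME, MODEL_TOKEN)
```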
-Here are some examples of what you can do with AR-media Plugin for 3ds Max and AR-media Studio:
-Matru Ki Bijlee Ka Mandola is a 2013 Bollywood comedy-drama film directed by Vishal Bhardwaj and starring Imran Khan, Anushka Sharma and Pankaj Kapur. The film revolves around the lives of three characters: Matru, a village boy who works for a rich landlord; Bijlee, the landlord's daughter who is in love with Matru; and Mandola, the landlord who is addicted to alcohol and wants to sell his land to a corrupt politician.
-DOWNLOAD ✏ https://urlcod.com/2uHva4
If you are a fan of this film and want to watch it in Telugu language with Hindi subtitles, you might be wondering how to do that without paying any money. Well, you are in luck because there is a website that offers this service for free. The website is called MovierulzHD and it has a huge collection of multi-audio movies in various languages. You can watch Matru Ki Bijlee Ka Mandola Telugu movie dubbed in Hindi for free on this website.
-Here are the steps to watch Matru Ki Bijlee Ka Mandola Telugu movie dubbed in Hindi for free on MovierulzHD:
-MovierulzHD is a reliable and fast streaming service that lets you watch multi-audio movies for free. You can also download the movies if you want to watch them offline. However, you should be aware that watching or downloading pirated movies is illegal and may have legal consequences. We do not endorse or promote piracy in any way. This article is for informational purposes only.
- - -If you are looking for more movies like Matru Ki Bijlee Ka Mandola Telugu movie dubbed in Hindi, you can check out the other movies on MovierulzHD. Some of the popular movies that you can watch on this website are:
-These are just some of the movies that you can watch on MovierulzHD for free. You can also browse through the categories and genres to find more movies that suit your taste. Whether you want to watch comedy, drama, romance, action, thriller or horror movies, you can find them all on MovierulzHD.
7b8c122e87Chupulu Kalisina Subhavela Episode 185 full clip, Chupulu Kalisina ... Chupulukalisina subhavela KUSHI PERFORMANCE ON MAA TV download ... Chupulu kalisina subha vela telugu serial emotion scene|idhu kadhala| iss luar ko kya ... mp4, flv, webm, avi and mp3 Search and Download youtube videos 3gp, mp4, flv, ...
-Chupulu Kalisina Subhavela is a dubbed version of a Hindi television serial. All episodes of the serial, produced by MAA TV, can be watched online.
-Download File ✪✪✪ https://urlcod.com/2uyUNT
【Optional step】 If you want to support Tsinghua's ChatGLM or Fudan's MOSS as a backend, you need to install additional dependencies (prerequisites: familiarity with Python, experience with PyTorch, and a sufficiently powerful computer):
```sh
# 【Optional step I】 Support for Tsinghua's ChatGLM. Note on ChatGLM: if you get the error "Call ChatGLM fail 不能正常加载ChatGLM的参数" (the ChatGLM parameters could not be loaded), do the following: 1. The torch + cpu build is installed by default; to use CUDA, uninstall torch and reinstall a torch + cuda build. 2. If the model cannot be loaded because your machine is not powerful enough, lower the model precision in request_llm/bridge_chatglm.py by changing AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True) to AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llm/requirements_chatglm.txt

# 【Optional step II】 Support for Fudan's MOSS
python -m pip install -r request_llm/requirements_moss.txt
git clone https://github.com/OpenLMLab/MOSS.git request_llm/moss  # Note: run this line from the project's root directory

# 【Optional step III】 Make sure the config.py configuration file includes all the models you want; the currently supported models are the following (the jittorllms series currently supports only the Docker solution):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "newbing", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
```
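If you are not sure whether a CPU-only or CUDA-enabled torch build ended up installed, a quick check along these lines (a generic sketch, not part of the project itself) can tell you before you try to load ChatGLM whether the int4 fallback mentioned above is likely to be needed:

```python
# Sanity check before loading ChatGLM: report which torch build is installed
# and whether CUDA is usable, so you know if the chatglm-6b-int4 fallback in
# request_llm/bridge_chatglm.py is likely to be needed.
import torch

print("torch version:", torch.__version__)    # e.g. "2.1.0+cpu" vs "2.1.0+cu118"
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    free, total = torch.cuda.mem_get_info()    # free/total GPU memory in bytes
    print(f"GPU memory: {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
else:
    print("No usable GPU detected: consider chatglm-6b-int4 or CPU-only inference.")
```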
-