diff --git a/spaces/1gistliPinn/ChatGPT4/Crea Il Tuo Personaggio Cartoon Fix.md b/spaces/1gistliPinn/ChatGPT4/Crea Il Tuo Personaggio Cartoon Fix.md deleted file mode 100644 index 5692a5d88d2eedc81f4a1a40d597c6228ddf65f2..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Crea Il Tuo Personaggio Cartoon Fix.md +++ /dev/null @@ -1,64 +0,0 @@ -## crea il tuo personaggio cartoon - - - - - - - - - -**DOWNLOAD [https://lomasmavi.blogspot.com/?c=2txmL8](https://lomasmavi.blogspot.com/?c=2txmL8)** - - - - - - - - - - - - Here is a possible title and article with html formatting for the keyword "crea il tuo personaggio cartoon": - -# Come creare il tuo personaggio cartoon online gratis - - - -Se ti piacciono i cartoni animati e vuoi creare il tuo personaggio cartoon personalizzato, ci sono diversi strumenti online che ti possono aiutare. In questo articolo ti presentiamo tre opzioni gratuite e facili da usare per realizzare il tuo avatar cartoon in pochi minuti. - - - -## Canva - - - -Canva è una piattaforma di design online che ti permette di creare avatar divertenti e simpatici per te o per il tuo brand. Puoi scegliere tra vari modelli personalizzabili e utilizzare le applicazioni integrate per creare avatar da stilizzare a modo tuo. Puoi anche caricare le tue foto e trasformarle in avatar con l'app Bitmoji. Canva ti offre anche la possibilità di aggiungere bordi, sfondi, testi e illustrazioni al tuo avatar per renderlo unico e originale. Puoi scaricare il tuo avatar o condividerlo sui social media, sul tuo sito web o su qualsiasi materiale pubblicitario o di marketing.[^1^] - - - -## Animaker - - - -Animaker è uno strumento online per creare video animati in modo rapido e facile. Puoi scegliere tra una vasta gamma di modelli video disponibili nella libreria e modificarli come preferisci. Puoi anche creare il tuo video da zero e personalizzare ogni elemento con testi animati, immagini, personaggi dei cartoni, sfondi o proprietà . Animaker ti offre anche la possibilità di creare personaggi dei cartoni personalizzati con il suo strumento di creazione dei personaggi. Puoi scegliere tra una vastissima gamma di accessori, costumi, caratteristiche facciali ed espressioni per realizzare miliardi di personaggi unici. Inoltre, puoi aggiungere voice over realistici ai tuoi personaggi con il motore Text-to-Speech e sincronizzare automaticamente le labbra con il movimento delle labbra. Puoi scaricare il tuo video cartone o pubblicarlo sui tuoi profili social.[^2^] - - - -## Adobe Express - - - -Adobe Express è un servizio online che ti permette di creare avatar personalizzati per i tuoi profili social, per Twitch, YouTube e altro ancora. Puoi esplorare la collezione di icone e immagini di Adobe Express per progettare un avatar che rispecchi la tua personalità online. Puoi anche caricare le tue foto e applicare vari filtri ed effetti per trasformarle in avatar cartoon. Adobe Express ti offre anche la possibilità di aggiustare le dimensioni, il colore e la posizione del tuo avatar per adattarlo al formato desiderato. Puoi salvare il tuo avatar o condividerlo direttamente sui tuoi canali online.[^3^] - -Here is a possible continuation of the article: - -Questi sono solo alcuni dei tanti strumenti online che ti permettono di creare il tuo personaggio cartoon gratis. Ognuno di essi ha i suoi vantaggi e svantaggi, quindi ti consigliamo di provarli tutti e scegliere quello che più si adatta alle tue esigenze e preferenze. 
Creare il tuo personaggio cartoon online è un modo divertente e creativo per esprimere la tua personalità e il tuo stile. Puoi usare il tuo avatar per comunicare con i tuoi amici, i tuoi fan o i tuoi clienti, per creare contenuti originali e accattivanti o per promuovere il tuo brand o il tuo progetto. Che aspetti? Inizia subito a creare il tuo personaggio cartoon online gratis! - - dfd1c89656 - - - - - diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Enjoy Lords of the Realm III Without Verification Download Here.md b/spaces/1gistliPinn/ChatGPT4/Examples/Enjoy Lords of the Realm III Without Verification Download Here.md deleted file mode 100644 index 23c309b5abbcfb1df820093ee6b7af44f501b069..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Enjoy Lords of the Realm III Without Verification Download Here.md +++ /dev/null @@ -1,5 +0,0 @@ - -

8.2 We do not pre-screen, approve, endorse, or own Your UGC (as well as UGC of other users of the Game), which You have uploaded or made available to other users via the Game or the Services. You create, download, and use the User Generated Content at Your own risk. Still, by uploading or making available Your UGC via the Game or the Services, You grant us a non-exclusive, transferable, sublicensable, worldwide, irrevocable license to store, publish, print, distribute, reproduce, copy, fix, perform, adapt, modify, transfer, and use for commercial purposes (including, but not limited to, use for advertisement purposes) Your UGC without any notice or further compensation to You.

-

Lords of the Realm III download without verification


Download: https://imgfil.com/2uxZ5i



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Dark Riddle Hack How I Beat the Game with Unlimited Resources.md b/spaces/1phancelerku/anime-remove-background/Dark Riddle Hack How I Beat the Game with Unlimited Resources.md deleted file mode 100644 index b64d6216770a862efd354214375b05316a31158f..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Dark Riddle Hack How I Beat the Game with Unlimited Resources.md +++ /dev/null @@ -1,82 +0,0 @@ -
-

Dark Riddle Hack: How to Unlock All Skins and Quests

-

If you are a fan of stealth games, you might have heard of Dark Riddle, a popular mobile game that lets you explore your neighbor's house and discover his secrets. But did you know that you can use a hack tool to unlock all the skins and quests in the game? In this article, we will show you how to do that and more.

-

dark riddle hack


Download File: https://jinyurl.com/2uNS0x



-

What is Dark Riddle?

-

Dark Riddle is a 3D adventure game developed by Nika Entertainment. It is available for both Android and iOS devices. In this game, you play as a curious character who wants to find out what your neighbor is hiding in his basement. You have to sneak into his house, avoid his traps and cameras, and solve puzzles to progress. You can also interact with various objects and characters in the game, such as a cat, a dog, a crow, a pizza delivery guy, and more.

-

The game has many features that make it fun and challenging, such as:

- -

Why do you need Dark Riddle hack?

-

Dark Riddle is a free-to-play game, but it also has some in-app purchases that can enhance your gaming experience. For example, you can buy coins, gems, keys, hints, and premium skins using real money. However, not everyone can afford to or wants to spend money on these items. That's where Dark Riddle hack comes in handy.

-

The benefits of using the hack tool

-

By using Dark Riddle hack, you can get access to a hidden menu that gives you unlimited resources and options. You can use these to:

- -

The risks of using the hack tool

-

However, using Dark Riddle hack also comes with some risks that you should be aware of. These include:

- -

Therefore, you should use Dark Riddle hack with caution and at your own risk. Here are some tips for using it wisely:
  • Use your size wisely. You can use the hack tool to change your size, but don't set it too big or small or you will have trouble with doors or objects.
  • -

  • Use your gravity wisely. Gravity can help you jump higher or lower, but it also affects your landing and balance. You can use the hack tool to change your gravity, but don't set it too high or low or you will fall hard or float away.
  • -
  • Use your invisibility carefully. Invisibility can help you avoid detection or surprise your enemy, but it also affects your interaction and vision. You can use the hack tool to make yourself invisible, but don't set it too long or short or you will miss some actions or alerts.
  • -
  • Use your cheats sparingly. Cheats can help you have fun with different modes, such as flying mode, ghost mode, god mode, etc. You can use the hack tool to activate them, but don't abuse them or you will lose the fun and challenge of the game.
  • -
  • Use your settings moderately. Settings can help you modify the game's environment, such as the time of day, the weather, the sound effects, and more. You can use the hack tool to change them, but don't alter them too much or you will ruin the atmosphere and realism of the game.
  • - -

    Conclusion

    -

    Dark Riddle is a great game that offers a lot of fun and challenge for stealth game lovers. However, if you want to unlock all the skins and quests in the game without spending any money or completing any tasks, you can use Dark Riddle hack. This hack tool allows you to access a hidden menu that gives you unlimited resources and options to customize your game experience. However, you should also be aware of the risks of using the hack tool and use it with caution and moderation. We hope this article has helped you learn how to use Dark Riddle hack and enjoy the game more.

    -

    If you liked this article, please share it with your friends and leave a comment below. Also, if you have any questions or suggestions about Dark Riddle hack, feel free to contact us. We would love to hear from you.

    -

    FAQs

    -

    Here are some frequently asked questions about Dark Riddle hack:

    -

    dark riddle hack menu all skin
    -dark riddle hack apk download
    -dark riddle hack mod unlimited money
    -dark riddle hack ios no jailbreak
    -dark riddle hack online generator
    -dark riddle hack version latest
    -dark riddle hack cheats codes
    -dark riddle hack android no root
    -dark riddle hack free gems and coins
    -dark riddle hack tool without survey
    -dark riddle hack gameplay walkthrough
    -dark riddle hack new quest and skin
    -dark riddle hack pc windows 10
    -dark riddle hack reddit tips and tricks
    -dark riddle hack update 2023
    -dark riddle hack no verification or password
    -dark riddle hack how to get premium items
    -dark riddle hack easy and fast
    -dark riddle hack for iphone and ipad
    -dark riddle hack best guide and tutorial
    -dark riddle hack review and rating
    -dark riddle hack glitch and bug fix
    -dark riddle hack secrets and hidden features
    -dark riddle hack fun and funny moments
    -dark riddle hack support and feedback

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download 8 Ball Pool for Java and Play with Friends Online.md b/spaces/1phancelerku/anime-remove-background/Download 8 Ball Pool for Java and Play with Friends Online.md deleted file mode 100644 index ddb4e7b9e397c28f4066d0db5143f1c540794c97..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download 8 Ball Pool for Java and Play with Friends Online.md +++ /dev/null @@ -1,95 +0,0 @@ - -

    Download 8 Ball Pool for Java: A Guide for Pool Lovers

    -

    If you are a fan of pool games, you might have heard of 8 Ball Pool, one of the most popular and addictive online pool games in the world. But did you know that you can also play it on your Java platform? In this article, we will show you how to download and play 8 Ball Pool on Java, as well as some tips and tricks to improve your game. Let's get started!

    -

    download 8 ball pool for java


    DOWNLOAD ———>>> https://jinyurl.com/2uNSUQ



    -

    What is 8 Ball Pool?

    -

    8 Ball Pool is an online pool game where you can compete with other players from around the world in various game modes, such as PvP, Tournaments, 9 Ball, and more. You can also customize your cue and table, buy new items in the Pool Shop, and join clubs to chat with other players. The game is developed by Miniclip, a leading online gaming company, and has over 500 million downloads on Google Play. It is also available on other platforms, such as iOS, Windows, and web browsers.

    -

    What is Java Platform?

    -

The Java platform is a suite of programs that facilitates developing and running programs written in the Java programming language. It includes an execution engine (called a virtual machine), a compiler, and a set of libraries. The Java platform is independent of any particular operating system, which makes Java programs run identically on all of them. Java is a programming language and computing platform first released by Sun Microsystems in 1995. Note that while the platform is portable, the Java Virtual Machine (JVM) implementation itself is platform-dependent: each operating system requires its own JVM, and that is what lets the same compiled program run unchanged everywhere a JVM exists.
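To make this concrete, here is a small, hypothetical Java sketch (the class and file names are illustrative, not part of the game): the same source file can be compiled once and then run on any device that ships a compatible JVM.

```java
// HelloJava.java -- illustrative sketch only.
// Compile once:  javac HelloJava.java
// Run anywhere a JVM is available:  java HelloJava
public class HelloJava {
    public static void main(String[] args) {
        // Standard system properties report which runtime and OS are
        // executing the same, unmodified bytecode.
        System.out.println("Java version: " + System.getProperty("java.version"));
        System.out.println("Operating system: " + System.getProperty("os.name"));
    }
}
```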

    -

    How to Download 8 Ball Pool for Java?

    -

    To download and play 8 Ball Pool on Java, you need to follow these steps:

    -

    Step 1: Check your system requirements and compatibility

    -

Before you download the game, make sure that your device meets the minimum system requirements for running Java programs. You can check them here. Also, make sure that your device supports the Jar format, which is the file format for Java applications. You can check this by looking at the file extension of your downloaded files. If they end with .jar, then they are Jar files and you can run them on your Java platform. If they end with .jad, then they are Jad files and you need to convert them to Jar files first. You can use online tools like this one to do that.
    -

    Step 2: Download and install Java Runtime Environment (JRE)

    -

    Java Runtime Environment (JRE) is a software package that provides the libraries and components needed to run Java applications. You can download the latest version of JRE from the official website. Choose the version that matches your operating system and device architecture. After downloading, follow the instructions to install JRE on your device.

    -

    Step 3: Download and install 8 Ball Pool from a trusted source

    -

    There are many websites that offer 8 Ball Pool for Java, but not all of them are safe and reliable. Some of them may contain malware or viruses that can harm your device or steal your personal information. To avoid this, you should only download 8 Ball Pool from a trusted source, such as the official Miniclip website. You can also use other reputable sources, such as Mobile9 or Phoneky. After downloading, locate the Jar file on your device and open it to install 8 Ball Pool on your Java platform.

    -

    How to download 8 ball pool for java phones
    -Download 8 ball pool for java mobile free
    -Download 8 ball pool for java jar file
    -Download 8 ball pool for java touch screen
    -Download 8 ball pool for java game online
    -Download 8 ball pool for java from miniclip.com[^1^]
    -Download 8 ball pool for java github projects[^2^]
    -Download 8 ball pool for java mod apk
    -Download 8 ball pool for java hack tool
    -Download 8 ball pool for java cheat codes
    -Download 8 ball pool for java offline mode
    -Download 8 ball pool for java multiplayer support
    -Download 8 ball pool for java latest version
    -Download 8 ball pool for java update patch
    -Download 8 ball pool for java tutorial guide
    -Download 8 ball pool for java best tips and tricks
    -Download 8 ball pool for java reviews and ratings
    -Download 8 ball pool for java gameplay videos
    -Download 8 ball pool for java screenshots and wallpapers
    -Download 8 ball pool for java system requirements
    -Download 8 ball pool for java installation steps
    -Download 8 ball pool for java error fixes
    -Download 8 ball pool for java alternatives and similar games
    -Download 8 ball pool for java FAQs and forums
    -Download 8 ball pool for java customer service and support
    -Download 8 ball pool for java official website[^3^]
    -Download 8 ball pool for java free trial and demo
    -Download 8 ball pool for java premium features and benefits
    -Download 8 ball pool for java discount and coupon codes
    -Download 8 ball pool for java refund and cancellation policy
    -Download 8 ball pool for java terms and conditions
    -Download 8 ball pool for java privacy policy and security
    -Download 8 ball pool for java awards and achievements
    -Download 8 ball pool for java history and development
    -Download 8 ball pool for java team and developers
    -Download 8 ball pool for java testimonials and feedbacks
    -Download 8 ball pool for java news and updates
    -Download 8 ball pool for java events and tournaments
    -Download 8 ball pool for java challenges and missions
    -Download 8 ball pool for java rules and regulations
    -Download 8 ball pool for java strategies and techniques
    -Download 8 ball pool for java skills and levels
    -Download 8 ball pool for java coins and cash generator
    -Download 8 ball pool for java cues and tables collection
    -Download 8 ball pool for java friends and chat feature
    -Download 8 ball pool for java leaderboard and ranking system
    -Download 8 ball pool for java statistics and analytics
    -Download 8 ball pool for java customizations and settings

    -

    How to Play 8 Ball Pool on Java?

    -

    Playing 8 Ball Pool on Java is similar to playing it on other platforms, but there are some differences in the controls and interface. Here are the steps to play 8 Ball Pool on Java:

    -

    Step 1: Launch the game and sign in with your account

    -

    After installing 8 Ball Pool on your Java platform, launch the game by clicking on its icon. You will see the main menu with several options, such as Play Online, Play Offline, Pool Shop, Settings, and more. To play online, you need to sign in with your Miniclip account or create one if you don't have one. You can also sign in with your Facebook account if you want to sync your progress and access your friends list. To play offline, you don't need to sign in, but you will have limited features and modes.

    -

    Step 2: Choose a game mode and a table

    -

    After signing in, you can choose a game mode from the following options: PvP, where you can play against another player in real time; Tournaments, where you can join a bracket of players and compete for prizes; 9 Ball, where you can play a different variation of pool with only nine balls; and Practice, where you can practice your shots without any pressure. You can also choose a table from different themes and styles, such as London, Sydney, Moscow, and more. Each table has a different entry fee and reward, so choose wisely according to your skill level and budget.

    -

    Step 3: Aim and shoot your cue ball

    -

    Once you enter a game, you will see the pool table with the balls arranged in a triangle. The game follows the standard rules of 8 ball pool, which means that you have to pot all the balls of your assigned group (solid or striped) before potting the black 8 ball. To aim your cue ball, use the arrow keys or the number keys on your keypad. To adjust the power of your shot, use the * key or the # key on your keypad. To shoot, press the OK button or the 5 key on your keypad. You can also use the spin feature by pressing the left soft key or the right soft key on your keypad. This will allow you to control the direction and speed of your cue ball after hitting another ball.

    -

    Tips and Tricks for Playing 8 Ball Pool on Java

    -

    To improve your game and have more fun playing 8 Ball Pool on Java, here are some tips and tricks that you should know:

    -

    Tip 1: Use the spin feature to control the cue ball

    -

    The spin feature is one of the most useful tools in 8 Ball Pool, as it can help you avoid scratches, get better positions, and make trick shots. To use it, press the left soft key or the right soft key on your keypad when aiming your cue ball. You will see a circle with four arrows around it, indicating the direction of the spin. You can choose from four types of spin: top spin, back spin, left spin, and right spin. Each type of spin has a different effect on the cue ball's movement and angle. For example, top spin will make the cue ball move forward after hitting another ball, while back spin will make it move backward. Left spin and right spin will make the cue ball curve to the left or right, respectively. You can use these effects to avoid obstacles, get closer to your target ball, or make difficult shots.

    -

    Tip 2: Practice your shots in offline mode

    -

    If you want to improve your skills and confidence in 8 Ball Pool, you should practice your shots in offline mode. Offline mode allows you to play against the computer or yourself without any internet connection or entry fee. You can choose from different difficulty levels and table themes, and you can also adjust the rules and settings of the game. Offline mode is a great way to learn the basics of the game, test your strategies, and have fun without any pressure or risk.

    -

    Tip 3: Challenge your friends and other players online

    -

    One of the best features of 8 Ball Pool is that you can challenge your friends and other players online. You can invite your friends from Facebook or Miniclip to play with you, or you can join a random match with someone from anywhere in the world. You can also chat with your opponent during the game, send them emojis, and add them as friends. Playing online is not only fun and exciting, but also rewarding and challenging. You can earn coins, items, trophies, and ranking points by winning matches, and you can also join clubs and tournaments to compete with other players and teams.

    -

    Conclusion

    -

    8 Ball Pool is a fantastic online pool game that you can play on your Java platform. It has amazing graphics, realistic physics, and addictive gameplay. You can download it from a trusted source, install it on your device, and enjoy playing it anytime and anywhere. You can also use some tips and tricks to improve your game and have more fun. Whether you are a beginner or a pro, 8 Ball Pool has something for everyone. So what are you waiting for? Download 8 Ball Pool for Java today and join the millions of pool lovers around the world!

    -

    FAQs

    -

    Q1: Is 8 Ball Pool free to play on Java?

    -

    A1: Yes, 8 Ball Pool is free to play on Java. However, some features and items may require in-app purchases or real money.

    -

    Q2: How can I get more coins and items in 8 Ball Pool?

    -

    A2: You can get more coins and items in 8 Ball Pool by winning matches, completing missions, spinning the wheel, watching ads, or buying them with real money.

    -

    Q3: How can I improve my skills and ranking in 8 Ball Pool?

    -

    A3: You can improve your skills and ranking in 8 Ball Pool by practicing your shots in offline mode, learning from other players online, using the spin feature wisely, and joining clubs and tournaments.

    -

    Q4: What are the differences between 8 Ball Pool on Java and other platforms?

    -

A4: The main differences between 8 Ball Pool on Java and other platforms are the controls and interface. On the Java platform, you use the keypad to aim and shoot your cue ball, while on other platforms, you use the touch screen or the mouse. The interface on the Java platform is also simpler and less cluttered than on other platforms.

    -

    Q5: What are the best sources to download 8 Ball Pool for Java?

    -

    A5: The best sources to download 8 Ball Pool for Java are the official Miniclip website, Mobile9, Phoneky, or any other reputable website that offers safe and reliable downloads.

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Free Download IGNOU Solved Assignment for M.Com 2023 All Subjects and Languages.md b/spaces/1phancelerku/anime-remove-background/Free Download IGNOU Solved Assignment for M.Com 2023 All Subjects and Languages.md deleted file mode 100644 index 841ebd7d07c2d256d6a662f9baf4caceada9f1d9..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Free Download IGNOU Solved Assignment for M.Com 2023 All Subjects and Languages.md +++ /dev/null @@ -1,88 +0,0 @@ - -

    IGNOU Solved Assignment Free Download M.Com: A Complete Guide

    -

    If you are pursuing a Master of Commerce (M.Com) course from Indira Gandhi National Open University (IGNOU), you might be wondering how to get the solved assignments for your course. Solved assignments are important for completing your course and getting good marks in the exams. In this article, we will tell you everything you need to know about IGNOU solved assignment free download M.Com, including what is the M.Com course, why do you need solved assignments, and how to download them easily.

    -

    ignou solved assignment free download m.com


    Download File --->>> https://jinyurl.com/2uNQWO



    -

    What is IGNOU M.Com Course?

    -

    IGNOU M.Com course is a two-year postgraduate program that offers a comprehensive and advanced study of various aspects of commerce, such as accounting, finance, marketing, management, economics, and taxation. The course aims to develop the skills and knowledge of the students in the field of commerce and prepare them for various career opportunities in the public and private sectors.

    -

    Eligibility Criteria

    -

    To be eligible for admission to IGNOU M.Com course, you need to have a bachelor's degree or equivalent in any discipline from a recognized university. You also need to have at least 50% marks in aggregate or equivalent grade point average (GPA). However, there is a relaxation of 5% marks for SC/ST/OBC/PWD candidates.

    -

    Course Structure

    -

    The IGNOU M.Com course consists of 12 courses, out of which six are compulsory and six are elective. The compulsory courses cover the core subjects of commerce, such as business environment, financial management, marketing management, organizational behavior, and research methodology. The elective courses allow the students to choose from various specializations, such as accounting and finance, banking and insurance, business policy and corporate governance, international business operations, and management accounting and financial strategies.

    -

    Course Fee

    -

    The total fee for IGNOU M.Com course is Rs. 13,200/-, which is payable in two installments of Rs. 6,600/- each. The fee includes the registration fee, examination fee, study material fee, and other charges. The fee can be paid online through debit card/credit card/net banking or offline through demand draft/bank challan.

    -

    Why Do You Need IGNOU Solved Assignments?

    -

    IGNOU solved assignments are an essential part of your M.Com course. They are written assignments that you need to submit to your study center before the due date. They carry 30% weightage in your final marks and help you to improve your understanding of the course content.

    -

    Benefits of Solved Assignments

    -

    Some of the benefits of solved assignments are:

    -

    ignou m.com solved assignment 2022-2023 pdf free download
    -ignou m.com first year solved assignment free download
    -ignou m.com second year solved assignment free download
    -ignou m.com ibo solved assignment free download
    -ignou m.com mco solved assignment free download
    -ignou m.com books with solved assignment free download
    -ignou m.com assignment solution free download
    -ignou m.com assignment answer key free download
    -ignou m.com assignment question paper free download
    -ignou m.com assignment submission date 2022-2023
    -ignou m.com assignment status 2022-2023
    -ignou m.com assignment marks 2022-2023
    -ignou m.com assignment grade card 2022-2023
    -ignou m.com assignment result 2022-2023
    -ignou m.com assignment online submission 2022-2023
    -ignou m.com assignment online payment 2022-2023
    -ignou m.com assignment online verification 2022-2023
    -ignou m.com assignment online correction 2022-2023
    -ignou m.com assignment online help 2022-2023
    -ignou m.com assignment online support 2022-2023
    -ignou m.com solved assignment sample free download
    -ignou m.com solved assignment format free download
    -ignou m.com solved assignment guide free download
    -ignou m.com solved assignment tips free download
    -ignou m.com solved assignment tricks free download
    -ignou m.com solved assignment best site free download
    -ignou m.com solved assignment latest edition free download
    -ignou m.com solved assignment updated version free download
    -ignou m.com solved assignment quality content free download
    -ignou m.com solved assignment high score free download

    - -

    How to Submit Solved Assignments

    -

    To submit your solved assignments, you need to follow these steps:

    -
      -
    1. Download the assignment questions from the official website of IGNOU or collect them from your study center.
    2. Solve the assignments by referring to the study material and using your own words and examples.
    3. Write your name, enrollment number, course code, course title, assignment code, study center code, and date on the first page of each assignment.
    4. Make sure that your handwriting is neat and legible and that you follow the word limit and format specified in the assignment guidelines.
    5. Attach a copy of the assignment submission form with each assignment and keep a copy of the assignments and the form for your reference.
    6. Submit your assignments to your study center coordinator before the last date of submission.
    -

    How to Download IGNOU Solved Assignments for M.Com?

    -

    If you are looking for IGNOU solved assignments for M.Com, you can download them for free from various online sources. However, you should be careful about the quality and authenticity of the solved assignments and use them only as a reference and not as a substitute for your own work. To download IGNOU solved assignments for M.Com, you can follow these steps:

    -

    Visit the Official Website of IGNOU

    -

    The first step is to visit the official website of IGNOU at www.ignou.ac.in. Here, you can find the latest updates and notifications regarding the M.Com course and the assignments. You can also access the study material and other resources for your course.

    -

    Select Your Course and Session

    -

    The next step is to select your course and session from the drop-down menu on the homepage. You will be redirected to a new page where you can see the list of courses offered by IGNOU. Click on the M.Com course and then choose your session (July or January). You will see the details of the course, such as the syllabus, admission procedure, evaluation scheme, etc.

    -

    Download the Solved Assignments in PDF Format

    -

    The final step is to download the solved assignments in PDF format from the links provided on the same page. You can find the solved assignments for both compulsory and elective courses for each session. You can also download the assignment questions and guidelines from here. You can save the solved assignments on your device or print them out for your convenience.

    -

    Conclusion

    -

The IGNOU M.Com course is a great option for those who want to pursue higher studies in commerce and enhance their career prospects. However, to complete the course successfully, you need to submit good-quality solved assignments on time. You can download IGNOU solved assignments for M.Com free of charge from various online sources, but you should use them only as a reference and not copy them. You should also follow the instructions and guidelines given by IGNOU for writing and submitting your assignments. By doing so, you can improve your learning outcomes and achieve your academic goals.

    -

    FAQs

    -

    Here are some frequently asked questions about IGNOU solved assignment free download M.Com:

    -

    Q1: What is the last date of submission of IGNOU M.Com assignments?

    -

    A1: The last date of submission of IGNOU M.Com assignments depends on your session. For July session, it is 31st March of the next year. For January session, it is 30th September of the same year.

    -

    Q2: How many marks are required to pass IGNOU M.Com assignments?

    -

    A2: You need to score at least 40% marks in each assignment to pass it. The marks obtained in the assignments are added to your term-end examination marks to calculate your final grade.

    -

    Q3: How can I check my IGNOU M.Com assignment status?

    -

    A3: You can check your IGNOU M.Com assignment status online by visiting https://admission.ignou.ac.in/changeadmdata/StatusAssignment.ASP. Here, you need to enter your enrollment number, program code, and date of birth to view your assignment status.

    -

    Q4: Can I re-submit my IGNOU M.Com assignment if I am not satisfied with my marks?

    -

    A4: No, you cannot re-submit your IGNOU M.Com assignment once it is submitted. However, you can improve your marks by performing well in the term-end examinations.

    -

    Q5: Where can I get more information about IGNOU M.Com course and assignments?

    -

    A5: You can get more information about IGNOU M.Com course and assignments from http://www.ignou.ac.in/ignou/aboutignou/school/soms/programmes/detail/164/2. Here you can find the course objectives, outcomes, curriculum, faculty, and contact details. You can also download the prospectus and application form from here.

    -

I hope this article has helped you understand how to download free IGNOU solved assignments for M.Com. If you have any queries or suggestions, please feel free to leave a comment below. Thank you for reading and happy learning!

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/pndm/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/pndm/__init__.py deleted file mode 100644 index 132f733020f3f30109687fc5e7b1bd53ac83eed1..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/pndm/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# flake8: noqa -from .pipeline_pndm import PNDMPipeline diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_glove.sh b/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_glove.sh deleted file mode 100644 index 058599aa32c9c97e0e3fc0a9658822e9c904955a..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/dataset/prepare/download_glove.sh +++ /dev/null @@ -1,9 +0,0 @@ -echo -e "Downloading glove (in use by the evaluators)" -gdown --fuzzy https://drive.google.com/file/d/1bCeS6Sh_mLVTebxIgiUHgdPrroW06mb6/view?usp=sharing -rm -rf glove - -unzip glove.zip -echo -e "Cleaning\n" -rm glove.zip - -echo -e "Downloading done!" 
\ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/__init__.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/__init__.py deleted file mode 100644 index 737ddaad53b77109b1edc05015e41bfde7651476..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import librosa -import numpy as np -import pyloudnorm as pyln - -from text_to_speech.utils.audio.vad import trim_long_silences - - -def librosa_pad_lr(x, fsize, fshift, pad_sides=1): - '''compute right padding (final frame) or both sides padding (first and final frames) - ''' - assert pad_sides in (1, 2) - # return int(fsize // 2) - pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0] - if pad_sides == 1: - return 0, pad - else: - return pad // 2, pad // 2 + pad % 2 - - -def amp_to_db(x): - return 20 * np.log10(np.maximum(1e-5, x)) - - -def db_to_amp(x): - return 10.0 ** (x * 0.05) - - -def normalize(S, min_level_db): - return (S - min_level_db) / -min_level_db - - -def denormalize(D, min_level_db): - return (D * -min_level_db) + min_level_db - - -def librosa_wav2spec(wav_path, - fft_size=1024, - hop_size=256, - win_length=1024, - window="hann", - num_mels=80, - fmin=80, - fmax=-1, - eps=1e-6, - sample_rate=22050, - loud_norm=False, - trim_long_sil=False): - if isinstance(wav_path, str): - if trim_long_sil: - wav, _, _ = trim_long_silences(wav_path, sample_rate) - else: - wav, _ = librosa.core.load(wav_path, sr=sample_rate) - else: - wav = wav_path - - if loud_norm: - meter = pyln.Meter(sample_rate) # create BS.1770 meter - loudness = meter.integrated_loudness(wav) - wav = pyln.normalize.loudness(wav, loudness, -22.0) - if np.abs(wav).max() > 1: - wav = wav / np.abs(wav).max() - - # get amplitude spectrogram - x_stft = librosa.stft(wav, n_fft=fft_size, hop_length=hop_size, - win_length=win_length, window=window, pad_mode="constant") - linear_spc = np.abs(x_stft) # (n_bins, T) - - # get mel basis - fmin = 0 if fmin == -1 else fmin - fmax = sample_rate / 2 if fmax == -1 else fmax - mel_basis = librosa.filters.mel(sample_rate, fft_size, num_mels, fmin, fmax) - - # calculate mel spec - mel = mel_basis @ linear_spc - mel = np.log10(np.maximum(eps, mel)) # (n_mel_bins, T) - l_pad, r_pad = librosa_pad_lr(wav, fft_size, hop_size, 1) - wav = np.pad(wav, (l_pad, r_pad), mode='constant', constant_values=0.0) - wav = wav[:mel.shape[1] * hop_size] - - # log linear spec - linear_spc = np.log10(np.maximum(eps, linear_spc)) - return {'wav': wav, 'mel': mel.T, 'linear': linear_spc.T, 'mel_basis': mel_basis} diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/custom_openaimodel.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/custom_openaimodel.py deleted file mode 100644 index 4412eac52c294266dee21680f698b10a4614b4fa..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/custom_openaimodel.py +++ /dev/null @@ -1,368 +0,0 @@ -from abc import abstractmethod -from functools import partial -import math -from typing import Iterable - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from ldm.modules.attention import SpatialTransformer -from ldm.modules.diffusionmodules.openaimodel import 
convert_module_to_f16, convert_module_to_f32, AttentionPool2d, \ - TimestepBlock, TimestepEmbedSequential, Upsample, TransposedUpsample, Downsample, ResBlock, AttentionBlock, count_flops_attn, \ - QKVAttentionLegacy, QKVAttention - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. - """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - use_context_project=False, # custom text to audio support - use_context_attn=True # custom text to audio support - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None and not use_context_project: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' 
- from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - self.num_res_blocks = num_res_blocks - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for _ in range(num_res_blocks): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - 
num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(num_res_blocks + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim - ) - ) - if level and i == num_res_blocks: - out_ch = ch - layers.append( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - self.use_context_project = use_context_project - if use_context_project: - self.context_project = linear(context_dim, time_embed_dim) - self.use_context_attn = use_context_attn - - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None,**kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param context: conditioning plugged in via crossattn - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. 
- """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - - if self.num_classes is not None: - assert y.shape == (x.shape[0],) - emb = emb + self.label_emb(y) - - # For text-to-audio using global CLIP - if self.use_context_project: - context = self.context_project(context) - emb = emb + context.squeeze(1) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context if self.use_context_attn else None) - hs.append(h) - h = self.middle_block(h, emb, context if self.use_context_attn else None) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, context if self.use_context_attn else None) - h = h.type(x.dtype) - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/lstm.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/lstm.py deleted file mode 100644 index c0866175950c1ca4f6cca98649525e6481853bba..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/modules/lstm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class StreamableLSTM(nn.Module): - """LSTM without worrying about the hidden state, nor the layout of the data. - Expects input as convolutional layout. - """ - def __init__(self, dimension: int, num_layers: int = 2, skip: bool = True): - super().__init__() - self.skip = skip - self.lstm = nn.LSTM(dimension, dimension, num_layers) - - def forward(self, x): - x = x.permute(2, 0, 1) - y, _ = self.lstm(x) - if self.skip: - y = y + x - y = y.permute(1, 2, 0) - return y diff --git a/spaces/AgentVerse/agentVerse/agentverse/output_parser/__init__.py b/spaces/AgentVerse/agentVerse/agentverse/output_parser/__init__.py deleted file mode 100644 index 8b54c8edd671343e85508b1e33e8d40c12415cb8..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/output_parser/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from agentverse.registry import Registry - -output_parser_registry = Registry(name="OutputParserRegistry") - -from .output_parser import * \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/LineProgressCanvas.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/LineProgressCanvas.d.ts deleted file mode 100644 index 7d855d01aa179ebe72cd5319c2ba17858eb86634..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogresscanvas/LineProgressCanvas.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import LineProgressCanvas from "../../../plugins/lineprogresscanvas"; -export default LineProgressCanvas; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Factory.js deleted file mode 100644 index 9abb53984852529f5041ee26646e4b77b741539a..0000000000000000000000000000000000000000 --- 
a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Factory.js +++ /dev/null @@ -1,11 +0,0 @@ -import Perspective from './Perspective.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('perspective', function (gameObject, config) { - return new Perspective(gameObject, config); -}); - -SetValue(window, 'RexPlugins.UI.Perspective', Perspective); - -export default Perspective; \ No newline at end of file diff --git a/spaces/Alexxggs/ggvpnewen/README.md b/spaces/Alexxggs/ggvpnewen/README.md deleted file mode 100644 index d0822e29f6bc0a7e3313ed49e11dba1d20fd1f05..0000000000000000000000000000000000000000 --- a/spaces/Alexxggs/ggvpnewen/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ggvpnewen -emoji: ⚡ -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AlgoveraAI/ocean-marketplace/README.md b/spaces/AlgoveraAI/ocean-marketplace/README.md deleted file mode 100644 index d40fd60e032ea96989b9571b4eabbac346750755..0000000000000000000000000000000000000000 --- a/spaces/AlgoveraAI/ocean-marketplace/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -title: Ocean Marketplace -emoji: 🧺 -colorFrom: indigo -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/Alpaca233/SadTalker/scripts/download_models.sh b/spaces/Alpaca233/SadTalker/scripts/download_models.sh deleted file mode 100644 index 6898648b153a2826557693dabb5adaf13bee2645..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/scripts/download_models.sh +++ /dev/null @@ -1,32 +0,0 @@ -mkdir ./checkpoints - -# lagency download link -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/auido2exp_00300-model.pth -O ./checkpoints/auido2exp_00300-model.pth -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/auido2pose_00140-model.pth -O ./checkpoints/auido2pose_00140-model.pth -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/epoch_20.pth -O ./checkpoints/epoch_20.pth -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/facevid2vid_00189-model.pth.tar -O ./checkpoints/facevid2vid_00189-model.pth.tar -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/shape_predictor_68_face_landmarks.dat -O ./checkpoints/shape_predictor_68_face_landmarks.dat -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/wav2lip.pth -O ./checkpoints/wav2lip.pth -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/mapping_00229-model.pth.tar -O ./checkpoints/mapping_00229-model.pth.tar -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/mapping_00109-model.pth.tar -O ./checkpoints/mapping_00109-model.pth.tar -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/hub.zip -O ./checkpoints/hub.zip -# unzip -n ./checkpoints/hub.zip -d ./checkpoints/ - - -#### download the new links. -wget -nc https://github.com/OpenTalker/SadTalker/releases/download/v0.0.2-rc/mapping_00109-model.pth.tar -O ./checkpoints/mapping_00109-model.pth.tar -wget -nc https://github.com/OpenTalker/SadTalker/releases/download/v0.0.2-rc/mapping_00229-model.pth.tar -O ./checkpoints/mapping_00229-model.pth.tar -wget -nc https://github.com/OpenTalker/SadTalker/releases/download/v0.0.2-rc/SadTalker_V0.0.2_256.safetensors -O ./checkpoints/SadTalker_V0.0.2_256.safetensors -wget -nc https://github.com/OpenTalker/SadTalker/releases/download/v0.0.2-rc/SadTalker_V0.0.2_512.safetensors -O ./checkpoints/SadTalker_V0.0.2_512.safetensors - - -# wget -nc https://github.com/Winfredy/SadTalker/releases/download/v0.0.2/BFM_Fitting.zip -O ./checkpoints/BFM_Fitting.zip -# unzip -n ./checkpoints/BFM_Fitting.zip -d ./checkpoints/ - -### enhancer -mkdir -p ./gfpgan/weights -wget -nc https://github.com/xinntao/facexlib/releases/download/v0.1.0/alignment_WFLW_4HG.pth -O ./gfpgan/weights/alignment_WFLW_4HG.pth -wget -nc https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth -O ./gfpgan/weights/detection_Resnet50_Final.pth -wget -nc https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -O ./gfpgan/weights/GFPGANv1.4.pth -wget -nc https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth -O ./gfpgan/weights/parsing_parsenet.pth - diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/projectors/w_projector.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/projectors/w_projector.py deleted file mode 100644 index 12553b8c4450dc8bb605b0eab0f55d90ba2d051f..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/training/projectors/w_projector.py +++ /dev/null @@ -1,161 +0,0 @@ -# Copyright (c) 
2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Project given image to the latent space of pretrained network pickle.""" - -import copy -import wandb -import numpy as np -import torch -import torch.nn.functional as F -from tqdm import tqdm -from pti.pti_configs import global_config, hyperparameters -from utils import log_utils -import dnnlib - - -def project( - G, - # [C,H,W] and dynamic range [0,255], W & H must match G output resolution - target: torch.Tensor, - *, - num_steps=1000, - w_avg_samples=10000, - initial_learning_rate=0.01, - initial_noise_factor=0.05, - lr_rampdown_length=0.25, - lr_rampup_length=0.05, - noise_ramp_length=0.75, - regularize_noise_weight=1e5, - verbose=False, - device: torch.device, - use_wandb=False, - initial_w=None, - image_log_step=global_config.image_rec_result_log_snapshot, - w_name: str -): - print(target.shape, G.img_channels, G.img_resolution, G.img_resolution//2) - assert target.shape == ( - G.img_channels, G.img_resolution, G.img_resolution // 2) - - def logprint(*args): - if verbose: - print(*args) - - G = copy.deepcopy(G).eval().requires_grad_( - False).to(device).float() # type: ignore - - # Compute w stats. - logprint( - f'Computing W midpoint and stddev using {w_avg_samples} samples...') - z_samples = np.random.RandomState(123).randn(w_avg_samples, G.z_dim) - w_samples = G.mapping(torch.from_numpy( - z_samples).to(device), None) # [N, L, C] - w_samples = w_samples[:, :1, :].cpu( - ).numpy().astype(np.float32) # [N, 1, C] - w_avg = np.mean(w_samples, axis=0, keepdims=True) # [1, 1, C] - w_avg_tensor = torch.from_numpy(w_avg).to(global_config.device) - w_std = (np.sum((w_samples - w_avg) ** 2) / w_avg_samples) ** 0.5 - - start_w = initial_w if initial_w is not None else w_avg - - # Setup noise inputs. - noise_bufs = {name: buf for ( - name, buf) in G.synthesis.named_buffers() if 'noise_const' in name} - - # Load VGG16 feature detector. - url = 'https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt' - with dnnlib.util.open_url(url) as f: - vgg16 = torch.jit.load(f).eval().to(device) - - # Features for target image. - target_images = target.unsqueeze(0).to(device).to(torch.float32) - if target_images.shape[2] > 256: - target_images = F.interpolate( - target_images, size=(256, 256), mode='area') - target_features = vgg16( - target_images, resize_images=False, return_lpips=True) - - w_opt = torch.tensor(start_w, dtype=torch.float32, device=device, - requires_grad=True) # pylint: disable=not-callable - optimizer = torch.optim.Adam([w_opt] + list(noise_bufs.values()), betas=(0.9, 0.999), - lr=hyperparameters.first_inv_lr) - - # Init noise. - for buf in noise_bufs.values(): - buf[:] = torch.randn_like(buf) - buf.requires_grad = True - - for step in range(num_steps): - - # Learning rate schedule. 
- t = step / num_steps - w_noise_scale = w_std * initial_noise_factor * \ - max(0.0, 1.0 - t / noise_ramp_length) ** 2 - lr_ramp = min(1.0, (1.0 - t) / lr_rampdown_length) - lr_ramp = 0.5 - 0.5 * np.cos(lr_ramp * np.pi) - lr_ramp = lr_ramp * min(1.0, t / lr_rampup_length) - lr = initial_learning_rate * lr_ramp - for param_group in optimizer.param_groups: - param_group['lr'] = lr - - # Synth images from opt_w. - w_noise = torch.randn_like(w_opt) * w_noise_scale - ws = (w_opt + w_noise).repeat([1, G.mapping.num_ws, 1]) - synth_images = G.synthesis(ws, noise_mode='const', force_fp32=True) - - # Downsample image to 256x256 if it's larger than that. VGG was built for 224x224 images. - synth_images = (synth_images + 1) * (255 / 2) - if synth_images.shape[2] > 256: - synth_images = F.interpolate( - synth_images, size=(256, 256), mode='area') - - # Features for synth images. - synth_features = vgg16( - synth_images, resize_images=False, return_lpips=True) - dist = (target_features - synth_features).square().sum() - - # Noise regularization. - reg_loss = 0.0 - for v in noise_bufs.values(): - noise = v[None, None, :, :] # must be [1,1,H,W] for F.avg_pool2d() - while True: - reg_loss += (noise * torch.roll(noise, - shifts=1, dims=3)).mean() ** 2 - reg_loss += (noise * torch.roll(noise, - shifts=1, dims=2)).mean() ** 2 - if noise.shape[2] <= 8: - break - noise = F.avg_pool2d(noise, kernel_size=2) - loss = dist + reg_loss * regularize_noise_weight - if step % 10 == 0: - print("project loss", step, loss.data) - if step % image_log_step == 0: - with torch.no_grad(): - if use_wandb: - global_config.training_step += 1 - wandb.log({f'first projection _{w_name}': loss.detach( - ).cpu()}, step=global_config.training_step) - log_utils.log_image_from_w(w_opt.repeat( - [1, G.mapping.num_ws, 1]), G, w_name) - - # Step - optimizer.zero_grad(set_to_none=True) - loss.backward() - optimizer.step() - logprint( - f'step {step + 1:>4d}/{num_steps}: dist {dist:<4.2f} loss {float(loss):<5.2f}') - - # Normalize noise. - with torch.no_grad(): - for buf in noise_bufs.values(): - buf -= buf.mean() - buf *= buf.square().mean().rsqrt() - - del G - return w_opt.repeat([1, 18, 1]) diff --git a/spaces/Amrrs/DragGan-Inversion/torch_utils/training_stats.py b/spaces/Amrrs/DragGan-Inversion/torch_utils/training_stats.py deleted file mode 100644 index aa5837c2948372ecdb3e34076f4b3f4f42c81fef..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/torch_utils/training_stats.py +++ /dev/null @@ -1,283 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -# ---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -# Data type to use for initial per-tensor reduction. 
-_reduce_dtype = torch.float32 -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -# Device to use for multiprocess communication. None = single-process. -_sync_device = None -_sync_called = False # Has _sync() been called yet? -# Running counters on each device, updated by report(): name => device => torch.Tensor -_counters = dict() -# Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor -_cumulative = dict() - -# ---------------------------------------------------------------------------- - - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. - """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -# ---------------------------------------------------------------------------- - - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -# ---------------------------------------------------------------------------- - - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. 
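As an aside on the API this class documents: the sketch below shows minimal single-process use of `report()` and `Collector`. It assumes the surrounding repository layout (a `torch_utils` package importable alongside `dnnlib`), and the statistic name `Loss/train` is only an illustrative placeholder.

```python
# Minimal single-process sketch; assumes the surrounding repo so that
# `torch_utils` and its `dnnlib` dependency are importable.
from torch_utils import training_stats

# Create the collector first, so its first update() starts from a clean slate.
collector = training_stats.Collector(regex='Loss/.*')

# Broadcast scalars from anywhere in the training loop.
for loss in (0.9, 1.1, 1.0):
    training_stats.report('Loss/train', loss)

# Copy the internal counters to the user-visible state, then query them.
collector.update()
print(collector.num('Loss/train'))    # 3
print(collector.mean('Loss/train'))   # ~1.0
print(collector.std('Loss/train'))    # ~0.08
```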
- - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros( - [_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros( - [_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. 
- """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num( - name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -# ---------------------------------------------------------------------------- - - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros( - [_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros( - [_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. 
- return [(name, _cumulative[name]) for name in names] - -# ---------------------------------------------------------------------------- diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/colossalai/train_dreambooth_colossalai.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/colossalai/train_dreambooth_colossalai.py deleted file mode 100644 index 3d4466bf94b74c5b324b970913c142342871cf78..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/colossalai/train_dreambooth_colossalai.py +++ /dev/null @@ -1,673 +0,0 @@ -import argparse -import hashlib -import math -import os -from pathlib import Path - -import colossalai -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from colossalai.context.parallel_mode import ParallelMode -from colossalai.core import global_context as gpc -from colossalai.logging import disable_existing_loggers, get_dist_logger -from colossalai.nn.optimizer.gemini_optimizer import GeminiAdamOptimizer -from colossalai.nn.parallel.utils import get_static_torch_model -from colossalai.utils import get_current_device -from colossalai.utils.model.colo_init_context import ColoInitContext -from huggingface_hub import create_repo, upload_folder -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import AutoTokenizer, PretrainedConfig - -from diffusers import AutoencoderKL, DDPMScheduler, DiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler - - -disable_existing_loggers() -logger = get_dist_logger() - - -def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str): - text_encoder_config = PretrainedConfig.from_pretrained( - pretrained_model_name_or_path, - subfolder="text_encoder", - revision=args.revision, - ) - model_class = text_encoder_config.architectures[0] - - if model_class == "CLIPTextModel": - from transformers import CLIPTextModel - - return CLIPTextModel - elif model_class == "RobertaSeriesModelWithTransformation": - from diffusers.pipelines.alt_diffusion.modeling_roberta_series import RobertaSeriesModelWithTransformation - - return RobertaSeriesModelWithTransformation - else: - raise ValueError(f"{model_class} is not supported.") - - -def parse_args(input_args=None): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--instance_data_dir", - type=str, - default=None, - required=True, - help="A folder containing the training data of instance images.", - ) - parser.add_argument( - "--class_data_dir", - type=str, - default=None, - required=False, - help="A folder containing the training data of class images.", - ) - parser.add_argument( - "--instance_prompt", - type=str, - default="a photo of sks dog", - required=False, - help="The prompt with identifier specifying the instance", - ) - 
parser.add_argument( - "--class_prompt", - type=str, - default=None, - help="The prompt to specify images in the same class as provided instance images.", - ) - parser.add_argument( - "--with_prior_preservation", - default=False, - action="store_true", - help="Flag to add prior preservation loss.", - ) - parser.add_argument("--prior_loss_weight", type=float, default=1.0, help="The weight of prior preservation loss.") - parser.add_argument( - "--num_class_images", - type=int, - default=100, - help=( - "Minimal class images for prior preservation loss. If there are not enough images already present in" - " class_data_dir, additional images will be sampled with class_prompt." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--placement", - type=str, - default="cpu", - help="Placement Policy for Gemini. Valid when using colossalai as dist plan.", - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--train_batch_size", type=int, default=4, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--sample_batch_size", type=int, default=4, help="Batch size (per device) for sampling images." - ) - parser.add_argument("--num_train_epochs", type=int, default=1) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument("--save_steps", type=int, default=500, help="Save checkpoint every X updates steps.") - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=5e-6, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--use_8bit_adam", action="store_true", help="Whether or not to use 8-bit Adam from bitsandbytes." 
- ) - - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default=None, - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" - " 1.10.and an Nvidia Ampere GPU. Default to the value of accelerate config of the current system or the" - " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - if input_args is not None: - args = parser.parse_args(input_args) - else: - args = parser.parse_args() - - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.with_prior_preservation: - if args.class_data_dir is None: - raise ValueError("You must specify a data directory for class images.") - if args.class_prompt is None: - raise ValueError("You must specify prompt for class images.") - else: - if args.class_data_dir is not None: - logger.warning("You need not use --class_data_dir without --with_prior_preservation.") - if args.class_prompt is not None: - logger.warning("You need not use --class_prompt without --with_prior_preservation.") - - return args - - -class DreamBoothDataset(Dataset): - """ - A dataset to prepare the instance and class images with the prompts for fine-tuning the model. - It pre-processes the images and the tokenizes prompts. 
- """ - - def __init__( - self, - instance_data_root, - instance_prompt, - tokenizer, - class_data_root=None, - class_prompt=None, - size=512, - center_crop=False, - ): - self.size = size - self.center_crop = center_crop - self.tokenizer = tokenizer - - self.instance_data_root = Path(instance_data_root) - if not self.instance_data_root.exists(): - raise ValueError("Instance images root doesn't exists.") - - self.instance_images_path = list(Path(instance_data_root).iterdir()) - self.num_instance_images = len(self.instance_images_path) - self.instance_prompt = instance_prompt - self._length = self.num_instance_images - - if class_data_root is not None: - self.class_data_root = Path(class_data_root) - self.class_data_root.mkdir(parents=True, exist_ok=True) - self.class_images_path = list(self.class_data_root.iterdir()) - self.num_class_images = len(self.class_images_path) - self._length = max(self.num_class_images, self.num_instance_images) - self.class_prompt = class_prompt - else: - self.class_data_root = None - - self.image_transforms = transforms.Compose( - [ - transforms.Resize(size, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(size) if center_crop else transforms.RandomCrop(size), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def __len__(self): - return self._length - - def __getitem__(self, index): - example = {} - instance_image = Image.open(self.instance_images_path[index % self.num_instance_images]) - if not instance_image.mode == "RGB": - instance_image = instance_image.convert("RGB") - example["instance_images"] = self.image_transforms(instance_image) - example["instance_prompt_ids"] = self.tokenizer( - self.instance_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - if self.class_data_root: - class_image = Image.open(self.class_images_path[index % self.num_class_images]) - if not class_image.mode == "RGB": - class_image = class_image.convert("RGB") - example["class_images"] = self.image_transforms(class_image) - example["class_prompt_ids"] = self.tokenizer( - self.class_prompt, - padding="do_not_pad", - truncation=True, - max_length=self.tokenizer.model_max_length, - ).input_ids - - return example - - -class PromptDataset(Dataset): - "A simple dataset to prepare the prompts to generate class images on multiple GPUs." 
- - def __init__(self, prompt, num_samples): - self.prompt = prompt - self.num_samples = num_samples - - def __len__(self): - return self.num_samples - - def __getitem__(self, index): - example = {} - example["prompt"] = self.prompt - example["index"] = index - return example - - -# Gemini + ZeRO DDP -def gemini_zero_dpp(model: torch.nn.Module, placememt_policy: str = "auto"): - from colossalai.nn.parallel import GeminiDDP - - model = GeminiDDP( - model, device=get_current_device(), placement_policy=placememt_policy, pin_memory=True, search_range_mb=64 - ) - return model - - -def main(args): - if args.seed is None: - colossalai.launch_from_torch(config={}) - else: - colossalai.launch_from_torch(config={}, seed=args.seed) - - local_rank = gpc.get_local_rank(ParallelMode.DATA) - world_size = gpc.get_world_size(ParallelMode.DATA) - - if args.with_prior_preservation: - class_images_dir = Path(args.class_data_dir) - if not class_images_dir.exists(): - class_images_dir.mkdir(parents=True) - cur_class_images = len(list(class_images_dir.iterdir())) - - if cur_class_images < args.num_class_images: - torch_dtype = torch.float16 if get_current_device() == "cuda" else torch.float32 - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - torch_dtype=torch_dtype, - safety_checker=None, - revision=args.revision, - ) - pipeline.set_progress_bar_config(disable=True) - - num_new_images = args.num_class_images - cur_class_images - logger.info(f"Number of class images to sample: {num_new_images}.") - - sample_dataset = PromptDataset(args.class_prompt, num_new_images) - sample_dataloader = torch.utils.data.DataLoader(sample_dataset, batch_size=args.sample_batch_size) - - pipeline.to(get_current_device()) - - for example in tqdm( - sample_dataloader, - desc="Generating class images", - disable=not local_rank == 0, - ): - images = pipeline(example["prompt"]).images - - for i, image in enumerate(images): - hash_image = hashlib.sha1(image.tobytes()).hexdigest() - image_filename = class_images_dir / f"{example['index'][i] + cur_class_images}-{hash_image}.jpg" - image.save(image_filename) - - del pipeline - - # Handle the repository creation - if local_rank == 0: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load the tokenizer - if args.tokenizer_name: - logger.info(f"Loading tokenizer from {args.tokenizer_name}", ranks=[0]) - tokenizer = AutoTokenizer.from_pretrained( - args.tokenizer_name, - revision=args.revision, - use_fast=False, - ) - elif args.pretrained_model_name_or_path: - logger.info("Loading tokenizer from pretrained model", ranks=[0]) - tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="tokenizer", - revision=args.revision, - use_fast=False, - ) - # import correct text encoder class - text_encoder_cls = import_model_class_from_model_name_or_path(args.pretrained_model_name_or_path) - - # Load models and create wrapper for stable diffusion - - logger.info(f"Loading text_encoder from {args.pretrained_model_name_or_path}", ranks=[0]) - - text_encoder = text_encoder_cls.from_pretrained( - args.pretrained_model_name_or_path, - subfolder="text_encoder", - revision=args.revision, - ) - - logger.info(f"Loading AutoencoderKL from {args.pretrained_model_name_or_path}", ranks=[0]) - vae = AutoencoderKL.from_pretrained( - 
args.pretrained_model_name_or_path, - subfolder="vae", - revision=args.revision, - ) - - logger.info(f"Loading UNet2DConditionModel from {args.pretrained_model_name_or_path}", ranks=[0]) - with ColoInitContext(device=get_current_device()): - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision, low_cpu_mem_usage=False - ) - - vae.requires_grad_(False) - text_encoder.requires_grad_(False) - - if args.gradient_checkpointing: - unet.enable_gradient_checkpointing() - - if args.scale_lr: - args.learning_rate = args.learning_rate * args.train_batch_size * world_size - - unet = gemini_zero_dpp(unet, args.placement) - - # config optimizer for colossalai zero - optimizer = GeminiAdamOptimizer( - unet, lr=args.learning_rate, initial_scale=2**5, clipping_norm=args.max_grad_norm - ) - - # load noise_scheduler - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - - # prepare dataset - logger.info(f"Prepare dataset from {args.instance_data_dir}", ranks=[0]) - train_dataset = DreamBoothDataset( - instance_data_root=args.instance_data_dir, - instance_prompt=args.instance_prompt, - class_data_root=args.class_data_dir if args.with_prior_preservation else None, - class_prompt=args.class_prompt, - tokenizer=tokenizer, - size=args.resolution, - center_crop=args.center_crop, - ) - - def collate_fn(examples): - input_ids = [example["instance_prompt_ids"] for example in examples] - pixel_values = [example["instance_images"] for example in examples] - - # Concat class and instance examples for prior preservation. - # We do this to avoid doing two forward passes. - if args.with_prior_preservation: - input_ids += [example["class_prompt_ids"] for example in examples] - pixel_values += [example["class_images"] for example in examples] - - pixel_values = torch.stack(pixel_values) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - - input_ids = tokenizer.pad( - {"input_ids": input_ids}, - padding="max_length", - max_length=tokenizer.model_max_length, - return_tensors="pt", - ).input_ids - - batch = { - "input_ids": input_ids, - "pixel_values": pixel_values, - } - return batch - - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, collate_fn=collate_fn, num_workers=1 - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader)) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps, - num_training_steps=args.max_train_steps, - ) - weight_dtype = torch.float32 - if args.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif args.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move text_encode and vae to gpu. - # For mixed precision training we cast the text_encoder and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - vae.to(get_current_device(), dtype=weight_dtype) - text_encoder.to(get_current_device(), dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. 
- num_update_steps_per_epoch = math.ceil(len(train_dataloader)) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # Train! - total_batch_size = args.train_batch_size * world_size - - logger.info("***** Running training *****", ranks=[0]) - logger.info(f" Num examples = {len(train_dataset)}", ranks=[0]) - logger.info(f" Num batches each epoch = {len(train_dataloader)}", ranks=[0]) - logger.info(f" Num Epochs = {args.num_train_epochs}", ranks=[0]) - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}", ranks=[0]) - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}", ranks=[0]) - logger.info(f" Total optimization steps = {args.max_train_steps}", ranks=[0]) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(args.max_train_steps), disable=not local_rank == 0) - progress_bar.set_description("Steps") - global_step = 0 - - torch.cuda.synchronize() - for epoch in range(args.num_train_epochs): - unet.train() - for step, batch in enumerate(train_dataloader): - torch.cuda.reset_peak_memory_stats() - # Move batch to gpu - for key, value in batch.items(): - batch[key] = value.to(get_current_device(), non_blocking=True) - - # Convert images to latent space - optimizer.zero_grad() - - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - if args.with_prior_preservation: - # Chunk the noise and model_pred into two parts and compute the loss on each part separately. - model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0) - target, target_prior = torch.chunk(target, 2, dim=0) - - # Compute instance loss - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).mean() - - # Compute prior loss - prior_loss = F.mse_loss(model_pred_prior.float(), target_prior.float(), reduction="mean") - - # Add the prior loss to the instance loss. 
- loss = loss + args.prior_loss_weight * prior_loss - else: - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - optimizer.backward(loss) - - optimizer.step() - lr_scheduler.step() - logger.info(f"max GPU_mem cost is {torch.cuda.max_memory_allocated()/2**20} MB", ranks=[0]) - # Checks if the accelerator has performed an optimization step behind the scenes - progress_bar.update(1) - global_step += 1 - logs = { - "loss": loss.detach().item(), - "lr": optimizer.param_groups[0]["lr"], - } # lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - - if global_step % args.save_steps == 0: - torch.cuda.synchronize() - torch_unet = get_static_torch_model(unet) - if local_rank == 0: - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=torch_unet, - revision=args.revision, - ) - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - pipeline.save_pretrained(save_path) - logger.info(f"Saving model checkpoint to {save_path}", ranks=[0]) - if global_step >= args.max_train_steps: - break - - torch.cuda.synchronize() - unet = get_static_torch_model(unet) - - if local_rank == 0: - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - unet=unet, - revision=args.revision, - ) - - pipeline.save_pretrained(args.output_dir) - logger.info(f"Saving model checkpoint to {args.output_dir}", ranks=[0]) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_unclip_txt2img_to_image_variation.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_unclip_txt2img_to_image_variation.py deleted file mode 100644 index 07f8ebf2a3d012600a533dcfa642b609c31a3d8c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_unclip_txt2img_to_image_variation.py +++ /dev/null @@ -1,41 +0,0 @@ -import argparse - -from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection - -from diffusers import UnCLIPImageVariationPipeline, UnCLIPPipeline - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - - parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.") - - parser.add_argument( - "--txt2img_unclip", - default="kakaobrain/karlo-v1-alpha", - type=str, - required=False, - help="The pretrained txt2img unclip.", - ) - - args = parser.parse_args() - - txt2img = UnCLIPPipeline.from_pretrained(args.txt2img_unclip) - - feature_extractor = CLIPImageProcessor() - image_encoder = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-large-patch14") - - img2img = UnCLIPImageVariationPipeline( - decoder=txt2img.decoder, - text_encoder=txt2img.text_encoder, - tokenizer=txt2img.tokenizer, - text_proj=txt2img.text_proj, - feature_extractor=feature_extractor, - image_encoder=image_encoder, - super_res_first=txt2img.super_res_first, - super_res_last=txt2img.super_res_last, - decoder_scheduler=txt2img.decoder_scheduler, - super_res_scheduler=txt2img.super_res_scheduler, - ) - - img2img.save_pretrained(args.dump_path) diff --git a/spaces/Andy1621/UniFormerV2_mit_demo/uniformerv2.py b/spaces/Andy1621/UniFormerV2_mit_demo/uniformerv2.py deleted file mode 100644 index 
5ca7c3d511f4e3c2c8c6e89ace89e2ad8680d34f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/UniFormerV2_mit_demo/uniformerv2.py +++ /dev/null @@ -1,510 +0,0 @@ -#!/usr/bin/env python -import os -from collections import OrderedDict - -from timm.models.layers import DropPath -import torch -from torch import nn -from torch.nn import MultiheadAttention -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint - - -MODEL_PATH = './' -_MODELS = { - "ViT-B/16": os.path.join(MODEL_PATH, "vit_b16.pth"), - "ViT-L/14": os.path.join(MODEL_PATH, "vit_l14.pth"), - "ViT-L/14_336": os.path.join(MODEL_PATH, "vit_l14_336.pth"), -} - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(1.702 * x) - - -class Local_MHRA(nn.Module): - def __init__(self, d_model, dw_reduction=1.5, pos_kernel_size=3): - super().__init__() - - padding = pos_kernel_size // 2 - re_d_model = int(d_model // dw_reduction) - self.pos_embed = nn.Sequential( - nn.BatchNorm3d(d_model), - nn.Conv3d(d_model, re_d_model, kernel_size=1, stride=1, padding=0), - nn.Conv3d(re_d_model, re_d_model, kernel_size=(pos_kernel_size, 1, 1), stride=(1, 1, 1), padding=(padding, 0, 0), groups=re_d_model), - nn.Conv3d(re_d_model, d_model, kernel_size=1, stride=1, padding=0), - ) - - # init zero - print('Init zero for Conv in pos_emb') - nn.init.constant_(self.pos_embed[3].weight, 0) - nn.init.constant_(self.pos_embed[3].bias, 0) - - def forward(self, x): - return self.pos_embed(x) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, d_model, n_head, attn_mask=None, drop_path=0.0, - dw_reduction=1.5, no_lmhra=False, double_lmhra=True - ): - super().__init__() - - self.n_head = n_head - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - print(f'Drop path rate: {drop_path}') - - self.no_lmhra = no_lmhra - self.double_lmhra = double_lmhra - print(f'No L_MHRA: {no_lmhra}') - print(f'Double L_MHRA: {double_lmhra}') - if not no_lmhra: - self.lmhra1 = Local_MHRA(d_model, dw_reduction=dw_reduction) - if double_lmhra: - self.lmhra2 = Local_MHRA(d_model, dw_reduction=dw_reduction) - - # spatial - self.attn = MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x, T=8, use_checkpoint=False): - # x: 1+HW, NT, C - if not self.no_lmhra: - # Local MHRA - tmp_x = x[1:, :, :] - L, NT, C = tmp_x.shape - N = NT // T - H = W = int(L ** 0.5) - tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous() - tmp_x = tmp_x + self.drop_path(self.lmhra1(tmp_x)) - tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C) - x = torch.cat([x[:1, :, :], tmp_x], dim=0) - # MHSA - if use_checkpoint: - attn_out = checkpoint.checkpoint(self.attention, self.ln_1(x)) - x = x + self.drop_path(attn_out) - else: - x = x + self.drop_path(self.attention(self.ln_1(x))) - # Local MHRA - if not self.no_lmhra and self.double_lmhra: - tmp_x = x[1:, :, :] - tmp_x = tmp_x.view(H, W, N, T, C).permute(2, 4, 3, 0, 1).contiguous() - tmp_x = tmp_x + self.drop_path(self.lmhra2(tmp_x)) - tmp_x = tmp_x.view(N, C, T, L).permute(3, 0, 2, 1).contiguous().view(L, NT, C) - x = torch.cat([x[:1, :, :], tmp_x], dim=0) - # FFN - if use_checkpoint: - mlp_out = checkpoint.checkpoint(self.mlp, self.ln_2(x)) - x = x + self.drop_path(mlp_out) - else: - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Extractor(nn.Module): - def __init__( - self, d_model, n_head, attn_mask=None, - mlp_factor=4.0, dropout=0.0, drop_path=0.0, - ): - super().__init__() - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - print(f'Drop path rate: {drop_path}') - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = nn.LayerNorm(d_model) - d_mlp = round(mlp_factor * d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_mlp)), - ("gelu", QuickGELU()), - ("dropout", nn.Dropout(dropout)), - ("c_proj", nn.Linear(d_mlp, d_model)) - ])) - self.ln_2 = nn.LayerNorm(d_model) - self.ln_3 = nn.LayerNorm(d_model) - self.attn_mask = attn_mask - - # zero init - nn.init.xavier_uniform_(self.attn.in_proj_weight) - nn.init.constant_(self.attn.out_proj.weight, 0.) - nn.init.constant_(self.attn.out_proj.bias, 0.) - nn.init.xavier_uniform_(self.mlp[0].weight) - nn.init.constant_(self.mlp[-1].weight, 0.) - nn.init.constant_(self.mlp[-1].bias, 0.) 
- - def attention(self, x, y): - d_model = self.ln_1.weight.size(0) - q = (x @ self.attn.in_proj_weight[:d_model].T) + self.attn.in_proj_bias[:d_model] - - k = (y @ self.attn.in_proj_weight[d_model:-d_model].T) + self.attn.in_proj_bias[d_model:-d_model] - v = (y @ self.attn.in_proj_weight[-d_model:].T) + self.attn.in_proj_bias[-d_model:] - Tx, Ty, N = q.size(0), k.size(0), q.size(1) - q = q.view(Tx, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - k = k.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - v = v.view(Ty, N, self.attn.num_heads, self.attn.head_dim).permute(1, 2, 0, 3) - aff = (q @ k.transpose(-2, -1) / (self.attn.head_dim ** 0.5)) - - aff = aff.softmax(dim=-1) - out = aff @ v - out = out.permute(2, 0, 1, 3).flatten(2) - out = self.attn.out_proj(out) - return out - - def forward(self, x, y): - x = x + self.drop_path(self.attention(self.ln_1(x), self.ln_3(y))) - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Transformer(nn.Module): - def __init__( - self, width, layers, heads, attn_mask=None, backbone_drop_path_rate=0., - use_checkpoint=False, checkpoint_num=[0], t_size=8, dw_reduction=2, - no_lmhra=False, double_lmhra=True, - return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], - n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, - ): - super().__init__() - self.T = t_size - self.return_list = return_list - # backbone - b_dpr = [x.item() for x in torch.linspace(0, backbone_drop_path_rate, layers)] - self.resblocks = nn.ModuleList([ - ResidualAttentionBlock( - width, heads, attn_mask, - drop_path=b_dpr[i], - dw_reduction=dw_reduction, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - ) for i in range(layers) - ]) - # checkpoint - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.n_layers = n_layers - print(f'Use checkpoint: {self.use_checkpoint}') - print(f'Checkpoint number: {self.checkpoint_num}') - - # global block - assert n_layers == len(return_list) - if n_layers > 0: - self.temporal_cls_token = nn.Parameter(torch.zeros(1, 1, n_dim)) - self.dpe = nn.ModuleList([ - nn.Conv3d(n_dim, n_dim, kernel_size=3, stride=1, padding=1, bias=True, groups=n_dim) - for i in range(n_layers) - ]) - for m in self.dpe: - nn.init.constant_(m.bias, 0.) 
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, n_layers)] - self.dec = nn.ModuleList([ - Extractor( - n_dim, n_head, mlp_factor=mlp_factor, - dropout=mlp_dropout[i], drop_path=dpr[i], - ) for i in range(n_layers) - ]) - self.balance = nn.Parameter(torch.zeros((n_dim))) - self.sigmoid = nn.Sigmoid() - # projection - self.proj = nn.Sequential( - nn.LayerNorm(n_dim), - nn.Dropout(cls_dropout), - nn.Linear(n_dim, num_classes), - ) - - def forward(self, x): - T_down = self.T - L, NT, C = x.shape - N = NT // T_down - H = W = int((L - 1) ** 0.5) - - if self.n_layers > 0: - cls_token = self.temporal_cls_token.repeat(1, N, 1) - - j = -1 - for i, resblock in enumerate(self.resblocks): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = resblock(x, self.T, use_checkpoint=True) - else: - x = resblock(x, T_down) - if i in self.return_list: - j += 1 - tmp_x = x.clone() - tmp_x = tmp_x.view(L, N, T_down, C) - # dpe - _, tmp_feats = tmp_x[:1], tmp_x[1:] - tmp_feats = tmp_feats.permute(1, 3, 2, 0).reshape(N, C, T_down, H, W) - tmp_feats = self.dpe[j](tmp_feats).view(N, C, T_down, L - 1).permute(3, 0, 2, 1).contiguous() - tmp_x[1:] = tmp_x[1:] + tmp_feats - # global block - tmp_x = tmp_x.permute(2, 0, 1, 3).flatten(0, 1) # T * L, N, C - cls_token = self.dec[j](cls_token, tmp_x) - - if self.n_layers > 0: - weight = self.sigmoid(self.balance) - residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C - return self.proj((1 - weight) * cls_token[0, :, :] + weight * residual) - else: - residual = x.view(L, N, T_down, C)[0].mean(1) # L, N, T, C - return self.proj(residual) - - -class VisionTransformer(nn.Module): - def __init__( - self, - # backbone - input_resolution, patch_size, width, layers, heads, output_dim, backbone_drop_path_rate=0., - use_checkpoint=False, checkpoint_num=[0], t_size=8, kernel_size=3, dw_reduction=1.5, - temporal_downsample=True, - no_lmhra=-False, double_lmhra=True, - # global block - return_list=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], - n_layers=12, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, - ): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - padding = (kernel_size - 1) // 2 - if temporal_downsample: - self.conv1 = nn.Conv3d(3, width, (kernel_size, patch_size, patch_size), (2, patch_size, patch_size), (padding, 0, 0), bias=False) - t_size = t_size // 2 - else: - self.conv1 = nn.Conv3d(3, width, (1, patch_size, patch_size), (1, patch_size, patch_size), (0, 0, 0), bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer( - width, layers, heads, dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - use_checkpoint=use_checkpoint, checkpoint_num=checkpoint_num, t_size=t_size, - no_lmhra=no_lmhra, double_lmhra=double_lmhra, - return_list=return_list, n_layers=n_layers, n_dim=n_dim, n_head=n_head, - mlp_factor=mlp_factor, drop_path_rate=drop_path_rate, mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, num_classes=num_classes, - ) - - def forward(self, x): - x = self.conv1(x) # shape = [*, width, grid, grid] - N, C, T, H, W = x.shape - x = x.permute(0, 2, 3, 4, 1).reshape(N * T, H * W, C) - - x = torch.cat([self.class_embedding.to(x.dtype) + 
torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - out = self.transformer(x) - return out - - -def inflate_weight(weight_2d, time_dim, center=True): - print(f'Init center: {center}') - if center: - weight_3d = torch.zeros(*weight_2d.shape) - weight_3d = weight_3d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) - middle_idx = time_dim // 2 - weight_3d[:, :, middle_idx, :, :] = weight_2d - else: - weight_3d = weight_2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) - weight_3d = weight_3d / time_dim - return weight_3d - - -def load_state_dict(model, state_dict): - state_dict_3d = model.state_dict() - for k in state_dict.keys(): - if state_dict[k].shape != state_dict_3d[k].shape: - if len(state_dict_3d[k].shape) <= 2: - print(f'Ignore: {k}') - continue - print(f'Inflate: {k}, {state_dict[k].shape} => {state_dict_3d[k].shape}') - time_dim = state_dict_3d[k].shape[2] - state_dict[k] = inflate_weight(state_dict[k], time_dim) - model.load_state_dict(state_dict, strict=False) - - -def uniformerv2_b16( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[8, 9, 10, 11], - n_layers=4, n_dim=768, n_head=12, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=224, - patch_size=16, - width=768, - layers=12, - heads=12, - output_dim=512, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-B/16"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -def uniformerv2_l14( - pretrained=True, use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[20, 21, 22, 23], - n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=224, - patch_size=14, - width=1024, - layers=24, - heads=16, - output_dim=768, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - temporal_downsample=temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-L/14"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -def uniformerv2_l14_336( - pretrained=True, 
use_checkpoint=False, checkpoint_num=[0], - t_size=16, dw_reduction=1.5, backbone_drop_path_rate=0., - no_temporal_downsample=True, - no_lmhra=False, double_lmhra=True, - return_list=[20, 21, 22, 23], - n_layers=4, n_dim=1024, n_head=16, mlp_factor=4.0, drop_path_rate=0., - mlp_dropout=[0.5, 0.5, 0.5, 0.5], - cls_dropout=0.5, num_classes=400, -): - model = VisionTransformer( - input_resolution=336, - patch_size=14, - width=1024, - layers=24, - heads=16, - output_dim=768, - use_checkpoint=use_checkpoint, - checkpoint_num=checkpoint_num, - t_size=t_size, - dw_reduction=dw_reduction, - backbone_drop_path_rate=backbone_drop_path_rate, - no_temporal_downsample=no_temporal_downsample, - no_lmhra=no_lmhra, - double_lmhra=double_lmhra, - return_list=return_list, - n_layers=n_layers, - n_dim=n_dim, - n_head=n_head, - mlp_factor=mlp_factor, - drop_path_rate=drop_path_rate, - mlp_dropout=mlp_dropout, - cls_dropout=cls_dropout, - num_classes=num_classes, - ) - - if pretrained: - print('load pretrained weights') - state_dict = torch.load(_MODELS["ViT-L/14_336"], map_location='cpu') - load_state_dict(model, state_dict) - return model.eval() - - -if __name__ == '__main__': - import time - from fvcore.nn import FlopCountAnalysis - from fvcore.nn import flop_count_table - import numpy as np - - seed = 4217 - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - num_frames = 16 - - model = uniformerv2_l14( - pretrained=False, - t_size=num_frames, backbone_drop_path_rate=0., drop_path_rate=0., - dw_reduction=1.5, - no_lmhra=False, - temporal_downsample=True, - return_list=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], - mlp_dropout=[0.5]*16, - n_layers=16 - ) - print(model) - - flops = FlopCountAnalysis(model, torch.rand(1, 3, num_frames, 224, 224)) - s = time.time() - print(flop_count_table(flops, max_depth=1)) - print(time.time()-s) \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/regnet/README.md b/spaces/Andy1621/uniformer_image_detection/configs/regnet/README.md deleted file mode 100644 index 0ccd4077b0b00557d4bd214dcc855561a25ec144..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/regnet/README.md +++ /dev/null @@ -1,96 +0,0 @@ -# Designing Network Design Spaces - -## Introduction - -[BACKBONE] - -We implement RegNetX and RegNetY models in detection systems and provide their first results on Mask R-CNN, Faster R-CNN and RetinaNet. - -The pre-trained modles are converted from [model zoo of pycls](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md). - -```latex -@article{radosavovic2020designing, - title={Designing Network Design Spaces}, - author={Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming He and Piotr Dollár}, - year={2020}, - eprint={2003.13678}, - archivePrefix={arXiv}, - primaryClass={cs.CV} -} -``` - -## Usage - -To use a regnet model, there are two steps to do: - -1. Convert the model to ResNet-style supported by MMDetection -2. Modify backbone and neck in config accordingly - -### Convert model - -We already prepare models of FLOPs from 400M to 12G in our model zoo. - -For more general usage, we also provide script `regnet2mmdet.py` in the tools directory to convert the key of models pretrained by [pycls](https://github.com/facebookresearch/pycls/) to -ResNet-style checkpoints used in MMDetection. 
- -```bash -python -u tools/model_converters/regnet2mmdet.py ${PRETRAIN_PATH} ${STORE_PATH} -``` - -This script convert model from `PRETRAIN_PATH` and store the converted model in `STORE_PATH`. - -### Modify config - -The users can modify the config's `depth` of backbone and corresponding keys in `arch` according to the configs in the [pycls model zoo](https://github.com/facebookresearch/pycls/blob/master/MODEL_ZOO.md). -The parameter `in_channels` in FPN can be found in the Figure 15 & 16 of the paper (`wi` in the legend). -This directory already provides some configs with their performance, using RegNetX from 800MF to 12GF level. -For other pre-trained models or self-implemented regnet models, the users are responsible to check these parameters by themselves. - -**Note**: Although Fig. 15 & 16 also provide `w0`, `wa`, `wm`, `group_w`, and `bot_mul` for `arch`, they are quantized thus inaccurate, using them sometimes produces different backbone that does not match the key in the pre-trained model. - -## Results - -### Mask R-CNN - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :---------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: | -| [R-50-FPN](../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py)| pytorch | 1x | 4.4 | 12.0 | 38.2 | 34.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205_050542.log.json) | -|[RegNetX-3.2GF-FPN](./mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py)| pytorch | 1x |5.0 ||40.3|36.6|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141-2a9d1814.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141.log.json) | -|[RegNetX-4.0GF-FPN](./mask_rcnn_regnetx-4GF_fpn_1x_coco.py)| pytorch | 1x |5.5||41.5|37.4|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-4GF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-4GF_fpn_1x_coco/mask_rcnn_regnetx-4GF_fpn_1x_coco_20200517_180217-32e9c92d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-4GF_fpn_1x_coco/mask_rcnn_regnetx-4GF_fpn_1x_coco_20200517_180217.log.json) | -| [R-101-FPN](../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py)| pytorch | 1x | 6.4 | 10.3 | 40.0 | 36.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r101_fpn_1x_coco/mask_rcnn_r101_fpn_1x_coco_20200204-1efe0ed5.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r101_fpn_1x_coco/mask_rcnn_r101_fpn_1x_coco_20200204_144809.log.json) | -|[RegNetX-6.4GF-FPN](./mask_rcnn_regnetx-6.4GF_fpn_1x_coco.py)| pytorch | 1x |6.1 ||41.0|37.1|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-6.4GF_fpn_1x_coco.py) | 
[model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-6.4GF_fpn_1x_coco/mask_rcnn_regnetx-6.4GF_fpn_1x_coco_20200517_180439-3a7aae83.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-6.4GF_fpn_1x_coco/mask_rcnn_regnetx-6.4GF_fpn_1x_coco_20200517_180439.log.json) | -| [X-101-32x4d-FPN](../mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py) | pytorch | 1x | 7.6 | 9.4 | 41.9 | 37.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco/mask_rcnn_x101_32x4d_fpn_1x_coco_20200205-478d0b67.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_x101_32x4d_fpn_1x_coco/mask_rcnn_x101_32x4d_fpn_1x_coco_20200205_034906.log.json) | -|[RegNetX-8.0GF-FPN](./mask_rcnn_regnetx-8GF_fpn_1x_coco.py)| pytorch | 1x |6.4 ||41.7|37.5|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-8GF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-8GF_fpn_1x_coco/mask_rcnn_regnetx-8GF_fpn_1x_coco_20200517_180515-09daa87e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-8GF_fpn_1x_coco/mask_rcnn_regnetx-8GF_fpn_1x_coco_20200517_180515.log.json) | -|[RegNetX-12GF-FPN](./mask_rcnn_regnetx-12GF_fpn_1x_coco.py)| pytorch | 1x |7.4 ||42.2|38|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-12GF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-12GF_fpn_1x_coco/mask_rcnn_regnetx-12GF_fpn_1x_coco_20200517_180552-b538bd8b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-12GF_fpn_1x_coco/mask_rcnn_regnetx-12GF_fpn_1x_coco_20200517_180552.log.json) | -|[RegNetX-3.2GF-FPN-DCN-C3-C5](./mask_rcnn_regnetx-3.2GF_fpn_mdconv_c3-c5_1x_coco.py)| pytorch | 1x |5.0 ||40.3|36.6|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mdconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_mdconv_c3-c5_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_mdconv_c3-c5_1x_coco_20200520_172726-75f40794.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_mdconv_c3-c5_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_mdconv_c3-c5_1x_coco_20200520_172726.log.json) | - -### Faster R-CNN - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :---------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| [R-50-FPN](../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py)| pytorch | 1x | 4.0 | 18.2 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) | -|[RegNetX-3.2GF-FPN](./faster_rcnn_regnetx-3.2GF_fpn_1x_coco.py)| pytorch | 1x | 4.5||39.9|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_1x_coco.py) | 
[model](http://download.openmmlab.com/mmdetection/v2.0/regnet/faster_rcnn_regnetx-3.2GF_fpn_1x_coco/faster_rcnn_regnetx-3.2GF_fpn_1x_coco_20200517_175927-126fd9bf.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/faster_rcnn_regnetx-3.2GF_fpn_1x_coco/faster_rcnn_regnetx-3.2GF_fpn_1x_coco_20200517_175927.log.json) | -|[RegNetX-3.2GF-FPN](./faster_rcnn_regnetx-3.2GF_fpn_2x_coco.py)| pytorch | 2x | 4.5||41.1|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/faster_rcnn_regnetx-3.2GF_fpn_2x_coco/faster_rcnn_regnetx-3.2GF_fpn_2x_coco_20200520_223955-e2081918.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/faster_rcnn_regnetx-3.2GF_fpn_2x_coco/faster_rcnn_regnetx-3.2GF_fpn_2x_coco_20200520_223955.log.json) | - -### RetinaNet - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :---------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| [R-50-FPN](../retinanet/retinanet_r50_fpn_1x_coco.py) | pytorch | 1x | 3.8 | 16.6 | 36.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet/retinanet_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_1x_coco/retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_1x_coco/retinanet_r50_fpn_1x_coco_20200130_002941.log.json) | -|[RegNetX-800MF-FPN](./retinanet_regnetx-800MF_fpn_1x_coco.py)| pytorch | 1x |2.5||35.6|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/retinanet_regnetx-800MF_fpn_1x_coco/retinanet_regnetx-800MF_fpn_1x_coco_20200517_191403-f6f91d10.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/retinanet_regnetx-800MF_fpn_1x_coco/retinanet_regnetx-800MF_fpn_1x_coco_20200517_191403.log.json) | -|[RegNetX-1.6GF-FPN](./retinanet_regnetx-1.6GF_fpn_1x_coco.py)| pytorch | 1x |3.3||37.3|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco/retinanet_regnetx-1.6GF_fpn_1x_coco_20200517_191403-37009a9d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco/retinanet_regnetx-1.6GF_fpn_1x_coco_20200517_191403.log.json) | -|[RegNetX-3.2GF-FPN](./retinanet_regnetx-3.2GF_fpn_1x_coco.py)| pytorch | 1x |4.2 ||39.1|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/retinanet_regnetx-3.2GF_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/retinanet_regnetx-3.2GF_fpn_1x_coco/retinanet_regnetx-3.2GF_fpn_1x_coco_20200520_163141-cb1509e8.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/retinanet_regnetx-3.2GF_fpn_1x_coco/retinanet_regnetx-3.2GF_fpn_1x_coco_20200520_163141.log.json) | - -### Pre-trained models - -We also train some models with longer schedules and multi-scale training. The users could finetune them for downstream tasks. 
- -| Method | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-----: | :-----: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: | -|Faster RCNN |[RegNetX-3.2GF-FPN](./faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py)| pytorch | 3x |5.0 ||42.2|-|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco_20200520_224253-bf85ae3e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco/faster_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco_20200520_224253.log.json) | -|Mask RCNN |[RegNetX-3.2GF-FPN](./mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py)| pytorch | 3x |5.0 ||43.1|38.7|[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco_20200521_202221-99879813.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco/mask_rcnn_regnetx-3.2GF_fpn_mstrain_3x_coco_20200521_202221.log.json) | - -### Notice - -1. The models are trained using a different weight decay, i.e., `weight_decay=5e-5` according to the setting in ImageNet training. This brings improvement of at least 0.7 AP absolute but does not improve the model using ResNet-50. -2. RetinaNets using RegNets are trained with learning rate 0.02 with gradient clip. We find that using learning rate 0.02 could improve the results by at least 0.7 AP absolute and gradient clip is necessary to stabilize the training. However, this does not improve the performance of ResNet-50-FPN RetinaNet. 
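For reference, below is a minimal sketch of how notes 1 and 2 could be written as an mmdetection-style config override. The `weight_decay=5e-5` and `lr=0.02` values come from the notes above; the choice of base config and the exact `grad_clip` parameters are assumptions and should be checked against the released configs.

```python
# Illustrative override only -- grad_clip values are assumed, not taken from this README.
_base_ = './retinanet_regnetx-3.2GF_fpn_1x_coco.py'

# Note 1: RegNet backbones use the ImageNet-style weight decay (5e-5).
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=5e-5)

# Note 2: gradient clipping is needed to stabilize RetinaNet + RegNet training.
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```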
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x512_80k_ade20k.py deleted file mode 100644 index 94151004ea88394373cf8f135b065d5056b11179..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './encnet_r50-d8_512x512_80k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/README.md b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/README.md deleted file mode 100644 index 270781b48b99fe13628f3b3f81fb7457a9052bc9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/README.md +++ /dev/null @@ -1,85 +0,0 @@ -# Fully Convolutional Networks for Semantic Segmentation - -## Introduction - - - -```latex -@article{shelhamer2017fully, - title={Fully convolutional networks for semantic segmentation}, - author={Shelhamer, Evan and Long, Jonathan and Darrell, Trevor}, - journal={IEEE transactions on pattern analysis and machine intelligence}, - volume={39}, - number={4}, - pages={640--651}, - year={2017}, - publisher={IEEE Trans Pattern Anal Mach Intell} -} -``` - -## Results and models - -### Cityscapes - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | ---------- | --------- | ------: | -------- | -------------- | ----: | ------------: | -------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | R-50-D8 | 512x1024 | 40000 | 5.7 | 4.17 | 72.25 | 73.36 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x1024_40k_cityscapes/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x1024_40k_cityscapes/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608.log.json) | -| FCN | R-101-D8 | 512x1024 | 40000 | 9.2 | 2.66 | 75.45 | 76.58 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x1024_40k_cityscapes/fcn_r101-d8_512x1024_40k_cityscapes_20200604_181852-a883d3a1.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x1024_40k_cityscapes/fcn_r101-d8_512x1024_40k_cityscapes_20200604_181852.log.json) | -| FCN | R-50-D8 | 769x769 | 40000 | 6.5 | 1.80 | 71.47 | 72.54 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_769x769_40k_cityscapes/fcn_r50-d8_769x769_40k_cityscapes_20200606_113104-977b5d02.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_769x769_40k_cityscapes/fcn_r50-d8_769x769_40k_cityscapes_20200606_113104.log.json) | -| FCN | R-101-D8 | 769x769 | 40000 | 10.4 | 1.19 | 73.93 | 75.14 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_769x769_40k_cityscapes/fcn_r101-d8_769x769_40k_cityscapes_20200606_113208-7d4ab69c.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_769x769_40k_cityscapes/fcn_r101-d8_769x769_40k_cityscapes_20200606_113208.log.json) | -| FCN | R-18-D8 | 512x1024 | 80000 | 1.7 | 14.65 | 71.11 | 72.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r18-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18-d8_512x1024_80k_cityscapes/fcn_r18-d8_512x1024_80k_cityscapes_20201225_021327-6c50f8b4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18-d8_512x1024_80k_cityscapes/fcn_r18-d8_512x1024_80k_cityscapes-20201225_021327.log.json) | -| FCN | R-50-D8 | 512x1024 | 80000 | - | | 73.61 | 74.24 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x1024_80k_cityscapes/fcn_r50-d8_512x1024_80k_cityscapes_20200606_113019-03aa804d.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x1024_80k_cityscapes/fcn_r50-d8_512x1024_80k_cityscapes_20200606_113019.log.json) | -| FCN | R-101-D8 | 512x1024 | 80000 | - | - | 75.13 | 75.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x1024_80k_cityscapes/fcn_r101-d8_512x1024_80k_cityscapes_20200606_113038-3fb937eb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x1024_80k_cityscapes/fcn_r101-d8_512x1024_80k_cityscapes_20200606_113038.log.json) | -| FCN | R-18-D8 | 769x769 | 80000 | 1.9 | 6.40 | 70.80 | 73.16 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r18-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18-d8_769x769_80k_cityscapes/fcn_r18-d8_769x769_80k_cityscapes_20201225_021451-9739d1b8.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18-d8_769x769_80k_cityscapes/fcn_r18-d8_769x769_80k_cityscapes-20201225_021451.log.json) | -| FCN | R-50-D8 | 769x769 | 80000 | - | - | 72.64 | 73.32 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_769x769_80k_cityscapes/fcn_r50-d8_769x769_80k_cityscapes_20200606_195749-f5caeabc.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_769x769_80k_cityscapes/fcn_r50-d8_769x769_80k_cityscapes_20200606_195749.log.json) | -| FCN | R-101-D8 | 769x769 | 80000 | - | - | 75.52 | 76.61 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_769x769_80k_cityscapes/fcn_r101-d8_769x769_80k_cityscapes_20200606_214354-45cbac68.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_769x769_80k_cityscapes/fcn_r101-d8_769x769_80k_cityscapes_20200606_214354.log.json) | -| FCN | R-18b-D8 | 512x1024 | 80000 | 1.6 | 16.74 | 70.24 | 72.77 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r18b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18b-d8_512x1024_80k_cityscapes/fcn_r18b-d8_512x1024_80k_cityscapes_20201225_230143-92c0f445.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18b-d8_512x1024_80k_cityscapes/fcn_r18b-d8_512x1024_80k_cityscapes-20201225_230143.log.json) | -| FCN | R-50b-D8 | 512x1024 | 80000 | 5.6 | 4.20 | 75.65 | 77.59 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50b-d8_512x1024_80k_cityscapes/fcn_r50b-d8_512x1024_80k_cityscapes_20201225_094221-82957416.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50b-d8_512x1024_80k_cityscapes/fcn_r50b-d8_512x1024_80k_cityscapes-20201225_094221.log.json) | -| FCN | R-101b-D8 | 512x1024 | 80000 | 9.1 | 2.73 | 77.37 | 78.77 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101b-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101b-d8_512x1024_80k_cityscapes/fcn_r101b-d8_512x1024_80k_cityscapes_20201226_160213-4543858f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101b-d8_512x1024_80k_cityscapes/fcn_r101b-d8_512x1024_80k_cityscapes-20201226_160213.log.json) | -| FCN | R-18b-D8 | 769x769 | 80000 | 1.7 | 6.70 | 69.66 | 72.07 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r18b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18b-d8_769x769_80k_cityscapes/fcn_r18b-d8_769x769_80k_cityscapes_20201226_004430-32d504e5.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r18b-d8_769x769_80k_cityscapes/fcn_r18b-d8_769x769_80k_cityscapes-20201226_004430.log.json) | -| FCN | R-50b-D8 | 769x769 | 80000 | 6.3 | 1.82 | 73.83 | 76.60 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50b-d8_769x769_80k_cityscapes/fcn_r50b-d8_769x769_80k_cityscapes_20201225_094223-94552d38.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50b-d8_769x769_80k_cityscapes/fcn_r50b-d8_769x769_80k_cityscapes-20201225_094223.log.json) | -| FCN | R-101b-D8 | 769x769 | 80000 | 10.3 | 1.15 | 77.02 | 78.67 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101b-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101b-d8_769x769_80k_cityscapes/fcn_r101b-d8_769x769_80k_cityscapes_20201226_170012-82be37e2.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101b-d8_769x769_80k_cityscapes/fcn_r101b-d8_769x769_80k_cityscapes-20201226_170012.log.json) | -| FCN-D6 | R-50-D16 | 512x1024 | 40000 | 3.4 | 10.22 | 77.06 | 78.85 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r50-d16_512x1024_40k_cityscapes.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_512x1024_40k_cityscapes/fcn_d6_r50-d16_512x1024_40k_cityscapes-98d5d1bc.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_512x1024_40k_cityscapes/fcn_d6_r50-d16_512x1024_40k_cityscapes-20210305_130133.log.json) | -| FCN-D6 | R-50-D16 | 512x1024 | 80000 | - | 10.35 | 77.27 | 78.88 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes/fcn_d6_r50-d16_512x1024_40k_cityscapes-98d5d1bc.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes/fcn_d6_r50-d16_512x1024_80k_cityscapes-20210306_115604.log.json) | -| FCN-D6 | R-50-D16 | 769x769 | 40000 | 3.7 | 4.17 | 76.82 | 78.22 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes/fcn_d6_r50-d16_769x769_40k_cityscapes-1aab18ed.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_769x769_40k_cityscapes/fcn_d6_r50-d16_769x769_40k_cityscapes-20210305_185744.log.json) | -| FCN-D6 | R-50-D16 | 769x769 | 80000 | - | 4.15 | 77.04 | 78.40 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r50-d16_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_769x769_80k_cityscapes/fcn_d6_r50-d16_769x769_80k_cityscapes-109d88eb.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50-d16_769x769_80k_cityscapes/fcn_d6_r50-d16_769x769_80k_cityscapes-20210305_200413.log.json) | -| FCN-D6 | R-101-D16 | 512x1024 | 40000 | 4.5 | 8.04 | 77.36 | 79.18 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes/fcn_d6_r101-d16_512x1024_40k_cityscapes-9cf2b450.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_512x1024_40k_cityscapes/fcn_d6_r101-d16_512x1024_40k_cityscapes-20210305_130337.log.json) | -| FCN-D6 | R-101-D16 | 512x1024 | 80000 | - | 8.26 | 78.46 | 80.42 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r101-d16_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_512x1024_80k_cityscapes/fcn_d6_r101-d16_512x1024_80k_cityscapes-cb336445.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_512x1024_80k_cityscapes/fcn_d6_r101-d16_512x1024_80k_cityscapes-20210308_102747.log.json) | -| FCN-D6 | R-101-D16 | 769x769 | 40000 | 5.0 | 3.12 | 77.28 | 78.95 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r101-d16_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_769x769_40k_cityscapes/fcn_d6_r101-d16_769x769_40k_cityscapes-60b114e9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_769x769_40k_cityscapes/fcn_d6_r101-d16_769x769_40k_cityscapes-20210308_102453.log.json) | -| FCN-D6 | R-101-D16 | 769x769 | 80000 | - | 3.21 | 78.06 | 79.58 | 
[config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r101-d16_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_769x769_80k_cityscapes/fcn_d6_r101-d16_769x769_80k_cityscapes-e33adc4f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101-d16_769x769_80k_cityscapes/fcn_d6_r101-d16_769x769_80k_cityscapes-20210306_120016.log.json) | -| FCN-D6 | R-50b-D16 | 512x1024 | 80000 | 3.2 | 10.16 | 76.99 | 79.03 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r50b_d16_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50b_d16_512x1024_80k_cityscapes/fcn_d6_r50b_d16_512x1024_80k_cityscapes-6a0b62e9.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50b_d16_512x1024_80k_cityscapes/fcn_d6_r50b_d16_512x1024_80k_cityscapes-20210311_125550.log.json) | -| FCN-D6 | R-50b-D16 | 769x769 | 80000 | 3.6 | 4.17 | 76.86 | 78.52 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r50b_d16_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50b_d16_769x769_80k_cityscapes/fcn_d6_r50b_d16_769x769_80k_cityscapes-d665f231.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r50b_d16_769x769_80k_cityscapes/fcn_d6_r50b_d16_769x769_80k_cityscapes-20210311_131012.log.json) | -| FCN-D6 | R-101b-D16 | 512x1024 | 80000 | 4.3 | 8.46 | 77.72 | 79.53 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r101b_d16_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101b_d16_512x1024_80k_cityscapes/fcn_d6_r101b_d16_512x1024_80k_cityscapes-3f2eb5b4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101b_d16_512x1024_80k_cityscapes/fcn_d6_r101b_d16_512x1024_80k_cityscapes-20210311_144305.log.json) | -| FCN-D6 | R-101b-D16 | 769x769 | 80000 | 4.8 | 3.32 | 77.34 | 78.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_d6_r101b_d16_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101b_d16_769x769_80k_cityscapes/fcn_d6_r101b_d16_769x769_80k_cityscapes-c4d8bfbc.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_d6_r101b_d16_769x769_80k_cityscapes/fcn_d6_r101b_d16_769x769_80k_cityscapes-20210311_154527.log.json) | - -### ADE20K - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ----------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| FCN | R-50-D8 | 512x512 | 80000 | 8.5 | 23.49 | 35.94 | 37.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_80k_ade20k/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth) | 
[log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_80k_ade20k/fcn_r50-d8_512x512_80k_ade20k_20200614_144016.log.json) | -| FCN | R-101-D8 | 512x512 | 80000 | 12 | 14.78 | 39.61 | 40.83 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_80k_ade20k/fcn_r101-d8_512x512_80k_ade20k_20200615_014143-bc1809f7.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_80k_ade20k/fcn_r101-d8_512x512_80k_ade20k_20200615_014143.log.json) | -| FCN | R-50-D8 | 512x512 | 160000 | - | - | 36.10 | 38.08 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_160k_ade20k/fcn_r50-d8_512x512_160k_ade20k_20200615_100713-4edbc3b4.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_160k_ade20k/fcn_r50-d8_512x512_160k_ade20k_20200615_100713.log.json) | -| FCN | R-101-D8 | 512x512 | 160000 | - | - | 39.91 | 41.40 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_160k_ade20k/fcn_r101-d8_512x512_160k_ade20k_20200615_105816-fd192bd5.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_160k_ade20k/fcn_r101-d8_512x512_160k_ade20k_20200615_105816.log.json) | - -### Pascal VOC 2012 + Aug - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| FCN | R-50-D8 | 512x512 | 20000 | 5.7 | 23.28 | 67.08 | 69.94 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_20k_voc12aug/fcn_r50-d8_512x512_20k_voc12aug_20200617_010715-52dc5306.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_20k_voc12aug/fcn_r50-d8_512x512_20k_voc12aug_20200617_010715.log.json) | -| FCN | R-101-D8 | 512x512 | 20000 | 9.2 | 14.81 | 71.16 | 73.57 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_20k_voc12aug/fcn_r101-d8_512x512_20k_voc12aug_20200617_010842-0bb4e798.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_20k_voc12aug/fcn_r101-d8_512x512_20k_voc12aug_20200617_010842.log.json) | -| FCN | R-50-D8 | 512x512 | 40000 | - | - | 66.97 | 69.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r50-d8_512x512_40k_voc12aug.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_40k_voc12aug/fcn_r50-d8_512x512_40k_voc12aug_20200613_161222-5e2dbf40.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r50-d8_512x512_40k_voc12aug/fcn_r50-d8_512x512_40k_voc12aug_20200613_161222.log.json) | -| FCN | R-101-D8 | 512x512 | 40000 | - | - | 69.91 | 72.38 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_40k_voc12aug/fcn_r101-d8_512x512_40k_voc12aug_20200613_161240-4c8bcefd.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_512x512_40k_voc12aug/fcn_r101-d8_512x512_40k_voc12aug_20200613_161240.log.json) | - -### Pascal Context - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| FCN | R-101-D8 | 480x480 | 40000 | - | 9.93 | 44.43 | 45.63 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_40k_pascal_context/fcn_r101-d8_480x480_40k_pascal_context-20210421_154757-b5e97937.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_40k_pascal_context/fcn_r101-d8_480x480_40k_pascal_context-20210421_154757.log.json) | -| FCN | R-101-D8 | 480x480 | 80000 | - | - | 44.13 | 45.26 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_480x480_80k_pascal_context.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_80k_pascal_context/fcn_r101-d8_480x480_80k_pascal_context-20210421_163310-4711813f.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_80k_pascal_context/fcn_r101-d8_480x480_80k_pascal_context-20210421_163310.log.json) | - -### Pascal Context 59 - -| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download | -| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| FCN | R-101-D8 | 480x480 | 40000 | - | - | 48.42 | 50.4 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_480x480_40k_pascal_context_59.py) | 
[model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_40k_pascal_context_59/fcn_r101-d8_480x480_40k_pascal_context_59_20210415_230724-8cf83682.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_40k_pascal_context_59/fcn_r101-d8_480x480_40k_pascal_context_59-20210415_230724.log.json) | -| FCN | R-101-D8 | 480x480 | 80000 | - | - | 49.35 | 51.38 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/fcn/fcn_r101-d8_480x480_80k_pascal_context_59.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_80k_pascal_context_59/fcn_r101-d8_480x480_80k_pascal_context_59_20210416_110804-9a6f2c94.pth) | [log](https://download.openmmlab.com/mmsegmentation/v0.5/fcn/fcn_r101-d8_480x480_80k_pascal_context_59/fcn_r101-d8_480x480_80k_pascal_context_59-20210416_110804.log.json) | diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-tests.sh b/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-tests.sh deleted file mode 100644 index 52121b0317fd794d3ca23a26e1456060c7aca7e1..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-tests.sh +++ /dev/null @@ -1,2 +0,0 @@ -#!/usr/bin/env sh -poetry run pytest diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/notes/benchmarks.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/notes/benchmarks.md deleted file mode 100644 index b41588daf3a039b9034e80366c2710e90ba3e056..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/notes/benchmarks.md +++ /dev/null @@ -1,196 +0,0 @@ - -# Benchmarks - -Here we benchmark the training speed of a Mask R-CNN in detectron2, -with some other popular open source Mask R-CNN implementations. - - -### Settings - -* Hardware: 8 NVIDIA V100s with NVLink. -* Software: Python 3.7, CUDA 10.1, cuDNN 7.6.5, PyTorch 1.5, - TensorFlow 1.15.0rc2, Keras 2.2.5, MxNet 1.6.0b20190820. -* Model: an end-to-end R-50-FPN Mask-RCNN model, using the same hyperparameter as the - [Detectron baseline config](https://github.com/facebookresearch/Detectron/blob/master/configs/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml) - (it does not have scale augmentation). -* Metrics: We use the average throughput in iterations 100-500 to skip GPU warmup time. - Note that for R-CNN-style models, the throughput of a model typically changes during training, because - it depends on the predictions of the model. Therefore this metric is not directly comparable with - "train speed" in model zoo, which is the average speed of the entire training run. - - -### Main Results - -```eval_rst -+-------------------------------+--------------------+ -| Implementation | Throughput (img/s) | -+===============================+====================+ -| |D2| |PT| | 62 | -+-------------------------------+--------------------+ -| mmdetection_ |PT| | 53 | -+-------------------------------+--------------------+ -| maskrcnn-benchmark_ |PT| | 53 | -+-------------------------------+--------------------+ -| tensorpack_ |TF| | 50 | -+-------------------------------+--------------------+ -| simpledet_ |mxnet| | 39 | -+-------------------------------+--------------------+ -| Detectron_ |C2| | 19 | -+-------------------------------+--------------------+ -| `matterport/Mask_RCNN`__ |TF| | 14 | -+-------------------------------+--------------------+ - -.. 
_maskrcnn-benchmark: https://github.com/facebookresearch/maskrcnn-benchmark/ -.. _tensorpack: https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN -.. _mmdetection: https://github.com/open-mmlab/mmdetection/ -.. _simpledet: https://github.com/TuSimple/simpledet/ -.. _Detectron: https://github.com/facebookresearch/Detectron -__ https://github.com/matterport/Mask_RCNN/ - -.. |D2| image:: https://github.com/facebookresearch/detectron2/raw/main/.github/Detectron2-Logo-Horz.svg?sanitize=true - :height: 15pt - :target: https://github.com/facebookresearch/detectron2/ -.. |PT| image:: https://pytorch.org/assets/images/logo-icon.svg - :width: 15pt - :height: 15pt - :target: https://pytorch.org -.. |TF| image:: https://static.nvidiagrid.net/ngc/containers/tensorflow.png - :width: 15pt - :height: 15pt - :target: https://tensorflow.org -.. |mxnet| image:: https://github.com/dmlc/web-data/raw/master/mxnet/image/mxnet_favicon.png - :width: 15pt - :height: 15pt - :target: https://mxnet.apache.org/ -.. |C2| image:: https://caffe2.ai/static/logo.svg - :width: 15pt - :height: 15pt - :target: https://caffe2.ai -``` - - -Details for each implementation: - -* __Detectron2__: with release v0.1.2, run: - ``` - python tools/train_net.py --config-file configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml --num-gpus 8 - ``` - -* __mmdetection__: at commit `b0d845f`, run - ``` - ./tools/dist_train.sh configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_1x_coco.py 8 - ``` - -* __maskrcnn-benchmark__: use commit `0ce8f6f` with `sed -i 's/torch.uint8/torch.bool/g' **/*.py; sed -i 's/AT_CHECK/TORCH_CHECK/g' **/*.cu` - to make it compatible with PyTorch 1.5. Then, run training with - ``` - python -m torch.distributed.launch --nproc_per_node=8 tools/train_net.py --config-file configs/e2e_mask_rcnn_R_50_FPN_1x.yaml - ``` - The speed we observed is faster than its model zoo, likely due to different software versions. - -* __tensorpack__: at commit `caafda`, `export TF_CUDNN_USE_AUTOTUNE=0`, then run - ``` - mpirun -np 8 ./train.py --config DATA.BASEDIR=/data/coco TRAINER=horovod BACKBONE.STRIDE_1X1=True TRAIN.STEPS_PER_EPOCH=50 --load ImageNet-R50-AlignPadding.npz - ``` - -* __SimpleDet__: at commit `9187a1`, run - ``` - python detection_train.py --config config/mask_r50v1_fpn_1x.py - ``` - -* __Detectron__: run - ``` - python tools/train_net.py --cfg configs/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_1x.yaml - ``` - Note that many of its ops run on CPUs, therefore the performance is limited. - -* __matterport/Mask_RCNN__: at commit `3deaec`, apply the following diff, `export TF_CUDNN_USE_AUTOTUNE=0`, then run - ``` - python coco.py train --dataset=/data/coco/ --model=imagenet - ``` - Note that many small details in this implementation might be different - from Detectron's standards. - -
    - - (diff to make it use the same hyperparameters - click to expand) - - - ```diff - diff --git i/mrcnn/model.py w/mrcnn/model.py - index 62cb2b0..61d7779 100644 - --- i/mrcnn/model.py - +++ w/mrcnn/model.py - @@ -2367,8 +2367,8 @@ class MaskRCNN(): - epochs=epochs, - steps_per_epoch=self.config.STEPS_PER_EPOCH, - callbacks=callbacks, - - validation_data=val_generator, - - validation_steps=self.config.VALIDATION_STEPS, - + #validation_data=val_generator, - + #validation_steps=self.config.VALIDATION_STEPS, - max_queue_size=100, - workers=workers, - use_multiprocessing=True, - diff --git i/mrcnn/parallel_model.py w/mrcnn/parallel_model.py - index d2bf53b..060172a 100644 - --- i/mrcnn/parallel_model.py - +++ w/mrcnn/parallel_model.py - @@ -32,6 +32,7 @@ class ParallelModel(KM.Model): - keras_model: The Keras model to parallelize - gpu_count: Number of GPUs. Must be > 1 - """ - + super().__init__() - self.inner_model = keras_model - self.gpu_count = gpu_count - merged_outputs = self.make_parallel() - diff --git i/samples/coco/coco.py w/samples/coco/coco.py - index 5d172b5..239ed75 100644 - --- i/samples/coco/coco.py - +++ w/samples/coco/coco.py - @@ -81,7 +81,10 @@ class CocoConfig(Config): - IMAGES_PER_GPU = 2 - - # Uncomment to train on 8 GPUs (default is 1) - - # GPU_COUNT = 8 - + GPU_COUNT = 8 - + BACKBONE = "resnet50" - + STEPS_PER_EPOCH = 50 - + TRAIN_ROIS_PER_IMAGE = 512 - - # Number of classes (including background) - NUM_CLASSES = 1 + 80 # COCO has 80 classes - @@ -496,29 +499,10 @@ if __name__ == '__main__': - # *** This training schedule is an example. Update to your needs *** - - # Training - Stage 1 - - print("Training network heads") - model.train(dataset_train, dataset_val, - learning_rate=config.LEARNING_RATE, - epochs=40, - - layers='heads', - - augmentation=augmentation) - - - - # Training - Stage 2 - - # Finetune layers from ResNet stage 4 and up - - print("Fine tune Resnet stage 4 and up") - - model.train(dataset_train, dataset_val, - - learning_rate=config.LEARNING_RATE, - - epochs=120, - - layers='4+', - - augmentation=augmentation) - - - - # Training - Stage 3 - - # Fine tune all layers - - print("Fine tune all layers") - - model.train(dataset_train, dataset_val, - - learning_rate=config.LEARNING_RATE / 10, - - epochs=160, - - layers='all', - + layers='3+', - augmentation=augmentation) - - elif args.command == "evaluate": - ``` - -
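 As a rough, hypothetical illustration of the metric described in the settings above (average throughput in img/s over iterations 100-500, skipping GPU warmup), and not code taken from any of the benchmarked repositories: ```python # Hypothetical helper: average throughput (img/s) over iterations 100-500. def average_throughput(iter_times_s, batch_size, start=100, end=500): """iter_times_s: per-iteration durations in seconds for one training run.""" window = iter_times_s[start:end] return batch_size * len(window) / sum(window) # Example: 16 images per iteration (2 per GPU x 8 GPUs) at 0.25 s/iter -> 64 img/s. print(average_throughput([0.25] * 600, batch_size=16)) ``` 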
    diff --git a/spaces/Ayanoaisho/L/README.md b/spaces/Ayanoaisho/L/README.md deleted file mode 100644 index 7e18b0fd1b94654b2528cfb48d7a74dafcde910d..0000000000000000000000000000000000000000 --- a/spaces/Ayanoaisho/L/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: L -emoji: ⚡ -colorFrom: green -colorTo: yellow -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/tone_sandhi.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', 
'告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 
看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/Benson/text-generation/Examples/Carx Deriva Carreras Mod Apk Vieja Versin.md b/spaces/Benson/text-generation/Examples/Carx Deriva Carreras Mod Apk Vieja Versin.md deleted file mode 100644 index 29f23ed10aa6900ca080c663af825ec307e92425..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Carx Deriva Carreras Mod Apk Vieja Versin.md +++ /dev/null @@ -1,88 +0,0 @@ -
    -

CarX Drift Racing Mod APK Old Version: A Review

    -

If you are a fan of car racing games, you may have heard of CarX Drift Racing. It is one of the most popular and realistic drifting games on Android. You can choose from a variety of cars, customize them, and compete on different tracks. You can also race against other players online and show off your drifting skills.

    -

    carx deriva carreras mod apk vieja versión


    Download Zip ››››› https://bltlly.com/2v6Kna



    -

But what if you want to enjoy the game without limitations? What if you want unlimited money and gold, all cars and tracks unlocked, and the best graphics and physics? Well, you can do that by downloading the mod APK version of CarX Drift Racing. And not just any mod APK, but the old version of it.

    -

Why the old version, you ask? Well, there are a few reasons why you might prefer the old version over the latest one. In this article, we review the old version of the CarX Drift Racing mod APK and explain how to download and install it on your device. We also list some pros and cons of using this mod APK and answer some frequently asked questions.

    -

What is CarX Drift Racing?

    -

CarX Drift Racing is a racing game developed by CarX Technologies. It was released in 2014 for Android and iOS devices. The game focuses on drifting, a driving technique in which the driver intentionally oversteers the car so that it slides sideways. The game features realistic physics, impressive graphics, and smooth controls. You can feel the thrill of drifting in several modes, such as career mode, online mode, time attack mode, and training mode.

    -

    - -

Features of CarX Drift Racing

    -

Some of the main features of CarX Drift Racing are:

    - -

Why download the old version of the CarX Drift Racing mod APK?

    -

CarX Drift Racing is a great game, but it also has some limitations. For example, you need to spend a lot of money and gold to unlock and upgrade your cars and tracks. You also need to watch ads to get rewards or bonuses. And you may run into compatibility issues or bugs with the latest version of the game.

    -

That is why some players prefer to download the old version of the CarX Drift Racing mod APK. A mod APK is a modified version of the original game that has features or changes not available in the official release. For example, the old version of the CarX Drift Racing mod APK has:

    - -

By downloading the old version of the CarX Drift Racing mod APK, you can have more fun and freedom in the game. You can also avoid some of the problems that might occur with the latest version of the game.

    -

How to download and install the old version of the CarX Drift Racing mod APK?

    -

If you want to download and install the old version of the CarX Drift Racing mod APK on your device, follow these steps:

    -

Step 1: Download the APK file

    -

The first step is to download the APK file of the old version of the CarX Drift Racing mod APK. You can find it on various websites that offer modified games and apps. However, you should be careful and choose a reliable, safe source. Some websites host fake or malicious files that can damage your device or steal your data.

    -

One of the websites we recommend is [APKPure]. It is a reliable and popular site that provides free, clean APK files for Android users. You can download the old version of the CarX Drift Racing mod APK from this [link]. The file size is about 300 MB, so make sure you have enough space on your device.

    -

Step 2: Enable unknown sources

    -

The next step is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To enable unknown sources, go to your device settings, then security, then unknown sources. Tap the switch or checkbox to turn it on.

    -

Step 3: Install the APK file

    -

The third step is to install the APK file of the old version of the CarX Drift Racing mod APK. To do so, locate the file in your device's file manager or downloads folder. Tap the file and follow the on-screen instructions. The installation may take a few minutes.

    - -

The final step is to launch the game and enjoy it. You can find the game icon on your device's home screen or in the app drawer. Tap it and start playing. You will see that you have unlimited money and gold, unlocked cars and tracks, and realistic physics and graphics.

    -

Pros and cons of the old version of the CarX Drift Racing mod APK

    -

As with any modified game, there are some pros and cons to using the old version of the CarX Drift Racing mod APK. Here are some of them:

    -

    Pros

    - -

Cons

    - -

Conclusion

    -

CarX Drift Racing is a fun, realistic drifting game you can play on your Android device. However, if you want more freedom and fun in the game, you can download the old version of the CarX Drift Racing mod APK. This mod APK gives you unlimited money and gold, unlocked cars and tracks, and realistic physics and graphics. That said, you should also be aware of the downsides of using this mod APK, such as compatibility problems, bugs and crashes, and the lack of updates.

    -

If you are interested in downloading and installing the old version of the CarX Drift Racing mod APK, you can follow the steps provided in this article. We also recommend using a reliable, safe source to download the APK file, such as APKPure. We hope this article was useful and informative. Happy drifting!

    -

Frequently asked questions

    -

Here are some frequently asked questions about the old version of the CarX Drift Racing mod APK:

    -

    64aa2da5cf
    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/easter.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/easter.py deleted file mode 100644 index f74d1f7442473997245ac683b8a269a3574d1ba4..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/easter.py +++ /dev/null @@ -1,89 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers a generic Easter computing method for any given year, using -Western, Orthodox or Julian algorithms. -""" - -import datetime - -__all__ = ["easter", "EASTER_JULIAN", "EASTER_ORTHODOX", "EASTER_WESTERN"] - -EASTER_JULIAN = 1 -EASTER_ORTHODOX = 2 -EASTER_WESTERN = 3 - - -def easter(year, method=EASTER_WESTERN): - """ - This method was ported from the work done by GM Arts, - on top of the algorithm by Claus Tondering, which was - based in part on the algorithm of Ouding (1940), as - quoted in "Explanatory Supplement to the Astronomical - Almanac", P. Kenneth Seidelmann, editor. - - This algorithm implements three different Easter - calculation methods: - - 1. Original calculation in Julian calendar, valid in - dates after 326 AD - 2. Original method, with date converted to Gregorian - calendar, valid in years 1583 to 4099 - 3. Revised method, in Gregorian calendar, valid in - years 1583 to 4099 as well - - These methods are represented by the constants: - - * ``EASTER_JULIAN = 1`` - * ``EASTER_ORTHODOX = 2`` - * ``EASTER_WESTERN = 3`` - - The default method is method 3. - - More about the algorithm may be found at: - - `GM Arts: Easter Algorithms `_ - - and - - `The Calendar FAQ: Easter `_ - - """ - - if not (1 <= method <= 3): - raise ValueError("invalid method") - - # g - Golden year - 1 - # c - Century - # h - (23 - Epact) mod 30 - # i - Number of days from March 21 to Paschal Full Moon - # j - Weekday for PFM (0=Sunday, etc) - # p - Number of days from March 21 to Sunday on or before PFM - # (-6 to 28 methods 1 & 3, to 56 for method 2) - # e - Extra days to add for method 2 (converting Julian - # date to Gregorian date) - - y = year - g = y % 19 - e = 0 - if method < 3: - # Old method - i = (19*g + 15) % 30 - j = (y + y//4 + i) % 7 - if method == 2: - # Extra dates to convert Julian to Gregorian date - e = 10 - if y > 1600: - e = e + y//100 - 16 - (y//100 - 16)//4 - else: - # New method - c = y//100 - h = (c - c//4 - (8*c + 13)//25 + 19*g + 15) % 30 - i = h - (h//28)*(1 - (h//28)*(29//(h + 1))*((21 - g)//11)) - j = (y + y//4 + i + 2 - c + c//4) % 7 - - # p can be from -6 to 56 corresponding to dates 22 March to 23 May - # (later dates apply to method 2, although 23 May never actually occurs) - p = i - j + e - d = 1 + (p + 27 + (p + 6)//40) % 31 - m = 3 + (p + 26)//30 - return datetime.date(int(y), int(m), int(d)) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/specifiers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/specifiers.py deleted file mode 100644 index 0e218a6f9f75ea2060a8b08d1f1a043fdad68df8..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/packaging/specifiers.py +++ /dev/null @@ -1,802 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
- -import abc -import functools -import itertools -import re -import warnings -from typing import ( - Callable, - Dict, - Iterable, - Iterator, - List, - Optional, - Pattern, - Set, - Tuple, - TypeVar, - Union, -) - -from .utils import canonicalize_version -from .version import LegacyVersion, Version, parse - -ParsedVersion = Union[Version, LegacyVersion] -UnparsedVersion = Union[Version, LegacyVersion, str] -VersionTypeVar = TypeVar("VersionTypeVar", bound=UnparsedVersion) -CallableOperator = Callable[[ParsedVersion, str], bool] - - -class InvalidSpecifier(ValueError): - """ - An invalid specifier was found, users should refer to PEP 440. - """ - - -class BaseSpecifier(metaclass=abc.ABCMeta): - @abc.abstractmethod - def __str__(self) -> str: - """ - Returns the str representation of this Specifier like object. This - should be representative of the Specifier itself. - """ - - @abc.abstractmethod - def __hash__(self) -> int: - """ - Returns a hash value for this Specifier like object. - """ - - @abc.abstractmethod - def __eq__(self, other: object) -> bool: - """ - Returns a boolean representing whether or not the two Specifier like - objects are equal. - """ - - @abc.abstractproperty - def prereleases(self) -> Optional[bool]: - """ - Returns whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @prereleases.setter - def prereleases(self, value: bool) -> None: - """ - Sets whether or not pre-releases as a whole are allowed by this - specifier. - """ - - @abc.abstractmethod - def contains(self, item: str, prereleases: Optional[bool] = None) -> bool: - """ - Determines if the given item is contained within this specifier. - """ - - @abc.abstractmethod - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - """ - Takes an iterable of items and filters them so that only items which - are contained within this specifier are allowed in it. 
- """ - - -class _IndividualSpecifier(BaseSpecifier): - - _operators: Dict[str, str] = {} - _regex: Pattern[str] - - def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None: - match = self._regex.search(spec) - if not match: - raise InvalidSpecifier(f"Invalid specifier: '{spec}'") - - self._spec: Tuple[str, str] = ( - match.group("operator").strip(), - match.group("version").strip(), - ) - - # Store whether or not this Specifier should accept prereleases - self._prereleases = prereleases - - def __repr__(self) -> str: - pre = ( - f", prereleases={self.prereleases!r}" - if self._prereleases is not None - else "" - ) - - return f"<{self.__class__.__name__}({str(self)!r}{pre})>" - - def __str__(self) -> str: - return "{}{}".format(*self._spec) - - @property - def _canonical_spec(self) -> Tuple[str, str]: - return self._spec[0], canonicalize_version(self._spec[1]) - - def __hash__(self) -> int: - return hash(self._canonical_spec) - - def __eq__(self, other: object) -> bool: - if isinstance(other, str): - try: - other = self.__class__(str(other)) - except InvalidSpecifier: - return NotImplemented - elif not isinstance(other, self.__class__): - return NotImplemented - - return self._canonical_spec == other._canonical_spec - - def _get_operator(self, op: str) -> CallableOperator: - operator_callable: CallableOperator = getattr( - self, f"_compare_{self._operators[op]}" - ) - return operator_callable - - def _coerce_version(self, version: UnparsedVersion) -> ParsedVersion: - if not isinstance(version, (LegacyVersion, Version)): - version = parse(version) - return version - - @property - def operator(self) -> str: - return self._spec[0] - - @property - def version(self) -> str: - return self._spec[1] - - @property - def prereleases(self) -> Optional[bool]: - return self._prereleases - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - def __contains__(self, item: str) -> bool: - return self.contains(item) - - def contains( - self, item: UnparsedVersion, prereleases: Optional[bool] = None - ) -> bool: - - # Determine if prereleases are to be allowed or not. - if prereleases is None: - prereleases = self.prereleases - - # Normalize item to a Version or LegacyVersion, this allows us to have - # a shortcut for ``"2.0" in Specifier(">=2") - normalized_item = self._coerce_version(item) - - # Determine if we should be supporting prereleases in this specifier - # or not, if we do not support prereleases than we can short circuit - # logic if this version is a prereleases. - if normalized_item.is_prerelease and not prereleases: - return False - - # Actually do the comparison to determine if this item is contained - # within this Specifier or not. - operator_callable: CallableOperator = self._get_operator(self.operator) - return operator_callable(normalized_item, self.version) - - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - - yielded = False - found_prereleases = [] - - kw = {"prereleases": prereleases if prereleases is not None else True} - - # Attempt to iterate over all the values in the iterable and if any of - # them match, yield them. - for version in iterable: - parsed_version = self._coerce_version(version) - - if self.contains(parsed_version, **kw): - # If our version is a prerelease, and we were not set to allow - # prereleases, then we'll store it for later in case nothing - # else matches this specifier. 
- if parsed_version.is_prerelease and not ( - prereleases or self.prereleases - ): - found_prereleases.append(version) - # Either this is not a prerelease, or we should have been - # accepting prereleases from the beginning. - else: - yielded = True - yield version - - # Now that we've iterated over everything, determine if we've yielded - # any values, and if we have not and we have any prereleases stored up - # then we will go ahead and yield the prereleases. - if not yielded and found_prereleases: - for version in found_prereleases: - yield version - - -class LegacySpecifier(_IndividualSpecifier): - - _regex_str = r""" - (?P(==|!=|<=|>=|<|>)) - \s* - (?P - [^,;\s)]* # Since this is a "legacy" specifier, and the version - # string can be just about anything, we match everything - # except for whitespace, a semi-colon for marker support, - # a closing paren since versions can be enclosed in - # them, and a comma since it's a version separator. - ) - """ - - _regex = re.compile(r"^\s*" + _regex_str + r"\s*$", re.VERBOSE | re.IGNORECASE) - - _operators = { - "==": "equal", - "!=": "not_equal", - "<=": "less_than_equal", - ">=": "greater_than_equal", - "<": "less_than", - ">": "greater_than", - } - - def __init__(self, spec: str = "", prereleases: Optional[bool] = None) -> None: - super().__init__(spec, prereleases) - - warnings.warn( - "Creating a LegacyVersion has been deprecated and will be " - "removed in the next major release", - DeprecationWarning, - ) - - def _coerce_version(self, version: UnparsedVersion) -> LegacyVersion: - if not isinstance(version, LegacyVersion): - version = LegacyVersion(str(version)) - return version - - def _compare_equal(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective == self._coerce_version(spec) - - def _compare_not_equal(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective != self._coerce_version(spec) - - def _compare_less_than_equal(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective <= self._coerce_version(spec) - - def _compare_greater_than_equal( - self, prospective: LegacyVersion, spec: str - ) -> bool: - return prospective >= self._coerce_version(spec) - - def _compare_less_than(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective < self._coerce_version(spec) - - def _compare_greater_than(self, prospective: LegacyVersion, spec: str) -> bool: - return prospective > self._coerce_version(spec) - - -def _require_version_compare( - fn: Callable[["Specifier", ParsedVersion, str], bool] -) -> Callable[["Specifier", ParsedVersion, str], bool]: - @functools.wraps(fn) - def wrapped(self: "Specifier", prospective: ParsedVersion, spec: str) -> bool: - if not isinstance(prospective, Version): - return False - return fn(self, prospective, spec) - - return wrapped - - -class Specifier(_IndividualSpecifier): - - _regex_str = r""" - (?P(~=|==|!=|<=|>=|<|>|===)) - (?P - (?: - # The identity operators allow for an escape hatch that will - # do an exact string match of the version you wish to install. - # This will not be parsed by PEP 440 and we cannot determine - # any semantic meaning from it. This operator is discouraged - # but included entirely as an escape hatch. - (?<====) # Only match for the identity operator - \s* - [^\s]* # We just match everything, except for whitespace - # since we are only testing for strict identity. 
- ) - | - (?: - # The (non)equality operators allow for wild card and local - # versions to be specified so we have to define these two - # operators separately to enable that. - (?<===|!=) # Only match for equals and not equals - - \s* - v? - (?:[0-9]+!)? # epoch - [0-9]+(?:\.[0-9]+)* # release - (?: # pre release - [-_\.]? - (a|b|c|rc|alpha|beta|pre|preview) - [-_\.]? - [0-9]* - )? - (?: # post release - (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*) - )? - - # You cannot use a wild card and a dev or local version - # together so group them with a | and make them optional. - (?: - (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release - (?:\+[a-z0-9]+(?:[-_\.][a-z0-9]+)*)? # local - | - \.\* # Wild card syntax of .* - )? - ) - | - (?: - # The compatible operator requires at least two digits in the - # release segment. - (?<=~=) # Only match for the compatible operator - - \s* - v? - (?:[0-9]+!)? # epoch - [0-9]+(?:\.[0-9]+)+ # release (We have a + instead of a *) - (?: # pre release - [-_\.]? - (a|b|c|rc|alpha|beta|pre|preview) - [-_\.]? - [0-9]* - )? - (?: # post release - (?:-[0-9]+)|(?:[-_\.]?(post|rev|r)[-_\.]?[0-9]*) - )? - (?:[-_\.]?dev[-_\.]?[0-9]*)? # dev release - ) - | - (?: - # All other operators only allow a sub set of what the - # (non)equality operators do. Specifically they do not allow - # local versions to be specified nor do they allow the prefix - # matching wild cards. - (?=": "greater_than_equal", - "<": "less_than", - ">": "greater_than", - "===": "arbitrary", - } - - @_require_version_compare - def _compare_compatible(self, prospective: ParsedVersion, spec: str) -> bool: - - # Compatible releases have an equivalent combination of >= and ==. That - # is that ~=2.2 is equivalent to >=2.2,==2.*. This allows us to - # implement this in terms of the other specifiers instead of - # implementing it ourselves. The only thing we need to do is construct - # the other specifiers. - - # We want everything but the last item in the version, but we want to - # ignore suffix segments. - prefix = ".".join( - list(itertools.takewhile(_is_not_suffix, _version_split(spec)))[:-1] - ) - - # Add the prefix notation to the end of our string - prefix += ".*" - - return self._get_operator(">=")(prospective, spec) and self._get_operator("==")( - prospective, prefix - ) - - @_require_version_compare - def _compare_equal(self, prospective: ParsedVersion, spec: str) -> bool: - - # We need special logic to handle prefix matching - if spec.endswith(".*"): - # In the case of prefix matching we want to ignore local segment. - prospective = Version(prospective.public) - # Split the spec out by dots, and pretend that there is an implicit - # dot in between a release segment and a pre-release segment. - split_spec = _version_split(spec[:-2]) # Remove the trailing .* - - # Split the prospective version out by dots, and pretend that there - # is an implicit dot in between a release segment and a pre-release - # segment. - split_prospective = _version_split(str(prospective)) - - # Shorten the prospective version to be the same length as the spec - # so that we can determine if the specifier is a prefix of the - # prospective version or not. - shortened_prospective = split_prospective[: len(split_spec)] - - # Pad out our two sides with zeros so that they both equal the same - # length. 
- padded_spec, padded_prospective = _pad_version( - split_spec, shortened_prospective - ) - - return padded_prospective == padded_spec - else: - # Convert our spec string into a Version - spec_version = Version(spec) - - # If the specifier does not have a local segment, then we want to - # act as if the prospective version also does not have a local - # segment. - if not spec_version.local: - prospective = Version(prospective.public) - - return prospective == spec_version - - @_require_version_compare - def _compare_not_equal(self, prospective: ParsedVersion, spec: str) -> bool: - return not self._compare_equal(prospective, spec) - - @_require_version_compare - def _compare_less_than_equal(self, prospective: ParsedVersion, spec: str) -> bool: - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. - return Version(prospective.public) <= Version(spec) - - @_require_version_compare - def _compare_greater_than_equal( - self, prospective: ParsedVersion, spec: str - ) -> bool: - - # NB: Local version identifiers are NOT permitted in the version - # specifier, so local version labels can be universally removed from - # the prospective version. - return Version(prospective.public) >= Version(spec) - - @_require_version_compare - def _compare_less_than(self, prospective: ParsedVersion, spec_str: str) -> bool: - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is less than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective < spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a pre-release version, that we do not accept pre-release - # versions for the version mentioned in the specifier (e.g. <3.1 should - # not match 3.1.dev0, but should match 3.0.dev0). - if not spec.is_prerelease and prospective.is_prerelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # less than the spec version *and* it's not a pre-release of the same - # version in the spec. - return True - - @_require_version_compare - def _compare_greater_than(self, prospective: ParsedVersion, spec_str: str) -> bool: - - # Convert our spec to a Version instance, since we'll want to work with - # it as a version. - spec = Version(spec_str) - - # Check to see if the prospective version is greater than the spec - # version. If it's not we can short circuit and just return False now - # instead of doing extra unneeded work. - if not prospective > spec: - return False - - # This special case is here so that, unless the specifier itself - # includes is a post-release version, that we do not accept - # post-release versions for the version mentioned in the specifier - # (e.g. >3.1 should not match 3.0.post0, but should match 3.2.post0). - if not spec.is_postrelease and prospective.is_postrelease: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # Ensure that we do not allow a local version of the version mentioned - # in the specifier, which is technically greater than, to match. 
- if prospective.local is not None: - if Version(prospective.base_version) == Version(spec.base_version): - return False - - # If we've gotten to here, it means that prospective version is both - # greater than the spec version *and* it's not a pre-release of the - # same version in the spec. - return True - - def _compare_arbitrary(self, prospective: Version, spec: str) -> bool: - return str(prospective).lower() == str(spec).lower() - - @property - def prereleases(self) -> bool: - - # If there is an explicit prereleases set for this, then we'll just - # blindly use that. - if self._prereleases is not None: - return self._prereleases - - # Look at all of our specifiers and determine if they are inclusive - # operators, and if they are if they are including an explicit - # prerelease. - operator, version = self._spec - if operator in ["==", ">=", "<=", "~=", "==="]: - # The == specifier can include a trailing .*, if it does we - # want to remove before parsing. - if operator == "==" and version.endswith(".*"): - version = version[:-2] - - # Parse the version, and if it is a pre-release than this - # specifier allows pre-releases. - if parse(version).is_prerelease: - return True - - return False - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - -_prefix_regex = re.compile(r"^([0-9]+)((?:a|b|c|rc)[0-9]+)$") - - -def _version_split(version: str) -> List[str]: - result: List[str] = [] - for item in version.split("."): - match = _prefix_regex.search(item) - if match: - result.extend(match.groups()) - else: - result.append(item) - return result - - -def _is_not_suffix(segment: str) -> bool: - return not any( - segment.startswith(prefix) for prefix in ("dev", "a", "b", "rc", "post") - ) - - -def _pad_version(left: List[str], right: List[str]) -> Tuple[List[str], List[str]]: - left_split, right_split = [], [] - - # Get the release segment of our versions - left_split.append(list(itertools.takewhile(lambda x: x.isdigit(), left))) - right_split.append(list(itertools.takewhile(lambda x: x.isdigit(), right))) - - # Get the rest of our versions - left_split.append(left[len(left_split[0]) :]) - right_split.append(right[len(right_split[0]) :]) - - # Insert our padding - left_split.insert(1, ["0"] * max(0, len(right_split[0]) - len(left_split[0]))) - right_split.insert(1, ["0"] * max(0, len(left_split[0]) - len(right_split[0]))) - - return (list(itertools.chain(*left_split)), list(itertools.chain(*right_split))) - - -class SpecifierSet(BaseSpecifier): - def __init__( - self, specifiers: str = "", prereleases: Optional[bool] = None - ) -> None: - - # Split on , to break each individual specifier into it's own item, and - # strip each item to remove leading/trailing whitespace. - split_specifiers = [s.strip() for s in specifiers.split(",") if s.strip()] - - # Parsed each individual specifier, attempting first to make it a - # Specifier and falling back to a LegacySpecifier. - parsed: Set[_IndividualSpecifier] = set() - for specifier in split_specifiers: - try: - parsed.add(Specifier(specifier)) - except InvalidSpecifier: - parsed.add(LegacySpecifier(specifier)) - - # Turn our parsed specifiers into a frozen set and save them for later. - self._specs = frozenset(parsed) - - # Store our prereleases value so we can use it later to determine if - # we accept prereleases or not. 
- self._prereleases = prereleases - - def __repr__(self) -> str: - pre = ( - f", prereleases={self.prereleases!r}" - if self._prereleases is not None - else "" - ) - - return f"" - - def __str__(self) -> str: - return ",".join(sorted(str(s) for s in self._specs)) - - def __hash__(self) -> int: - return hash(self._specs) - - def __and__(self, other: Union["SpecifierSet", str]) -> "SpecifierSet": - if isinstance(other, str): - other = SpecifierSet(other) - elif not isinstance(other, SpecifierSet): - return NotImplemented - - specifier = SpecifierSet() - specifier._specs = frozenset(self._specs | other._specs) - - if self._prereleases is None and other._prereleases is not None: - specifier._prereleases = other._prereleases - elif self._prereleases is not None and other._prereleases is None: - specifier._prereleases = self._prereleases - elif self._prereleases == other._prereleases: - specifier._prereleases = self._prereleases - else: - raise ValueError( - "Cannot combine SpecifierSets with True and False prerelease " - "overrides." - ) - - return specifier - - def __eq__(self, other: object) -> bool: - if isinstance(other, (str, _IndividualSpecifier)): - other = SpecifierSet(str(other)) - elif not isinstance(other, SpecifierSet): - return NotImplemented - - return self._specs == other._specs - - def __len__(self) -> int: - return len(self._specs) - - def __iter__(self) -> Iterator[_IndividualSpecifier]: - return iter(self._specs) - - @property - def prereleases(self) -> Optional[bool]: - - # If we have been given an explicit prerelease modifier, then we'll - # pass that through here. - if self._prereleases is not None: - return self._prereleases - - # If we don't have any specifiers, and we don't have a forced value, - # then we'll just return None since we don't know if this should have - # pre-releases or not. - if not self._specs: - return None - - # Otherwise we'll see if any of the given specifiers accept - # prereleases, if any of them do we'll return True, otherwise False. - return any(s.prereleases for s in self._specs) - - @prereleases.setter - def prereleases(self, value: bool) -> None: - self._prereleases = value - - def __contains__(self, item: UnparsedVersion) -> bool: - return self.contains(item) - - def contains( - self, item: UnparsedVersion, prereleases: Optional[bool] = None - ) -> bool: - - # Ensure that our item is a Version or LegacyVersion instance. - if not isinstance(item, (LegacyVersion, Version)): - item = parse(item) - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # We can determine if we're going to allow pre-releases by looking to - # see if any of the underlying items supports them. If none of them do - # and this item is a pre-release then we do not allow it and we can - # short circuit that here. - # Note: This means that 1.0.dev1 would not be contained in something - # like >=1.0.devabc however it would be in >=1.0.debabc,>0.0.dev0 - if not prereleases and item.is_prerelease: - return False - - # We simply dispatch to the underlying specs here to make sure that the - # given version is contained within all of them. - # Note: This use of all() here means that an empty set of specifiers - # will always return True, this is an explicit design decision. 
- return all(s.contains(item, prereleases=prereleases) for s in self._specs) - - def filter( - self, iterable: Iterable[VersionTypeVar], prereleases: Optional[bool] = None - ) -> Iterable[VersionTypeVar]: - - # Determine if we're forcing a prerelease or not, if we're not forcing - # one for this particular filter call, then we'll use whatever the - # SpecifierSet thinks for whether or not we should support prereleases. - if prereleases is None: - prereleases = self.prereleases - - # If we have any specifiers, then we want to wrap our iterable in the - # filter method for each one, this will act as a logical AND amongst - # each specifier. - if self._specs: - for spec in self._specs: - iterable = spec.filter(iterable, prereleases=bool(prereleases)) - return iterable - # If we do not have any specifiers, then we need to have a rough filter - # which will filter out any pre-releases, unless there are no final - # releases, and which will filter out LegacyVersion in general. - else: - filtered: List[VersionTypeVar] = [] - found_prereleases: List[VersionTypeVar] = [] - - item: UnparsedVersion - parsed_version: Union[Version, LegacyVersion] - - for item in iterable: - # Ensure that we some kind of Version class for this item. - if not isinstance(item, (LegacyVersion, Version)): - parsed_version = parse(item) - else: - parsed_version = item - - # Filter out any item which is parsed as a LegacyVersion - if isinstance(parsed_version, LegacyVersion): - continue - - # Store any item which is a pre-release for later unless we've - # already found a final version or we are accepting prereleases - if parsed_version.is_prerelease and not prereleases: - if not filtered: - found_prereleases.append(item) - else: - filtered.append(item) - - # If we've found no items except for pre-releases, then we'll go - # ahead and use the pre-releases - if not filtered and found_prereleases and prereleases is None: - return found_prereleases - - return filtered diff --git a/spaces/BraydenMoore/MARCI-NFL-Betting/Templates/index.html b/spaces/BraydenMoore/MARCI-NFL-Betting/Templates/index.html deleted file mode 100644 index e4d2d353f823a8624500816c5092e7378e84136f..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/MARCI-NFL-Betting/Templates/index.html +++ /dev/null @@ -1,724 +0,0 @@ - - - - - - MARCI - NFL Betting - - - -
    -
    - Predictions will begin at the conclusion of Week 1. Bet at your own risk. Know your limits. And most importantly, have fun! -
    -
    - - -

    M A R C I

    -
    - Moore's Algorithm for Risky Capital Investments

    - - Remember to have fun!

    - - Record through {{ latest_game }}
    - Winners: {{ winners_correct }}-{{winners_incorrect}}{{winners_tie}} ({{ winners_return }})
    - Over/Unders: {{over_unders_correct}}-{{over_unders_incorrect}}{{over_unders_push}} ({{over_unders_return}})

    -
    - - - -
    - - - - - - - - - -
Date | Away | Home | O/U | Predicted Winner | Predicted O/U
    -
    - - -
    - -
    -

    Model Train/Test Details

    -
    -
    -

    Moneyline

    -
    Test Accuracy: 71.4%
    -
    - Moneyline Model -
    - Model: XGBoost
    - Train/Test Split: 1782/199
    - Max Depth: 2
    - Learning Rate: 0.01
    - Epochs: 500 -
    -
    -
    -
    -

    Over/Under

    -
    -
    Test Accuracy: 59.8%
    - Over/Under Model -
    - Model: XGBoost
    - Train/Test Split: 1782/199
    - Max Depth: 6
    - Learning Rate: 0.05
    - Epochs: 300 -
    -
    -
    -
    -
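For readers curious how the hyperparameters listed above translate into code, the following is a minimal sketch of training the moneyline classifier with the stated settings (max depth 2, learning rate 0.01, 500 rounds, a 1782/199 train/test split). The synthetic feature matrix, labels, and variable names are placeholder assumptions; the repository's real feature engineering is not shown here.

```python
# Hedged sketch only: synthetic placeholder data stands in for the real game features.
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1981, 20))      # 1782 train + 199 test rows, 20 dummy features
y = rng.integers(0, 2, size=1981)    # 1 = home team wins (placeholder label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=199, shuffle=False)

moneyline = XGBClassifier(
    max_depth=2,            # "Max Depth: 2" above
    learning_rate=0.01,     # "Learning Rate: 0.01"
    n_estimators=500,       # "Epochs: 500"
    objective="binary:logistic",
    eval_metric="logloss",
)
moneyline.fit(X_train, y_train)
print("test accuracy:", moneyline.score(X_test, y_test))
```

The over/under model would follow the same pattern with max_depth=6, learning_rate=0.05, and 300 rounds.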
    - -
    -

    Predictive Accuracy This Year

    -
    -
    -

    Moneyline

    -
    {{ winners_return }}.
    -
    - Moneyline Accuracy -
    -
    {{ winners_binom }}
    - -
    -
    -

    Over/Under

    -
    {{ over_unders_return }}.
    -
    - Over/Under Model -
    -
    {{ over_unders_binom }}
    -
    -
    -
    - -
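The {{ winners_binom }} and {{ over_unders_binom }} placeholders above presumably report how likely the season record would be if every pick were a coin flip. As a hedged illustration (the 30-12 record below is made up, and the site's exact statistic may differ), such a figure can be computed with a one-sided binomial test:

```python
# Illustrative only: 30 wins in 42 picks is a hypothetical record, not the site's results.
from scipy.stats import binomtest

result = binomtest(k=30, n=42, p=0.5, alternative="greater")
print(f"Probability of doing at least this well by chance: {result.pvalue:.4f}")
```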

    🤗See the Code

    - - - - - - - - diff --git a/spaces/BraydenMoore/a-random-unsecured-camera/Dockerfile b/spaces/BraydenMoore/a-random-unsecured-camera/Dockerfile deleted file mode 100644 index 1a258058e08d3597732a2bc4e877dbd2d36c7668..0000000000000000000000000000000000000000 --- a/spaces/BraydenMoore/a-random-unsecured-camera/Dockerfile +++ /dev/null @@ -1,29 +0,0 @@ -# Use the official lightweight Python image. -FROM python:3.11 - -# Allow statements and log messages to immediately appear in the logs -ENV PYTHONUNBUFFERED True - -# Copy local code to the container image. -ENV APP_HOME /app -WORKDIR $APP_HOME -COPY . ./ - -# Install production dependencies. -RUN pip install --no-cache-dir -r requirements.txt - -# Create a non-root user and switch to it -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set work directory -WORKDIR $APP_HOME - -# Change ownership of app files to the new user -COPY --chown=user . $HOME/app - -# Run the web service on container startup. -CMD exec gunicorn --bind 0.0.0.0:7860 --workers 4 --threads 16 main:app - diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rrpn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rrpn.py deleted file mode 100644 index edd1da3809c77582c10c6b93056280280a07e04e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/proposal_generator/rrpn.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import logging -from typing import Dict -import torch - -from detectron2.layers import ShapeSpec - -from ..box_regression import Box2BoxTransformRotated -from .build import PROPOSAL_GENERATOR_REGISTRY -from .rpn import RPN -from .rrpn_outputs import RRPNOutputs, find_top_rrpn_proposals - -logger = logging.getLogger(__name__) - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RRPN(RPN): - """ - Rotated RPN subnetwork. - Please refer to https://arxiv.org/pdf/1703.01086.pdf for the original RRPN paper: - Ma, J., Shao, W., Ye, H., Wang, L., Wang, H., Zheng, Y., & Xue, X. (2018). - Arbitrary-oriented scene text detection via rotation proposals. - IEEE Transactions on Multimedia, 20(11), 3111-3122. - """ - - def __init__(self, cfg, input_shape: Dict[str, ShapeSpec]): - super().__init__(cfg, input_shape) - self.box2box_transform = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS) - - def forward(self, images, features, gt_instances=None): - # same signature as RPN.forward - gt_boxes = [x.gt_boxes for x in gt_instances] if gt_instances is not None else None - del gt_instances - features = [features[f] for f in self.in_features] - pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features) - anchors = self.anchor_generator(features) - - outputs = RRPNOutputs( - self.box2box_transform, - self.anchor_matcher, - self.batch_size_per_image, - self.positive_fraction, - images, - pred_objectness_logits, - pred_anchor_deltas, - anchors, - self.boundary_threshold, - gt_boxes, - self.smooth_l1_beta, - ) - - if self.training: - losses = outputs.losses() - else: - losses = {} - - with torch.no_grad(): - # Find the top proposals by applying NMS and removing boxes that - # are too small. The proposals are treated as fixed for approximate - # joint training with roi heads. This approach ignores the derivative - # w.r.t. 
the proposal boxes’ coordinates that are also network - # responses, so is approximate. - proposals = find_top_rrpn_proposals( - outputs.predict_proposals(), - outputs.predict_objectness_logits(), - images, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_side_len, - self.training, - ) - - return proposals, losses diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/extrema.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/extrema.h deleted file mode 100644 index 40903cd9a9aca0ec22b5521a33964deea9961cd9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/extrema.h +++ /dev/null @@ -1,568 +0,0 @@ -/******************************************************************************* - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace cuda_cub { - -namespace __extrema { - - template - struct arg_min_f - { - Predicate predicate; - typedef tuple pair_type; - - __host__ __device__ - arg_min_f(Predicate p) : predicate(p) {} - - pair_type __device__ - operator()(pair_type const &lhs, pair_type const &rhs) - { - InputType const &rhs_value = get<0>(rhs); - InputType const &lhs_value = get<0>(lhs); - IndexType const &rhs_key = get<1>(rhs); - IndexType const &lhs_key = get<1>(lhs); - - // check values first - if (predicate(lhs_value, rhs_value)) - return lhs; - else if (predicate(rhs_value, lhs_value)) - return rhs; - - // values are equivalent, prefer smaller index - if (lhs_key < rhs_key) - return lhs; - else - return rhs; - } - }; // struct arg_min_f - - template - struct arg_max_f - { - Predicate predicate; - typedef tuple pair_type; - - __host__ __device__ - arg_max_f(Predicate p) : predicate(p) {} - - pair_type __device__ - operator()(pair_type const &lhs, pair_type const &rhs) - { - InputType const &rhs_value = get<0>(rhs); - InputType const &lhs_value = get<0>(lhs); - IndexType const &rhs_key = get<1>(rhs); - IndexType const &lhs_key = get<1>(lhs); - - // check values first - if (predicate(lhs_value, rhs_value)) - return rhs; - else if (predicate(rhs_value, lhs_value)) - return lhs; - - // values are equivalent, prefer smaller index - if (lhs_key < rhs_key) - return lhs; - else - return rhs; - } - }; // struct arg_max_f - - template - struct arg_minmax_f - { - Predicate predicate; - - typedef tuple pair_type; - typedef tuple two_pairs_type; - - typedef arg_min_f arg_min_t; - typedef arg_max_f arg_max_t; - - __host__ __device__ - arg_minmax_f(Predicate p) : predicate(p) - { - } - - two_pairs_type __device__ - operator()(two_pairs_type const &lhs, two_pairs_type const &rhs) - { - pair_type const &rhs_min = get<0>(rhs); - pair_type const &lhs_min = get<0>(lhs); - pair_type const &rhs_max = get<1>(rhs); - pair_type const &lhs_max = get<1>(lhs); - return thrust::make_tuple(arg_min_t(predicate)(lhs_min, rhs_min), - arg_max_t(predicate)(lhs_max, rhs_max)); - } - - struct duplicate_tuple - { - __device__ two_pairs_type - operator()(pair_type const &t) - { - return thrust::make_tuple(t, t); - } - }; - }; // struct arg_minmax_f - - template - cudaError_t THRUST_RUNTIME_FUNCTION - doit_step(void * d_temp_storage, - size_t & temp_storage_bytes, - InputIt input_it, - Size num_items, - ReductionOp reduction_op, - OutputIt output_it, - cudaStream_t stream, - bool debug_sync) - { - using core::AgentPlan; - using core::AgentLauncher; - using core::get_agent_plan; - using core::cuda_optional; - - typedef typename detail::make_unsigned_special::type UnsignedSize; - - if (num_items == 0) - return cudaErrorNotSupported; - - typedef AgentLauncher< - __reduce::ReduceAgent > - reduce_agent; - - typename reduce_agent::Plan reduce_plan = reduce_agent::get_plan(stream); - - cudaError_t status = cudaSuccess; - - - if (num_items <= reduce_plan.items_per_tile) - { - size_t vshmem_size = core::vshmem_size(reduce_plan.shared_memory_size, 1); - - // small, single tile size - if (d_temp_storage == NULL) - { - temp_storage_bytes = max(1, vshmem_size); - return status; - } - char *vshmem_ptr = vshmem_size > 0 ? 
(char*)d_temp_storage : NULL; - - reduce_agent ra(reduce_plan, num_items, stream, vshmem_ptr, "reduce_agent: single_tile only", debug_sync); - ra.launch(input_it, output_it, num_items, reduction_op); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - } - else - { - // regular size - cuda_optional sm_count = core::get_sm_count(); - CUDA_CUB_RET_IF_FAIL(sm_count.status()); - - // reduction will not use more cta counts than requested - cuda_optional max_blocks_per_sm = - reduce_agent:: - template get_max_blocks_per_sm, - cub::GridQueue, - ReductionOp>(reduce_plan); - CUDA_CUB_RET_IF_FAIL(max_blocks_per_sm.status()); - - - - int reduce_device_occupancy = (int)max_blocks_per_sm * sm_count; - - int sm_oversubscription = 5; - int max_blocks = reduce_device_occupancy * sm_oversubscription; - - cub::GridEvenShare even_share; - even_share.DispatchInit(num_items, max_blocks, - reduce_plan.items_per_tile); - - // we will launch at most "max_blocks" blocks in a grid - // so preallocate virtual shared memory storage for this if required - // - size_t vshmem_size = core::vshmem_size(reduce_plan.shared_memory_size, - max_blocks); - - // Temporary storage allocation requirements - void * allocations[3] = {NULL, NULL, NULL}; - size_t allocation_sizes[3] = - { - max_blocks * sizeof(T), // bytes needed for privatized block reductions - cub::GridQueue::AllocationSize(), // bytes needed for grid queue descriptor0 - vshmem_size // size of virtualized shared memory storage - }; - status = cub::AliasTemporaries(d_temp_storage, - temp_storage_bytes, - allocations, - allocation_sizes); - CUDA_CUB_RET_IF_FAIL(status); - if (d_temp_storage == NULL) - { - return status; - } - - T *d_block_reductions = (T*) allocations[0]; - cub::GridQueue queue(allocations[1]); - char *vshmem_ptr = vshmem_size > 0 ? 
(char *)allocations[2] : NULL; - - - // Get grid size for device_reduce_sweep_kernel - int reduce_grid_size = 0; - if (reduce_plan.grid_mapping == cub::GRID_MAPPING_RAKE) - { - // Work is distributed evenly - reduce_grid_size = even_share.grid_size; - } - else if (reduce_plan.grid_mapping == cub::GRID_MAPPING_DYNAMIC) - { - // Work is distributed dynamically - size_t num_tiles = (num_items + reduce_plan.items_per_tile - 1) / - reduce_plan.items_per_tile; - - // if not enough to fill the device with threadblocks - // then fill the device with threadblocks - reduce_grid_size = static_cast(min(num_tiles, static_cast(reduce_device_occupancy))); - - typedef AgentLauncher<__reduce::DrainAgent > drain_agent; - AgentPlan drain_plan = drain_agent::get_plan(); - drain_plan.grid_size = 1; - drain_agent da(drain_plan, stream, "__reduce::drain_agent", debug_sync); - da.launch(queue, num_items); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - } - else - { - CUDA_CUB_RET_IF_FAIL(cudaErrorNotSupported); - } - - reduce_plan.grid_size = reduce_grid_size; - reduce_agent ra(reduce_plan, stream, vshmem_ptr, "reduce_agent: regular size reduce", debug_sync); - ra.launch(input_it, - d_block_reductions, - num_items, - even_share, - queue, - reduction_op); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - - - typedef AgentLauncher< - __reduce::ReduceAgent > - reduce_agent_single; - - reduce_plan.grid_size = 1; - reduce_agent_single ra1(reduce_plan, stream, vshmem_ptr, "reduce_agent: single tile reduce", debug_sync); - - ra1.launch(d_block_reductions, output_it, reduce_grid_size, reduction_op); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - } - - return status; - } // func doit_step - - // this is an init-less reduce, needed for min/max-element functionality - // this will avoid copying the first value from device->host - template - THRUST_RUNTIME_FUNCTION - T extrema(execution_policy& policy, - InputIt first, - Size num_items, - BinaryOp binary_op, - T*) - { - size_t temp_storage_bytes = 0; - cudaStream_t stream = cuda_cub::stream(policy); - bool debug_sync = THRUST_DEBUG_SYNC_FLAG; - - cudaError_t status; - THRUST_INDEX_TYPE_DISPATCH(status, doit_step, num_items, - (NULL, temp_storage_bytes, first, num_items_fixed, - binary_op, reinterpret_cast(NULL), stream, - debug_sync)); - cuda_cub::throw_on_error(status, "extrema failed on 1st step"); - - size_t allocation_sizes[2] = {sizeof(T*), temp_storage_bytes}; - void * allocations[2] = {NULL, NULL}; - - size_t storage_size = 0; - status = core::alias_storage(NULL, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "extrema failed on 1st alias storage"); - - // Allocate temporary storage. - thrust::detail::temporary_array - tmp(policy, storage_size); - void *ptr = static_cast(tmp.data().get()); - - status = core::alias_storage(ptr, - storage_size, - allocations, - allocation_sizes); - cuda_cub::throw_on_error(status, "extrema failed on 2nd alias storage"); - - T* d_result = thrust::detail::aligned_reinterpret_cast(allocations[0]); - - THRUST_INDEX_TYPE_DISPATCH(status, doit_step, num_items, - (allocations[1], temp_storage_bytes, first, - num_items_fixed, binary_op, d_result, stream, - debug_sync)); - cuda_cub::throw_on_error(status, "extrema failed on 2nd step"); - - status = cuda_cub::synchronize(policy); - cuda_cub::throw_on_error(status, "extrema failed to synchronize"); - - T result = cuda_cub::get_value(policy, d_result); - - return result; - } - - template