diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 5 Enterprise.zip Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 5 Enterprise.zip Crack.md deleted file mode 100644 index 7d9a31959c00a19f31f1263b15663ebcd9eeff0c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Delphi 5 Enterprise.zip Crack.md +++ /dev/null @@ -1,22 +0,0 @@ -
-

How to Install Delphi 5 Enterprise on Windows 10 with a Crack

-

Delphi 5 Enterprise is a powerful and versatile software development tool that uses the Object Pascal programming language and provides an integrated development environment (IDE) for creating Windows desktop, database, web, and console applications. It was originally released by Borland in 1999, but it is still used by many developers who need to maintain legacy code or who prefer its features and performance.

-

However, installing Delphi 5 Enterprise on Windows 10 can be challenging, as the installer may hang or fail due to compatibility issues, especially when trying to install the Borland Database Engine (BDE). Moreover, you may need a crack to bypass the license verification and use the software without restrictions.

-

Delphi 5 Enterprise.zip crack


Downloadhttps://byltly.com/2uKwfU



-

In this article, we will show you how to install Delphi 5 Enterprise on Windows 10 with a crack, using a simple and effective method that does not require any special skills or tools. We will also provide you with a link to download the Delphi 5 Enterprise.zip file that contains the installer and the crack.

-

Step 1: Download the Delphi 5 Enterprise.zip file

-

The first step is to download the Delphi 5 Enterprise.zip file that contains the installer and the crack. You can find it on various websites that offer software downloads, such as Archive.org, SourceForge.net, or FilePlanet.com. Make sure you download the file from a reliable and trustworthy source, as some files may contain viruses or malware that can harm your computer.

-

Once you have downloaded the Delphi 5 Enterprise.zip file, extract it to a folder of your choice using a program like WinZip or WinRAR. You should see two files inside the folder: setup.exe and crack.exe. The setup.exe file is the installer for Delphi 5 Enterprise, and the crack.exe file is the program that will patch the software and remove the license verification.

-

Step 2: Install Delphi 5 Enterprise

-

The next step is to install Delphi 5 Enterprise on your Windows 10 computer. Before you start, make sure you log in as an administrator and turn off User Account Control (UAC) from the Control Panel. This will prevent any errors or interruptions during the installation process.

-

Then, double-click on the setup.exe file to launch the installer. Follow the instructions on the screen and choose the options that suit your preferences. When you reach the point where the installer asks you to install the BDE, uncheck the box and skip this step. The BDE is not compatible with Windows 10 and may cause problems if you try to install it.

-

-

After you finish installing Delphi 5 Enterprise, do not run it yet. You need to apply the crack first to activate it and use it without limitations.

-

Step 3: Apply the crack

-

The final step is to apply the crack to Delphi 5 Enterprise. To do this, double-click on the crack.exe file that you extracted from the Delphi 5 Enterprise.zip file. A window will open with a button that says "Crack". Click on it and wait for a few seconds until you see a message that says "Done". This means that the crack has successfully patched Delphi 5 Enterprise and removed the license verification.

-

Now you can run Delphi 5 Enterprise from your Start menu or desktop shortcut. You should see a splash screen that says "Delphi 5 Enterprise - Cracked by [name of cracker]". You can ignore this message and proceed to use the software as normal. You can also turn on UAC again if you want.

-

Conclusion

-

In this article, we have shown you how to install Delphi 5 Enterprise on Windows 10 with a crack, using a simple and effective method that does not require any special skills or tools. We have also provided you with a link to download the Delphi 5 Enterprise.zip file that contains the installer and the crack.

-

We hope this article has been helpful and informative for you. If you have any questions or comments, feel free to leave them below. Thank you for reading!

81aa517590
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Crack the Code Worksheets The Best Kept Secret for Kids Education.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Crack the Code Worksheets The Best Kept Secret for Kids Education.md deleted file mode 100644 index 99e22614c02a3e1a3bfa9cc622e0a0b4dc4a91fd..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Free Crack the Code Worksheets The Best Kept Secret for Kids Education.md +++ /dev/null @@ -1,25 +0,0 @@ - -

Free Crack the Code Worksheets for Kids

-

Do you want to challenge your kids' logic and problem-solving skills? Do you want to make learning fun and engaging? If yes, then you should try these free crack the code worksheets for kids!

-

Crack the code worksheets are a type of puzzle where kids have to use a key to decode a secret message. The key can be a letter, a number, a symbol, or a pattern. The secret message can be a word, a phrase, a joke, or a fact. The worksheets can cover various topics, such as math, spelling, vocabulary, science, history, and more.

-

free crack the code worksheets


DOWNLOAD - https://byltly.com/2uKvDk



-

Crack the code worksheets are great for kids of all ages and abilities. They can help kids develop their critical thinking, reasoning, and deduction skills. They can also improve their literacy, numeracy, and general knowledge. Plus, they are fun and satisfying to solve!

-

In this article, we will share some of the best free crack the code worksheets for kids that you can find online. You can download and print them for your personal or classroom use. You can also customize them to suit your kids' interests and needs.

-

Best Free Crack the Code Worksheets for Kids

-

Here are some of the best free crack the code worksheets for kids that we have found online:

- -

How to Create Your Own Crack the Code Worksheets

-

If you want to create your own crack the code worksheets for your kids or students, you can use online tools or software to generate them. Here are some of the options that you can try:

- -
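If you would rather make a worksheet from scratch, a short Python script is enough to generate a basic letter-to-number puzzle. The snippet below is only a sketch: the A=1 to Z=26 key, the example word list, and the plain-text output are assumptions you can swap for your own cipher, vocabulary words, and layout.

```python
import string

# Simple key: A=1, B=2, ..., Z=26
key = {letter: str(number) for number, letter in enumerate(string.ascii_uppercase, start=1)}

# Example words to encode -- replace with your own spelling or vocabulary list
words = ["PUZZLE", "SCIENCE", "HISTORY"]

def encode(word):
    # Turn each letter into its number, e.g. "CAT" -> "3-1-20"
    return "-".join(key[letter] for letter in word.upper())

print("Key: " + ", ".join(f"{letter}={number}" for letter, number in key.items()))
print()
for word in words:
    print(f"Decode the secret word: {encode(word)}")
```

Run it, paste the output into a document, and keep the key on a separate answer sheet if you want to make the puzzle a little harder.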

Conclusion ddb901b051
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Aqua Energizer Game Full Version Free Download HOT!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Aqua Energizer Game Full Version Free Download HOT!.md deleted file mode 100644 index bdc100ed24515bec94c21dee6149de4767fe133f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Aqua Energizer Game Full Version Free Download HOT!.md +++ /dev/null @@ -1,6 +0,0 @@ -

aqua energizer game full version free download


DOWNLOADhttps://imgfil.com/2uxXA8



- -Musafir full movies 720p torrent · HD Online Player (Shimla Mirchi tamil full movie hd 1080p free download) · aqua energizer game full version ... 1fdad05405
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/Bus Simulator Indonesia The Best Bus Simulator Game with APK Download.md b/spaces/1phancelerku/anime-remove-background/Bus Simulator Indonesia The Best Bus Simulator Game with APK Download.md deleted file mode 100644 index 8520a707f1a41b7c2cba2eb00a2ebb3afdf12feb..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Bus Simulator Indonesia The Best Bus Simulator Game with APK Download.md +++ /dev/null @@ -1,118 +0,0 @@ -
-

Bus Simulator Indonesia: A Fun and Authentic Game for Android

-

If you love driving games and want to experience what it's like to be a bus driver in Indonesia, then you should try Bus Simulator Indonesia. This is a free bus simulator game that lets you design your own livery, drive around authentic Indonesian cities and places, honk your horn, and enjoy the realistic 3D graphics. In this article, we will tell you more about this game, how to download it in APK format, and why you should play it.

-

What is Bus Simulator Indonesia?

-

Bus Simulator Indonesia (aka BUSSID) is a game developed by Maleo, an Indonesian game studio. It was released in 2017 and has been updated regularly since then. The game aims to replicate the experience of being a bus driver in Indonesia in a fun and authentic way. You can choose from a wide variety of customizable vehicles, pick up passengers, follow the traffic rules, and drive around different cities and places in Indonesia. You can also play online with other players and join convoys.

-

bus simulator indonesia game download in apk


DOWNLOAD - https://jinyurl.com/2uNQT2



-

Features of Bus Simulator Indonesia

-

Bus Simulator Indonesia has many features that make it one of the best bus simulator games on Android. Here are some of them:

-

Design your own livery

-

You can customize your bus with your own design and colors. You can also use your own 3D model using the vehicle mod system. This way, you can express your creativity and personality with your bus.

-

Easy and intuitive control

-

The game has very easy and intuitive control options. You can use the tilt, steering wheel, or buttons to control your bus. You can also adjust the camera angle and zoom level to suit your preference.

-

Authentic Indonesian cities and places

-

The game features authentic Indonesian cities and places, such as Jakarta, Surabaya, Bali, Yogyakarta, and more. You can see the landmarks, buildings, roads, bridges, and scenery that are unique to each location. You can also experience the weather, traffic, and culture of each place.

-

Indonesian buses

-

The game has a variety of Indonesian buses that you can choose from. You can drive buses from different brands, models, sizes, and types. You can also see the interior and exterior details of each bus.

-

Cool and fun honks

-

The game has cool and fun honks that you can use to communicate with other drivers and pedestrians. You can also hear the famous "Om Telolet Om!" (Uncle, honk your horn, uncle!) phrase that became a viral sensation in Indonesia.

-

-

High quality and detailed 3D graphics

-

The game has high quality and detailed 3D graphics that make it look realistic and immersive. You can see the shadows, reflections, textures, lighting, and animations of the game. You can also adjust the graphics settings to optimize the performance of your device.

-

No obstructive ads while driving

-

The game has no obstructive ads while driving. You can enjoy the game without being interrupted by annoying pop-ups or banners. The only ads you will see are on the billboards along the road, which add to the realism of the game.

-

Leaderboard and online multiplayer convoy

-

The game has a leaderboard system that ranks players based on their score, distance, speed, fuel consumption, and other factors. You can also join online multiplayer convoys and drive together with other players. Here is a quick overview of the game's pros and cons:
| Pros | Cons |
| --- | --- |
| Free to play and download | Some bugs and glitches may occur |
| Funny and realistic gameplay | Some features may require in-app purchases |
| Creative and customizable options | Some devices may not support the game well |
| Cultural and educational value | Some content may not be suitable for children |
| Social and competitive aspects | Some players may be rude or abusive online |
-

User reviews and ratings

-

Bus Simulator Indonesia has received mostly positive reviews and ratings from users. On the Google Play Store, it has a rating of 4.4 out of 5 stars based on over 1.5 million reviews; on the App Store, it has a rating of 4.6 out of 5 stars based on over 16,000 reviews. Here are some of the user comments:

-
"This game is awesome. I love the graphics, the sounds, the controls, and the customization. I feel like I'm really driving a bus in Indonesia. The online mode is also fun and exciting. I recommend this game to anyone who likes driving games."
-
"This game is very good and realistic. I like the Indonesian culture and scenery in this game. The buses are also very nice and detailed. The only problem is that sometimes the game crashes or freezes. I hope the developers can fix this issue."
-
"This game is very bad and boring. I hate the graphics, the sounds, the controls, and the customization. I feel like I'm wasting my time playing this game. The online mode is also laggy and annoying. I don't recommend this game to anyone who likes driving games."
-

Conclusion

-

Bus Simulator Indonesia is a fun and authentic game for Android that lets you drive a bus in Indonesia. You can design your own livery, drive around different cities and places, honk your horn, and enjoy the realistic 3D graphics. You can also play online with other players and join convoys. To download the game in APK format, you can follow the steps we provided above. You can also check the pros and cons and user reviews and ratings of the game before you play it.

-

We hope you enjoyed this article and learned something new about Bus Simulator Indonesia. If you have any questions or feedback, please let us know in the comments below. Thank you for reading!

-

Frequently Asked Questions (FAQs)

-
    -
1. What is the difference between an APK and a normal download?

An APK file is an Android Package file that contains all the files and data needed to install an app on an Android device. A normal download is a file that can be downloaded from an official app store or website.

2. Is Bus Simulator Indonesia safe to download?

Yes, Bus Simulator Indonesia is safe to download as long as you download it from a trusted source, such as the official website or app store. You should also scan the file with antivirus software before installing it.

3. How can I update Bus Simulator Indonesia?

You can update Bus Simulator Indonesia by downloading the latest version from the official website or app store. You can also enable the automatic update option in your device settings to get notified when a new version is available.

4. How can I contact the developers of Bus Simulator Indonesia?

You can contact the developers of Bus Simulator Indonesia by sending them an email at support@maleo.id or by visiting their Facebook page at https://www.facebook.com/bussimulatorid/.

5. How can I support Bus Simulator Indonesia?

You can support Bus Simulator Indonesia by giving it a positive review and rating on the app store or website, by sharing it with your friends and family, by making in-app purchases, or by donating to the developers via PayPal or bank transfer.

    -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/inference.py b/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/inference.py deleted file mode 100644 index 3e5156e8d649954837e397c2ff15ec29995e7502..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/inference.py +++ /dev/null @@ -1,35 +0,0 @@ -import argparse - -import cv2 -import numpy as np -import torch - -from backbones import get_model - - -@torch.no_grad() -def inference(weight, name, img): - if img is None: - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8) - else: - img = cv2.imread(img) - img = cv2.resize(img, (112, 112)) - - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - img = np.transpose(img, (2, 0, 1)) - img = torch.from_numpy(img).unsqueeze(0).float() - img.div_(255).sub_(0.5).div_(0.5) - net = get_model(name, fp16=False) - net.load_state_dict(torch.load(weight)) - net.eval() - feat = net(img).numpy() - print(feat) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description='PyTorch ArcFace Training') - parser.add_argument('--network', type=str, default='r50', help='backbone network') - parser.add_argument('--weight', type=str, default='') - parser.add_argument('--img', type=str, default=None) - args = parser.parse_args() - inference(args.weight, args.network, args.img) diff --git a/spaces/801artistry/RVC801/go-tensorboard.bat b/spaces/801artistry/RVC801/go-tensorboard.bat deleted file mode 100644 index cb81c17d3865513adec8eb0b832b7888cd1e4078..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/go-tensorboard.bat +++ /dev/null @@ -1,2 +0,0 @@ -python fixes/tensor-launch.py -pause \ No newline at end of file diff --git a/spaces/AFischer1985/German-Flan-T5/app.py b/spaces/AFischer1985/German-Flan-T5/app.py deleted file mode 100644 index e57fd5be8e38681ac67c6a51d0ebf01a3ecc2ef7..0000000000000000000000000000000000000000 --- a/spaces/AFischer1985/German-Flan-T5/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import gradio as gr -from transformers import pipeline -title= "German Flan-T5" -desc="Kommunikation mit flan-t5-large auf Deutsch wird intern ins Englische (opus-mt-de-en) und vom Englischen (opus-mt-en-de) übersetzt." 
-examples = [ - ["Erzähl mit eine Geschichte!",50,2,3,1,"Deutsch"], - ["Welche Blumen sollte man jemandem zum Valentinstag schenken?",50,1,0,1,"Deutsch"], - ["Please write a step by step recipe to make bolognese pasta!",50,2,3,2,"Englisch"] -] - -tDeEn = pipeline(model="Helsinki-NLP/opus-mt-de-en") -tEnDe = pipeline(model="Helsinki-NLP/opus-mt-en-de") -bot = pipeline(model="google/flan-t5-large") - -def solve(text,max_length,length_penalty,no_repeat_ngram_size,num_beams,language): - if(language=="Deutsch"): - text=tDeEn(text)[0]["translation_text"] - out=bot(text,max_length=max_length, length_penalty=length_penalty, no_repeat_ngram_size=no_repeat_ngram_size, num_beams=num_beams, early_stopping=True)[0]["generated_text"] - if(language=="Deutsch"): - out=tEnDe(out)[0]["translation_text"] - return out - -task = gr.Interface( - fn=solve, - inputs=[ - gr.Textbox(lines=5,max_lines=6,label="Frage"), - gr.Slider(minimum=1.0,maximum=200.0,value=50.0,step=1,interactive=True,label="max_length"), - gr.Slider(minimum=1.0,maximum=20.0,value=1.0,step=1,interactive=True,label="length_penalty"), - gr.Slider(minimum=0.0,maximum=5.0,value=3.0,step=1,interactive=True,label="no_repeat_ngram_size"), - gr.Slider(minimum=1.0,maximum=20.0,value=1.0,step=1,interactive=True,label="num_beams"), - gr.Dropdown(["Deutsch", "Englisch"],value="Deutsch"), - ], - outputs="text", - title=title, - description=desc, - examples=examples -) - -if __name__ == "__main__": - task.launch() \ No newline at end of file diff --git a/spaces/AILab-CVC/EvalCrafter/src/utils_display.py b/spaces/AILab-CVC/EvalCrafter/src/utils_display.py deleted file mode 100644 index 62e753fc5e1553a15a8611deebe641db5524ddb3..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/EvalCrafter/src/utils_display.py +++ /dev/null @@ -1,99 +0,0 @@ -from dataclasses import dataclass - -# These classes are for user facing column names, to avoid having to change them -# all around the code when a modif is needed -@dataclass -class ColumnContent: - name: str - type: str - displayed_by_default: bool - hidden: bool = False - -def fields(raw_class): - return [v for k, v in raw_class.__dict__.items() if k[:2] != "__" and k[-2:] != "__"] - -@dataclass(frozen=True) -class AutoEvalColumn: # Auto evals column - model_type_symbol = ColumnContent("T", "str", True) - model = ColumnContent("Model", "markdown", True) - average = ColumnContent("Average ⬆️", "number", True) - arc = ColumnContent("ARC", "number", True) - hellaswag = ColumnContent("HellaSwag", "number", True) - mmlu = ColumnContent("MMLU", "number", True) - truthfulqa = ColumnContent("TruthfulQA", "number", True) - model_type = ColumnContent("Type", "str", False) - precision = ColumnContent("Precision", "str", False, True) - license = ColumnContent("Hub License", "str", False) - params = ColumnContent("#Params (B)", "number", False) - likes = ColumnContent("Hub ❤️", "number", False) - revision = ColumnContent("Model sha", "str", False, False) - dummy = ColumnContent("model_name_for_query", "str", True) # dummy col to implement search bar (hidden by custom CSS) - -@dataclass(frozen=True) -class EloEvalColumn: # Elo evals column - model = ColumnContent("Model", "markdown", True) - gpt4 = ColumnContent("GPT-4 (all)", "number", True) - human_all = ColumnContent("Human (all)", "number", True) - human_instruct = ColumnContent("Human (instruct)", "number", True) - human_code_instruct = ColumnContent("Human (code-instruct)", "number", True) - - -@dataclass(frozen=True) -class EvalQueueColumn: # Queue column - model 
= ColumnContent("model", "markdown", True) - revision = ColumnContent("revision", "str", True) - private = ColumnContent("private", "bool", True) - precision = ColumnContent("precision", "bool", True) - weight_type = ColumnContent("weight_type", "str", "Original") - status = ColumnContent("status", "str", True) - -LLAMAS = ["huggingface/llama-7b", "huggingface/llama-13b", "huggingface/llama-30b", "huggingface/llama-65b"] - - -KOALA_LINK = "https://huggingface.co/TheBloke/koala-13B-HF" -VICUNA_LINK = "https://huggingface.co/lmsys/vicuna-13b-delta-v1.1" -OASST_LINK = "https://huggingface.co/OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5" -DOLLY_LINK = "https://huggingface.co/databricks/dolly-v2-12b" -MODEL_PAGE = "https://huggingface.co/models" -LLAMA_LINK = "https://ai.facebook.com/blog/large-language-model-llama-meta-ai/" -VICUNA_LINK = "https://huggingface.co/CarperAI/stable-vicuna-13b-delta" -ALPACA_LINK = "https://crfm.stanford.edu/2023/03/13/alpaca.html" - - -def model_hyperlink(link, model_name): - return f'{model_name}' - - -def make_clickable_model(model_name): - link = f"https://huggingface.co/{model_name}" - - if model_name in LLAMAS: - link = LLAMA_LINK - model_name = model_name.split("/")[1] - elif model_name == "HuggingFaceH4/stable-vicuna-13b-2904": - link = VICUNA_LINK - model_name = "stable-vicuna-13b" - elif model_name == "HuggingFaceH4/llama-7b-ift-alpaca": - link = ALPACA_LINK - model_name = "alpaca-13b" - if model_name == "dolly-12b": - link = DOLLY_LINK - elif model_name == "vicuna-13b": - link = VICUNA_LINK - elif model_name == "koala-13b": - link = KOALA_LINK - elif model_name == "oasst-12b": - link = OASST_LINK - #else: - # link = MODEL_PAGE - - return model_hyperlink(link, model_name) - -def styled_error(error): - return f"

{error}

" - -def styled_warning(warn): - return f"

{warn}

" - -def styled_message(message): - return f"

{message}

" \ No newline at end of file diff --git a/spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/README.md b/spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/README.md deleted file mode 100644 index 6381a332357f6b7fb4d9b010a02f74b0f9ee0835..0000000000000000000000000000000000000000 --- a/spaces/AISuperheroes/01ST-CSV-Dataset-Analyzer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 01ST CSV Dataset Analyzer -emoji: 🔥 -colorFrom: red -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/r/[id]/+page.server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/r/[id]/+page.server.ts deleted file mode 100644 index e09e70bac0c3ec133ea691d4c808160973ef2a7b..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/r/[id]/+page.server.ts +++ /dev/null @@ -1,34 +0,0 @@ -import type { PageServerLoad } from "./$types"; -import { collections } from "$lib/server/database"; -import { error } from "@sveltejs/kit"; -import type { WebSearchMessageResult } from "$lib/types/WebSearch"; - -export const load: PageServerLoad = async ({ params }) => { - /*const conversation = await collections.sharedConversations.findOne({ - _id: params.id, - }); - - if (!conversation) { - throw error(404, "Conversation not found"); - } - - const webSearchesId = conversation.messages - .filter((message) => message.webSearchId) - .map((message) => new ObjectId(message.webSearchId)); - - const results = await collections.webSearches.find({ _id: { $in: webSearchesId } }).toArray(); - - const searches = Object.fromEntries( - results.map((x) => [ - x._id.toString(), - [...x.messages, { type: "result", id: x._id.toString() } satisfies WebSearchMessageResult], - ]) - ); - - return { - messages: conversation.messages, - title: conversation.title, - model: conversation.model, - searches, - };*/ -}; diff --git a/spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/readme.md b/spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/readme.md deleted file mode 100644 index b022b820b5af939f3babe0204d914662252a4409..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/agentverse/tasks/simulation/sde_team/readme.md +++ /dev/null @@ -1,98 +0,0 @@ -# SDE team - -In this task, LLMs work as a software development team to solve code implementation problem. We have simulated two scenarios *sde_team/sde_team_2players* and *sde_team/sde_team_3players*. - -The performance on [HumanEval](https://github.com/openai/human-eval) is shown below. - -| Methods | Pass@1 HumanEval | -|---------------------------------|-----------| -| Codex (175B)* | 0.47 | -|     + CodeT* | 0.658 | -| PaLM Coder (540B)* | 0.36 | -| GPT-4* | 0.67 | -| ChatGPT (gpt-3.5-turbo)* | 0.573 | -|     + Self-collaboration* | 0.744 | -|     + Our *sde_team/sde_team_2players* | **0.799** | - -*: Results are from [Self-collaboration](https://arxiv.org/abs/2304.07590). The methods in the table all employed the provided unit tests. - -Our *sde_team/sde_team_2players* shares the similar spirit as Self-collaboration at the moment. We are working to introduce more features in this repo! - - -## *sde_team/sde_team_2players* - -In this case, we are simulating a code generation problem that a python function body is required to be generated given function signature, doc string and unit tests. 
In the following, we will elaborate the details. - -### Roles - -Detailed role description and prompts can be found in `config.yaml` - -#### *code writer* - -Code writer writes the code to satisfy the given requirement. The requirement is given in the \ field of the prompt. The code writer first thinks about the task (the thoughts written in \) and then write the code in \. - -The submitted code will be tested automatically on a series of unit tests. Then the feedback (in \) together with a professional code review (in \) will be returned. Then code writer will leverage this information to refine the previously submitted code. The refinement will take multiple iterations. - -#### *code reviewer* - -Code reviewer will write professional review for the submitted code. The submitted code will be given in \, the execution feedback of unit tests will be given in \ and the review will be composed in \. - -#### dummy *code tester* -Code tester is a dummy agent. In the current implementation, unit tests are executed via the local python code `agentverse/environments/rules/selector/code_api.py`. We will integrate the execution tools to BMTools soon. - -### How to run the simulation - -#### Provide problem and unit tests - -The code problem and unit tests should be given in `agentverse/tasks/sde_team/sde_team_2players/code_problem.json`. Here is an example. - -```json -{ - "problem": "from typing import List\n\n\ndef separate_paren_groups(paren_string: str) -> List[str]:\n \"\"\" Input to this function is a string containing multiple groups of nested parentheses. Your goal is to\n separate those group into separate strings and return the list of those.\n Separate groups are balanced (each open brace is properly closed) and not nested within each other\n Ignore any spaces in the input string.\n >>> separate_paren_groups('( ) (( )) (( )( ))')\n ['()', '(())', '(()())']\n \"\"\"\n", - "unit_tests": [ - "assert separate_paren_groups('(()()) ((())) () ((())()())') == ['(()())', '((()))', '()', '((())()())']", - "assert separate_paren_groups('() (()) ((())) (((())))') == ['()', '(())', '((()))', '(((())))']", - "assert separate_paren_groups('(()(())((())))') == ['(()(())((())))']", - "assert separate_paren_groups('( ) (( )) (( )( ))') == ['()', '(())', '(()())']" - ] -} -``` - -#### Build the configuration file - -Run `agentverse/tasks/sde_team/sde_team_2players/build_config.py` to generate `config.yaml`. - -```bash -cd agentverse/tasks/sde_team/sde_team_2players/ -python build_config.py -``` - -#### Run the session - -After generating `config.yaml`, run the `main.py` to start the task. - -```python -import os -from agentverse.agentverse import AgentVerse -from argparse import ArgumentParser - -parser = ArgumentParser() -parser.add_argument("--task", type=str, default="sde_team/sde_team_2players") -parser.add_argument("--tasks_dir", type=str, default=os.path.join( - os.path.dirname(__file__), "agentverse", "tasks")) - -args = parser.parse_args() -agentverse = AgentVerse.from_task(args.task, args.tasks_dir) -agentverse.run() -``` - - -## *sde_team/sde_team_3players* - -Different from *sde_team/sde_team_2players*, we additionally introduce a role to automatically generate unit tests. - -- *unit test generator*: generate a series of unit test cases for the coding problem. - -### Stay tuned - -The generated unit tests are not always perfect, as they may not be correct. We plan to incorporate tools to raise the correctness of the generated cases. 
\ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements-plugin.d.ts deleted file mode 100644 index edeef7abaf9c27ed1a9258269be574a1ac3eee3c..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/ymlachievements-plugin.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import Achievements from './ymlachievements'; - -export default class AchievementsPlugin extends Phaser.Plugins.BasePlugin { - add(): Achievements; - -} \ No newline at end of file diff --git a/spaces/AlanMars/QYL-AI-Space/assets/external-scripts.js b/spaces/AlanMars/QYL-AI-Space/assets/external-scripts.js deleted file mode 100644 index 8d0352669045537af5698b1824dbc1dba21df478..0000000000000000000000000000000000000000 --- a/spaces/AlanMars/QYL-AI-Space/assets/external-scripts.js +++ /dev/null @@ -1,2 +0,0 @@ - -// external javascript here diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/criteria/l2_loss.py b/spaces/Amrrs/DragGan-Inversion/PTI/criteria/l2_loss.py deleted file mode 100644 index c7ac2753b02dfa9d21ccf03fa3b87b9d6fc3f01d..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/criteria/l2_loss.py +++ /dev/null @@ -1,8 +0,0 @@ -import torch - -l2_criterion = torch.nn.MSELoss(reduction='mean') - - -def l2_loss(real_images, generated_images): - loss = l2_criterion(real_images, generated_images) - return loss diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/adapt_a_model.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/adapt_a_model.md deleted file mode 100644 index f1af5fca57a2fc117c5601862875b853760cdd21..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/adapt_a_model.md +++ /dev/null @@ -1,42 +0,0 @@ -# Adapt a model to a new task - -Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task. - -This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained [`UNet2DConditionModel`]. - -## Configure UNet2DConditionModel parameters - -A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, load a pretrained text-to-image model like [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) and take a look at the number of `in_channels`: - -```py -from diffusers import StableDiffusionPipeline - -pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") -pipeline.unet.config["in_channels"] -4 -``` - -Inpainting requires 9 channels in the input sample. You can check this value in a pretrained inpainting model like [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting): - -```py -from diffusers import StableDiffusionPipeline - -pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting") -pipeline.unet.config["in_channels"] -9 -``` - -To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9. 
- -Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Changing the number of `in_channels` means you need to set `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error because the shape is different now. - -```py -from diffusers import UNet2DConditionModel - -model_id = "runwayml/stable-diffusion-v1-5" -unet = UNet2DConditionModel.from_pretrained( - model_id, subfolder="unet", in_channels=9, low_cpu_mem_usage=False, ignore_mismatched_sizes=True -) -``` - -The pretrained weights of the other components from the text-to-image model are initialized from their checkpoints, but the input channel weights (`conv_in.weight`) of the `unet` are randomly initialized. It is important to finetune the model for inpainting because otherwise the model returns noise. diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/onnx_utils.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/onnx_utils.py deleted file mode 100644 index 07c32e4e84bfee0241733a077fef9c0dec06905e..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/onnx_utils.py +++ /dev/null @@ -1,212 +0,0 @@ -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import os -import shutil -from pathlib import Path -from typing import Optional, Union - -import numpy as np -from huggingface_hub import hf_hub_download - -from ..utils import ONNX_EXTERNAL_WEIGHTS_NAME, ONNX_WEIGHTS_NAME, is_onnx_available, logging - - -if is_onnx_available(): - import onnxruntime as ort - - -logger = logging.get_logger(__name__) - -ORT_TO_NP_TYPE = { - "tensor(bool)": np.bool_, - "tensor(int8)": np.int8, - "tensor(uint8)": np.uint8, - "tensor(int16)": np.int16, - "tensor(uint16)": np.uint16, - "tensor(int32)": np.int32, - "tensor(uint32)": np.uint32, - "tensor(int64)": np.int64, - "tensor(uint64)": np.uint64, - "tensor(float16)": np.float16, - "tensor(float)": np.float32, - "tensor(double)": np.float64, -} - - -class OnnxRuntimeModel: - def __init__(self, model=None, **kwargs): - logger.info("`diffusers.OnnxRuntimeModel` is experimental and might change in the future.") - self.model = model - self.model_save_dir = kwargs.get("model_save_dir", None) - self.latest_model_name = kwargs.get("latest_model_name", ONNX_WEIGHTS_NAME) - - def __call__(self, **kwargs): - inputs = {k: np.array(v) for k, v in kwargs.items()} - return self.model.run(None, inputs) - - @staticmethod - def load_model(path: Union[str, Path], provider=None, sess_options=None): - """ - Loads an ONNX Inference session with an ExecutionProvider. 
Default provider is `CPUExecutionProvider` - - Arguments: - path (`str` or `Path`): - Directory from which to load - provider(`str`, *optional*): - Onnxruntime execution provider to use for loading the model, defaults to `CPUExecutionProvider` - """ - if provider is None: - logger.info("No onnxruntime provider specified, using CPUExecutionProvider") - provider = "CPUExecutionProvider" - - return ort.InferenceSession(path, providers=[provider], sess_options=sess_options) - - def _save_pretrained(self, save_directory: Union[str, Path], file_name: Optional[str] = None, **kwargs): - """ - Save a model and its configuration file to a directory, so that it can be re-loaded using the - [`~optimum.onnxruntime.modeling_ort.ORTModel.from_pretrained`] class method. It will always save the - latest_model_name. - - Arguments: - save_directory (`str` or `Path`): - Directory where to save the model file. - file_name(`str`, *optional*): - Overwrites the default model file name from `"model.onnx"` to `file_name`. This allows you to save the - model with a different name. - """ - model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME - - src_path = self.model_save_dir.joinpath(self.latest_model_name) - dst_path = Path(save_directory).joinpath(model_file_name) - try: - shutil.copyfile(src_path, dst_path) - except shutil.SameFileError: - pass - - # copy external weights (for models >2GB) - src_path = self.model_save_dir.joinpath(ONNX_EXTERNAL_WEIGHTS_NAME) - if src_path.exists(): - dst_path = Path(save_directory).joinpath(ONNX_EXTERNAL_WEIGHTS_NAME) - try: - shutil.copyfile(src_path, dst_path) - except shutil.SameFileError: - pass - - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - **kwargs, - ): - """ - Save a model to a directory, so that it can be re-loaded using the [`~OnnxModel.from_pretrained`] class - method.: - - Arguments: - save_directory (`str` or `os.PathLike`): - Directory to which to save. Will be created if it doesn't exist. - """ - if os.path.isfile(save_directory): - logger.error(f"Provided path ({save_directory}) should be a directory, not a file") - return - - os.makedirs(save_directory, exist_ok=True) - - # saving model weights/files - self._save_pretrained(save_directory, **kwargs) - - @classmethod - def _from_pretrained( - cls, - model_id: Union[str, Path], - use_auth_token: Optional[Union[bool, str, None]] = None, - revision: Optional[Union[str, None]] = None, - force_download: bool = False, - cache_dir: Optional[str] = None, - file_name: Optional[str] = None, - provider: Optional[str] = None, - sess_options: Optional["ort.SessionOptions"] = None, - **kwargs, - ): - """ - Load a model from a directory or the HF Hub. - - Arguments: - model_id (`str` or `Path`): - Directory from which to load - use_auth_token (`str` or `bool`): - Is needed to load models from a private or gated repository - revision (`str`): - Revision is the specific model version to use. It can be a branch name, a tag name, or a commit id - cache_dir (`Union[str, Path]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - force_download (`bool`, *optional*, defaults to `False`): - Whether or not to force the (re-)download of the model weights and configuration files, overriding the - cached versions if they exist. - file_name(`str`): - Overwrites the default model file name from `"model.onnx"` to `file_name`. 
This allows you to load - different model files from the same repository or directory. - provider(`str`): - The ONNX runtime provider, e.g. `CPUExecutionProvider` or `CUDAExecutionProvider`. - kwargs (`Dict`, *optional*): - kwargs will be passed to the model during initialization - """ - model_file_name = file_name if file_name is not None else ONNX_WEIGHTS_NAME - # load model from local directory - if os.path.isdir(model_id): - model = OnnxRuntimeModel.load_model( - os.path.join(model_id, model_file_name), provider=provider, sess_options=sess_options - ) - kwargs["model_save_dir"] = Path(model_id) - # load model from hub - else: - # download model - model_cache_path = hf_hub_download( - repo_id=model_id, - filename=model_file_name, - use_auth_token=use_auth_token, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - ) - kwargs["model_save_dir"] = Path(model_cache_path).parent - kwargs["latest_model_name"] = Path(model_cache_path).name - model = OnnxRuntimeModel.load_model(model_cache_path, provider=provider, sess_options=sess_options) - return cls(model=model, **kwargs) - - @classmethod - def from_pretrained( - cls, - model_id: Union[str, Path], - force_download: bool = True, - use_auth_token: Optional[str] = None, - cache_dir: Optional[str] = None, - **model_kwargs, - ): - revision = None - if len(str(model_id).split("@")) == 2: - model_id, revision = model_id.split("@") - - return cls._from_pretrained( - model_id=model_id, - revision=revision, - cache_dir=cache_dir, - force_download=force_download, - use_auth_token=use_auth_token, - **model_kwargs, - ) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py deleted file mode 100644 index 61457e6ca01f04a5efb3404869897fc3498b16aa..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/altdiffusion/test_alt_diffusion_img2img.py +++ /dev/null @@ -1,300 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import random -import unittest - -import numpy as np -import torch -from transformers import XLMRobertaTokenizer - -from diffusers import ( - AltDiffusionImg2ImgPipeline, - AutoencoderKL, - PNDMScheduler, - UNet2DConditionModel, -) -from diffusers.image_processor import VaeImageProcessor -from diffusers.pipelines.alt_diffusion.modeling_roberta_series import ( - RobertaSeriesConfig, - RobertaSeriesModelWithTransformation, -) -from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device -from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu - - -enable_full_determinism() - - -class AltDiffusionImg2ImgPipelineFastTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - @property - def dummy_image(self): - batch_size = 1 - num_channels = 3 - sizes = (32, 32) - - image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device) - return image - - @property - def dummy_cond_unet(self): - torch.manual_seed(0) - model = UNet2DConditionModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=4, - out_channels=4, - down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"), - up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"), - cross_attention_dim=32, - ) - return model - - @property - def dummy_vae(self): - torch.manual_seed(0) - model = AutoencoderKL( - block_out_channels=[32, 64], - in_channels=3, - out_channels=3, - down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"], - up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"], - latent_channels=4, - ) - return model - - @property - def dummy_text_encoder(self): - torch.manual_seed(0) - config = RobertaSeriesConfig( - hidden_size=32, - project_dim=32, - intermediate_size=37, - layer_norm_eps=1e-05, - num_attention_heads=4, - num_hidden_layers=5, - pad_token_id=1, - vocab_size=5006, - ) - return RobertaSeriesModelWithTransformation(config) - - @property - def dummy_extractor(self): - def extract(*args, **kwargs): - class Out: - def __init__(self): - self.pixel_values = torch.ones([0]) - - def to(self, device): - self.pixel_values.to(device) - return self - - return Out() - - return extract - - def test_stable_diffusion_img2img_default_case(self): - device = "cpu" # ensure determinism for the device-dependent torch.Generator - unet = self.dummy_cond_unet - scheduler = PNDMScheduler(skip_prk_steps=True) - vae = self.dummy_vae - bert = self.dummy_text_encoder - tokenizer = XLMRobertaTokenizer.from_pretrained("hf-internal-testing/tiny-xlm-roberta") - tokenizer.model_max_length = 77 - - init_image = self.dummy_image.to(device) - init_image = init_image / 2 + 0.5 - - # make sure here that pndm scheduler skips prk - alt_pipe = AltDiffusionImg2ImgPipeline( - unet=unet, - scheduler=scheduler, - vae=vae, - text_encoder=bert, - tokenizer=tokenizer, - safety_checker=None, - feature_extractor=self.dummy_extractor, - ) - alt_pipe.image_processor = VaeImageProcessor(vae_scale_factor=alt_pipe.vae_scale_factor, do_normalize=True) - alt_pipe = alt_pipe.to(device) - alt_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - generator = torch.Generator(device=device).manual_seed(0) - output = alt_pipe( - [prompt], - generator=generator, - guidance_scale=6.0, - num_inference_steps=2, - output_type="np", - image=init_image, - ) - - image = output.images - - generator = 
torch.Generator(device=device).manual_seed(0) - image_from_tuple = alt_pipe( - [prompt], - generator=generator, - guidance_scale=6.0, - num_inference_steps=2, - output_type="np", - image=init_image, - return_dict=False, - )[0] - - image_slice = image[0, -3:, -3:, -1] - image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.4427, 0.3731, 0.4249, 0.4941, 0.4546, 0.4148, 0.4193, 0.4666, 0.4499]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-3 - assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 5e-3 - - @unittest.skipIf(torch_device != "cuda", "This test requires a GPU") - def test_stable_diffusion_img2img_fp16(self): - """Test that stable diffusion img2img works with fp16""" - unet = self.dummy_cond_unet - scheduler = PNDMScheduler(skip_prk_steps=True) - vae = self.dummy_vae - bert = self.dummy_text_encoder - tokenizer = XLMRobertaTokenizer.from_pretrained("hf-internal-testing/tiny-xlm-roberta") - tokenizer.model_max_length = 77 - - init_image = self.dummy_image.to(torch_device) - - # put models in fp16 - unet = unet.half() - vae = vae.half() - bert = bert.half() - - # make sure here that pndm scheduler skips prk - alt_pipe = AltDiffusionImg2ImgPipeline( - unet=unet, - scheduler=scheduler, - vae=vae, - text_encoder=bert, - tokenizer=tokenizer, - safety_checker=None, - feature_extractor=self.dummy_extractor, - ) - alt_pipe.image_processor = VaeImageProcessor(vae_scale_factor=alt_pipe.vae_scale_factor, do_normalize=False) - alt_pipe = alt_pipe.to(torch_device) - alt_pipe.set_progress_bar_config(disable=None) - - prompt = "A painting of a squirrel eating a burger" - generator = torch.manual_seed(0) - image = alt_pipe( - [prompt], - generator=generator, - num_inference_steps=2, - output_type="np", - image=init_image, - ).images - - assert image.shape == (1, 32, 32, 3) - - @unittest.skipIf(torch_device != "cuda", "This test requires a GPU") - def test_stable_diffusion_img2img_pipeline_multiple_of_8(self): - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/img2img/sketch-mountains-input.jpg" - ) - # resize to resolution that is divisible by 8 but not 16 or 32 - init_image = init_image.resize((760, 504)) - - model_id = "BAAI/AltDiffusion" - pipe = AltDiffusionImg2ImgPipeline.from_pretrained( - model_id, - safety_checker=None, - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - prompt = "A fantasy landscape, trending on artstation" - - generator = torch.manual_seed(0) - output = pipe( - prompt=prompt, - image=init_image, - strength=0.75, - guidance_scale=7.5, - generator=generator, - output_type="np", - ) - image = output.images[0] - - image_slice = image[255:258, 383:386, -1] - - assert image.shape == (504, 760, 3) - expected_slice = np.array([0.9358, 0.9397, 0.9599, 0.9901, 1.0000, 1.0000, 0.9882, 1.0000, 1.0000]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - -@slow -@require_torch_gpu -class AltDiffusionImg2ImgPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - torch.cuda.empty_cache() - - def test_stable_diffusion_img2img_pipeline_default(self): - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/img2img/sketch-mountains-input.jpg" - ) - init_image = 
init_image.resize((768, 512)) - expected_image = load_numpy( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/img2img/fantasy_landscape_alt.npy" - ) - - model_id = "BAAI/AltDiffusion" - pipe = AltDiffusionImg2ImgPipeline.from_pretrained( - model_id, - safety_checker=None, - ) - pipe.to(torch_device) - pipe.set_progress_bar_config(disable=None) - pipe.enable_attention_slicing() - - prompt = "A fantasy landscape, trending on artstation" - - generator = torch.manual_seed(0) - output = pipe( - prompt=prompt, - image=init_image, - strength=0.75, - guidance_scale=7.5, - generator=generator, - output_type="np", - ) - image = output.images[0] - - assert image.shape == (512, 768, 3) - # img2img is flaky across GPUs even in fp32, so using MAE here - assert np.abs(expected_image - image).max() < 1e-2 diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/resnest.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/resnest.py deleted file mode 100644 index 48e1d8bfa47348a13f0da0b9ecf32354fa270340..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/resnest.py +++ /dev/null @@ -1,317 +0,0 @@ -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNetV1d - - -class RSoftmax(nn.Module): - """Radix Softmax module in ``SplitAttentionConv2d``. - - Args: - radix (int): Radix of input. - groups (int): Groups of input. - """ - - def __init__(self, radix, groups): - super().__init__() - self.radix = radix - self.groups = groups - - def forward(self, x): - batch = x.size(0) - if self.radix > 1: - x = x.view(batch, self.groups, self.radix, -1).transpose(1, 2) - x = F.softmax(x, dim=1) - x = x.reshape(batch, -1) - else: - x = torch.sigmoid(x) - return x - - -class SplitAttentionConv2d(nn.Module): - """Split-Attention Conv2d in ResNeSt. - - Args: - in_channels (int): Number of channels in the input feature map. - channels (int): Number of intermediate channels. - kernel_size (int | tuple[int]): Size of the convolution kernel. - stride (int | tuple[int]): Stride of the convolution. - padding (int | tuple[int]): Zero-padding added to both sides of - dilation (int | tuple[int]): Spacing between kernel elements. - groups (int): Number of blocked connections from input channels to - output channels. - groups (int): Same as nn.Conv2d. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels. Default: 4. - conv_cfg (dict): Config dict for convolution layer. Default: None, - which means using conv2d. - norm_cfg (dict): Config dict for normalization layer. Default: None. - dcn (dict): Config dict for DCN. Default: None. 
- """ - - def __init__(self, - in_channels, - channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - radix=2, - reduction_factor=4, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None): - super(SplitAttentionConv2d, self).__init__() - inter_channels = max(in_channels * radix // reduction_factor, 32) - self.radix = radix - self.groups = groups - self.channels = channels - self.with_dcn = dcn is not None - self.dcn = dcn - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_dcn and not fallback_on_stride: - assert conv_cfg is None, 'conv_cfg must be None for DCN' - conv_cfg = dcn - self.conv = build_conv_layer( - conv_cfg, - in_channels, - channels * radix, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups * radix, - bias=False) - # To be consistent with original implementation, starting from 0 - self.norm0_name, norm0 = build_norm_layer( - norm_cfg, channels * radix, postfix=0) - self.add_module(self.norm0_name, norm0) - self.relu = nn.ReLU(inplace=True) - self.fc1 = build_conv_layer( - None, channels, inter_channels, 1, groups=self.groups) - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, inter_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.fc2 = build_conv_layer( - None, inter_channels, channels * radix, 1, groups=self.groups) - self.rsoftmax = RSoftmax(radix, groups) - - @property - def norm0(self): - """nn.Module: the normalization layer named "norm0" """ - return getattr(self, self.norm0_name) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def forward(self, x): - x = self.conv(x) - x = self.norm0(x) - x = self.relu(x) - - batch, rchannel = x.shape[:2] - batch = x.size(0) - if self.radix > 1: - splits = x.view(batch, self.radix, -1, *x.shape[2:]) - gap = splits.sum(dim=1) - else: - gap = x - gap = F.adaptive_avg_pool2d(gap, 1) - gap = self.fc1(gap) - - gap = self.norm1(gap) - gap = self.relu(gap) - - atten = self.fc2(gap) - atten = self.rsoftmax(atten).view(batch, -1, 1, 1) - - if self.radix > 1: - attens = atten.view(batch, self.radix, -1, *atten.shape[2:]) - out = torch.sum(attens * splits, dim=1) - else: - out = atten * x - return out.contiguous() - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeSt. - - Args: - inplane (int): Input planes of this block. - planes (int): Middle planes of this block. - groups (int): Groups of conv2. - base_width (int): Base of width in terms of base channels. Default: 4. - base_channels (int): Base of channels for calculating width. - Default: 64. - radix (int): Radix of SpltAtConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Key word arguments for base class. 
- """ - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - """Bottleneck block for ResNeSt.""" - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.avg_down_stride = avg_down_stride and self.conv2_stride > 1 - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - self.with_modulated_dcn = False - self.conv2 = SplitAttentionConv2d( - width, - width, - kernel_size=3, - stride=1 if self.avg_down_stride else self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - radix=radix, - reduction_factor=reduction_factor, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - dcn=self.dcn) - delattr(self, self.norm2_name) - - if self.avg_down_stride: - self.avd_layer = nn.AvgPool2d(3, self.conv2_stride, padding=1) - - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - def forward(self, x): - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - - if self.avg_down_stride: - out = self.avd_layer(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNeSt(ResNetV1d): - """ResNeSt backbone. - - Args: - groups (int): Number of groups of Bottleneck. Default: 1 - base_width (int): Base width of Bottleneck. Default: 4 - radix (int): Radix of SplitAttentionConv2d. Default: 2 - reduction_factor (int): Reduction factor of inter_channels in - SplitAttentionConv2d. Default: 4. - avg_down_stride (bool): Whether to use average pool for stride in - Bottleneck. Default: True. - kwargs (dict): Keyword arguments for ResNet. 
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)), - 200: (Bottleneck, (3, 24, 36, 3)) - } - - def __init__(self, - groups=1, - base_width=4, - radix=2, - reduction_factor=4, - avg_down_stride=True, - **kwargs): - self.groups = groups - self.base_width = base_width - self.radix = radix - self.reduction_factor = reduction_factor - self.avg_down_stride = avg_down_stride - super(ResNeSt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - radix=self.radix, - reduction_factor=self.reduction_factor, - avg_down_stride=self.avg_down_stride, - **kwargs) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fovea.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fovea.py deleted file mode 100644 index 22a578efffbd108db644d907bae95c7c8df31f2e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/fovea.py +++ /dev/null @@ -1,17 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class FOVEA(SingleStageDetector): - """Implementation of `FoveaBox `_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(FOVEA, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/AnnaPalatkina/fine_grained_SA/sentiment_wrapper.py b/spaces/AnnaPalatkina/fine_grained_SA/sentiment_wrapper.py deleted file mode 100644 index 82de3d48aa2b2c6d6e051b9a81eb831461e153fe..0000000000000000000000000000000000000000 --- a/spaces/AnnaPalatkina/fine_grained_SA/sentiment_wrapper.py +++ /dev/null @@ -1,100 +0,0 @@ -from transformers import BertModel, BertTokenizer, AdamW, get_linear_schedule_with_warmup -from sklearn.metrics import classification_report, f1_score -from torch.utils.data import Dataset, DataLoader -from tqdm.auto import tqdm -from config import params -from torch import nn -import pandas as pd -import numpy as np -import warnings -import random -import torch -import os - -device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") - - -class Dataset(Dataset): - def __init__(self, texts, max_len): - self.texts = texts - self.tokenizer = BertTokenizer.from_pretrained(params['pretrained_model_name']) - self.max_len = max_len - - def __len__(self): - return len(self.texts) - - def __getitem__(self, item): - text = str(self.texts[item]) - encoding = self.tokenizer.encode_plus( - text, - add_special_tokens=True, - max_length=self.max_len, - return_token_type_ids=False, - pad_to_max_length=True, - return_attention_mask=True, - truncation=True, - return_tensors='pt', - ) - - return { - 'text': text, - 'input_ids': encoding['input_ids'].flatten(), - 'attention_mask': encoding['attention_mask'].flatten(), - } - -class SentimentClassifier(nn.Module): - - def __init__(self, n_classes): - super(SentimentClassifier, self).__init__() - self.bert = BertModel.from_pretrained(params['pretrained_model_name']) - self.drop = nn.Dropout(params['dropout']) - self.out = nn.Linear(self.bert.config.hidden_size, n_classes) - - def forward(self, input_ids, attention_mask): - - bert_output = self.bert( - input_ids=input_ids, - attention_mask=attention_mask, - return_dict=False - ) - last_hidden_state, 
pooled_output = bert_output - output = self.drop(pooled_output) - return self.out(output) - - -class PredictionModel: - - def __init__(self): - self.model = SentimentClassifier(n_classes = 6) - self.loss_fn = nn.CrossEntropyLoss().to(device) - - def create_data_loader(self, X_test, max_len, batch_size): - ds = Dataset( - texts= np.array(X_test), - max_len=max_len - ) - return DataLoader( - ds, - batch_size=batch_size - ) - - def predict(self, X_test: list): - - data_loader = self.create_data_loader(X_test, params['max_length'], params['batch_size']) - self.model.load_state_dict(torch.load(params['path_to_model_bin'])) - self.model.eval() - losses = [] - y_pred = [] - with torch.no_grad(): - for d in data_loader: - input_ids = d["input_ids"].to(device) - attention_mask = d["attention_mask"].to(device) - outputs = self.model( - input_ids=input_ids, - attention_mask=attention_mask - ) - _, preds = torch.max(outputs, dim=1) - y_pred += preds.tolist() - return y_pred - - diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/tc_model.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/tc_model.py deleted file mode 100644 index bdcfdc8f9771bcef13f606fb301fb46fd6795405..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/tc_model.py +++ /dev/null @@ -1,247 +0,0 @@ -import torch -import torch.nn.functional as F -from .base_model import BaseModel -from . import networks, losses - - -class TC(BaseModel): - """This class implements the transformer for image completion""" - def name(self): - return "Transformer Image Completion" - - @staticmethod - def modify_options(parser, is_train=True): - """Add new options and rewrite default values for existing options""" - - parser.add_argument('--coarse_or_refine', type=str, default='refine', help='train the transform or refined network') - parser.add_argument('--down_layers', type=int, default=4, help='# times down sampling for refine generator') - parser.add_argument('--mid_layers', type=int, default=6, help='# times middle layers for refine generator') - if is_train: - parser.add_argument('--lambda_rec', type=float, default=10.0, help='weight for image reconstruction loss') - parser.add_argument('--lambda_g', type=float, default=1.0, help='weight for discriminator loss') - parser.add_argument('--lambda_lp', type=float, default=10.0, help='weight for the perceptual loss') - parser.add_argument('--lambda_gradient', type=float, default=0.0, help='weight for the gradient penalty') - - return parser - - def __init__(self, opt): - """inital the Transformer model""" - BaseModel.__init__(self, opt) - self.visual_names = ['img_org', 'img_m', 'img_g', 'img_out'] - self.model_names = ['E', 'G', 'D', 'T'] - self.loss_names = ['G_rec', 'G_lp', 'G_GAN', 'D_real', 'D_fake'] - - self.netE = networks.define_E(opt) - self.netT = networks.define_T(opt) - self.netG = networks.define_G(opt) - self.netD = networks.define_D(opt, opt.fixed_size) - - if 'refine' in self.opt.coarse_or_refine: - opt = self._refine_opt(opt) - self.netG_Ref = networks.define_G(opt) - self.netD_Ref = networks.define_D(opt, opt.fine_size) - self.visual_names += ['img_ref', 'img_ref_out'] - self.model_names += ['G_Ref', 'D_Ref'] - - if self.isTrain: - # define the loss function - self.L1loss = torch.nn.L1Loss() - self.GANloss = losses.GANLoss(opt.gan_mode).to(self.device) - self.NormalVGG = losses.Normalization(self.device) - self.LPIPSloss = losses.LPIPSLoss(ckpt_path=opt.lipip_path).to(self.device) - if 
len(self.opt.gpu_ids) > 0: - self.LPIPSloss = torch.nn.parallel.DataParallel(self.LPIPSloss, self.opt.gpu_ids) - # define the optimizer - if 'coarse' in self.opt.coarse_or_refine: - self.optimizerG = torch.optim.Adam(list(self.netE.parameters()) + list(self.netG.parameters()) - + list(self.netT.parameters()), lr=opt.lr, betas=(opt.beta1, opt.beta2)) - self.optimizerD = torch.optim.Adam(self.netD.parameters(), lr=opt.lr * 4, betas=(opt.beta1, opt.beta2)) - self.optimizers.append(self.optimizerG) - self.optimizers.append(self.optimizerD) - if 'refine' in self.opt.coarse_or_refine: - self.optimizerGRef = torch.optim.Adam(self.netG_Ref.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2)) - self.optimizerDRef = torch.optim.Adam(self.netD_Ref.parameters(), lr=opt.lr * 4, betas=(opt.beta1, opt.beta2)) - self.optimizers.append(self.optimizerGRef) - self.optimizers.append(self.optimizerDRef) - else: - self.visual_names = ['img_org', 'img_m', 'img_out'] - if 'refine' in self.opt.coarse_or_refine: - self.visual_names += ['img_ref_out'] - - def set_input(self, input): - """Unpack input data from the data loader and perform necessary pre-process steps""" - self.input = input - - self.image_paths = self.input['img_path'] - self.img_org = input['img_org'].to(self.device) * 2 - 1 - self.img = input['img'].to(self.device) * 2 - 1 - self.mask = input['mask'].to(self.device) - - # get I_m and I_c for image with mask and complement regions for training - self.img_m = self.mask * self.img_org - - @torch.no_grad() - def test(self): - """Run forward processing for testing""" - fixed_img = F.interpolate(self.img_m, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True).clamp(-1, 1) - fixed_mask = (F.interpolate(self.mask, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True) > 0.9).type_as(fixed_img) - out, mask = self.netE(fixed_img, mask=fixed_mask, return_mask=True) - out = self.netT(out, mask, bool_mask=False) - - # sample result - for i in range(self.opt.nsampling): - img_g = self.netG(out, mask=self.mask) - img_g_org = F.interpolate(img_g, size=self.img_org.size()[2:], mode='bicubic', align_corners=True).clamp(-1, 1) - self.img_out = self.mask * self.img_org + (1 - self.mask) * img_g_org - # save for multiple results - self.save_results(self.img_out, path=self.opt.save_dir + '/img_out', data_name=i) - if 'refine' in self.opt.coarse_or_refine: - img_ref = self.netG_Ref(self.img_out, mask=self.mask) - self.img_ref_out = self.mask * self.img_org + (1 - self.mask) * img_ref - # save for multiple results - self.save_results(self.img_ref_out, path=self.opt.save_dir + '/img_ref_out', data_name=i) - - def forward(self): - """Run forward processing to get the outputs""" - fixed_img = F.interpolate(self.img_m, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True).clamp(-1, 1) - self.fixed_mask = (F.interpolate(self.mask, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True) > 0.9).type_as(fixed_img) - out, mask = self.netE(fixed_img, mask=self.fixed_mask, return_mask=True) - out = self.netT(out, mask, bool_mask=False) - self.img_g = self.netG(out, mask=self.mask) - img_g_org = F.interpolate(self.img_g, size=self.img_org.size()[2:], mode='bicubic', align_corners=True).clamp(-1, 1) - self.img_out = self.mask * self.img_org + (1 - self.mask) * img_g_org - - if 'refine' in self.opt.coarse_or_refine: - self.img_ref = self.netG_Ref(self.img_out, self.mask) - self.img_ref_out = self.mask * 
self.img_org + (1 - self.mask) * self.img_ref - - def backward_D_basic(self, netD, real, fake): - """ - Calculate GAN loss for the discriminator - :param netD: the discriminator D - :param real: real examples - :param fake: examples generated by a generator - :return: discriminator loss - """ - self.loss_D_real = self.GANloss(netD(real), True, is_dis=True) - self.loss_D_fake = self.GANloss(netD(fake), False, is_dis=True) - loss_D = self.loss_D_real + self.loss_D_fake - if self.opt.lambda_gradient > 0: - self.loss_D_Gradient, _ = losses.cal_gradient_penalty(netD, real, fake, real.device, lambda_gp=self.opt.lambda_gradient) - loss_D += self.loss_D_Gradient - loss_D.backward() - return loss_D - - def backward_D(self): - """Calculate the GAN loss for discriminator""" - self.loss_D = 0 - if 'coarse' in self.opt.coarse_or_refine: - self.set_requires_grad([self.netD], True) - self.optimizerD.zero_grad() - real = self.img.detach() - fake = self.img_g.detach() - self.loss_D += self.backward_D_basic(self.netD, real, fake) if self.opt.lambda_g > 0 else 0 - if 'refine' in self.opt.coarse_or_refine: - self.set_requires_grad([self.netD_Ref], True) - self.optimizerDRef.zero_grad() - real = self.img_org.detach() - fake = self.img_ref.detach() - self.loss_D += self.backward_D_basic(self.netD_Ref, real, fake) if self.opt.lambda_g > 0 else 0 - - def backward_G(self): - """Calculate the loss for generator""" - self.loss_G_GAN = 0 - self.loss_G_rec = 0 - self.loss_G_lp =0 - if 'coarse' in self.opt.coarse_or_refine: - self.set_requires_grad([self.netD], False) - self.optimizerG.zero_grad() - self.loss_G_GAN += self.GANloss(self.netD(self.img_g), True) * self.opt.lambda_g if self.opt.lambda_g > 0 else 0 - self.loss_G_rec += (self.L1loss(self.img_g * (1 - self.fixed_mask), self.img * (1 - self.fixed_mask)) * 3 + - self.L1loss(self.img_g * self.fixed_mask, self.img_g * self.fixed_mask)) * self.opt.lambda_rec - norm_real = self.NormalVGG((self.img + 1) * 0.5) - norm_fake = self.NormalVGG((self.img_g + 1) * 0.5) - self.loss_G_lp += (self.LPIPSloss(norm_real, norm_fake).mean()) * self.opt.lambda_lp if self.opt.lambda_lp > 0 else 0 - if 'refine' in self.opt.coarse_or_refine: - self.set_requires_grad([self.netD_Ref], False) - self.optimizerGRef.zero_grad() - self.loss_G_GAN += self.GANloss(self.netD_Ref(self.img_ref), True) * self.opt.lambda_g if self.opt.lambda_g > 0 else 0 - self.loss_G_rec += (self.L1loss(self.img_ref * (1 - self.mask), self.img_org * (1 - self.mask)) * 3 + - self.L1loss(self.img_ref * self.mask, self.img_org * self.mask)) * self.opt.lambda_rec - norm_real = self.NormalVGG((self.img_org + 1) * 0.5) - norm_fake = self.NormalVGG((self.img_ref + 1) * 0.5) - self.loss_G_lp += (self.LPIPSloss(norm_real, norm_fake).mean()) * self.opt.lambda_lp if self.opt.lambda_lp > 0 else 0 - - self.loss_G = self.loss_G_GAN + self.loss_G_rec + self.loss_G_lp - - self.loss_G.backward() - - def optimize_parameters(self): - """update network weights""" - # forward - self.set_requires_grad([self.netE, self.netT, self.netG], 'coarse' in self.opt.coarse_or_refine) - self.forward() - # update D - self.backward_D() - if 'coarse' in self.opt.coarse_or_refine: - self.optimizerD.step() - if 'refine' in self.opt.coarse_or_refine: - self.optimizerDRef.step() - # update G - self.backward_G() - if 'coarse' in self.opt.coarse_or_refine: - self.optimizerG.step() - if 'refine' in self.opt.coarse_or_refine: - self.optimizerGRef.step() - - def configure_optimizers(self): - """ - Following minGPT: - This long function is unfortunately doing 
something very simple and is being very defensive: - We are separating out all parameters of the model into two buckets: those that will experience - weight decay for regularization and those that won't (biases, and layernorm/embedding weights). - We are then returning the PyTorch optimizer object. - """ - # separate out all parameters to those that will and won't experience regularizing weight decay - decay = set() - no_decay = set() - whitelist_weight_modules = (torch.nn.Linear, torch.nn.Conv2d) - blacklist_weight_modules = (torch.nn.LayerNorm, torch.nn.Embedding) - for mn, m in self.netT.named_modules(): - for pn, p in m.named_parameters(): - fpn = '%s.%s' % (mn, pn) if mn else pn # full param name - - if pn.endswith('bias') or pn.endswith('alpha'): - # all biases will not be decayed - no_decay.add(fpn) - elif pn.endswith('weight') and isinstance(m, whitelist_weight_modules): - # weights of whitelist modules will be weight decayed - decay.add(fpn) - elif pn.endswith('weight') and isinstance(m, blacklist_weight_modules): - # weights of blacklist modules will NOT be weight decayed - no_decay.add(fpn) - - # validate that we considered every parameter - param_dict = {pn: p for pn, p in self.netT.named_parameters()} - inter_params = decay & no_decay - union_params = decay | no_decay - assert len(inter_params) == 0, "parameters %s made it into both decay/no_decay sets!" % (str(inter_params),) - assert len(param_dict.keys() - union_params) == 0, "parameters %s were not separated into either decay/no_decay set!" \ - % (str(param_dict.keys() - union_params),) - - # create the pytorch optimizer object - optim_groups = [ - {"params": [param_dict[pn] for pn in sorted(list(decay))], "weight_decay": 0.01, "betas":(0.9, 0.95)}, - {"params": [param_dict[pn] for pn in sorted(list(no_decay))], "weight_decay": 0.0, "betas":(0.9, 0.95)}, - {"params": list(filter(lambda p: p.requires_grad, self.netE.parameters()))}, - {"params": list(filter(lambda p: p.requires_grad, self.netG.parameters()))} - ] - optimizer = torch.optim.Adam(optim_groups, lr=self.opt.lr, betas=(self.opt.beta1, self.opt.beta2)) - return optimizer - - def _refine_opt(self, opt): - """modify the opt for refine generator and discriminator""" - opt.netG = 'refine' - opt.netD = 'style' - - return opt \ No newline at end of file diff --git a/spaces/Ariharasudhan/YoloV5/utils/segment/dataloaders.py b/spaces/Ariharasudhan/YoloV5/utils/segment/dataloaders.py deleted file mode 100644 index 9de6f0fbf903b111e517e3392130d464b0828542..0000000000000000000000000000000000000000 --- a/spaces/Ariharasudhan/YoloV5/utils/segment/dataloaders.py +++ /dev/null @@ -1,331 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Dataloaders -""" - -import os -import random - -import cv2 -import numpy as np -import torch -from torch.utils.data import DataLoader, distributed - -from ..augmentations import augment_hsv, copy_paste, letterbox -from ..dataloaders import InfiniteDataLoader, LoadImagesAndLabels, seed_worker -from ..general import LOGGER, xyn2xy, xywhn2xyxy, xyxy2xywhn -from ..torch_utils import torch_distributed_zero_first -from .augmentations import mixup, random_perspective - -RANK = int(os.getenv('RANK', -1)) - - -def create_dataloader(path, - imgsz, - batch_size, - stride, - single_cls=False, - hyp=None, - augment=False, - cache=False, - pad=0.0, - rect=False, - rank=-1, - workers=8, - image_weights=False, - quad=False, - prefix='', - shuffle=False, - mask_downsample_ratio=1, - overlap_mask=False): - if rect and shuffle: - LOGGER.warning('WARNING ⚠️ 
--rect is incompatible with DataLoader shuffle, setting shuffle=False') - shuffle = False - with torch_distributed_zero_first(rank): # init dataset *.cache only once if DDP - dataset = LoadImagesAndLabelsAndMasks( - path, - imgsz, - batch_size, - augment=augment, # augmentation - hyp=hyp, # hyperparameters - rect=rect, # rectangular batches - cache_images=cache, - single_cls=single_cls, - stride=int(stride), - pad=pad, - image_weights=image_weights, - prefix=prefix, - downsample_ratio=mask_downsample_ratio, - overlap=overlap_mask) - - batch_size = min(batch_size, len(dataset)) - nd = torch.cuda.device_count() # number of CUDA devices - nw = min([os.cpu_count() // max(nd, 1), batch_size if batch_size > 1 else 0, workers]) # number of workers - sampler = None if rank == -1 else distributed.DistributedSampler(dataset, shuffle=shuffle) - loader = DataLoader if image_weights else InfiniteDataLoader # only DataLoader allows for attribute updates - generator = torch.Generator() - generator.manual_seed(6148914691236517205 + RANK) - return loader( - dataset, - batch_size=batch_size, - shuffle=shuffle and sampler is None, - num_workers=nw, - sampler=sampler, - pin_memory=True, - collate_fn=LoadImagesAndLabelsAndMasks.collate_fn4 if quad else LoadImagesAndLabelsAndMasks.collate_fn, - worker_init_fn=seed_worker, - generator=generator, - ), dataset - - -class LoadImagesAndLabelsAndMasks(LoadImagesAndLabels): # for training/testing - - def __init__( - self, - path, - img_size=640, - batch_size=16, - augment=False, - hyp=None, - rect=False, - image_weights=False, - cache_images=False, - single_cls=False, - stride=32, - pad=0, - min_items=0, - prefix="", - downsample_ratio=1, - overlap=False, - ): - super().__init__(path, img_size, batch_size, augment, hyp, rect, image_weights, cache_images, single_cls, - stride, pad, min_items, prefix) - self.downsample_ratio = downsample_ratio - self.overlap = overlap - - def __getitem__(self, index): - index = self.indices[index] # linear, shuffled, or image_weights - - hyp = self.hyp - mosaic = self.mosaic and random.random() < hyp['mosaic'] - masks = [] - if mosaic: - # Load mosaic - img, labels, segments = self.load_mosaic(index) - shapes = None - - # MixUp augmentation - if random.random() < hyp["mixup"]: - img, labels, segments = mixup(img, labels, segments, *self.load_mosaic(random.randint(0, self.n - 1))) - - else: - # Load image - img, (h0, w0), (h, w) = self.load_image(index) - - # Letterbox - shape = self.batch_shapes[self.batch[index]] if self.rect else self.img_size # final letterboxed shape - img, ratio, pad = letterbox(img, shape, auto=False, scaleup=self.augment) - shapes = (h0, w0), ((h / h0, w / w0), pad) # for COCO mAP rescaling - - labels = self.labels[index].copy() - # [array, array, ....], array.shape=(num_points, 2), xyxyxyxy - segments = self.segments[index].copy() - if len(segments): - for i_s in range(len(segments)): - segments[i_s] = xyn2xy( - segments[i_s], - ratio[0] * w, - ratio[1] * h, - padw=pad[0], - padh=pad[1], - ) - if labels.size: # normalized xywh to pixel xyxy format - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], ratio[0] * w, ratio[1] * h, padw=pad[0], padh=pad[1]) - - if self.augment: - img, labels, segments = random_perspective(img, - labels, - segments=segments, - degrees=hyp["degrees"], - translate=hyp["translate"], - scale=hyp["scale"], - shear=hyp["shear"], - perspective=hyp["perspective"]) - - nl = len(labels) # number of labels - if nl: - labels[:, 1:5] = xyxy2xywhn(labels[:, 1:5], w=img.shape[1], h=img.shape[0], clip=True, 
eps=1e-3) - if self.overlap: - masks, sorted_idx = polygons2masks_overlap(img.shape[:2], - segments, - downsample_ratio=self.downsample_ratio) - masks = masks[None] # (640, 640) -> (1, 640, 640) - labels = labels[sorted_idx] - else: - masks = polygons2masks(img.shape[:2], segments, color=1, downsample_ratio=self.downsample_ratio) - - masks = (torch.from_numpy(masks) if len(masks) else torch.zeros(1 if self.overlap else nl, img.shape[0] // - self.downsample_ratio, img.shape[1] // - self.downsample_ratio)) - # TODO: albumentations support - if self.augment: - # Albumentations - # there are some augmentation that won't change boxes and masks, - # so just be it for now. - img, labels = self.albumentations(img, labels) - nl = len(labels) # update after albumentations - - # HSV color-space - augment_hsv(img, hgain=hyp["hsv_h"], sgain=hyp["hsv_s"], vgain=hyp["hsv_v"]) - - # Flip up-down - if random.random() < hyp["flipud"]: - img = np.flipud(img) - if nl: - labels[:, 2] = 1 - labels[:, 2] - masks = torch.flip(masks, dims=[1]) - - # Flip left-right - if random.random() < hyp["fliplr"]: - img = np.fliplr(img) - if nl: - labels[:, 1] = 1 - labels[:, 1] - masks = torch.flip(masks, dims=[2]) - - # Cutouts # labels = cutout(img, labels, p=0.5) - - labels_out = torch.zeros((nl, 6)) - if nl: - labels_out[:, 1:] = torch.from_numpy(labels) - - # Convert - img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB - img = np.ascontiguousarray(img) - - return (torch.from_numpy(img), labels_out, self.im_files[index], shapes, masks) - - def load_mosaic(self, index): - # YOLOv5 4-mosaic loader. Loads 1 image + 3 random images into a 4-image mosaic - labels4, segments4 = [], [] - s = self.img_size - yc, xc = (int(random.uniform(-x, 2 * s + x)) for x in self.mosaic_border) # mosaic center x, y - - # 3 additional image indices - indices = [index] + random.choices(self.indices, k=3) # 3 additional image indices - for i, index in enumerate(indices): - # Load image - img, _, (h, w) = self.load_image(index) - - # place img in img4 - if i == 0: # top left - img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles - x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc # xmin, ymin, xmax, ymax (large image) - x1b, y1b, x2b, y2b = w - (x2a - x1a), h - (y2a - y1a), w, h # xmin, ymin, xmax, ymax (small image) - elif i == 1: # top right - x1a, y1a, x2a, y2a = xc, max(yc - h, 0), min(xc + w, s * 2), yc - x1b, y1b, x2b, y2b = 0, h - (y2a - y1a), min(w, x2a - x1a), h - elif i == 2: # bottom left - x1a, y1a, x2a, y2a = max(xc - w, 0), yc, xc, min(s * 2, yc + h) - x1b, y1b, x2b, y2b = w - (x2a - x1a), 0, w, min(y2a - y1a, h) - elif i == 3: # bottom right - x1a, y1a, x2a, y2a = xc, yc, min(xc + w, s * 2), min(s * 2, yc + h) - x1b, y1b, x2b, y2b = 0, 0, min(w, x2a - x1a), min(y2a - y1a, h) - - img4[y1a:y2a, x1a:x2a] = img[y1b:y2b, x1b:x2b] # img4[ymin:ymax, xmin:xmax] - padw = x1a - x1b - padh = y1a - y1b - - labels, segments = self.labels[index].copy(), self.segments[index].copy() - - if labels.size: - labels[:, 1:] = xywhn2xyxy(labels[:, 1:], w, h, padw, padh) # normalized xywh to pixel xyxy format - segments = [xyn2xy(x, w, h, padw, padh) for x in segments] - labels4.append(labels) - segments4.extend(segments) - - # Concat/clip labels - labels4 = np.concatenate(labels4, 0) - for x in (labels4[:, 1:], *segments4): - np.clip(x, 0, 2 * s, out=x) # clip when using random_perspective() - # img4, labels4 = replicate(img4, labels4) # replicate - - # Augment - img4, labels4, segments4 = 
copy_paste(img4, labels4, segments4, p=self.hyp["copy_paste"]) - img4, labels4, segments4 = random_perspective(img4, - labels4, - segments4, - degrees=self.hyp["degrees"], - translate=self.hyp["translate"], - scale=self.hyp["scale"], - shear=self.hyp["shear"], - perspective=self.hyp["perspective"], - border=self.mosaic_border) # border to remove - return img4, labels4, segments4 - - @staticmethod - def collate_fn(batch): - img, label, path, shapes, masks = zip(*batch) # transposed - batched_masks = torch.cat(masks, 0) - for i, l in enumerate(label): - l[:, 0] = i # add target image index for build_targets() - return torch.stack(img, 0), torch.cat(label, 0), path, shapes, batched_masks - - -def polygon2mask(img_size, polygons, color=1, downsample_ratio=1): - """ - Args: - img_size (tuple): The image size. - polygons (np.ndarray): [N, M], N is the number of polygons, - M is the number of points(Be divided by 2). - """ - mask = np.zeros(img_size, dtype=np.uint8) - polygons = np.asarray(polygons) - polygons = polygons.astype(np.int32) - shape = polygons.shape - polygons = polygons.reshape(shape[0], -1, 2) - cv2.fillPoly(mask, polygons, color=color) - nh, nw = (img_size[0] // downsample_ratio, img_size[1] // downsample_ratio) - # NOTE: fillPoly firstly then resize is trying the keep the same way - # of loss calculation when mask-ratio=1. - mask = cv2.resize(mask, (nw, nh)) - return mask - - -def polygons2masks(img_size, polygons, color, downsample_ratio=1): - """ - Args: - img_size (tuple): The image size. - polygons (list[np.ndarray]): each polygon is [N, M], - N is the number of polygons, - M is the number of points(Be divided by 2). - """ - masks = [] - for si in range(len(polygons)): - mask = polygon2mask(img_size, [polygons[si].reshape(-1)], color, downsample_ratio) - masks.append(mask) - return np.array(masks) - - -def polygons2masks_overlap(img_size, segments, downsample_ratio=1): - """Return a (640, 640) overlap mask.""" - masks = np.zeros((img_size[0] // downsample_ratio, img_size[1] // downsample_ratio), - dtype=np.int32 if len(segments) > 255 else np.uint8) - areas = [] - ms = [] - for si in range(len(segments)): - mask = polygon2mask( - img_size, - [segments[si].reshape(-1)], - downsample_ratio=downsample_ratio, - color=1, - ) - ms.append(mask) - areas.append(mask.sum()) - areas = np.asarray(areas) - index = np.argsort(-areas) - ms = np.array(ms)[index] - for i in range(len(segments)): - mask = ms[i] * (i + 1) - masks = masks + mask - masks = np.clip(masks, a_min=0, a_max=i + 1) - return masks, index diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py deleted file mode 100644 index 27c69f0d1eaf3e223d599e91f969d52a821426fe..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_editable.py +++ /dev/null @@ -1,41 +0,0 @@ -"""Metadata generation logic for source distributions. 
-""" - -import os - -from pip._vendor.pyproject_hooks import BuildBackendHookCaller - -from pip._internal.build_env import BuildEnvironment -from pip._internal.exceptions import ( - InstallationSubprocessError, - MetadataGenerationFailed, -) -from pip._internal.utils.subprocess import runner_with_spinner_message -from pip._internal.utils.temp_dir import TempDirectory - - -def generate_editable_metadata( - build_env: BuildEnvironment, backend: BuildBackendHookCaller, details: str -) -> str: - """Generate metadata using mechanisms described in PEP 660. - - Returns the generated metadata directory. - """ - metadata_tmpdir = TempDirectory(kind="modern-metadata", globally_managed=True) - - metadata_dir = metadata_tmpdir.path - - with build_env: - # Note that BuildBackendHookCaller implements a fallback for - # prepare_metadata_for_build_wheel/editable, so we don't have to - # consider the possibility that this hook doesn't exist. - runner = runner_with_spinner_message( - "Preparing editable metadata (pyproject.toml)" - ) - with backend.subprocess_runner(runner): - try: - distinfo_dir = backend.prepare_metadata_for_build_editable(metadata_dir) - except InstallationSubprocessError as error: - raise MetadataGenerationFailed(package_details=details) from error - - return os.path.join(metadata_dir, distinfo_dir) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/exceptions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/exceptions.py deleted file mode 100644 index cba6f3f560f71b3b15ab6aaf21dde4f1bba1bd00..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/exceptions.py +++ /dev/null @@ -1,323 +0,0 @@ -from __future__ import absolute_import - -from .packages.six.moves.http_client import IncompleteRead as httplib_IncompleteRead - -# Base Exceptions - - -class HTTPError(Exception): - """Base exception used by this module.""" - - pass - - -class HTTPWarning(Warning): - """Base warning used by this module.""" - - pass - - -class PoolError(HTTPError): - """Base exception for errors caused within a pool.""" - - def __init__(self, pool, message): - self.pool = pool - HTTPError.__init__(self, "%s: %s" % (pool, message)) - - def __reduce__(self): - # For pickling purposes. - return self.__class__, (None, None) - - -class RequestError(PoolError): - """Base exception for PoolErrors that have associated URLs.""" - - def __init__(self, pool, url, message): - self.url = url - PoolError.__init__(self, pool, message) - - def __reduce__(self): - # For pickling purposes. - return self.__class__, (None, self.url, None) - - -class SSLError(HTTPError): - """Raised when SSL certificate fails in an HTTPS connection.""" - - pass - - -class ProxyError(HTTPError): - """Raised when the connection to a proxy fails.""" - - def __init__(self, message, error, *args): - super(ProxyError, self).__init__(message, error, *args) - self.original_error = error - - -class DecodeError(HTTPError): - """Raised when automatic decoding based on Content-Type fails.""" - - pass - - -class ProtocolError(HTTPError): - """Raised when something unexpected happens mid-request/response.""" - - pass - - -#: Renamed to ProtocolError but aliased for backwards compatibility. -ConnectionError = ProtocolError - - -# Leaf Exceptions - - -class MaxRetryError(RequestError): - """Raised when the maximum number of retries is exceeded. 
- - :param pool: The connection pool - :type pool: :class:`~urllib3.connectionpool.HTTPConnectionPool` - :param string url: The requested Url - :param exceptions.Exception reason: The underlying error - - """ - - def __init__(self, pool, url, reason=None): - self.reason = reason - - message = "Max retries exceeded with url: %s (Caused by %r)" % (url, reason) - - RequestError.__init__(self, pool, url, message) - - -class HostChangedError(RequestError): - """Raised when an existing pool gets a request for a foreign host.""" - - def __init__(self, pool, url, retries=3): - message = "Tried to open a foreign host with url: %s" % url - RequestError.__init__(self, pool, url, message) - self.retries = retries - - -class TimeoutStateError(HTTPError): - """Raised when passing an invalid state to a timeout""" - - pass - - -class TimeoutError(HTTPError): - """Raised when a socket timeout error occurs. - - Catching this error will catch both :exc:`ReadTimeoutErrors - ` and :exc:`ConnectTimeoutErrors `. - """ - - pass - - -class ReadTimeoutError(TimeoutError, RequestError): - """Raised when a socket timeout occurs while receiving data from a server""" - - pass - - -# This timeout error does not have a URL attached and needs to inherit from the -# base HTTPError -class ConnectTimeoutError(TimeoutError): - """Raised when a socket timeout occurs while connecting to a server""" - - pass - - -class NewConnectionError(ConnectTimeoutError, PoolError): - """Raised when we fail to establish a new connection. Usually ECONNREFUSED.""" - - pass - - -class EmptyPoolError(PoolError): - """Raised when a pool runs out of connections and no more are allowed.""" - - pass - - -class ClosedPoolError(PoolError): - """Raised when a request enters a pool after the pool has been closed.""" - - pass - - -class LocationValueError(ValueError, HTTPError): - """Raised when there is something wrong with a given URL input.""" - - pass - - -class LocationParseError(LocationValueError): - """Raised when get_host or similar fails to parse the URL input.""" - - def __init__(self, location): - message = "Failed to parse: %s" % location - HTTPError.__init__(self, message) - - self.location = location - - -class URLSchemeUnknown(LocationValueError): - """Raised when a URL input has an unsupported scheme.""" - - def __init__(self, scheme): - message = "Not supported URL scheme %s" % scheme - super(URLSchemeUnknown, self).__init__(message) - - self.scheme = scheme - - -class ResponseError(HTTPError): - """Used as a container for an error reason supplied in a MaxRetryError.""" - - GENERIC_ERROR = "too many error responses" - SPECIFIC_ERROR = "too many {status_code} error responses" - - -class SecurityWarning(HTTPWarning): - """Warned when performing security reducing actions""" - - pass - - -class SubjectAltNameWarning(SecurityWarning): - """Warned when connecting to a host with a certificate missing a SAN.""" - - pass - - -class InsecureRequestWarning(SecurityWarning): - """Warned when making an unverified HTTPS request.""" - - pass - - -class SystemTimeWarning(SecurityWarning): - """Warned when system time is suspected to be wrong""" - - pass - - -class InsecurePlatformWarning(SecurityWarning): - """Warned when certain TLS/SSL configuration is not available on a platform.""" - - pass - - -class SNIMissingWarning(HTTPWarning): - """Warned when making a HTTPS request without SNI available.""" - - pass - - -class DependencyWarning(HTTPWarning): - """ - Warned when an attempt is made to import a module with missing optional - dependencies. 
- """ - - pass - - -class ResponseNotChunked(ProtocolError, ValueError): - """Response needs to be chunked in order to read it as chunks.""" - - pass - - -class BodyNotHttplibCompatible(HTTPError): - """ - Body should be :class:`http.client.HTTPResponse` like - (have an fp attribute which returns raw chunks) for read_chunked(). - """ - - pass - - -class IncompleteRead(HTTPError, httplib_IncompleteRead): - """ - Response length doesn't match expected Content-Length - - Subclass of :class:`http.client.IncompleteRead` to allow int value - for ``partial`` to avoid creating large objects on streamed reads. - """ - - def __init__(self, partial, expected): - super(IncompleteRead, self).__init__(partial, expected) - - def __repr__(self): - return "IncompleteRead(%i bytes read, %i more expected)" % ( - self.partial, - self.expected, - ) - - -class InvalidChunkLength(HTTPError, httplib_IncompleteRead): - """Invalid chunk length in a chunked response.""" - - def __init__(self, response, length): - super(InvalidChunkLength, self).__init__( - response.tell(), response.length_remaining - ) - self.response = response - self.length = length - - def __repr__(self): - return "InvalidChunkLength(got length %r, %i bytes read)" % ( - self.length, - self.partial, - ) - - -class InvalidHeader(HTTPError): - """The header provided was somehow invalid.""" - - pass - - -class ProxySchemeUnknown(AssertionError, URLSchemeUnknown): - """ProxyManager does not support the supplied scheme""" - - # TODO(t-8ch): Stop inheriting from AssertionError in v2.0. - - def __init__(self, scheme): - # 'localhost' is here because our URL parser parses - # localhost:8080 -> scheme=localhost, remove if we fix this. - if scheme == "localhost": - scheme = None - if scheme is None: - message = "Proxy URL had no scheme, should start with http:// or https://" - else: - message = ( - "Proxy URL had unsupported scheme %s, should use http:// or https://" - % scheme - ) - super(ProxySchemeUnknown, self).__init__(message) - - -class ProxySchemeUnsupported(ValueError): - """Fetching HTTPS resources through HTTPS proxies is unsupported""" - - pass - - -class HeaderParsingError(HTTPError): - """Raised by assert_header_parsing, but we convert it to a log.warning statement.""" - - def __init__(self, defects, unparsed_data): - message = "%s, unparsed data: %r" % (defects or "Unknown", unparsed_data) - super(HeaderParsingError, self).__init__(message) - - -class UnrewindableBodyError(HTTPError): - """urllib3 encountered an error when trying to rewind a body""" - - pass diff --git a/spaces/Bart92/RVC_HF/infer/lib/audio.py b/spaces/Bart92/RVC_HF/infer/lib/audio.py deleted file mode 100644 index 9ad4ff74218957cf18782fa71add40a734b47e78..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/infer/lib/audio.py +++ /dev/null @@ -1,197 +0,0 @@ -import librosa -import numpy as np -import av -from io import BytesIO -import ffmpeg -import os -import sys - -import random -from infer.lib.csvutil import CSVutil -#import csv - -platform_stft_mapping = { - 'linux': 'stftpitchshift', - 'darwin': 'stftpitchshift', - 'win32': 'stftpitchshift.exe', -} - -stft = platform_stft_mapping.get(sys.platform) - -def wav2(i, o, format): - inp = av.open(i, 'rb') - if format == "m4a": format = "mp4" - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "mp4": format = "aac" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): 
out.mux(p) - - out.close() - inp.close() - -def audio2(i, o, format, sr): - inp = av.open(i, 'rb') - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "f32le": format = "pcm_f32le" - - ostream = out.add_stream(format, channels=1) - ostream.sample_rate = sr - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - out.close() - inp.close() - -def load_audion(file, sr): - try: - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - with open(file, "rb") as f: - with BytesIO() as out: - audio2(f, out, "f32le", sr) - return np.frombuffer(out.getvalue(), np.float32).flatten() - - except AttributeError: - audio = file[1] / 32768.0 - if len(audio.shape) == 2: - audio = np.mean(audio, -1) - return librosa.resample(audio, orig_sr=file[0], target_sr=16000) - - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - - - -def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - 
.output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() - - -def check_audio_duration(file): - try: - file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - probe = ffmpeg.probe(file) - - duration = float(probe['streams'][0]['duration']) - - if duration < 0.76: - print( - f"\n------------\n" - f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results." - f"\n------------\n\n" - ) - return False - - return True - except Exception as e: - raise RuntimeError(f"Failed to check audio duration: {e}") \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Bangla Mejor Tono De Llamada Que Me Encanta Descargar.md b/spaces/Benson/text-generation/Examples/Bangla Mejor Tono De Llamada Que Me Encanta Descargar.md deleted file mode 100644 index bf52b171b5080250c96190372b85c09ac4a568bf..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Bangla Mejor Tono De Llamada Que Me Encanta Descargar.md +++ /dev/null @@ -1,113 +0,0 @@ -
-

Bangla Best "I Love You" Ringtone Download

Do you love the sound of Bangla, the language spoken by more than 200 million people in Bangladesh and India? Do you want to express your love and affection with a romantic Bangla ringtone? If so, you are in luck. There are many ways to download and enjoy the best Bangla ringtones for your phone. In this article, we will show you how to find, download, and set a custom ringtone that says "I love you" in a sweet and melodious way.

Bangla best "I love you" ringtone download

Download: https://bltlly.com/2v6Lmw

What is Bangla and why is it a popular language for ringtones

Bangla, also known as Bengali, is an Indo-Aryan language that belongs to the Indo-European language family. It is the official language of Bangladesh and one of the official languages of India. It is also spoken by many people in other countries, such as Nepal, Pakistan, Sri Lanka, Malaysia, Singapore, and the United Kingdom.

Bangla has a rich and diverse culture that is reflected in its literature, music, art, film, and cuisine. Bangla speakers are proud of their language and heritage, and often use it to express their emotions and feelings. Bangla is also a very musical language, with a wide variety of tones, rhythms, and beats. That is why so many people love listening to Bangla songs and ringtones.

How to download free Bangla ringtones from websites

One of the easiest ways to get free Bangla ringtones is to visit reputable websites that offer a wide range of ringtones in different genres and categories. Some of these websites are:

• Prokerala: This website has a section dedicated to Bangla ringtones, where you can find hundreds of ringtones in various styles, such as romantic, sad, funny, patriotic, devotional, and more. You can listen to them online and download them in MP3 format.
• Bengali Ringtones: This website is another great option for finding free Bangla ringtones. It has a simple interface that lets you search for tones by name or by category. You can also preview the tones before downloading them.

To download free Bangla ringtones from these websites, follow these steps (a short scripted example of the download step appears after the list):

1. Visit the website of your choice and search for the ringtone you want.
2. Select the ringtone and click the download button or link.
3. Save the ringtone file to your computer, or transfer it to your phone via a USB cable or Bluetooth.
-
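If you prefer to script that last download step, the sketch below shows one way to do it in Python. It is only an illustration: the URL and file paths are placeholders, not real links from the sites above.

```python
# Minimal sketch: save a ringtone MP3 once you have copied its direct download link.
# The URL and paths below are placeholders, not real Prokerala/Bengali Ringtones links.
import urllib.request
from pathlib import Path

url = "https://example.com/ringtones/bangla-i-love-you.mp3"  # placeholder link
dest = Path.home() / "Downloads" / "bangla-i-love-you.mp3"
dest.parent.mkdir(parents=True, exist_ok=True)

urllib.request.urlretrieve(url, str(dest))  # download and write the file
print(f"Saved ringtone to {dest}")
```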

How to download paid Bangla ringtones from the iTunes Store on iPhone

If you have an iPhone and want to buy premium Bangla ringtones that are clips of your favorite songs, you can use the iTunes Store app on your phone. The iTunes Store has a section for ringtones where you can find thousands of tones in different languages, including Bangla. Some of these tones are:

• Te Amo by Ash King
• Tumi Amar by Habib Wahid
• Moner Manush by Anupam Roy
• Amaro Porano Jaha Chay by Arijit Singh
• Bolna by Arijit Singh and Asees Kaur

To download paid Bangla ringtones from the iTunes Store on your iPhone, follow these steps:

1. Open the iTunes Store app on your phone and tap the More icon in the bottom-right corner.
2. Tap Tones, then tap the search icon in the top-right corner.
3. Type the name of the song or artist you want and tap Search.
4. Scroll through the results and find the ringtone you want. You can tap a ringtone to hear a preview.
5. Tap the price of the ringtone and then tap Buy Tone. You may need to enter your Apple ID and password, or use Touch ID or Face ID, to confirm the purchase.
-

How to download free Bangla ringtones from the Zedge app on Android

If you have an Android phone and want to download free Bangla ringtones from a reliable and easy-to-use app, you can use Zedge. Zedge is a popular app that offers millions of ringtones, wallpapers, stickers, and icons for your phone. You can find ringtones in many languages, including Bangla. Some of the ringtones you can find on Zedge are:

• Bangla love song by Rana
• Bangla sad song by Rajib
• Bangla funny song by Mithun
• Bangla romantic song by Shreya Ghoshal
• Bangla devotional song by Anuradha Paudwal

To download free Bangla ringtones from the Zedge app on your Android phone, follow these steps:

1. Download and install the Zedge app from the Google Play Store.
2. Open the app and tap the Ringtones icon in the bottom-left corner.
3. Tap the search icon in the top-right corner and type "bangla" or any other keyword you want.
4. Scroll through the results and find the ringtone you want. You can tap a ringtone to hear a preview.
5. Tap the download icon in the bottom-right corner of the ringtone. You can choose to set it as your default ringtone, a contact tone, a notification sound, or an alarm sound.
6. The ringtone will be downloaded to your phone and set according to your choice. You can also access it later from the Settings app under Sound > Phone ringtone.
-

How to set a custom ringtone on your phone

If you have a custom Bangla ringtone that you created or downloaded from another source, and you want to set it as your phone's ringtone, you can do so easily. Here are the steps to set a custom ringtone on your phone:

For iPhone users

To set a custom Bangla ringtone on your iPhone, follow these steps:

1. Open iTunes on your computer and select your iPhone from the list of devices.
2. Click the Tones tab and check the Sync Tones box.
3. Drag and drop your custom Bangla ringtone file from your computer into the tones list in iTunes.
4. Click Apply or Sync to transfer the ringtone to your iPhone.
5. Disconnect your iPhone from your computer and open the Settings app on your phone.
6. Tap Sounds & Haptics > Ringtone and select your custom Bangla ringtone from the list.

For Android users

To set a custom Bangla ringtone on your Android phone, follow these steps (a small scripted alternative to step 3 appears after the list):

1. Connect your Android phone to your computer using a USB cable or Bluetooth.
2. Open File Explorer or Finder on your computer and locate the custom Bangla ringtone file.
3. Copy or move the file to the Ringtones folder on your phone's internal storage or SD card.
4. Disconnect your Android phone from your computer and open the Settings app on your phone.
5. Tap Sound > Phone ringtone and select your custom Bangla ringtone from the list.
-
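For readers who already use Android's developer tools, the copy in step 3 can also be done from the command line. The sketch below is only an illustration under stated assumptions: it swaps the file-manager copy for adb, assumes the Android platform-tools are installed and USB debugging is enabled, and the ringtone path is a placeholder.

```python
# Hypothetical alternative to step 3: push the ringtone over adb instead of
# dragging it in a file manager. Assumes adb is installed and USB debugging is on.
import subprocess
from pathlib import Path

ringtone = Path.home() / "Downloads" / "bangla-i-love-you.mp3"  # placeholder path

# /sdcard/Ringtones is the shared-storage ringtone folder on most Android devices.
subprocess.run(["adb", "push", str(ringtone), "/sdcard/Ringtones/"], check=True)
print("Copied; the tone should now appear under Settings > Sound > Phone ringtone.")
```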

Conclusion: Summary of the main points and benefits of Bangla ringtones

In conclusion, Bangla is a beautiful and expressive language that can make your phone's ringtone more appealing and meaningful. You can download free or paid Bangla ringtones from various websites or apps, or you can set your own custom Bangla ringtone on your phone. By doing so, you can enjoy the benefits of Bangla ringtones, such as:

• Showing your love and appreciation for the Bangla language and culture.
• Impressing your friends and family with a unique and catchy ringtone.
• Expressing your mood and personality with a fitting ringtone.
• Supporting the Bangla music industry and its artists by buying their songs as ringtones.

We hope this article has helped you learn how to download and set a Bangla ringtone that says "I love you" in a sweet and melodious way. If you have any questions or comments, feel free to share them below. Thank you for reading, and happy ringing!

FAQ: Five common questions and answers about Bangla ringtones

Here are some of the most frequently asked questions and answers about Bangla ringtones:

Q: What are the best websites or apps for downloading free Bangla ringtones?

A: Some of the best websites or apps for downloading free Bangla ringtones are Prokerala, Zedge, Bengali Ringtones, Mobile9, and Mobcup. You can find a variety of ringtones in different genres and categories on these platforms.

Q: How can I create my own custom Bangla ringtone?

A: You can create your own custom ringtone using online tools or software such as Audacity, Ringtone Maker, Online Audio Cutter, and similar editors. You can upload your own audio file, or record your voice, and edit it into a ringtone. You can also add effects, filters, or transitions to make it more attractive. (A short command-line example follows below.)
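As a command-line alternative to the editors mentioned above, the sketch below cuts a 30-second clip out of a song with the ffmpeg CLI. It is only an illustration: it assumes ffmpeg is installed and on PATH, and the file names are placeholders.

```python
# Minimal sketch: trim a 30-second ringtone out of a full song with the ffmpeg CLI.
# Assumes ffmpeg is installed and on PATH; the file names are placeholders.
import subprocess

song = "favourite-bangla-song.mp3"   # placeholder input file
ringtone = "my-bangla-ringtone.mp3"  # placeholder output file

subprocess.run([
    "ffmpeg", "-y",
    "-ss", "30",        # start 30 seconds into the song
    "-t", "30",         # keep a 30-second clip
    "-i", song,
    "-acodec", "copy",  # copy the audio stream without re-encoding
    ringtone,
], check=True)
```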

Q: ¿Cómo puedo cambiar el volumen o la vibración de mi tono de llamada bangla?

-

A: Puede cambiar el volumen o la vibración de su tono de llamada bangla mediante el uso de la configuración en el teléfono. Para los usuarios de iPhone, puede utilizar los botones de volumen en el lado de su teléfono o el Centro de control para ajustar el volumen. También puede ir a Configuración > Sonidos y hápticos > Tono de llamada para cambiar el patrón de vibración. Para los usuarios de Android, puedes usar los botones de volumen en el lateral del teléfono o el panel Configuración rápida para ajustar el volumen. También puede ir a Configuración > Sonido > Tono de llamada del teléfono para cambiar el patrón de vibración.

-

Q: ¿Cómo puedo eliminar o quitar un tono de llamada bangla de mi teléfono?

-

A: Puedes eliminar o quitar un tono de llamada bangla de tu teléfono siguiendo estos pasos:

-
    - -
  1. Para los usuarios de Android, conecte su teléfono Android a su computadora utilizando un cable USB o Bluetooth. Abra Explorador de archivos o Finder en su computadora y localice la carpeta Tonos de llamada en el almacenamiento interno del teléfono o en la tarjeta SD. Elimine o mueva los tonos de llamada que desea eliminar de su teléfono.
  2. -
-
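The same removal can be done over adb. As with the earlier sketches, the file name and the use of adb are illustrative assumptions; USB debugging must be enabled on the phone.

```python
import subprocess

UNWANTED = "old_bangla_ringtone.mp3"  # hypothetical ringtone to remove

# List what is currently in the Ringtones folder, then delete the chosen file.
subprocess.run(["adb", "shell", "ls", "/sdcard/Ringtones/"], check=True)
subprocess.run(["adb", "shell", "rm", f"/sdcard/Ringtones/{UNWANTED}"], check=True)
```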

Q: How can I share my Bangla ringtone with others?

A: You can share your ringtone with others in several ways, such as:

  • Email: Attach the ringtone file to an email and send it to your contacts.
  • Messaging: Send the ringtone file as an attachment or a link via SMS, WhatsApp, Telegram, etc.
  • Social media: Upload the ringtone file to a cloud service such as Google Drive or Dropbox and share the link on Facebook, Twitter, Instagram, etc.
  • Bluetooth: Pair your phone with another Bluetooth-enabled device and send the ringtone file wirelessly.

\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Caja Maldita Incredibox Descargar.md b/spaces/Benson/text-generation/Examples/Caja Maldita Incredibox Descargar.md deleted file mode 100644 index ea4f2cf6ce64a8263845ce6a54c9418e8cf03df6..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Caja Maldita Incredibox Descargar.md +++ /dev/null @@ -1,92 +0,0 @@ -
Download Incredibox Soulgem: How to Play the Modded Version of the Popular Music Game

If you are a fan of music games, you have probably heard of Incredibox, a web-based game that lets you create your own music by mixing different sounds and effects. Incredibox is a fun and easy way to express your creativity and musical talent, and to discover new genres and styles. You can also share your mixes with other players online, or listen to their creations for inspiration.

But did you know there is a modded version of Incredibox that adds even more features and possibilities to the game? It is called Soulgem, and it is a fan-made project that turns the original game into a new experience. Soulgem is not an official update or expansion of Incredibox, but a separate game that uses the same engine and concept with different graphics, sounds, and gameplay.

caja maldita incredibox descargar

Download Zip: https://bltlly.com/2v6LIr

In this article, we will tell you everything you need to know about Soulgem, including how to download and install it on your device, what its features and benefits are, how to play it and make the best music, what other mods are available for Incredibox, and more. So if you are ready to explore a new dimension of music, read on!

Features of Soulgem: What makes it unique and fun?

Soulgem is a modded version of Incredibox created by Marchell, a fan of the original game who wanted to add his own touch and vision. Soulgem is based on the first version of Incredibox, released in 2009, but with many changes and improvements. Here are some of the features that make Soulgem stand out from the original game:

  • Bonus animations and surprises: Soulgem has hidden secrets you can discover while playing. For example, if you fill every slot with the same character, you unlock a bonus animation showing that character performing a special move or action: the character with the sunglasses plays a guitar solo, the character with the headphones scratches a vinyl record, and so on. There are also Easter eggs and references to other games and media that you can find by experimenting with the sounds and effects.
  • Customizable mixes and sharing options: Soulgem lets you save and load your mixes and share them with other players online. You can also download your mixes as MP3 files, or export them as videos to upload to YouTube or other platforms. You can customize your mixes by adding your own title, description, and cover image. Soulgem has a dedicated website where you can browse and listen to other players' mixes, rate them, comment on them, and follow your favorite creators.

Tips and tricks for playing Soulgem: How do you make the best music?

Soulgem is a game that lets you unleash your creativity and musical skills, but it also takes some practice and experimentation to master. Here are some tips and tricks to help you play Soulgem better and make the best music possible:

  • Experiment with different combinations and styles: Soulgem has many sounds and effects to choose from, and each one has a different impact on the overall mix. You can create different moods, atmospheres, and genres by combining different elements. For example, you can build a relaxing mix with soft sounds and effects, or an energetic mix with loud, fast ones. You can also mix and match styles, such as hip hop and rock, or pop and electro, to create unique and original mixes.
  • Watch the tutorials and learn from other players: Soulgem has a tutorial mode that teaches you the basics of the game, such as how to drag and drop sounds and effects, how to use the shuffle and mute buttons, how to save and load mixes, and more. You can open the tutorial mode by clicking the question-mark icon in the top-right corner of the screen. You can also learn from other players by listening to their mixes online or reading their comments and feedback. Other players' mixes are a good source of ideas, tips, and inspiration, and you can share your own thoughts and opinions in return.

Alternatives to Soulgem: What other mods are available?

Soulgem is not the only modded version of Incredibox out there. Many other mods have been created by fans of the original game, each with its own theme, style, and features. Here are some of the most popular mods you can find online:

| Mod name | Description |
| --- | --- |
| Mecánico | A mod with a futuristic, robotic theme, with sounds and effects that resemble machines, gears, lasers, etc. |
| Xrun | A mod with a fast-paced, energetic theme, with sounds and effects that resemble racing cars, engines, sirens, etc. |
| Evadare | A mod with a mysterious, mystical theme, with sounds and effects that resemble magic spells, chants, crystals, etc. |
| And more... | There are many other mods with different themes, such as horror, fantasy, medieval, etc. You can find them by searching online or visiting fan-made websites. |

However, before playing these mods, you should be aware of some pros and cons of the modded versions versus the original game. Here are some of them:

  • Pros:
    • You can enjoy new and different features and possibilities that are not available in the original game.
    • You can support and appreciate the creativity and talent of the fan-made mod creators.
    • You can explore and discover new genres and styles of music you may not have heard before.
  • Cons:
    • You may run into bugs, glitches, or errors that affect the mod's gameplay or performance.
    • You may not be able to access some features or options available in the original game, such as updates, achievements, etc.
    • You may not be able to play some mods online or share them with other players, due to compatibility or security issues.

Conclusion: Why should you try Soulgem today?

Soulgem is a modded version of Incredibox that offers a new and exciting way to play the popular music game. Soulgem has many features and benefits that make it unique and fun, such as new character designs and sound effects, bonus animations and surprises, customizable mixes and sharing options, and more. Soulgem is also easy to download and install on your device, and it can be played online or offline, depending on your preference.

If you are looking for a fresh and original music game that challenges your creativity and musical skills, you should try Soulgem today. Soulgem lets you create your own music by mixing different sounds and effects, and listen to other players' mixes for inspiration. You will also discover new genres and styles of music you may not have heard before, such as soul, funk, rock, electro, and more.

FAQ: Frequently asked questions about Soulgem

Here are some of the most common questions people ask about Soulgem:

  1. Is Soulgem safe to download and play?

     Yes, Soulgem is safe to download and play, as long as you follow the instructions and use the recommended apps and websites. Soulgem does not contain any viruses, malware, or spyware that could harm your device or data. However, you should always be careful when downloading any file from the internet, and scan it with an antivirus app before opening it.

  2. How can I support the creators of Soulgem and Incredibox?

     You can support the creators of Soulgem by visiting their website, following their social media accounts, rating their mixes online, leaving them positive comments and feedback, donating to them via PayPal or Patreon, or buying their merchandise. You can also support the creators of Incredibox by visiting their website, buying their official app or merchandise, subscribing to their YouTube channel, following their social media accounts, or donating via PayPal.

  3. Can I play Soulgem offline or without an internet connection?

     Yes, you can play Soulgem offline or without an internet connection, as long as you have downloaded the mod file to your device and have a Flash Player app installed. However, you will not be able to access features or options that require an internet connection, such as sharing your mixes online, listening to other players' mixes online, updating the mod file, etc.

  4. Can I create my own mod for Incredibox?

  5. Where can I find more information and updates about Soulgem and Incredibox?

     You can find more information and updates about Soulgem and Incredibox by visiting their official websites, social media accounts, YouTube channels, fan-made forums, blogs, wikis, etc. Here are some of the links you can check:

I hope you enjoyed this article and learned something new about Soulgem and Incredibox. If you have any questions or comments, feel free to leave them below. Thanks for reading, and happy music-making!

\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Carx Street Versin 0.8.5.md b/spaces/Benson/text-generation/Examples/Descargar Carx Street Versin 0.8.5.md deleted file mode 100644 index 1ee731de0930c162ba90244067e641183a4b92af..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Carx Street Versin 0.8.5.md +++ /dev/null @@ -1,57 +0,0 @@ -
Download CarX Street Version 0.8.5: A Guide for Racing Game Fans

If you are a fan of racing games, you may have heard of CarX Street, a mobile game that lets you experience the thrill of street racing with realistic physics, impressive graphics, and a variety of cars and customization options. In this article, we will tell you everything you need to know about CarX Street version 0.8.5, the latest update, which brings new features, improvements, and challenges to the game. We will also give you some tips and tricks on how to download and play CarX Street version 0.8.5 like a pro.

What is CarX Street?

CarX Street is a mobile racing game developed by CarX Technologies, LLC, the same company behind the popular CarX Drift Racing series. CarX Street was released in 2021 and has since earned millions of downloads and positive reviews from players and critics alike.

descargar carx street versión 0.8.5

Download: https://bltlly.com/2v6KBZ

CarX Street differs from other racing games in that it focuses on street-racing culture: you can choose from a wide range of cars, from classics to modern supercars, and customize them with various parts, paints, decals, and stickers. You can also tune your cars to suit your driving style and preferences, adjusting engine power, suspension, brakes, tires, and more.

CarX Street also features realistic physics and graphics that make you feel as if you were really driving through the streets of different cities around the world. You can drift, boost, overtake, and crash your way through various game modes, such as career mode, quick race mode, club mode, and online mode. You can also compete with other players on leaderboards and in tournaments and events.

How to download version 0.8.5

CarX Street version 0.8.5 is the latest update, released in June 2023. It brings new features, improvements, and challenges that make the game more fun and exciting to play.

Here are the steps to download CarX Street version 0.8.5:

  1. Go to the Google Play Store or the App Store on your device and search for "CarX Street".
  2. Select the game from the results and tap "Install" or "Get".
  3. Wait for the game to download and install on your device.
  4. Launch the game and enjoy!

Why you should play CarX Street version 0.8.5

CarX Street version 0.8.5 is not just a routine update that fixes a few bugs and technical issues. It is a major update that adds new content, features, improvements, and challenges, making the game more enjoyable and engaging for players of all levels.

Here are some of the reasons why you should play CarX Street version 0.8.5:

Improved graphics and performance

CarX Street version 0.8.5 improves the game's graphics and performance by optimizing the textures, lighting, shadows, reflections, and animations of the cars and environments. The game also runs smoother and faster across devices, with less lag and fewer crashes.

New cars and customization options

CarX Street version 0.8.5 adds new cars and customization options that let you express your personality and style. You can now choose from more than 50 cars, including some of the most iconic and legendary models, such as the Ford Mustang, Chevrolet Camaro, Nissan Skyline, and Lamborghini Aventador. You can also customize your cars with more than 1000 parts, paints, decals, and stickers, creating your own unique and striking designs.

More game modes and challenges

Tips and tricks for CarX Street version 0.8.5

CarX Street version 0.8.5 is not an easy game to master. It takes practice, patience, and strategy to win races and progress through the game. Here are some tips and tricks that can help you improve your performance and enjoy the game more:

Master the drift and nitro mechanics

Drifting and nitro are two of the most important mechanics in CarX Street version 0.8.5. Drifting lets you take sharp turns without losing speed, while nitro gives you a burst of acceleration that can help you overtake your opponents or escape from the police.

To drift, tap the brake button while turning the steering wheel; the longer you hold the brake, the more you drift. To use nitro, tap the nitro button when the meter is full. You can fill the nitro meter by drifting, overtaking, or performing stunts.

Upgrade your cars and tune them to your preference

Upgrading and tuning your cars are essential for winning races and advancing through the game. Upgrading your cars improves their performance, such as speed, acceleration, handling, and durability. Tuning your cars lets you adjust their settings, such as engine power, suspension, brakes, tires, and more.

To upgrade your cars, you need to spend coins or diamonds, which you can earn by playing the game or watching ads. To tune your cars, go to the garage and use the sliders to change the value of each parameter.

Join a club and compete with other players

Joining a club and competing with other players is a great way to make friends, learn from others, and earn rewards in CarX Street version 0.8.5. Clubs are groups of players who share a common interest or goal in the game. You can join an existing club or create your own club with your friends.

Conclusion

CarX Street version 0.8.5 is a must-play game for racing fans who want to experience the thrill of street racing with realistic physics, impressive graphics, and a variety of cars and customization options. It is also a fun and challenging game whose new features, improvements, and challenges make it more enjoyable and engaging for players of all levels.

If you want to download CarX Street version 0.8.5 and play like a pro, follow our guide above and use our tips and tricks to improve your performance and enjoy the game more.

FAQ

  • Q: How much storage space do I need to download CarX Street version 0.8.5?
  • A: You need at least 1 GB of free storage space on your device to download CarX Street version 0.8.5.
  • Q: How can I unlock new cars in CarX Street version 0.8.5?
  • A: You can unlock new cars in CarX Street version 0.8.5 by completing career-mode stages, winning events and tournaments, or buying them with coins or diamonds.
  • Q: How can I earn coins and diamonds in CarX Street version 0.8.5?
  • A: You can earn coins and diamonds in CarX Street version 0.8.5 by playing the game, completing missions, winning races, watching ads, or buying them with real money.
  • Q: How do I use nitro in CarX Street version 0.8.5?
  • A: Tap the nitro button when the meter is full; to drift, tap the brake button while turning the steering wheel.
  • Q: How can I join or create a club in CarX Street version 0.8.5?
  • A: To join or create a club in CarX Street version 0.8.5, go to the club menu and tap the join or create button. You can then search for an existing club or enter a name and description for your own club.

\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Chicos Stumble Para Pc Sin Emulador.md b/spaces/Benson/text-generation/Examples/Descargar Chicos Stumble Para Pc Sin Emulador.md deleted file mode 100644 index aac53639b4232fc8dac5b3475db991964402d08c..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Chicos Stumble Para Pc Sin Emulador.md +++ /dev/null @@ -1,100 +0,0 @@ -
How to Download Stumble Guys for PC Without an Emulator

Stumble Guys is a popular online multiplayer party game that lets you compete against up to 32 players across chaotic obstacle courses. You can run, jump, dash, slide, and stumble your way to the finish line until a winner is crowned. You can also customize your character with various outfits and emotes, and invite your friends to join your party.

descargar chicos stumble para pc sin emulador

Download: https://bltlly.com/2v6JNd

Stumble Guys is available for Android devices on the Google Play Store, but what if you want to play it on your PC? You might think you need an emulator to run Android apps on your computer, but that is not the only option. In fact, there are a few ways to play Stumble Guys on PC without an emulator, which can save you time, space, and resources.

In this article, we will show you three methods for downloading Stumble Guys for PC without an emulator, and compare their pros and cons. By the end of this article, you will be able to enjoy Stumble Guys on a bigger screen and with better controls.

What is Stumble Guys?

Stumble Guys is an online battle-royale party game inspired by TV shows such as Wipeout and Takeshi's Castle. The game was developed by Scopely and released in October 2021. It has received positive reviews from players and critics alike, and has more than 80,000 ratings on Steam.

The game features colorful, wacky graphics, physics-based gameplay, and hilarious failures. You can choose between different modes, such as solo or team mode, and compete against other players online. You can also unlock new outfits and emotes for your character, such as superheroes, animals, pirates, zombies, and more.

The game is easy to play but hard to master. You have to use your skills and strategy to overcome the various obstacles and challenges in each level, and be careful not to fall off the platforms, get hit by giant balls, or be eliminated by other players. The last one standing wins the game.

Why play Stumble Guys on PC?

Stumble Guys is a fun and addictive game that you can play on your Android device, but there are a few reasons why you might want to play it on your PC. Here are some of them:

  • You can enjoy the game on a bigger, better screen, which improves the graphics and immersion.
  • You can use the keyboard and mouse to control your character, which gives you more precision and responsiveness.
  • You can avoid the battery drain, overheating, and lag issues that can occur on a mobile device.
  • You can record and stream your gameplay more easily, and share it with your friends or online community.

However, playing Stumble Guys on PC is not as simple as downloading it from the Google Play Store. You need some tool or method to run the game on your computer. One of the most common approaches is to use an emulator, software that imitates the Android operating system on your PC. Emulators, however, have some drawbacks:

  • They can take up a lot of space and resources on your PC, which can affect its performance and speed.
  • They can be complicated to set up and configure, especially for beginners.
  • They can have compatibility and security issues, which can cause errors or malware infections.

That is why we will show you some alternative ways to play Stumble Guys on PC without an emulator. These methods are simpler, faster, and safer than using emulators. Let's take a look at them.

How to play Stumble Guys on PC without an emulator

Method 1: iMyFone MirrorTo

iMyFone MirrorTo is a tool that lets you mirror and control your Android phone's screen on your PC wirelessly. You can use it to play Stumble Guys on PC without an emulator by following these steps:

  1. Download and install iMyFone MirrorTo on your PC from here.
  2. On your phone, swipe down from the top of the screen and tap the "Cast" or "Screen Mirroring" option.
  3. Select your PC from the list of available devices and tap "Start now".
  4. Your phone's screen will be mirrored on your PC. You can use the mouse to control the phone's screen.
  5. Open Stumble Guys on your phone and start playing on your PC.

The advantages of iMyFone MirrorTo over emulators are:

  • It is easy to use and does not require any installation or configuration on your phone.
  • It does not consume much space or resources on your PC, since it only mirrors the phone's screen.
  • It does not affect the game's performance or quality, since the game runs natively on your phone.

Method 2: Android-x86

Android-x86 is a project that ports the Android operating system to run on x86-based computers. You can use it to play Stumble Guys on PC without an emulator by following these steps (a hedged command-line sketch of the USB-writing step follows the list):

  1. Download the latest version of Android-x86 from here.
  2. Create a bootable USB drive with Android-x86 using a tool such as Rufus or UNetbootin.
  3. Restart your PC and boot from the USB drive. You will see a menu with different options; choose "Run Android-x86 without installation".
  4. You will land on the Android-x86 desktop. Connect to a Wi-Fi network and open the Google Play Store app.
  5. Sign in with your Google account and download Stumble Guys from the Google Play Store.
  6. Open Stumble Guys and start playing on your PC.
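If you are on Linux and prefer the command line to Rufus or UNetbootin, the bootable USB can also be written with dd. The sketch below is an illustration and is not part of the original guide: the ISO file name and the target device path are placeholders, and writing to the wrong device will destroy its data, so double-check the path first.

```python
import subprocess

ISO = "android-x86_64-9.0-r2.iso"  # hypothetical Android-x86 image file
DEVICE = "/dev/sdX"                # placeholder! replace with your USB stick

# Write the ISO image byte-for-byte onto the USB stick (requires root).
# conv=fsync makes dd flush the data before reporting completion.
subprocess.run(
    ["sudo", "dd", f"if={ISO}", f"of={DEVICE}", "bs=4M",
     "status=progress", "conv=fsync"],
    check=True,
)
```

On Windows, Rufus (mentioned in the steps above) does the same job through a graphical interface.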

The advantages of Android-x86 over emulators are:

  • It runs faster and smoother than emulators, since it uses your PC's native hardware.
  • It supports more features and apps than emulators, since it is based on the official Android source code.

The disadvantages of Android-x86 are:

  • It does not support keyboard and mouse controls, since it is designed for touchscreens.

Method 3: ARC Welder

ARC Welder is a Chrome extension that lets you run Android apps on your PC using the Chrome browser. You can use it to play Stumble Guys on PC without an emulator by following these steps:

  1. Download and install the Chrome browser on your PC from here.
  2. Download and install the ARC Welder extension from here.
  3. Download the Stumble Guys APK file from here.
  4. Launch the ARC Welder extension and click "Add your APK".
  5. Select the Stumble Guys APK file and configure the settings as you like.
  6. Click "Test" to run Stumble Guys on your PC.

The advantages of ARC Welder over emulators are:

  • It is simple and quick to use, since it does not require any installation or configuration on your PC beyond the extension itself.
  • It does not take up much space or resources on your PC, since it runs inside your browser.
  • It supports keyboard and mouse controls, as well as gamepads and joysticks.

The disadvantages of ARC Welder are:

  • It is not compatible with Google Play Services, which means you cannot sign in with your Google account or access some game features.
  • It does not deliver good performance or quality, since it is not optimized for running Android apps.
  • It can have compatibility and stability issues, which can cause crashes or errors.

Conclusion

FAQ

Q1: Is Stumble Guys free to play?

A1: Yes, Stumble Guys is free to play, but it has in-app purchases for cosmetic items and emotes.

Q2: Can I play Stumble Guys with my friends?

A2: Yes, you can invite your friends to join your party and compete against other players online.

Q3: What are the system requirements for Stumble Guys?

A3: For Android devices, you need at least Android 5.0 and 100 MB of free space. For PC, you need at least Windows 10, an Intel Core 2 Duo E8400 or AMD Phenom II X4 965 processor, 4 GB of RAM, an AMD Radeon HD 7750 or NVIDIA GeForce GTX 260 graphics card, and a broadband internet connection.

Q4: How many players can play Stumble Guys at once?

A4: Stumble Guys supports up to 32 players online in each round.

Q5: How many levels are there in Stumble Guys?

A5: There are currently 17 unique obstacle courses in Stumble Guys, each with different challenges and themes.

\ No newline at end of file diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/setup.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/setup.py deleted file mode 100644 index a220d12b21d96c5093a218c406cf47f1e7c8761a..0000000000000000000000000000000000000000 --- a/spaces/BernardoOlisan/vqganclip/taming-transformers/setup.py +++ /dev/null @@ -1,13 +0,0 @@ -from setuptools import setup, find_packages - -setup( - name='taming-transformers', - version='0.0.1', - description='Taming Transformers for High-Resolution Image Synthesis', - packages=find_packages(), - install_requires=[ - 'torch', - 'numpy', - 'tqdm', - ], -) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langthaimodel.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langthaimodel.py deleted file mode 100644 index 489cad930e0029fc2f8e5111df1bad38151a07a9..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langthaimodel.py +++ /dev/null @@ -1,4380 +0,0 @@ -from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel - -# 3: Positive -# 2: Likely -# 1: Unlikely -# 0: Negative - -THAI_LANG_MODEL = { - 5: { # 'ก' - 5: 2, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 2, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 3, # 'ฎ' - 57: 2, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 2, # 'ณ' - 20: 2, # 'ด' - 19: 3, # 'ต' - 44: 0, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 1, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 1, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 2, # 'ม' - 16: 1, # 'ย' - 2: 3, # 'ร' - 61: 2, # 'ฤ' - 15: 3, # 'ล' - 12: 3, # 'ว' - 42: 2, # 'ศ' - 46: 3, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 3, # 'อ' - 63: 1, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 3, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 0, # 'ึ' - 27: 2, # 'ื' - 32: 2, # 'ุ' - 35: 1, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 1, # 'ใ' - 33: 2, # 'ไ' - 50: 1, # 'ๆ' - 37: 3, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 30: { # 'ข' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 1, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 2, # 'ณ' - 20: 0, # 'ด' - 19: 2, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 1, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 2, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 1, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 2, # 'ี' - 40: 3, # 'ึ' - 27: 1, # 'ื' - 32: 1, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 2, # '่' - 7: 3, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 24: { # 'ค' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 2, # 'ค' - 8: 2, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 2, # 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 0, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 
'ม' - 16: 2, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 3, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 2, # 'า' - 36: 3, # 'ำ' - 23: 3, # 'ิ' - 13: 2, # 'ี' - 40: 0, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 2, # 'ู' - 11: 1, # 'เ' - 28: 0, # 'แ' - 41: 3, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 8: { # 'ง' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 3, # 'ค' - 8: 2, # 'ง' - 26: 2, # 'จ' - 52: 1, # 'ฉ' - 34: 2, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 3, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 1, # 'ฝ' - 31: 2, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 2, # 'ม' - 16: 1, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 2, # 'ว' - 42: 2, # 'ศ' - 46: 1, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 1, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 1, # 'ื' - 32: 1, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 3, # 'ๆ' - 37: 0, # '็' - 6: 2, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 26: { # 'จ' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 0, # 'ค' - 8: 2, # 'ง' - 26: 3, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 1, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 1, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 1, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 1, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 3, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 3, # 'ำ' - 23: 2, # 'ิ' - 13: 1, # 'ี' - 40: 3, # 'ึ' - 27: 1, # 'ื' - 32: 3, # 'ุ' - 35: 2, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 2, # '่' - 7: 2, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 52: { # 'ฉ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 3, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 3, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 1, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 1, # 'ั' - 1: 1, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 1, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 34: { # 'ช' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 1, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 1, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' 
- 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 1, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 2, # 'ั' - 1: 3, # 'า' - 36: 1, # 'ำ' - 23: 3, # 'ิ' - 13: 2, # 'ี' - 40: 0, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 1, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 51: { # 'ซ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 1, # 'ั' - 1: 1, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 2, # 'ี' - 40: 3, # 'ึ' - 27: 2, # 'ื' - 32: 1, # 'ุ' - 35: 1, # 'ู' - 11: 1, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 1, # '่' - 7: 2, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 47: { # 'ญ' - 5: 1, # 'ก' - 30: 1, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 3, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 1, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 2, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 2, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 0, # 'ไ' - 50: 1, # 'ๆ' - 37: 0, # '็' - 6: 2, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 58: { # 'ฎ' - 5: 2, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 1, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 2, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 
0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 57: { # 'ฏ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 3, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 49: { # 'ฐ' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 2, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 53: { # 'ฑ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 55: { # 'ฒ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, 
# 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 43: { # 'ณ' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 3, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 3, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 1, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 3, # 'ะ' - 10: 0, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 2, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 20: { # 'ด' - 5: 2, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 1, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 2, # 'ม' - 16: 3, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 3, # 'ั' - 1: 2, # 'า' - 36: 2, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 1, # 'ึ' - 27: 2, # 'ื' - 32: 3, # 'ุ' - 35: 2, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 2, # 'ๆ' - 37: 2, # '็' - 6: 1, # '่' - 7: 3, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 19: { # 'ต' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 1, # 'ต' - 44: 2, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 1, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 2, # 'ภ' - 9: 1, # 'ม' - 16: 1, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 3, # 'ส' - 21: 0, # 'ห' - 4: 3, # 'อ' - 63: 1, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 2, # 'ำ' - 23: 3, # 'ิ' - 13: 2, # 'ี' - 40: 1, # 'ึ' - 27: 1, # 'ื' - 32: 3, # 'ุ' - 35: 2, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 1, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 2, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 44: { # 'ถ' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 2, # 
'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 2, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 1, # 'ี' - 40: 3, # 'ึ' - 27: 2, # 'ื' - 32: 2, # 'ุ' - 35: 3, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 2, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 14: { # 'ท' - 5: 1, # 'ก' - 30: 1, # 'ข' - 24: 3, # 'ค' - 8: 1, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 3, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 2, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 3, # 'ย' - 2: 3, # 'ร' - 61: 1, # 'ฤ' - 15: 1, # 'ล' - 12: 2, # 'ว' - 42: 3, # 'ศ' - 46: 1, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 3, # 'ำ' - 23: 2, # 'ิ' - 13: 3, # 'ี' - 40: 2, # 'ึ' - 27: 1, # 'ื' - 32: 3, # 'ุ' - 35: 1, # 'ู' - 11: 0, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 48: { # 'ธ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 1, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 2, # 'า' - 36: 0, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 2, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 3: { # 'น' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 3, # 'ค' - 8: 1, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 1, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 2, # 'ถ' - 14: 3, # 'ท' - 48: 3, # 'ธ' - 3: 2, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 0, # 'ฝ' - 31: 2, # 'พ' - 54: 1, # 'ฟ' - 45: 1, # 'ภ' - 9: 2, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 1, # 'ฤ' - 15: 2, # 'ล' - 12: 3, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 3, # 'อ' - 63: 1, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 3, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 3, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 2, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 3, # 'โ' - 29: 3, # 'ใ' - 33: 3, # 'ไ' - 50: 2, # 'ๆ' - 37: 1, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 17: { # 'บ' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 1, # 'ง' 
- 26: 1, # 'จ' - 52: 1, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 3, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 1, # 'ฟ' - 45: 1, # 'ภ' - 9: 1, # 'ม' - 16: 0, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 3, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 2, # 'อ' - 63: 1, # 'ฯ' - 22: 0, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 2, # 'ำ' - 23: 2, # 'ิ' - 13: 2, # 'ี' - 40: 0, # 'ึ' - 27: 2, # 'ื' - 32: 3, # 'ุ' - 35: 2, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 2, # '่' - 7: 2, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 25: { # 'ป' - 5: 2, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 1, # 'ฎ' - 57: 3, # 'ฏ' - 49: 1, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 1, # 'ต' - 44: 1, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 0, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 1, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 0, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 1, # 'ษ' - 18: 2, # 'ส' - 21: 1, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 3, # 'ั' - 1: 1, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 3, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 1, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 2, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 2, # 'ไ' - 50: 0, # 'ๆ' - 37: 3, # '็' - 6: 1, # '่' - 7: 2, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 39: { # 'ผ' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 1, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 2, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 1, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 1, # 'ื' - 32: 0, # 'ุ' - 35: 3, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 1, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 62: { # 'ฝ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 1, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 1, # 'ี' - 40: 2, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 
11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 2, # '่' - 7: 1, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 31: { # 'พ' - 5: 1, # 'ก' - 30: 1, # 'ข' - 24: 1, # 'ค' - 8: 1, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 1, # 'ณ' - 20: 1, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 2, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 0, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 2, # 'ย' - 2: 3, # 'ร' - 61: 2, # 'ฤ' - 15: 2, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 1, # 'ห' - 4: 2, # 'อ' - 63: 1, # 'ฯ' - 22: 0, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 3, # 'ิ' - 13: 2, # 'ี' - 40: 1, # 'ึ' - 27: 3, # 'ื' - 32: 1, # 'ุ' - 35: 2, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 1, # '็' - 6: 0, # '่' - 7: 1, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 54: { # 'ฟ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 2, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 2, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 1, # 'ื' - 32: 1, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 2, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 45: { # 'ภ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 3, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 2, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 9: { # 'ม' - 5: 2, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 2, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 1, # 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 3, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 2, # 'ม' - 16: 1, # 'ย' - 2: 2, # 'ร' - 61: 2, # 'ฤ' - 15: 
2, # 'ล' - 12: 2, # 'ว' - 42: 1, # 'ศ' - 46: 1, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 0, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 3, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 2, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 1, # 'ๆ' - 37: 1, # '็' - 6: 3, # '่' - 7: 2, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 16: { # 'ย' - 5: 3, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 2, # 'ช' - 51: 0, # 'ซ' - 47: 2, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 2, # 'ม' - 16: 0, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 3, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 1, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 3, # 'ี' - 40: 1, # 'ึ' - 27: 2, # 'ื' - 32: 2, # 'ุ' - 35: 3, # 'ู' - 11: 2, # 'เ' - 28: 1, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 2, # 'ๆ' - 37: 1, # '็' - 6: 3, # '่' - 7: 2, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 2: { # 'ร' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 2, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 3, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 3, # 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 3, # 'ถ' - 14: 3, # 'ท' - 48: 1, # 'ธ' - 3: 2, # 'น' - 17: 2, # 'บ' - 25: 3, # 'ป' - 39: 2, # 'ผ' - 62: 1, # 'ฝ' - 31: 2, # 'พ' - 54: 1, # 'ฟ' - 45: 1, # 'ภ' - 9: 3, # 'ม' - 16: 2, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 3, # 'ว' - 42: 2, # 'ศ' - 46: 2, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 3, # 'อ' - 63: 1, # 'ฯ' - 22: 3, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 2, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 3, # 'ู' - 11: 3, # 'เ' - 28: 3, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 3, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 3, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 61: { # 'ฤ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 2, # 'ต' - 44: 0, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 2, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 15: { # 'ล' - 5: 2, # 'ก' - 30: 3, # 'ข' - 24: 1, # 'ค' - 8: 3, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, 
# 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 1, # 'ม' - 16: 3, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 1, # 'ห' - 4: 3, # 'อ' - 63: 2, # 'ฯ' - 22: 3, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 2, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 2, # 'ึ' - 27: 3, # 'ื' - 32: 2, # 'ุ' - 35: 3, # 'ู' - 11: 2, # 'เ' - 28: 1, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 2, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 12: { # 'ว' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 1, # 'ค' - 8: 3, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 1, # 'ณ' - 20: 2, # 'ด' - 19: 1, # 'ต' - 44: 1, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 1, # 'ฟ' - 45: 0, # 'ภ' - 9: 3, # 'ม' - 16: 3, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 3, # 'ิ' - 13: 2, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 2, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 1, # 'ใ' - 33: 2, # 'ไ' - 50: 1, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 42: { # 'ศ' - 5: 1, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 1, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 2, # 'ว' - 42: 1, # 'ศ' - 46: 2, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 2, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 2, # 'ิ' - 13: 0, # 'ี' - 40: 3, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 2, # 'ู' - 11: 0, # 'เ' - 28: 1, # 'แ' - 41: 0, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 46: { # 'ษ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 2, # 'ฎ' - 57: 1, # 'ฏ' - 49: 2, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 3, # 'ณ' - 20: 0, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 1, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 2, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 2, # '์' - 56: 0, # '๑' 
- 59: 0, # '๒' - 60: 0, # '๕' - }, - 18: { # 'ส' - 5: 2, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 2, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 3, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 2, # 'ภ' - 9: 3, # 'ม' - 16: 1, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 2, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 3, # 'ำ' - 23: 3, # 'ิ' - 13: 3, # 'ี' - 40: 2, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 3, # 'ู' - 11: 2, # 'เ' - 28: 0, # 'แ' - 41: 1, # 'โ' - 29: 0, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 1, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 21: { # 'ห' - 5: 3, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 1, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 2, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 3, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 0, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 3, # 'ม' - 16: 2, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 0, # 'ำ' - 23: 1, # 'ิ' - 13: 1, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 1, # 'ุ' - 35: 1, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 3, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 4: { # 'อ' - 5: 3, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 1, # 'ฟ' - 45: 1, # 'ภ' - 9: 3, # 'ม' - 16: 3, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 2, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 2, # 'ะ' - 10: 3, # 'ั' - 1: 3, # 'า' - 36: 2, # 'ำ' - 23: 2, # 'ิ' - 13: 3, # 'ี' - 40: 0, # 'ึ' - 27: 3, # 'ื' - 32: 3, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 1, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 1, # 'ๆ' - 37: 1, # '็' - 6: 2, # '่' - 7: 2, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 63: { # 'ฯ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 
36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 22: { # 'ะ' - 5: 3, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 1, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 3, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 1, # 'ถ' - 14: 3, # 'ท' - 48: 1, # 'ธ' - 3: 2, # 'น' - 17: 3, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 2, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 3, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 2, # 'อ' - 63: 1, # 'ฯ' - 22: 1, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 10: { # 'ั' - 5: 3, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 3, # 'ง' - 26: 3, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 3, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 2, # 'ฐ' - 53: 0, # 'ฑ' - 55: 3, # 'ฒ' - 43: 3, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 0, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 2, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 3, # 'ม' - 16: 3, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 3, # 'ว' - 42: 2, # 'ศ' - 46: 0, # 'ษ' - 18: 3, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 1: { # 'า' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 3, # 'ค' - 8: 3, # 'ง' - 26: 3, # 'จ' - 52: 0, # 'ฉ' - 34: 3, # 'ช' - 51: 1, # 'ซ' - 47: 2, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 3, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 1, # 'ถ' - 14: 3, # 'ท' - 48: 2, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 1, # 'ฝ' - 31: 3, # 'พ' - 54: 1, # 'ฟ' - 45: 1, # 'ภ' - 9: 3, # 'ม' - 16: 3, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 3, # 'ว' - 42: 2, # 'ศ' - 46: 3, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 2, # 'อ' - 63: 1, # 'ฯ' - 22: 3, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 1, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 36: { # 'ำ' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 3, # 'ค' - 8: 2, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 1, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 1, # 'ต' - 44: 1, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 1, # 'บ' - 25: 1, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 
31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 1, # 'ม' - 16: 0, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 3, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 23: { # 'ิ' - 5: 3, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 3, # 'จ' - 52: 0, # 'ฉ' - 34: 3, # 'ช' - 51: 0, # 'ซ' - 47: 2, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 1, # 'ถ' - 14: 3, # 'ท' - 48: 3, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 0, # 'ฝ' - 31: 3, # 'พ' - 54: 1, # 'ฟ' - 45: 2, # 'ภ' - 9: 3, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 3, # 'ว' - 42: 3, # 'ศ' - 46: 2, # 'ษ' - 18: 2, # 'ส' - 21: 3, # 'ห' - 4: 1, # 'อ' - 63: 1, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 1, # 'แ' - 41: 1, # 'โ' - 29: 1, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 2, # '้' - 38: 2, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 13: { # 'ี' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 1, # 'ผ' - 62: 0, # 'ฝ' - 31: 2, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 3, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 2, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 1, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 1, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 40: { # 'ึ' - 5: 3, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 3, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 1, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 27: { # 'ื' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 
0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 3, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 32: { # 'ุ' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 3, # 'ค' - 8: 3, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 2, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 1, # 'ฒ' - 43: 3, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 1, # 'ธ' - 3: 2, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 1, # 'ภ' - 9: 3, # 'ม' - 16: 1, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 1, # 'ว' - 42: 1, # 'ศ' - 46: 2, # 'ษ' - 18: 1, # 'ส' - 21: 1, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 0, # 'แ' - 41: 1, # 'โ' - 29: 0, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 2, # '้' - 38: 1, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 35: { # 'ู' - 5: 3, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 2, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 2, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 1, # 'ณ' - 20: 2, # 'ด' - 19: 2, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 2, # 'น' - 17: 0, # 'บ' - 25: 3, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 1, # 'แ' - 41: 1, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 3, # '่' - 7: 3, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 11: { # 'เ' - 5: 3, # 'ก' - 30: 3, # 'ข' - 24: 3, # 'ค' - 8: 2, # 'ง' - 26: 3, # 'จ' - 52: 3, # 'ฉ' - 34: 3, # 'ช' - 51: 2, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 1, # 'ณ' - 20: 3, # 'ด' - 19: 3, # 'ต' - 44: 1, # 'ถ' - 14: 3, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 3, # 'ป' - 39: 2, # 'ผ' - 62: 1, # 'ฝ' - 31: 3, # 'พ' - 54: 1, # 'ฟ' - 45: 3, # 'ภ' - 9: 3, # 'ม' - 16: 2, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 3, # 'ว' - 42: 2, # 'ศ' - 46: 0, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 
'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 28: { # 'แ' - 5: 3, # 'ก' - 30: 2, # 'ข' - 24: 2, # 'ค' - 8: 1, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 3, # 'ต' - 44: 2, # 'ถ' - 14: 3, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 2, # 'ป' - 39: 3, # 'ผ' - 62: 0, # 'ฝ' - 31: 2, # 'พ' - 54: 2, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 41: { # 'โ' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 1, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 2, # 'ต' - 44: 0, # 'ถ' - 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 1, # 'บ' - 25: 3, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 1, # 'ฟ' - 45: 1, # 'ภ' - 9: 1, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 3, # 'ล' - 12: 0, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 0, # 'ห' - 4: 2, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 29: { # 'ใ' - 5: 2, # 'ก' - 30: 0, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 3, # 'จ' - 52: 0, # 'ฉ' - 34: 3, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 1, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 3, # 'ส' - 21: 3, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 33: { # 'ไ' - 5: 1, # 'ก' - 30: 2, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 3, # 'ด' - 19: 1, # 'ต' - 44: 0, # 'ถ' - 14: 3, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 1, # 'บ' - 25: 3, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 2, # 'ฟ' - 45: 0, # 'ภ' - 9: 3, # 'ม' - 16: 0, # 'ย' - 2: 3, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 3, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 
'ษ' - 18: 1, # 'ส' - 21: 2, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 50: { # 'ๆ' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 37: { # '็' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 2, # 'ง' - 26: 3, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 1, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 2, # 'ต' - 44: 0, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 3, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 1, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 2, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 0, # 'ห' - 4: 1, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 1, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 6: { # '่' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 1, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' - 14: 2, # 'ท' - 48: 1, # 'ธ' - 3: 3, # 'น' - 17: 1, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 1, # 'ฝ' - 31: 1, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 3, # 'ม' - 16: 3, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 2, # 'ล' - 12: 3, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 1, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 1, # 'ะ' - 10: 0, # 'ั' - 1: 3, # 'า' - 36: 2, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 3, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 1, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 7: { # '้' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 2, # 'ค' - 8: 3, # 'ง' - 26: 2, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 1, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 1, # 'ด' - 19: 2, # 'ต' - 44: 1, # 'ถ' 
- 14: 2, # 'ท' - 48: 0, # 'ธ' - 3: 3, # 'น' - 17: 2, # 'บ' - 25: 2, # 'ป' - 39: 2, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 1, # 'ฟ' - 45: 0, # 'ภ' - 9: 3, # 'ม' - 16: 2, # 'ย' - 2: 2, # 'ร' - 61: 0, # 'ฤ' - 15: 1, # 'ล' - 12: 3, # 'ว' - 42: 1, # 'ศ' - 46: 0, # 'ษ' - 18: 2, # 'ส' - 21: 2, # 'ห' - 4: 3, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 3, # 'า' - 36: 2, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 2, # 'ใ' - 33: 2, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 38: { # '์' - 5: 2, # 'ก' - 30: 1, # 'ข' - 24: 1, # 'ค' - 8: 0, # 'ง' - 26: 1, # 'จ' - 52: 0, # 'ฉ' - 34: 1, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 2, # 'ด' - 19: 1, # 'ต' - 44: 1, # 'ถ' - 14: 1, # 'ท' - 48: 0, # 'ธ' - 3: 1, # 'น' - 17: 1, # 'บ' - 25: 1, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 1, # 'พ' - 54: 1, # 'ฟ' - 45: 0, # 'ภ' - 9: 2, # 'ม' - 16: 0, # 'ย' - 2: 1, # 'ร' - 61: 1, # 'ฤ' - 15: 1, # 'ล' - 12: 1, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 1, # 'ส' - 21: 1, # 'ห' - 4: 2, # 'อ' - 63: 1, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 2, # 'เ' - 28: 2, # 'แ' - 41: 1, # 'โ' - 29: 1, # 'ใ' - 33: 1, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 0, # '๑' - 59: 0, # '๒' - 60: 0, # '๕' - }, - 56: { # '๑' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 2, # '๑' - 59: 1, # '๒' - 60: 1, # '๕' - }, - 59: { # '๒' - 5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 1, # '๑' - 59: 1, # '๒' - 60: 3, # '๕' - }, - 60: { # '๕' - 
5: 0, # 'ก' - 30: 0, # 'ข' - 24: 0, # 'ค' - 8: 0, # 'ง' - 26: 0, # 'จ' - 52: 0, # 'ฉ' - 34: 0, # 'ช' - 51: 0, # 'ซ' - 47: 0, # 'ญ' - 58: 0, # 'ฎ' - 57: 0, # 'ฏ' - 49: 0, # 'ฐ' - 53: 0, # 'ฑ' - 55: 0, # 'ฒ' - 43: 0, # 'ณ' - 20: 0, # 'ด' - 19: 0, # 'ต' - 44: 0, # 'ถ' - 14: 0, # 'ท' - 48: 0, # 'ธ' - 3: 0, # 'น' - 17: 0, # 'บ' - 25: 0, # 'ป' - 39: 0, # 'ผ' - 62: 0, # 'ฝ' - 31: 0, # 'พ' - 54: 0, # 'ฟ' - 45: 0, # 'ภ' - 9: 0, # 'ม' - 16: 0, # 'ย' - 2: 0, # 'ร' - 61: 0, # 'ฤ' - 15: 0, # 'ล' - 12: 0, # 'ว' - 42: 0, # 'ศ' - 46: 0, # 'ษ' - 18: 0, # 'ส' - 21: 0, # 'ห' - 4: 0, # 'อ' - 63: 0, # 'ฯ' - 22: 0, # 'ะ' - 10: 0, # 'ั' - 1: 0, # 'า' - 36: 0, # 'ำ' - 23: 0, # 'ิ' - 13: 0, # 'ี' - 40: 0, # 'ึ' - 27: 0, # 'ื' - 32: 0, # 'ุ' - 35: 0, # 'ู' - 11: 0, # 'เ' - 28: 0, # 'แ' - 41: 0, # 'โ' - 29: 0, # 'ใ' - 33: 0, # 'ไ' - 50: 0, # 'ๆ' - 37: 0, # '็' - 6: 0, # '่' - 7: 0, # '้' - 38: 0, # '์' - 56: 2, # '๑' - 59: 1, # '๒' - 60: 0, # '๕' - }, -} - -# 255: Undefined characters that did not exist in training text -# 254: Carriage/Return -# 253: symbol (punctuation) that does not belong to word -# 252: 0 - 9 -# 251: Control characters - -# Character Mapping Table(s): -TIS_620_THAI_CHAR_TO_ORDER = { - 0: 255, # '\x00' - 1: 255, # '\x01' - 2: 255, # '\x02' - 3: 255, # '\x03' - 4: 255, # '\x04' - 5: 255, # '\x05' - 6: 255, # '\x06' - 7: 255, # '\x07' - 8: 255, # '\x08' - 9: 255, # '\t' - 10: 254, # '\n' - 11: 255, # '\x0b' - 12: 255, # '\x0c' - 13: 254, # '\r' - 14: 255, # '\x0e' - 15: 255, # '\x0f' - 16: 255, # '\x10' - 17: 255, # '\x11' - 18: 255, # '\x12' - 19: 255, # '\x13' - 20: 255, # '\x14' - 21: 255, # '\x15' - 22: 255, # '\x16' - 23: 255, # '\x17' - 24: 255, # '\x18' - 25: 255, # '\x19' - 26: 255, # '\x1a' - 27: 255, # '\x1b' - 28: 255, # '\x1c' - 29: 255, # '\x1d' - 30: 255, # '\x1e' - 31: 255, # '\x1f' - 32: 253, # ' ' - 33: 253, # '!' - 34: 253, # '"' - 35: 253, # '#' - 36: 253, # '$' - 37: 253, # '%' - 38: 253, # '&' - 39: 253, # "'" - 40: 253, # '(' - 41: 253, # ')' - 42: 253, # '*' - 43: 253, # '+' - 44: 253, # ',' - 45: 253, # '-' - 46: 253, # '.' - 47: 253, # '/' - 48: 252, # '0' - 49: 252, # '1' - 50: 252, # '2' - 51: 252, # '3' - 52: 252, # '4' - 53: 252, # '5' - 54: 252, # '6' - 55: 252, # '7' - 56: 252, # '8' - 57: 252, # '9' - 58: 253, # ':' - 59: 253, # ';' - 60: 253, # '<' - 61: 253, # '=' - 62: 253, # '>' - 63: 253, # '?' 
- 64: 253, # '@' - 65: 182, # 'A' - 66: 106, # 'B' - 67: 107, # 'C' - 68: 100, # 'D' - 69: 183, # 'E' - 70: 184, # 'F' - 71: 185, # 'G' - 72: 101, # 'H' - 73: 94, # 'I' - 74: 186, # 'J' - 75: 187, # 'K' - 76: 108, # 'L' - 77: 109, # 'M' - 78: 110, # 'N' - 79: 111, # 'O' - 80: 188, # 'P' - 81: 189, # 'Q' - 82: 190, # 'R' - 83: 89, # 'S' - 84: 95, # 'T' - 85: 112, # 'U' - 86: 113, # 'V' - 87: 191, # 'W' - 88: 192, # 'X' - 89: 193, # 'Y' - 90: 194, # 'Z' - 91: 253, # '[' - 92: 253, # '\\' - 93: 253, # ']' - 94: 253, # '^' - 95: 253, # '_' - 96: 253, # '`' - 97: 64, # 'a' - 98: 72, # 'b' - 99: 73, # 'c' - 100: 114, # 'd' - 101: 74, # 'e' - 102: 115, # 'f' - 103: 116, # 'g' - 104: 102, # 'h' - 105: 81, # 'i' - 106: 201, # 'j' - 107: 117, # 'k' - 108: 90, # 'l' - 109: 103, # 'm' - 110: 78, # 'n' - 111: 82, # 'o' - 112: 96, # 'p' - 113: 202, # 'q' - 114: 91, # 'r' - 115: 79, # 's' - 116: 84, # 't' - 117: 104, # 'u' - 118: 105, # 'v' - 119: 97, # 'w' - 120: 98, # 'x' - 121: 92, # 'y' - 122: 203, # 'z' - 123: 253, # '{' - 124: 253, # '|' - 125: 253, # '}' - 126: 253, # '~' - 127: 253, # '\x7f' - 128: 209, # '\x80' - 129: 210, # '\x81' - 130: 211, # '\x82' - 131: 212, # '\x83' - 132: 213, # '\x84' - 133: 88, # '\x85' - 134: 214, # '\x86' - 135: 215, # '\x87' - 136: 216, # '\x88' - 137: 217, # '\x89' - 138: 218, # '\x8a' - 139: 219, # '\x8b' - 140: 220, # '\x8c' - 141: 118, # '\x8d' - 142: 221, # '\x8e' - 143: 222, # '\x8f' - 144: 223, # '\x90' - 145: 224, # '\x91' - 146: 99, # '\x92' - 147: 85, # '\x93' - 148: 83, # '\x94' - 149: 225, # '\x95' - 150: 226, # '\x96' - 151: 227, # '\x97' - 152: 228, # '\x98' - 153: 229, # '\x99' - 154: 230, # '\x9a' - 155: 231, # '\x9b' - 156: 232, # '\x9c' - 157: 233, # '\x9d' - 158: 234, # '\x9e' - 159: 235, # '\x9f' - 160: 236, # None - 161: 5, # 'ก' - 162: 30, # 'ข' - 163: 237, # 'ฃ' - 164: 24, # 'ค' - 165: 238, # 'ฅ' - 166: 75, # 'ฆ' - 167: 8, # 'ง' - 168: 26, # 'จ' - 169: 52, # 'ฉ' - 170: 34, # 'ช' - 171: 51, # 'ซ' - 172: 119, # 'ฌ' - 173: 47, # 'ญ' - 174: 58, # 'ฎ' - 175: 57, # 'ฏ' - 176: 49, # 'ฐ' - 177: 53, # 'ฑ' - 178: 55, # 'ฒ' - 179: 43, # 'ณ' - 180: 20, # 'ด' - 181: 19, # 'ต' - 182: 44, # 'ถ' - 183: 14, # 'ท' - 184: 48, # 'ธ' - 185: 3, # 'น' - 186: 17, # 'บ' - 187: 25, # 'ป' - 188: 39, # 'ผ' - 189: 62, # 'ฝ' - 190: 31, # 'พ' - 191: 54, # 'ฟ' - 192: 45, # 'ภ' - 193: 9, # 'ม' - 194: 16, # 'ย' - 195: 2, # 'ร' - 196: 61, # 'ฤ' - 197: 15, # 'ล' - 198: 239, # 'ฦ' - 199: 12, # 'ว' - 200: 42, # 'ศ' - 201: 46, # 'ษ' - 202: 18, # 'ส' - 203: 21, # 'ห' - 204: 76, # 'ฬ' - 205: 4, # 'อ' - 206: 66, # 'ฮ' - 207: 63, # 'ฯ' - 208: 22, # 'ะ' - 209: 10, # 'ั' - 210: 1, # 'า' - 211: 36, # 'ำ' - 212: 23, # 'ิ' - 213: 13, # 'ี' - 214: 40, # 'ึ' - 215: 27, # 'ื' - 216: 32, # 'ุ' - 217: 35, # 'ู' - 218: 86, # 'ฺ' - 219: 240, # None - 220: 241, # None - 221: 242, # None - 222: 243, # None - 223: 244, # '฿' - 224: 11, # 'เ' - 225: 28, # 'แ' - 226: 41, # 'โ' - 227: 29, # 'ใ' - 228: 33, # 'ไ' - 229: 245, # 'ๅ' - 230: 50, # 'ๆ' - 231: 37, # '็' - 232: 6, # '่' - 233: 7, # '้' - 234: 67, # '๊' - 235: 77, # '๋' - 236: 38, # '์' - 237: 93, # 'ํ' - 238: 246, # '๎' - 239: 247, # '๏' - 240: 68, # '๐' - 241: 56, # '๑' - 242: 59, # '๒' - 243: 65, # '๓' - 244: 69, # '๔' - 245: 60, # '๕' - 246: 70, # '๖' - 247: 80, # '๗' - 248: 71, # '๘' - 249: 87, # '๙' - 250: 248, # '๚' - 251: 249, # '๛' - 252: 250, # None - 253: 251, # None - 254: 252, # None - 255: 253, # None -} - -TIS_620_THAI_MODEL = SingleByteCharSetModel( - charset_name="TIS-620", - language="Thai", - 
char_to_order_map=TIS_620_THAI_CHAR_TO_ORDER, - language_model=THAI_LANG_MODEL, - typical_positive_ratio=0.926386, - keep_ascii_letters=False, - alphabet="กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛", -) diff --git a/spaces/Branon/oai-proxy/Dockerfile b/spaces/Branon/oai-proxy/Dockerfile deleted file mode 100644 index cee9bcd0c69dbeb6e903c3f64531b2ff70f021f6..0000000000000000000000000000000000000000 --- a/spaces/Branon/oai-proxy/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitlab.com/khanon/oai-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/CAPTY222/runwayml-stable-diffusion-v1-5/app.py b/spaces/CAPTY222/runwayml-stable-diffusion-v1-5/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/CAPTY222/runwayml-stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/train_net.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/train_net.py deleted file mode 100644 index eac2ec5c39e4a3ce2221f354dcea288bffcb1fbb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TridentNet/train_net.py +++ /dev/null @@ -1,67 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -""" -TridentNet Training Script. - -This script is a simplified version of the training script in detectron2/tools. -""" - -import os - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch -from detectron2.evaluation import COCOEvaluator - -from tridentnet import add_tridentnet_config - - -class Trainer(DefaultTrainer): - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - return COCOEvaluator(dataset_name, cfg, True, output_folder) - - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - add_tridentnet_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - return cfg - - -def main(args): - cfg = setup(args) - - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - return res - - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/adapter.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/adapter.py deleted file mode 100644 index 0046407620aa38e65b381f1440a8d55f319ca2b0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/adapter.py +++ /dev/null @@ -1,71 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Zhenwei Shao https://github.com/ParadoxZW -# -------------------------------------------------------- - -import torch.nn as nn -import torch -from openvqa.core.base_dataset import BaseAdapter -from openvqa.utils.make_mask import make_mask - - -class Adapter(BaseAdapter): - def __init__(self, __C): - super(Adapter, self).__init__(__C) - self.__C = __C - - - def vqa_init(self, __C): - pass - # self.frcn_linear = nn.Linear(__C.FEAT_SIZE['vqa']['FRCN_FEAT_SIZE'][1], __C.HIDDEN_SIZE) - - def gqa_init(self, __C): - imgfeat_linear_size = __C.FEAT_SIZE['gqa']['FRCN_FEAT_SIZE'][1] - if __C.USE_BBOX_FEAT: - self.bbox_linear = nn.Linear(5, __C.BBOXFEAT_EMB_SIZE) - imgfeat_linear_size += __C.BBOXFEAT_EMB_SIZE - self.frcn_linear = nn.Linear(imgfeat_linear_size, __C.HIDDEN_SIZE) - - if __C.USE_AUX_FEAT: - self.grid_linear = nn.Linear( - __C.FEAT_SIZE['gqa']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE) - - def clevr_init(self, __C): - self.grid_linear = nn.Linear( - __C.FEAT_SIZE['clevr']['GRID_FEAT_SIZE'][1], __C.HIDDEN_SIZE) - - def vqa_forward(self, feat_dict): - frcn_feat = feat_dict['FRCN_FEAT'] - bbox_feat = feat_dict['BBOX_FEAT'] - - img_feat_mask = make_mask(frcn_feat) - # img_feat = self.frcn_linear(frcn_feat) - - return frcn_feat, img_feat_mask - - - def gqa_forward(self, feat_dict): - frcn_feat = feat_dict['FRCN_FEAT'] - bbox_feat = feat_dict['BBOX_FEAT'] - grid_feat = feat_dict['GRID_FEAT'] - - img_feat_mask = make_mask(frcn_feat) - - if self.__C.USE_BBOX_FEAT: - bbox_feat = self.bbox_linear(bbox_feat) - frcn_feat = torch.cat((frcn_feat, bbox_feat), dim=-1) - img_feat = self.frcn_linear(frcn_feat) - - return img_feat, img_feat_mask - - - def clevr_forward(self, feat_dict): - grid_feat = feat_dict['GRID_FEAT'] - - img_feat_mask = make_mask(grid_feat) - img_feat = self.grid_linear(grid_feat) - - return img_feat, img_feat_mask - - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/random/normal_distribution.h b/spaces/CVPR/LIVE/thrust/thrust/random/normal_distribution.h deleted file mode 100644 index ac45e161a082aff69d496d37eb4ec327c994db64..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/random/normal_distribution.h +++ /dev/null @@ -1,275 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed 
under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file normal_distribution.h - * \brief A normal (Gaussian) distribution of real-valued numbers. - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ - -namespace random -{ - - -/*! \addtogroup random_number_distributions - * \{ - */ - -/*! \class normal_distribution - * \brief A \p normal_distribution random number distribution produces floating point - * Normally distributed random numbers. - * - * \tparam RealType The type of floating point number to produce. - * - * The following code snippet demonstrates examples of using a \p normal_distribution with a - * random number engine to produce random values drawn from the Normal distribution with a given - * mean and variance: - * - * \code - * #include - * #include - * - * int main(void) - * { - * // create a minstd_rand object to act as our source of randomness - * thrust::minstd_rand rng; - * - * // create a normal_distribution to produce floats from the Normal distribution - * // with mean 2.0 and standard deviation 3.5 - * thrust::random::normal_distribution dist(2.0f, 3.5f); - * - * // write a random number to standard output - * std::cout << dist(rng) << std::endl; - * - * // write the mean of the distribution, just in case we forgot - * std::cout << dist.mean() << std::endl; - * - * // 2.0 is printed - * - * // and the standard deviation - * std::cout << dist.stddev() << std::endl; - * - * // 3.5 is printed - * - * return 0; - * } - * \endcode - */ -template - class normal_distribution - : public detail::normal_distribution_base::type -{ - private: - typedef typename detail::normal_distribution_base::type super_t; - - public: - // types - - /*! \typedef result_type - * \brief The type of the floating point number produced by this \p normal_distribution. - */ - typedef RealType result_type; - - /*! \typedef param_type - * \brief The type of the object encapsulating this \p normal_distribution's parameters. - */ - typedef thrust::pair param_type; - - // constructors and reset functions - - /*! This constructor creates a new \p normal_distribution from two values defining the - * half-open interval of the distribution. - * - * \param mean The mean (expected value) of the distribution. Defaults to \c 0.0. - * \param stddev The standard deviation of the distribution. Defaults to \c 1.0. - */ - __host__ __device__ - explicit normal_distribution(RealType mean = 0.0, RealType stddev = 1.0); - - /*! This constructor creates a new \p normal_distribution from a \p param_type object - * encapsulating the range of the distribution. - * - * \param parm A \p param_type object encapsulating the parameters (i.e., the mean and standard deviation) of the distribution. - */ - __host__ __device__ - explicit normal_distribution(const param_type &parm); - - /*! Calling this member function guarantees that subsequent uses of this - * \p normal_distribution do not depend on values produced by any random - * number generator prior to invoking this function. 
- */ - __host__ __device__ - void reset(void); - - // generating functions - - /*! This method produces a new Normal random integer drawn from this \p normal_distribution's - * range using a \p UniformRandomNumberGenerator as a source of randomness. - * - * \param urng The \p UniformRandomNumberGenerator to use as a source of randomness. - */ - template - __host__ __device__ - result_type operator()(UniformRandomNumberGenerator &urng); - - /*! This method produces a new Normal random integer as if by creating a new \p normal_distribution - * from the given \p param_type object, and calling its operator() method with the given - * \p UniformRandomNumberGenerator as a source of randomness. - * - * \param urng The \p UniformRandomNumberGenerator to use as a source of randomness. - * \param parm A \p param_type object encapsulating the parameters of the \p normal_distribution - * to draw from. - */ - template - __host__ __device__ - result_type operator()(UniformRandomNumberGenerator &urng, const param_type &parm); - - // property functions - - /*! This method returns the value of the parameter with which this \p normal_distribution - * was constructed. - * - * \return The mean (expected value) of this \p normal_distribution's output. - */ - __host__ __device__ - result_type mean(void) const; - - /*! This method returns the value of the parameter with which this \p normal_distribution - * was constructed. - * - * \return The standard deviation of this \p uniform_real_distribution's output. - */ - __host__ __device__ - result_type stddev(void) const; - - /*! This method returns a \p param_type object encapsulating the parameters with which this - * \p normal_distribution was constructed. - * - * \return A \p param_type object encapsulating the parameters (i.e., the mean and standard deviation) of this \p normal_distribution. - */ - __host__ __device__ - param_type param(void) const; - - /*! This method changes the parameters of this \p normal_distribution using the values encapsulated - * in a given \p param_type object. - * - * \param parm A \p param_type object encapsulating the new parameters (i.e., the mean and variance) of this \p normal_distribution. - */ - __host__ __device__ - void param(const param_type &parm); - - /*! This method returns the smallest floating point number this \p normal_distribution can potentially produce. - * - * \return The lower bound of this \p normal_distribution's half-open interval. - */ - __host__ __device__ - result_type min THRUST_PREVENT_MACRO_SUBSTITUTION (void) const; - - /*! This method returns the smallest number larger than largest floating point number this \p uniform_real_distribution can potentially produce. - * - * \return The upper bound of this \p normal_distribution's half-open interval. - */ - __host__ __device__ - result_type max THRUST_PREVENT_MACRO_SUBSTITUTION (void) const; - - /*! \cond - */ - private: - param_type m_param; - - friend struct thrust::random::detail::random_core_access; - - __host__ __device__ - bool equal(const normal_distribution &rhs) const; - - template - std::basic_ostream& stream_out(std::basic_ostream &os) const; - - template - std::basic_istream& stream_in(std::basic_istream &is); - /*! \endcond - */ -}; // end normal_distribution - - -/*! This function checks two \p normal_distributions for equality. - * \param lhs The first \p normal_distribution to test. - * \param rhs The second \p normal_distribution to test. - * \return \c true if \p lhs is equal to \p rhs; \c false, otherwise. 
- */ -template -__host__ __device__ -bool operator==(const normal_distribution &lhs, - const normal_distribution &rhs); - - -/*! This function checks two \p normal_distributions for inequality. - * \param lhs The first \p normal_distribution to test. - * \param rhs The second \p normal_distribution to test. - * \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise. - */ -template -__host__ __device__ -bool operator!=(const normal_distribution &lhs, - const normal_distribution &rhs); - - -/*! This function streams a normal_distribution to a \p std::basic_ostream. - * \param os The \p basic_ostream to stream out to. - * \param d The \p normal_distribution to stream out. - * \return \p os - */ -template -std::basic_ostream& -operator<<(std::basic_ostream &os, - const normal_distribution &d); - - -/*! This function streams a normal_distribution in from a std::basic_istream. - * \param is The \p basic_istream to stream from. - * \param d The \p normal_distribution to stream in. - * \return \p is - */ -template -std::basic_istream& -operator>>(std::basic_istream &is, - normal_distribution &d); - - -/*! \} // end random_number_distributions - */ - - -} // end random - -using random::normal_distribution; - -} // end thrust - -#include - diff --git a/spaces/CVPR/Text2Human/Text2Human/models/archs/__init__.py b/spaces/CVPR/Text2Human/Text2Human/models/archs/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/__init__.py b/spaces/CVPR/WALT/mmdet/core/bbox/__init__.py deleted file mode 100644 index a3537297f57e4c3670afdb97b5fcb1b2d775e5f3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/__init__.py +++ /dev/null @@ -1,27 +0,0 @@ -from .assigners import (AssignResult, BaseAssigner, CenterRegionAssigner, - MaxIoUAssigner, RegionAssigner) -from .builder import build_assigner, build_bbox_coder, build_sampler -from .coder import (BaseBBoxCoder, DeltaXYWHBBoxCoder, PseudoBBoxCoder, - TBLRBBoxCoder) -from .iou_calculators import BboxOverlaps2D, bbox_overlaps -from .samplers import (BaseSampler, CombinedSampler, - InstanceBalancedPosSampler, IoUBalancedNegSampler, - OHEMSampler, PseudoSampler, RandomSampler, - SamplingResult, ScoreHLRSampler) -from .transforms import (bbox2distance, bbox2result, bbox2roi, - bbox_cxcywh_to_xyxy, bbox_flip, bbox_mapping, - bbox_mapping_back, bbox_rescale, bbox_xyxy_to_cxcywh, - distance2bbox, roi2bbox) - -__all__ = [ - 'bbox_overlaps', 'BboxOverlaps2D', 'BaseAssigner', 'MaxIoUAssigner', - 'AssignResult', 'BaseSampler', 'PseudoSampler', 'RandomSampler', - 'InstanceBalancedPosSampler', 'IoUBalancedNegSampler', 'CombinedSampler', - 'OHEMSampler', 'SamplingResult', 'ScoreHLRSampler', 'build_assigner', - 'build_sampler', 'bbox_flip', 'bbox_mapping', 'bbox_mapping_back', - 'bbox2roi', 'roi2bbox', 'bbox2result', 'distance2bbox', 'bbox2distance', - 'build_bbox_coder', 'BaseBBoxCoder', 'PseudoBBoxCoder', - 'DeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'CenterRegionAssigner', - 'bbox_rescale', 'bbox_cxcywh_to_xyxy', 'bbox_xyxy_to_cxcywh', - 'RegionAssigner' -] diff --git a/spaces/CVPR/WALT/mmdet/core/post_processing/merge_augs.py b/spaces/CVPR/WALT/mmdet/core/post_processing/merge_augs.py deleted file mode 100644 index dbcf79d1ac20ddc32cb1605e06d253803250c855..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/post_processing/merge_augs.py +++ /dev/null @@ -1,150 +0,0 @@ -import copy -import warnings - 
-import numpy as np -import torch -from mmcv import ConfigDict -from mmcv.ops import nms - -from ..bbox import bbox_mapping_back - - -def merge_aug_proposals(aug_proposals, img_metas, cfg): - """Merge augmented proposals (multiscale, flip, etc.) - - Args: - aug_proposals (list[Tensor]): proposals from different testing - schemes, shape (n, 5). Note that they are not rescaled to the - original image size. - - img_metas (list[dict]): list of image info dict where each dict has: - 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmdet/datasets/pipelines/formatting.py:Collect`. - - cfg (dict): rpn test config. - - Returns: - Tensor: shape (n, 4), proposals corresponding to original image scale. - """ - - cfg = copy.deepcopy(cfg) - - # deprecate arguments warning - if 'nms' not in cfg or 'max_num' in cfg or 'nms_thr' in cfg: - warnings.warn( - 'In rpn_proposal or test_cfg, ' - 'nms_thr has been moved to a dict named nms as ' - 'iou_threshold, max_num has been renamed as max_per_img, ' - 'name of original arguments and the way to specify ' - 'iou_threshold of NMS will be deprecated.') - if 'nms' not in cfg: - cfg.nms = ConfigDict(dict(type='nms', iou_threshold=cfg.nms_thr)) - if 'max_num' in cfg: - if 'max_per_img' in cfg: - assert cfg.max_num == cfg.max_per_img, f'You set max_num and ' \ - f'max_per_img at the same time, but get {cfg.max_num} ' \ - f'and {cfg.max_per_img} respectively' \ - f'Please delete max_num which will be deprecated.' - else: - cfg.max_per_img = cfg.max_num - if 'nms_thr' in cfg: - assert cfg.nms.iou_threshold == cfg.nms_thr, f'You set ' \ - f'iou_threshold in nms and ' \ - f'nms_thr at the same time, but get ' \ - f'{cfg.nms.iou_threshold} and {cfg.nms_thr}' \ - f' respectively. Please delete the nms_thr ' \ - f'which will be deprecated.' - - recovered_proposals = [] - for proposals, img_info in zip(aug_proposals, img_metas): - img_shape = img_info['img_shape'] - scale_factor = img_info['scale_factor'] - flip = img_info['flip'] - flip_direction = img_info['flip_direction'] - _proposals = proposals.clone() - _proposals[:, :4] = bbox_mapping_back(_proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - recovered_proposals.append(_proposals) - aug_proposals = torch.cat(recovered_proposals, dim=0) - merged_proposals, _ = nms(aug_proposals[:, :4].contiguous(), - aug_proposals[:, -1].contiguous(), - cfg.nms.iou_threshold) - scores = merged_proposals[:, 4] - _, order = scores.sort(0, descending=True) - num = min(cfg.max_per_img, merged_proposals.shape[0]) - order = order[:num] - merged_proposals = merged_proposals[order, :] - return merged_proposals - - -def merge_aug_bboxes(aug_bboxes, aug_scores, img_metas, rcnn_test_cfg): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. 
- - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.stack(recovered_bboxes).mean(dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.stack(aug_scores).mean(dim=0) - return bboxes, scores - - -def merge_aug_scores(aug_scores): - """Merge augmented bbox scores.""" - if isinstance(aug_scores[0], torch.Tensor): - return torch.mean(torch.stack(aug_scores), dim=0) - else: - return np.mean(aug_scores, axis=0) - - -def merge_aug_masks(aug_masks, img_metas, rcnn_test_cfg, weights=None): - """Merge augmented mask prediction. - - Args: - aug_masks (list[ndarray]): shape (n, #class, h, w) - img_shapes (list[ndarray]): shape (3, ). - rcnn_test_cfg (dict): rcnn test config. - - Returns: - tuple: (bboxes, scores) - """ - recovered_masks = [] - for mask, img_info in zip(aug_masks, img_metas): - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - if flip: - if flip_direction == 'horizontal': - mask = mask[:, :, :, ::-1] - elif flip_direction == 'vertical': - mask = mask[:, :, ::-1, :] - else: - raise ValueError( - f"Invalid flipping direction '{flip_direction}'") - recovered_masks.append(mask) - - if weights is None: - merged_masks = np.mean(recovered_masks, axis=0) - else: - merged_masks = np.average( - np.array(recovered_masks), axis=0, weights=np.array(weights)) - return merged_masks diff --git a/spaces/CVPR/regionclip-demo/detectron2/export/caffe2_modeling.py b/spaces/CVPR/regionclip-demo/detectron2/export/caffe2_modeling.py deleted file mode 100644 index 7a9fc78164c32f6709245d3a456af19ffde7c497..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/export/caffe2_modeling.py +++ /dev/null @@ -1,503 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import functools -import io -import struct -import types -import torch - -from detectron2.modeling import meta_arch -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.meta_arch.panoptic_fpn import combine_semantic_and_instance_outputs -from detectron2.modeling.meta_arch.retinanet import permute_to_N_HWA_K -from detectron2.modeling.postprocessing import detector_postprocess, sem_seg_postprocess -from detectron2.modeling.roi_heads import keypoint_head -from detectron2.structures import Boxes, ImageList, Instances, RotatedBoxes - -from .c10 import Caffe2Compatible -from .caffe2_patch import ROIHeadsPatcher, patch_generalized_rcnn -from .shared import ( - alias, - check_set_pb_arg, - get_pb_arg_floats, - get_pb_arg_valf, - get_pb_arg_vali, - get_pb_arg_vals, - mock_torch_nn_functional_interpolate, -) - - -def assemble_rcnn_outputs_by_name(image_sizes, tensor_outputs, force_mask_on=False): - """ - A function to assemble caffe2 model's outputs (i.e. Dict[str, Tensor]) - to detectron2's format (i.e. list of Instances instance). - This only works when the model follows the Caffe2 detectron's naming convention. - - Args: - image_sizes (List[List[int, int]]): [H, W] of every image. - tensor_outputs (Dict[str, Tensor]): external_output to its tensor. 
- - force_mask_on (Bool): if true, the it make sure there'll be pred_masks even - if the mask is not found from tensor_outputs (usually due to model crash) - """ - - results = [Instances(image_size) for image_size in image_sizes] - - batch_splits = tensor_outputs.get("batch_splits", None) - if batch_splits: - raise NotImplementedError() - assert len(image_sizes) == 1 - result = results[0] - - bbox_nms = tensor_outputs["bbox_nms"] - score_nms = tensor_outputs["score_nms"] - class_nms = tensor_outputs["class_nms"] - # Detection will always success because Conv support 0-batch - assert bbox_nms is not None - assert score_nms is not None - assert class_nms is not None - if bbox_nms.shape[1] == 5: - result.pred_boxes = RotatedBoxes(bbox_nms) - else: - result.pred_boxes = Boxes(bbox_nms) - result.scores = score_nms - result.pred_classes = class_nms.to(torch.int64) - - mask_fcn_probs = tensor_outputs.get("mask_fcn_probs", None) - if mask_fcn_probs is not None: - # finish the mask pred - mask_probs_pred = mask_fcn_probs - num_masks = mask_probs_pred.shape[0] - class_pred = result.pred_classes - indices = torch.arange(num_masks, device=class_pred.device) - mask_probs_pred = mask_probs_pred[indices, class_pred][:, None] - result.pred_masks = mask_probs_pred - elif force_mask_on: - # NOTE: there's no way to know the height/width of mask here, it won't be - # used anyway when batch size is 0, so just set them to 0. - result.pred_masks = torch.zeros([0, 1, 0, 0], dtype=torch.uint8) - - keypoints_out = tensor_outputs.get("keypoints_out", None) - kps_score = tensor_outputs.get("kps_score", None) - if keypoints_out is not None: - # keypoints_out: [N, 4, #kypoints], where 4 is in order of (x, y, score, prob) - keypoints_tensor = keypoints_out - # NOTE: it's possible that prob is not calculated if "should_output_softmax" - # is set to False in HeatmapMaxKeypoint, so just using raw score, seems - # it doesn't affect mAP. TODO: check more carefully. - keypoint_xyp = keypoints_tensor.transpose(1, 2)[:, :, [0, 1, 2]] - result.pred_keypoints = keypoint_xyp - elif kps_score is not None: - # keypoint heatmap to sparse data structure - pred_keypoint_logits = kps_score - keypoint_head.keypoint_rcnn_inference(pred_keypoint_logits, [result]) - - return results - - -def _cast_to_f32(f64): - return struct.unpack("f", struct.pack("f", f64))[0] - - -def set_caffe2_compatible_tensor_mode(model, enable=True): - def _fn(m): - if isinstance(m, Caffe2Compatible): - m.tensor_mode = enable - - model.apply(_fn) - - -def convert_batched_inputs_to_c2_format(batched_inputs, size_divisibility, device): - """ - See get_caffe2_inputs() below. - """ - assert all(isinstance(x, dict) for x in batched_inputs) - assert all(x["image"].dim() == 3 for x in batched_inputs) - - images = [x["image"] for x in batched_inputs] - images = ImageList.from_tensors(images, size_divisibility) - - im_info = [] - for input_per_image, image_size in zip(batched_inputs, images.image_sizes): - target_height = input_per_image.get("height", image_size[0]) - target_width = input_per_image.get("width", image_size[1]) # noqa - # NOTE: The scale inside im_info is kept as convention and for providing - # post-processing information if further processing is needed. For - # current Caffe2 model definitions that don't include post-processing inside - # the model, this number is not used. - # NOTE: There can be a slight difference between width and height - # scales, using a single number can results in numerical difference - # compared with D2's post-processing. 
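# Editor's note: illustrative worked example, not part of the original file.
# For a padded image of size (H, W) = (800, 1333) whose requested output
# "height" is 400, the loop below computes scale = 400 / 800 = 0.5 and
# appends the im_info row [800, 1333, 0.5]; as the notes above say, only the
# first two entries are consumed by the converted Caffe2 model, the scale is
# informational.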
- scale = target_height / image_size[0] - im_info.append([image_size[0], image_size[1], scale]) - im_info = torch.Tensor(im_info) - - return images.tensor.to(device), im_info.to(device) - - -class Caffe2MetaArch(Caffe2Compatible, torch.nn.Module): - """ - Base class for caffe2-compatible implementation of a meta architecture. - The forward is traceable and its traced graph can be converted to caffe2 - graph through ONNX. - """ - - def __init__(self, cfg, torch_model): - """ - Args: - cfg (CfgNode): - torch_model (nn.Module): the detectron2 model (meta_arch) to be - converted. - """ - super().__init__() - self._wrapped_model = torch_model - self.eval() - set_caffe2_compatible_tensor_mode(self, True) - - def get_caffe2_inputs(self, batched_inputs): - """ - Convert pytorch-style structured inputs to caffe2-style inputs that - are tuples of tensors. - - Args: - batched_inputs (list[dict]): inputs to a detectron2 model - in its standard format. Each dict has "image" (CHW tensor), and optionally - "height" and "width". - - Returns: - tuple[Tensor]: - tuple of tensors that will be the inputs to the - :meth:`forward` method. For existing models, the first - is an NCHW tensor (padded and batched); the second is - a im_info Nx3 tensor, where the rows are - (height, width, unused legacy parameter) - """ - return convert_batched_inputs_to_c2_format( - batched_inputs, - self._wrapped_model.backbone.size_divisibility, - self._wrapped_model.device, - ) - - def encode_additional_info(self, predict_net, init_net): - """ - Save extra metadata that will be used by inference in the output protobuf. - """ - pass - - def forward(self, inputs): - """ - Run the forward in caffe2-style. It has to use caffe2-compatible ops - and the method will be used for tracing. - - Args: - inputs (tuple[Tensor]): inputs defined by :meth:`get_caffe2_input`. - They will be the inputs of the converted caffe2 graph. - - Returns: - tuple[Tensor]: output tensors. They will be the outputs of the - converted caffe2 graph. - """ - raise NotImplementedError - - def _caffe2_preprocess_image(self, inputs): - """ - Caffe2 implementation of preprocess_image, which is called inside each MetaArch's forward. - It normalizes the input images, and the final caffe2 graph assumes the - inputs have been batched already. - """ - data, im_info = inputs - data = alias(data, "data") - im_info = alias(im_info, "im_info") - mean, std = self._wrapped_model.pixel_mean, self._wrapped_model.pixel_std - normalized_data = (data - mean) / std - normalized_data = alias(normalized_data, "normalized_data") - - # Pack (data, im_info) into ImageList which is recognized by self.inference. - images = ImageList(tensor=normalized_data, image_sizes=im_info) - return images - - @staticmethod - def get_outputs_converter(predict_net, init_net): - """ - Creates a function that converts outputs of the caffe2 model to - detectron2's standard format. - The function uses information in `predict_net` and `init_net` that are - available at inferene time. Therefore the function logic can be used in inference. - - The returned function has the following signature: - - def convert(batched_inputs, c2_inputs, c2_results) -> detectron2_outputs - - Where - - * batched_inputs (list[dict]): the original input format of the meta arch - * c2_inputs (tuple[Tensor]): the caffe2 inputs. - * c2_results (dict[str, Tensor]): the caffe2 output format, - corresponding to the outputs of the :meth:`forward` function. - * detectron2_outputs: the original output format of the meta arch. 
- - This function can be used to compare the outputs of the original meta arch and - the converted caffe2 graph. - - Returns: - callable: a callable of the above signature. - """ - raise NotImplementedError - - -class Caffe2GeneralizedRCNN(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.GeneralizedRCNN) - torch_model = patch_generalized_rcnn(torch_model) - super().__init__(cfg, torch_model) - - self.roi_heads_patcher = ROIHeadsPatcher( - self._wrapped_model.roi_heads, cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT - ) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"GeneralizedRCNN") - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - if not self.tensor_mode: - return self._wrapped_model.inference(inputs) - images = self._caffe2_preprocess_image(inputs) - features = self._wrapped_model.backbone(images.tensor) - proposals, _ = self._wrapped_model.proposal_generator(images, features) - with self.roi_heads_patcher.mock_roi_heads(): - detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals) - return tuple(detector_results[0].flatten()) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - results = assemble_rcnn_outputs_by_name(image_sizes, c2_results) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -class Caffe2PanopticFPN(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.PanopticFPN) - torch_model = patch_generalized_rcnn(torch_model) - super().__init__(cfg, torch_model) - - self.roi_heads_patcher = ROIHeadsPatcher( - self._wrapped_model.roi_heads, cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT - ) - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - assert self.tensor_mode - images = self._caffe2_preprocess_image(inputs) - features = self._wrapped_model.backbone(images.tensor) - - sem_seg_results, _ = self._wrapped_model.sem_seg_head(features) - sem_seg_results = alias(sem_seg_results, "sem_seg") - - proposals, _ = self._wrapped_model.proposal_generator(images, features) - - with self.roi_heads_patcher.mock_roi_heads(self.tensor_mode): - detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals) - - return tuple(detector_results[0].flatten()) + (sem_seg_results,) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"PanopticFPN") - - # Inference parameters: - check_set_pb_arg( - predict_net, - "combine_overlap_threshold", - "f", - _cast_to_f32(self._wrapped_model.combine_overlap_thresh), - ) - check_set_pb_arg( - predict_net, - "combine_stuff_area_limit", - "i", - self._wrapped_model.combine_stuff_area_thresh, - ) - check_set_pb_arg( - predict_net, - 
"combine_instances_confidence_threshold", - "f", - _cast_to_f32(self._wrapped_model.combine_instances_score_thresh), - ) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - combine_overlap_threshold = get_pb_arg_valf(predict_net, "combine_overlap_threshold", None) - combine_stuff_area_limit = get_pb_arg_vali(predict_net, "combine_stuff_area_limit", None) - combine_instances_confidence_threshold = get_pb_arg_valf( - predict_net, "combine_instances_confidence_threshold", None - ) - - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - detector_results = assemble_rcnn_outputs_by_name( - image_sizes, c2_results, force_mask_on=True - ) - sem_seg_results = c2_results["sem_seg"] - - # copied from meta_arch/panoptic_fpn.py ... - processed_results = [] - for sem_seg_result, detector_result, input_per_image, image_size in zip( - sem_seg_results, detector_results, batched_inputs, image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - sem_seg_r = sem_seg_postprocess(sem_seg_result, image_size, height, width) - detector_r = detector_postprocess(detector_result, height, width) - - processed_results.append({"sem_seg": sem_seg_r, "instances": detector_r}) - - panoptic_r = combine_semantic_and_instance_outputs( - detector_r, - sem_seg_r.argmax(dim=0), - combine_overlap_threshold, - combine_stuff_area_limit, - combine_instances_confidence_threshold, - ) - processed_results[-1]["panoptic_seg"] = panoptic_r - return processed_results - - return f - - -class Caffe2RetinaNet(Caffe2MetaArch): - def __init__(self, cfg, torch_model): - assert isinstance(torch_model, meta_arch.RetinaNet) - super().__init__(cfg, torch_model) - - @mock_torch_nn_functional_interpolate() - def forward(self, inputs): - assert self.tensor_mode - images = self._caffe2_preprocess_image(inputs) - - # explicitly return the images sizes to avoid removing "im_info" by ONNX - # since it's not used in the forward path - return_tensors = [images.image_sizes] - - features = self._wrapped_model.backbone(images.tensor) - features = [features[f] for f in self._wrapped_model.head_in_features] - for i, feature_i in enumerate(features): - features[i] = alias(feature_i, "feature_{}".format(i), is_backward=True) - return_tensors.append(features[i]) - - pred_logits, pred_anchor_deltas = self._wrapped_model.head(features) - for i, (box_cls_i, box_delta_i) in enumerate(zip(pred_logits, pred_anchor_deltas)): - return_tensors.append(alias(box_cls_i, "box_cls_{}".format(i))) - return_tensors.append(alias(box_delta_i, "box_delta_{}".format(i))) - - return tuple(return_tensors) - - def encode_additional_info(self, predict_net, init_net): - size_divisibility = self._wrapped_model.backbone.size_divisibility - check_set_pb_arg(predict_net, "size_divisibility", "i", size_divisibility) - check_set_pb_arg( - predict_net, "device", "s", str.encode(str(self._wrapped_model.device), "ascii") - ) - check_set_pb_arg(predict_net, "meta_architecture", "s", b"RetinaNet") - - # Inference parameters: - check_set_pb_arg( - predict_net, "score_threshold", "f", _cast_to_f32(self._wrapped_model.test_score_thresh) - ) - check_set_pb_arg( - predict_net, "topk_candidates", "i", self._wrapped_model.test_topk_candidates - ) - check_set_pb_arg( - predict_net, "nms_threshold", "f", _cast_to_f32(self._wrapped_model.test_nms_thresh) - ) - check_set_pb_arg( - predict_net, - "max_detections_per_image", - "i", - 
self._wrapped_model.max_detections_per_image, - ) - - check_set_pb_arg( - predict_net, - "bbox_reg_weights", - "floats", - [_cast_to_f32(w) for w in self._wrapped_model.box2box_transform.weights], - ) - self._encode_anchor_generator_cfg(predict_net) - - def _encode_anchor_generator_cfg(self, predict_net): - # serialize anchor_generator for future use - serialized_anchor_generator = io.BytesIO() - torch.save(self._wrapped_model.anchor_generator, serialized_anchor_generator) - # Ideally we can put anchor generating inside the model, then we don't - # need to store this information. - bytes = serialized_anchor_generator.getvalue() - check_set_pb_arg(predict_net, "serialized_anchor_generator", "s", bytes) - - @staticmethod - def get_outputs_converter(predict_net, init_net): - self = types.SimpleNamespace() - serialized_anchor_generator = io.BytesIO( - get_pb_arg_vals(predict_net, "serialized_anchor_generator", None) - ) - self.anchor_generator = torch.load(serialized_anchor_generator) - bbox_reg_weights = get_pb_arg_floats(predict_net, "bbox_reg_weights", None) - self.box2box_transform = Box2BoxTransform(weights=tuple(bbox_reg_weights)) - self.test_score_thresh = get_pb_arg_valf(predict_net, "score_threshold", None) - self.test_topk_candidates = get_pb_arg_vali(predict_net, "topk_candidates", None) - self.test_nms_thresh = get_pb_arg_valf(predict_net, "nms_threshold", None) - self.max_detections_per_image = get_pb_arg_vali( - predict_net, "max_detections_per_image", None - ) - - # hack to reuse inference code from RetinaNet - self.inference = functools.partial(meta_arch.RetinaNet.inference, self) - self.inference_single_image = functools.partial( - meta_arch.RetinaNet.inference_single_image, self - ) - - def f(batched_inputs, c2_inputs, c2_results): - _, im_info = c2_inputs - image_sizes = [[int(im[0]), int(im[1])] for im in im_info] - - num_features = len([x for x in c2_results.keys() if x.startswith("box_cls_")]) - pred_logits = [c2_results["box_cls_{}".format(i)] for i in range(num_features)] - pred_anchor_deltas = [c2_results["box_delta_{}".format(i)] for i in range(num_features)] - - # For each feature level, feature should have the same batch size and - # spatial dimension as the box_cls and box_delta. 
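# Editor's note: illustrative worked example, not part of the original file.
# With A = 9 anchors per location and K = 80 classes, each box_cls_i has
# A * K = 720 channels and each box_delta_i has A * 4 = 36 channels, so the
# class-count recovery below gives 720 // (36 // 4) = 80.  The zero-channel
# "dummy" feature maps keep only the batch and spatial shape that
# anchor_generator needs, without carrying any data.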
- dummy_features = [x.clone()[:, 0:0, :, :] for x in pred_logits] - anchors = self.anchor_generator(dummy_features) - - # self.num_classess can be inferred - self.num_classes = pred_logits[0].shape[1] // (pred_anchor_deltas[0].shape[1] // 4) - - pred_logits = [permute_to_N_HWA_K(x, self.num_classes) for x in pred_logits] - pred_anchor_deltas = [permute_to_N_HWA_K(x, 4) for x in pred_anchor_deltas] - - results = self.inference(anchors, pred_logits, pred_anchor_deltas, image_sizes) - return meta_arch.GeneralizedRCNN._postprocess(results, batched_inputs, image_sizes) - - return f - - -META_ARCH_CAFFE2_EXPORT_TYPE_MAP = { - "GeneralizedRCNN": Caffe2GeneralizedRCNN, - "PanopticFPN": Caffe2PanopticFPN, - "RetinaNet": Caffe2RetinaNet, -} diff --git a/spaces/Catspin/2_ai_chat/index.html b/spaces/Catspin/2_ai_chat/index.html deleted file mode 100644 index dbd4bea9acc632c4de34c61976e72247d1a2dee6..0000000000000000000000000000000000000000 --- a/spaces/Catspin/2_ai_chat/index.html +++ /dev/null @@ -1 +0,0 @@ - - - \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/DefaultTable.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/DefaultTable.py deleted file mode 100644 index 32a4b1f258f54d78ad39eb764867a6c354939743..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/DefaultTable.py +++ /dev/null @@ -1,50 +0,0 @@ -from fontTools.misc.textTools import Tag -from fontTools.ttLib import getClassTag - - -class DefaultTable(object): - - dependencies = [] - - def __init__(self, tag=None): - if tag is None: - tag = getClassTag(self.__class__) - self.tableTag = Tag(tag) - - def decompile(self, data, ttFont): - self.data = data - - def compile(self, ttFont): - return self.data - - def toXML(self, writer, ttFont, **kwargs): - if hasattr(self, "ERROR"): - writer.comment("An error occurred during the decompilation of this table") - writer.newline() - writer.comment(self.ERROR) - writer.newline() - writer.begintag("hexdata") - writer.newline() - writer.dumphex(self.compile(ttFont)) - writer.endtag("hexdata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - from fontTools.misc.textTools import readHex - from fontTools import ttLib - - if name != "hexdata": - raise ttLib.TTLibError("can't handle '%s' element" % name) - self.decompile(readHex(content), ttFont) - - def __repr__(self): - return "<'%s' table at %x>" % (self.tableTag, id(self)) - - def __eq__(self, other): - if type(self) != type(other): - return NotImplemented - return self.__dict__ == other.__dict__ - - def __ne__(self, other): - result = self.__eq__(other) - return result if result is NotImplemented else not result diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index.svelte_svelte_type_style_lang-6227d9ac.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index.svelte_svelte_type_style_lang-6227d9ac.js deleted file mode 100644 index 49f90348e5e9298e5f1cb3ad60d8195a86124b6a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Index.svelte_svelte_type_style_lang-6227d9ac.js +++ /dev/null @@ -1,2 +0,0 @@ -import"./Button-8eeccca1.js";import{c as h}from"./utils-c3e3db58.js";import{M as 
v}from"./Example.svelte_svelte_type_style_lang-f75a701b.js";const{SvelteComponent:g,action_destroyer:w,attr:d,create_component:k,destroy_component:b,detach:z,element:M,init:j,insert:C,mount_component:q,safe_not_equal:y,toggle_class:c,transition_in:D,transition_out:E}=window.__gradio__svelte__internal,{createEventDispatcher:I}=window.__gradio__svelte__internal;function S(n){let e,i,r,m,a,o,f;return i=new v({props:{message:n[2],latex_delimiters:n[7],sanitize_html:n[5],line_breaks:n[6],chatbot:!1}}),{c(){e=M("div"),k(i.$$.fragment),d(e,"class",r="prose "+n[0].join(" ")+" svelte-aww9vd"),d(e,"data-testid","markdown"),d(e,"dir",m=n[4]?"rtl":"ltr"),c(e,"min",n[3]),c(e,"hide",!n[1])},m(t,s){C(t,e,s),q(i,e,null),a=!0,o||(f=w(h.call(null,e)),o=!0)},p(t,[s]){const _={};s&4&&(_.message=t[2]),s&128&&(_.latex_delimiters=t[7]),s&32&&(_.sanitize_html=t[5]),s&64&&(_.line_breaks=t[6]),i.$set(_),(!a||s&1&&r!==(r="prose "+t[0].join(" ")+" svelte-aww9vd"))&&d(e,"class",r),(!a||s&16&&m!==(m=t[4]?"rtl":"ltr"))&&d(e,"dir",m),(!a||s&9)&&c(e,"min",t[3]),(!a||s&3)&&c(e,"hide",!t[1])},i(t){a||(D(i.$$.fragment,t),a=!0)},o(t){E(i.$$.fragment,t),a=!1},d(t){t&&z(e),b(i),o=!1,f()}}}function A(n,e,i){let{elem_classes:r=[]}=e,{visible:m=!0}=e,{value:a}=e,{min_height:o=!1}=e,{rtl:f=!1}=e,{sanitize_html:t=!0}=e,{line_breaks:s=!1}=e,{latex_delimiters:_}=e;const u=I();return n.$$set=l=>{"elem_classes"in l&&i(0,r=l.elem_classes),"visible"in l&&i(1,m=l.visible),"value"in l&&i(2,a=l.value),"min_height"in l&&i(3,o=l.min_height),"rtl"in l&&i(4,f=l.rtl),"sanitize_html"in l&&i(5,t=l.sanitize_html),"line_breaks"in l&&i(6,s=l.line_breaks),"latex_delimiters"in l&&i(7,_=l.latex_delimiters)},n.$$.update=()=>{n.$$.dirty&4&&u("change")},[r,m,a,o,f,t,s,_]}class B extends g{constructor(e){super(),j(this,e,A,S,y,{elem_classes:0,visible:1,value:2,min_height:3,rtl:4,sanitize_html:5,line_breaks:6,latex_delimiters:7})}}const J=B;export{J as M}; -//# sourceMappingURL=Index.svelte_svelte_type_style_lang-6227d9ac.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_sync/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_sync/__init__.py deleted file mode 100644 index b476d76d9a7ff45de8d18ec22d33d6af2982f92e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/httpcore/_sync/__init__.py +++ /dev/null @@ -1,39 +0,0 @@ -from .connection import HTTPConnection -from .connection_pool import ConnectionPool -from .http11 import HTTP11Connection -from .http_proxy import HTTPProxy -from .interfaces import ConnectionInterface - -try: - from .http2 import HTTP2Connection -except ImportError: # pragma: nocover - - class HTTP2Connection: # type: ignore - def __init__(self, *args, **kwargs) -> None: # type: ignore - raise RuntimeError( - "Attempted to use http2 support, but the `h2` package is not " - "installed. Use 'pip install httpcore[http2]'." - ) - - -try: - from .socks_proxy import SOCKSProxy -except ImportError: # pragma: nocover - - class SOCKSProxy: # type: ignore - def __init__(self, *args, **kwargs) -> None: # type: ignore - raise RuntimeError( - "Attempted to use SOCKS support, but the `socksio` package is not " - "installed. Use 'pip install httpcore[socks]'." 
- ) - - -__all__ = [ - "HTTPConnection", - "ConnectionPool", - "HTTPProxy", - "HTTP11Connection", - "HTTP2Connection", - "ConnectionInterface", - "SOCKSProxy", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/patheffects.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/patheffects.py deleted file mode 100644 index 5bb4c8e2a50156b3ea7488ca80dccb160fa3b66b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/patheffects.py +++ /dev/null @@ -1,513 +0,0 @@ -""" -Defines classes for path effects. The path effects are supported in `.Text`, -`.Line2D` and `.Patch`. - -.. seealso:: - :ref:`patheffects_guide` -""" - -from matplotlib.backend_bases import RendererBase -from matplotlib import colors as mcolors -from matplotlib import patches as mpatches -from matplotlib import transforms as mtransforms -from matplotlib.path import Path -import numpy as np - - -class AbstractPathEffect: - """ - A base class for path effects. - - Subclasses should override the ``draw_path`` method to add effect - functionality. - """ - - def __init__(self, offset=(0., 0.)): - """ - Parameters - ---------- - offset : (float, float), default: (0, 0) - The (x, y) offset to apply to the path, measured in points. - """ - self._offset = offset - - def _offset_transform(self, renderer): - """Apply the offset to the given transform.""" - return mtransforms.Affine2D().translate( - *map(renderer.points_to_pixels, self._offset)) - - def _update_gc(self, gc, new_gc_dict): - """ - Update the given GraphicsContext with the given dict of properties. - - The keys in the dictionary are used to identify the appropriate - ``set_`` method on the *gc*. - """ - new_gc_dict = new_gc_dict.copy() - - dashes = new_gc_dict.pop("dashes", None) - if dashes: - gc.set_dashes(**dashes) - - for k, v in new_gc_dict.items(): - set_method = getattr(gc, 'set_' + k, None) - if not callable(set_method): - raise AttributeError(f'Unknown property {k}') - set_method(v) - return gc - - def draw_path(self, renderer, gc, tpath, affine, rgbFace=None): - """ - Derived should override this method. The arguments are the same - as :meth:`matplotlib.backend_bases.RendererBase.draw_path` - except the first argument is a renderer. - """ - # Get the real renderer, not a PathEffectRenderer. - if isinstance(renderer, PathEffectRenderer): - renderer = renderer._renderer - return renderer.draw_path(gc, tpath, affine, rgbFace) - - -class PathEffectRenderer(RendererBase): - """ - Implements a Renderer which contains another renderer. - - This proxy then intercepts draw calls, calling the appropriate - :class:`AbstractPathEffect` draw method. - - .. note:: - Not all methods have been overridden on this RendererBase subclass. - It may be necessary to add further methods to extend the PathEffects - capabilities further. - """ - - def __init__(self, path_effects, renderer): - """ - Parameters - ---------- - path_effects : iterable of :class:`AbstractPathEffect` - The path effects which this renderer represents. 
- renderer : `~matplotlib.backend_bases.RendererBase` subclass - - """ - self._path_effects = path_effects - self._renderer = renderer - - def copy_with_path_effect(self, path_effects): - return self.__class__(path_effects, self._renderer) - - def draw_path(self, gc, tpath, affine, rgbFace=None): - for path_effect in self._path_effects: - path_effect.draw_path(self._renderer, gc, tpath, affine, - rgbFace) - - def draw_markers( - self, gc, marker_path, marker_trans, path, *args, **kwargs): - # We do a little shimmy so that all markers are drawn for each path - # effect in turn. Essentially, we induce recursion (depth 1) which is - # terminated once we have just a single path effect to work with. - if len(self._path_effects) == 1: - # Call the base path effect function - this uses the unoptimised - # approach of calling "draw_path" multiple times. - return super().draw_markers(gc, marker_path, marker_trans, path, - *args, **kwargs) - - for path_effect in self._path_effects: - renderer = self.copy_with_path_effect([path_effect]) - # Recursively call this method, only next time we will only have - # one path effect. - renderer.draw_markers(gc, marker_path, marker_trans, path, - *args, **kwargs) - - def draw_path_collection(self, gc, master_transform, paths, *args, - **kwargs): - # We do a little shimmy so that all paths are drawn for each path - # effect in turn. Essentially, we induce recursion (depth 1) which is - # terminated once we have just a single path effect to work with. - if len(self._path_effects) == 1: - # Call the base path effect function - this uses the unoptimised - # approach of calling "draw_path" multiple times. - return super().draw_path_collection(gc, master_transform, paths, - *args, **kwargs) - - for path_effect in self._path_effects: - renderer = self.copy_with_path_effect([path_effect]) - # Recursively call this method, only next time we will only have - # one path effect. - renderer.draw_path_collection(gc, master_transform, paths, - *args, **kwargs) - - def _draw_text_as_path(self, gc, x, y, s, prop, angle, ismath): - # Implements the naive text drawing as is found in RendererBase. - path, transform = self._get_text_path_transform(x, y, s, prop, - angle, ismath) - color = gc.get_rgb() - gc.set_linewidth(0.0) - self.draw_path(gc, path, transform, rgbFace=color) - - def __getattribute__(self, name): - if name in ['flipy', 'get_canvas_width_height', 'new_gc', - 'points_to_pixels', '_text2path', 'height', 'width']: - return getattr(self._renderer, name) - else: - return object.__getattribute__(self, name) - - -class Normal(AbstractPathEffect): - """ - The "identity" PathEffect. - - The Normal PathEffect's sole purpose is to draw the original artist with - no special path effect. - """ - - -def _subclass_with_normal(effect_class): - """ - Create a PathEffect class combining *effect_class* and a normal draw. - """ - - class withEffect(effect_class): - def draw_path(self, renderer, gc, tpath, affine, rgbFace): - super().draw_path(renderer, gc, tpath, affine, rgbFace) - renderer.draw_path(gc, tpath, affine, rgbFace) - - withEffect.__name__ = f"with{effect_class.__name__}" - withEffect.__qualname__ = f"with{effect_class.__name__}" - withEffect.__doc__ = f""" - A shortcut PathEffect for applying `.{effect_class.__name__}` and then - drawing the original Artist. 
- - With this class you can use :: - - artist.set_path_effects([patheffects.with{effect_class.__name__}()]) - - as a shortcut for :: - - artist.set_path_effects([patheffects.{effect_class.__name__}(), - patheffects.Normal()]) - """ - # Docstring inheritance doesn't work for locally-defined subclasses. - withEffect.draw_path.__doc__ = effect_class.draw_path.__doc__ - return withEffect - - -class Stroke(AbstractPathEffect): - """A line based PathEffect which re-draws a stroke.""" - - def __init__(self, offset=(0, 0), **kwargs): - """ - The path will be stroked with its gc updated with the given - keyword arguments, i.e., the keyword arguments should be valid - gc parameter values. - """ - super().__init__(offset) - self._gc = kwargs - - def draw_path(self, renderer, gc, tpath, affine, rgbFace): - """Draw the path with updated gc.""" - gc0 = renderer.new_gc() # Don't modify gc, but a copy! - gc0.copy_properties(gc) - gc0 = self._update_gc(gc0, self._gc) - renderer.draw_path( - gc0, tpath, affine + self._offset_transform(renderer), rgbFace) - gc0.restore() - - -withStroke = _subclass_with_normal(effect_class=Stroke) - - -class SimplePatchShadow(AbstractPathEffect): - """A simple shadow via a filled patch.""" - - def __init__(self, offset=(2, -2), - shadow_rgbFace=None, alpha=None, - rho=0.3, **kwargs): - """ - Parameters - ---------- - offset : (float, float), default: (2, -2) - The (x, y) offset of the shadow in points. - shadow_rgbFace : color - The shadow color. - alpha : float, default: 0.3 - The alpha transparency of the created shadow patch. - rho : float, default: 0.3 - A scale factor to apply to the rgbFace color if *shadow_rgbFace* - is not specified. - **kwargs - Extra keywords are stored and passed through to - :meth:`AbstractPathEffect._update_gc`. - - """ - super().__init__(offset) - - if shadow_rgbFace is None: - self._shadow_rgbFace = shadow_rgbFace - else: - self._shadow_rgbFace = mcolors.to_rgba(shadow_rgbFace) - - if alpha is None: - alpha = 0.3 - - self._alpha = alpha - self._rho = rho - - #: The dictionary of keywords to update the graphics collection with. - self._gc = kwargs - - def draw_path(self, renderer, gc, tpath, affine, rgbFace): - """ - Overrides the standard draw_path to add the shadow offset and - necessary color changes for the shadow. - """ - gc0 = renderer.new_gc() # Don't modify gc, but a copy! - gc0.copy_properties(gc) - - if self._shadow_rgbFace is None: - r, g, b = (rgbFace or (1., 1., 1.))[:3] - # Scale the colors by a factor to improve the shadow effect. - shadow_rgbFace = (r * self._rho, g * self._rho, b * self._rho) - else: - shadow_rgbFace = self._shadow_rgbFace - - gc0.set_foreground("none") - gc0.set_alpha(self._alpha) - gc0.set_linewidth(0) - - gc0 = self._update_gc(gc0, self._gc) - renderer.draw_path( - gc0, tpath, affine + self._offset_transform(renderer), - shadow_rgbFace) - gc0.restore() - - -withSimplePatchShadow = _subclass_with_normal(effect_class=SimplePatchShadow) - - -class SimpleLineShadow(AbstractPathEffect): - """A simple shadow via a line.""" - - def __init__(self, offset=(2, -2), - shadow_color='k', alpha=0.3, rho=0.3, **kwargs): - """ - Parameters - ---------- - offset : (float, float), default: (2, -2) - The (x, y) offset to apply to the path, in points. - shadow_color : color, default: 'black' - The shadow color. - A value of ``None`` takes the original artist's color - with a scale factor of *rho*. - alpha : float, default: 0.3 - The alpha transparency of the created shadow patch. 
- rho : float, default: 0.3 - A scale factor to apply to the rgbFace color if *shadow_color* - is ``None``. - **kwargs - Extra keywords are stored and passed through to - :meth:`AbstractPathEffect._update_gc`. - """ - super().__init__(offset) - if shadow_color is None: - self._shadow_color = shadow_color - else: - self._shadow_color = mcolors.to_rgba(shadow_color) - self._alpha = alpha - self._rho = rho - #: The dictionary of keywords to update the graphics collection with. - self._gc = kwargs - - def draw_path(self, renderer, gc, tpath, affine, rgbFace): - """ - Overrides the standard draw_path to add the shadow offset and - necessary color changes for the shadow. - """ - gc0 = renderer.new_gc() # Don't modify gc, but a copy! - gc0.copy_properties(gc) - - if self._shadow_color is None: - r, g, b = (gc0.get_foreground() or (1., 1., 1.))[:3] - # Scale the colors by a factor to improve the shadow effect. - shadow_rgbFace = (r * self._rho, g * self._rho, b * self._rho) - else: - shadow_rgbFace = self._shadow_color - - gc0.set_foreground(shadow_rgbFace) - gc0.set_alpha(self._alpha) - - gc0 = self._update_gc(gc0, self._gc) - renderer.draw_path( - gc0, tpath, affine + self._offset_transform(renderer)) - gc0.restore() - - -class PathPatchEffect(AbstractPathEffect): - """ - Draws a `.PathPatch` instance whose Path comes from the original - PathEffect artist. - """ - - def __init__(self, offset=(0, 0), **kwargs): - """ - Parameters - ---------- - offset : (float, float), default: (0, 0) - The (x, y) offset to apply to the path, in points. - **kwargs - All keyword arguments are passed through to the - :class:`~matplotlib.patches.PathPatch` constructor. The - properties which cannot be overridden are "path", "clip_box" - "transform" and "clip_path". - """ - super().__init__(offset=offset) - self.patch = mpatches.PathPatch([], **kwargs) - - def draw_path(self, renderer, gc, tpath, affine, rgbFace): - self.patch._path = tpath - self.patch.set_transform(affine + self._offset_transform(renderer)) - self.patch.set_clip_box(gc.get_clip_rectangle()) - clip_path = gc.get_clip_path() - if clip_path and self.patch.get_clip_path() is None: - self.patch.set_clip_path(*clip_path) - self.patch.draw(renderer) - - -class TickedStroke(AbstractPathEffect): - """ - A line-based PathEffect which draws a path with a ticked style. - - This line style is frequently used to represent constraints in - optimization. The ticks may be used to indicate that one side - of the line is invalid or to represent a closed boundary of a - domain (i.e. a wall or the edge of a pipe). - - The spacing, length, and angle of ticks can be controlled. - - This line style is sometimes referred to as a hatched line. - - See also the :doc:`/gallery/misc/tickedstroke_demo` example. - """ - - def __init__(self, offset=(0, 0), - spacing=10.0, angle=45.0, length=np.sqrt(2), - **kwargs): - """ - Parameters - ---------- - offset : (float, float), default: (0, 0) - The (x, y) offset to apply to the path, in points. - spacing : float, default: 10.0 - The spacing between ticks in points. - angle : float, default: 45.0 - The angle between the path and the tick in degrees. The angle - is measured as if you were an ant walking along the curve, with - zero degrees pointing directly ahead, 90 to your left, -90 - to your right, and 180 behind you. To change side of the ticks, - change sign of the angle. - length : float, default: 1.414 - The length of the tick relative to spacing. 
- Recommended length = 1.414 (sqrt(2)) when angle=45, length=1.0 - when angle=90 and length=2.0 when angle=60. - **kwargs - Extra keywords are stored and passed through to - :meth:`AbstractPathEffect._update_gc`. - - Examples - -------- - See :doc:`/gallery/misc/tickedstroke_demo`. - """ - super().__init__(offset) - - self._spacing = spacing - self._angle = angle - self._length = length - self._gc = kwargs - - def draw_path(self, renderer, gc, tpath, affine, rgbFace): - """Draw the path with updated gc.""" - # Do not modify the input! Use copy instead. - gc0 = renderer.new_gc() - gc0.copy_properties(gc) - - gc0 = self._update_gc(gc0, self._gc) - trans = affine + self._offset_transform(renderer) - - theta = -np.radians(self._angle) - trans_matrix = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - - # Convert spacing parameter to pixels. - spacing_px = renderer.points_to_pixels(self._spacing) - - # Transform before evaluation because to_polygons works at resolution - # of one -- assuming it is working in pixel space. - transpath = affine.transform_path(tpath) - - # Evaluate path to straight line segments that can be used to - # construct line ticks. - polys = transpath.to_polygons(closed_only=False) - - for p in polys: - x = p[:, 0] - y = p[:, 1] - - # Can not interpolate points or draw line if only one point in - # polyline. - if x.size < 2: - continue - - # Find distance between points on the line - ds = np.hypot(x[1:] - x[:-1], y[1:] - y[:-1]) - - # Build parametric coordinate along curve - s = np.concatenate(([0.0], np.cumsum(ds))) - s_total = s[-1] - - num = int(np.ceil(s_total / spacing_px)) - 1 - # Pick parameter values for ticks. - s_tick = np.linspace(spacing_px/2, s_total - spacing_px/2, num) - - # Find points along the parameterized curve - x_tick = np.interp(s_tick, s, x) - y_tick = np.interp(s_tick, s, y) - - # Find unit vectors in local direction of curve - delta_s = self._spacing * .001 - u = (np.interp(s_tick + delta_s, s, x) - x_tick) / delta_s - v = (np.interp(s_tick + delta_s, s, y) - y_tick) / delta_s - - # Normalize slope into unit slope vector. 
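# Editor's note: illustrative worked example, not part of the original file.
# With the defaults spacing=10.0, angle=45.0 and length=sqrt(2), and assuming
# a renderer where 1 point maps to 1 pixel, spacing_px is 10, so each tick
# vector built below has magnitude length * spacing_px ~= 14.1 pixels and,
# per the docstring convention above, points 45 degrees to the left of the
# local tangent direction.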
- n = np.hypot(u, v) - mask = n == 0 - n[mask] = 1.0 - - uv = np.array([u / n, v / n]).T - uv[mask] = np.array([0, 0]).T - - # Rotate and scale unit vector into tick vector - dxy = np.dot(uv, trans_matrix) * self._length * spacing_px - - # Build tick endpoints - x_end = x_tick + dxy[:, 0] - y_end = y_tick + dxy[:, 1] - - # Interleave ticks to form Path vertices - xyt = np.empty((2 * num, 2), dtype=x_tick.dtype) - xyt[0::2, 0] = x_tick - xyt[1::2, 0] = x_end - xyt[0::2, 1] = y_tick - xyt[1::2, 1] = y_end - - # Build up vector of Path codes - codes = np.tile([Path.MOVETO, Path.LINETO], num) - - # Construct and draw resulting path - h = Path(xyt, codes) - # Transform back to data space during render - renderer.draw_path(gc0, h, affine.inverted() + trans, rgbFace) - - gc0.restore() - - -withTickedStroke = _subclass_with_normal(effect_class=TickedStroke) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/mplot3d/art3d.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/mplot3d/art3d.py deleted file mode 100644 index 4aff115b0c968529b6b3ba5e005e2914798aa666..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/mpl_toolkits/mplot3d/art3d.py +++ /dev/null @@ -1,1252 +0,0 @@ -# art3d.py, original mplot3d version by John Porter -# Parts rewritten by Reinier Heeres -# Minor additions by Ben Axelrod - -""" -Module containing 3D artist code and functions to convert 2D -artists into 3D versions which can be added to an Axes3D. -""" - -import math - -import numpy as np - -from contextlib import contextmanager - -from matplotlib import ( - artist, cbook, colors as mcolors, lines, text as mtext, - path as mpath) -from matplotlib.collections import ( - Collection, LineCollection, PolyCollection, PatchCollection, PathCollection) -from matplotlib.colors import Normalize -from matplotlib.patches import Patch -from . import proj3d - - -def _norm_angle(a): - """Return the given angle normalized to -180 < *a* <= 180 degrees.""" - a = (a + 360) % 360 - if a > 180: - a = a - 360 - return a - - -def _norm_text_angle(a): - """Return the given angle normalized to -90 < *a* <= 90 degrees.""" - a = (a + 180) % 180 - if a > 90: - a = a - 180 - return a - - -def get_dir_vector(zdir): - """ - Return a direction vector. - - Parameters - ---------- - zdir : {'x', 'y', 'z', None, 3-tuple} - The direction. Possible values are: - - - 'x': equivalent to (1, 0, 0) - - 'y': equivalent to (0, 1, 0) - - 'z': equivalent to (0, 0, 1) - - *None*: equivalent to (0, 0, 0) - - an iterable (x, y, z) is converted to an array - - Returns - ------- - x, y, z : array - The direction vector. - """ - if zdir == 'x': - return np.array((1, 0, 0)) - elif zdir == 'y': - return np.array((0, 1, 0)) - elif zdir == 'z': - return np.array((0, 0, 1)) - elif zdir is None: - return np.array((0, 0, 0)) - elif np.iterable(zdir) and len(zdir) == 3: - return np.array(zdir) - else: - raise ValueError("'x', 'y', 'z', None or vector of length 3 expected") - - -class Text3D(mtext.Text): - """ - Text object with 3D position and direction. - - Parameters - ---------- - x, y, z : float - The position of the text. - text : str - The text string to display. - zdir : {'x', 'y', 'z', None, 3-tuple} - The direction of the text. See `.get_dir_vector` for a description of - the values. - - Other Parameters - ---------------- - **kwargs - All other parameters are passed on to `~matplotlib.text.Text`. 
- """ - - def __init__(self, x=0, y=0, z=0, text='', zdir='z', **kwargs): - mtext.Text.__init__(self, x, y, text, **kwargs) - self.set_3d_properties(z, zdir) - - def get_position_3d(self): - """Return the (x, y, z) position of the text.""" - return self._x, self._y, self._z - - def set_position_3d(self, xyz, zdir=None): - """ - Set the (*x*, *y*, *z*) position of the text. - - Parameters - ---------- - xyz : (float, float, float) - The position in 3D space. - zdir : {'x', 'y', 'z', None, 3-tuple} - The direction of the text. If unspecified, the *zdir* will not be - changed. See `.get_dir_vector` for a description of the values. - """ - super().set_position(xyz[:2]) - self.set_z(xyz[2]) - if zdir is not None: - self._dir_vec = get_dir_vector(zdir) - - def set_z(self, z): - """ - Set the *z* position of the text. - - Parameters - ---------- - z : float - """ - self._z = z - self.stale = True - - def set_3d_properties(self, z=0, zdir='z'): - """ - Set the *z* position and direction of the text. - - Parameters - ---------- - z : float - The z-position in 3D space. - zdir : {'x', 'y', 'z', 3-tuple} - The direction of the text. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - self._z = z - self._dir_vec = get_dir_vector(zdir) - self.stale = True - - @artist.allow_rasterization - def draw(self, renderer): - position3d = np.array((self._x, self._y, self._z)) - proj = proj3d._proj_trans_points( - [position3d, position3d + self._dir_vec], self.axes.M) - dx = proj[0][1] - proj[0][0] - dy = proj[1][1] - proj[1][0] - angle = math.degrees(math.atan2(dy, dx)) - with cbook._setattr_cm(self, _x=proj[0][0], _y=proj[1][0], - _rotation=_norm_text_angle(angle)): - mtext.Text.draw(self, renderer) - self.stale = False - - def get_tightbbox(self, renderer=None): - # Overwriting the 2d Text behavior which is not valid for 3d. - # For now, just return None to exclude from layout calculation. - return None - - -def text_2d_to_3d(obj, z=0, zdir='z'): - """ - Convert a `.Text` to a `.Text3D` object. - - Parameters - ---------- - z : float - The z-position in 3D space. - zdir : {'x', 'y', 'z', 3-tuple} - The direction of the text. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - obj.__class__ = Text3D - obj.set_3d_properties(z, zdir) - - -class Line3D(lines.Line2D): - """ - 3D line object. - - .. note:: Use `get_data_3d` to obtain the data associated with the line. - `~.Line2D.get_data`, `~.Line2D.get_xdata`, and `~.Line2D.get_ydata` return - the x- and y-coordinates of the projected 2D-line, not the x- and y-data of - the 3D-line. Similarly, use `set_data_3d` to set the data, not - `~.Line2D.set_data`, `~.Line2D.set_xdata`, and `~.Line2D.set_ydata`. - """ - - def __init__(self, xs, ys, zs, *args, **kwargs): - """ - - Parameters - ---------- - xs : array-like - The x-data to be plotted. - ys : array-like - The y-data to be plotted. - zs : array-like - The z-data to be plotted. - *args, **kwargs - Additional arguments are passed to `~matplotlib.lines.Line2D`. - """ - super().__init__([], [], *args, **kwargs) - self.set_data_3d(xs, ys, zs) - - def set_3d_properties(self, zs=0, zdir='z'): - """ - Set the *z* position and direction of the line. - - Parameters - ---------- - zs : float or array of floats - The location along the *zdir* axis in 3D space to position the - line. - zdir : {'x', 'y', 'z'} - Plane to plot line orthogonal to. Default: 'z'. - See `.get_dir_vector` for a description of the values. 
- """ - xs = self.get_xdata() - ys = self.get_ydata() - zs = cbook._to_unmasked_float_array(zs).ravel() - zs = np.broadcast_to(zs, len(xs)) - self._verts3d = juggle_axes(xs, ys, zs, zdir) - self.stale = True - - def set_data_3d(self, *args): - """ - Set the x, y and z data - - Parameters - ---------- - x : array-like - The x-data to be plotted. - y : array-like - The y-data to be plotted. - z : array-like - The z-data to be plotted. - - Notes - ----- - Accepts x, y, z arguments or a single array-like (x, y, z) - """ - if len(args) == 1: - args = args[0] - for name, xyz in zip('xyz', args): - if not np.iterable(xyz): - raise RuntimeError(f'{name} must be a sequence') - self._verts3d = args - self.stale = True - - def get_data_3d(self): - """ - Get the current data - - Returns - ------- - verts3d : length-3 tuple or array-like - The current data as a tuple or array-like. - """ - return self._verts3d - - @artist.allow_rasterization - def draw(self, renderer): - xs3d, ys3d, zs3d = self._verts3d - xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, self.axes.M) - self.set_data(xs, ys) - super().draw(renderer) - self.stale = False - - -def line_2d_to_3d(line, zs=0, zdir='z'): - """ - Convert a `.Line2D` to a `.Line3D` object. - - Parameters - ---------- - zs : float - The location along the *zdir* axis in 3D space to position the line. - zdir : {'x', 'y', 'z'} - Plane to plot line orthogonal to. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - - line.__class__ = Line3D - line.set_3d_properties(zs, zdir) - - -def _path_to_3d_segment(path, zs=0, zdir='z'): - """Convert a path to a 3D segment.""" - - zs = np.broadcast_to(zs, len(path)) - pathsegs = path.iter_segments(simplify=False, curves=False) - seg = [(x, y, z) for (((x, y), code), z) in zip(pathsegs, zs)] - seg3d = [juggle_axes(x, y, z, zdir) for (x, y, z) in seg] - return seg3d - - -def _paths_to_3d_segments(paths, zs=0, zdir='z'): - """Convert paths from a collection object to 3D segments.""" - - if not np.iterable(zs): - zs = np.broadcast_to(zs, len(paths)) - else: - if len(zs) != len(paths): - raise ValueError('Number of z-coordinates does not match paths.') - - segs = [_path_to_3d_segment(path, pathz, zdir) - for path, pathz in zip(paths, zs)] - return segs - - -def _path_to_3d_segment_with_codes(path, zs=0, zdir='z'): - """Convert a path to a 3D segment with path codes.""" - - zs = np.broadcast_to(zs, len(path)) - pathsegs = path.iter_segments(simplify=False, curves=False) - seg_codes = [((x, y, z), code) for ((x, y), code), z in zip(pathsegs, zs)] - if seg_codes: - seg, codes = zip(*seg_codes) - seg3d = [juggle_axes(x, y, z, zdir) for (x, y, z) in seg] - else: - seg3d = [] - codes = [] - return seg3d, list(codes) - - -def _paths_to_3d_segments_with_codes(paths, zs=0, zdir='z'): - """ - Convert paths from a collection object to 3D segments with path codes. 
- """ - - zs = np.broadcast_to(zs, len(paths)) - segments_codes = [_path_to_3d_segment_with_codes(path, pathz, zdir) - for path, pathz in zip(paths, zs)] - if segments_codes: - segments, codes = zip(*segments_codes) - else: - segments, codes = [], [] - return list(segments), list(codes) - - -class Collection3D(Collection): - """A collection of 3D paths.""" - - def do_3d_projection(self): - """Project the points according to renderer matrix.""" - xyzs_list = [proj3d.proj_transform(*vs.T, self.axes.M) - for vs, _ in self._3dverts_codes] - self._paths = [mpath.Path(np.column_stack([xs, ys]), cs) - for (xs, ys, _), (_, cs) in zip(xyzs_list, self._3dverts_codes)] - zs = np.concatenate([zs for _, _, zs in xyzs_list]) - return zs.min() if len(zs) else 1e9 - - -def collection_2d_to_3d(col, zs=0, zdir='z'): - """Convert a `.Collection` to a `.Collection3D` object.""" - zs = np.broadcast_to(zs, len(col.get_paths())) - col._3dverts_codes = [ - (np.column_stack(juggle_axes( - *np.column_stack([p.vertices, np.broadcast_to(z, len(p.vertices))]).T, - zdir)), - p.codes) - for p, z in zip(col.get_paths(), zs)] - col.__class__ = cbook._make_class_factory(Collection3D, "{}3D")(type(col)) - - -class Line3DCollection(LineCollection): - """ - A collection of 3D lines. - """ - - def set_sort_zpos(self, val): - """Set the position to use for z-sorting.""" - self._sort_zpos = val - self.stale = True - - def set_segments(self, segments): - """ - Set 3D segments. - """ - self._segments3d = segments - super().set_segments([]) - - def do_3d_projection(self): - """ - Project the points according to renderer matrix. - """ - xyslist = [proj3d._proj_trans_points(points, self.axes.M) - for points in self._segments3d] - segments_2d = [np.column_stack([xs, ys]) for xs, ys, zs in xyslist] - LineCollection.set_segments(self, segments_2d) - - # FIXME - minz = 1e9 - for xs, ys, zs in xyslist: - minz = min(minz, min(zs)) - return minz - - -def line_collection_2d_to_3d(col, zs=0, zdir='z'): - """Convert a `.LineCollection` to a `.Line3DCollection` object.""" - segments3d = _paths_to_3d_segments(col.get_paths(), zs, zdir) - col.__class__ = Line3DCollection - col.set_segments(segments3d) - - -class Patch3D(Patch): - """ - 3D patch object. - """ - - def __init__(self, *args, zs=(), zdir='z', **kwargs): - """ - Parameters - ---------- - verts : - zs : float - The location along the *zdir* axis in 3D space to position the - patch. - zdir : {'x', 'y', 'z'} - Plane to plot patch orthogonal to. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - super().__init__(*args, **kwargs) - self.set_3d_properties(zs, zdir) - - def set_3d_properties(self, verts, zs=0, zdir='z'): - """ - Set the *z* position and direction of the patch. - - Parameters - ---------- - verts : - zs : float - The location along the *zdir* axis in 3D space to position the - patch. - zdir : {'x', 'y', 'z'} - Plane to plot patch orthogonal to. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - zs = np.broadcast_to(zs, len(verts)) - self._segment3d = [juggle_axes(x, y, z, zdir) - for ((x, y), z) in zip(verts, zs)] - - def get_path(self): - return self._path2d - - def do_3d_projection(self): - s = self._segment3d - xs, ys, zs = zip(*s) - vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, - self.axes.M) - self._path2d = mpath.Path(np.column_stack([vxs, vys])) - return min(vzs) - - -class PathPatch3D(Patch3D): - """ - 3D PathPatch object. 
- """ - - def __init__(self, path, *, zs=(), zdir='z', **kwargs): - """ - Parameters - ---------- - path : - zs : float - The location along the *zdir* axis in 3D space to position the - path patch. - zdir : {'x', 'y', 'z', 3-tuple} - Plane to plot path patch orthogonal to. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - # Not super().__init__! - Patch.__init__(self, **kwargs) - self.set_3d_properties(path, zs, zdir) - - def set_3d_properties(self, path, zs=0, zdir='z'): - """ - Set the *z* position and direction of the path patch. - - Parameters - ---------- - path : - zs : float - The location along the *zdir* axis in 3D space to position the - path patch. - zdir : {'x', 'y', 'z', 3-tuple} - Plane to plot path patch orthogonal to. Default: 'z'. - See `.get_dir_vector` for a description of the values. - """ - Patch3D.set_3d_properties(self, path.vertices, zs=zs, zdir=zdir) - self._code3d = path.codes - - def do_3d_projection(self): - s = self._segment3d - xs, ys, zs = zip(*s) - vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, - self.axes.M) - self._path2d = mpath.Path(np.column_stack([vxs, vys]), self._code3d) - return min(vzs) - - -def _get_patch_verts(patch): - """Return a list of vertices for the path of a patch.""" - trans = patch.get_patch_transform() - path = patch.get_path() - polygons = path.to_polygons(trans) - return polygons[0] if len(polygons) else np.array([]) - - -def patch_2d_to_3d(patch, z=0, zdir='z'): - """Convert a `.Patch` to a `.Patch3D` object.""" - verts = _get_patch_verts(patch) - patch.__class__ = Patch3D - patch.set_3d_properties(verts, z, zdir) - - -def pathpatch_2d_to_3d(pathpatch, z=0, zdir='z'): - """Convert a `.PathPatch` to a `.PathPatch3D` object.""" - path = pathpatch.get_path() - trans = pathpatch.get_patch_transform() - - mpath = trans.transform_path(path) - pathpatch.__class__ = PathPatch3D - pathpatch.set_3d_properties(mpath, z, zdir) - - -class Patch3DCollection(PatchCollection): - """ - A collection of 3D patches. - """ - - def __init__(self, *args, zs=0, zdir='z', depthshade=True, **kwargs): - """ - Create a collection of flat 3D patches with its normal vector - pointed in *zdir* direction, and located at *zs* on the *zdir* - axis. 'zs' can be a scalar or an array-like of the same length as - the number of patches in the collection. - - Constructor arguments are the same as for - :class:`~matplotlib.collections.PatchCollection`. In addition, - keywords *zs=0* and *zdir='z'* are available. - - Also, the keyword argument *depthshade* is available to indicate - whether to shade the patches in order to give the appearance of depth - (default is *True*). This is typically desired in scatter plots. - """ - self._depthshade = depthshade - super().__init__(*args, **kwargs) - self.set_3d_properties(zs, zdir) - - def get_depthshade(self): - return self._depthshade - - def set_depthshade(self, depthshade): - """ - Set whether depth shading is performed on collection members. - - Parameters - ---------- - depthshade : bool - Whether to shade the patches in order to give the appearance of - depth. - """ - self._depthshade = depthshade - self.stale = True - - def set_sort_zpos(self, val): - """Set the position to use for z-sorting.""" - self._sort_zpos = val - self.stale = True - - def set_3d_properties(self, zs, zdir): - """ - Set the *z* positions and direction of the patches. 
- - Parameters - ---------- - zs : float or array of floats - The location or locations to place the patches in the collection - along the *zdir* axis. - zdir : {'x', 'y', 'z'} - Plane to plot patches orthogonal to. - All patches must have the same direction. - See `.get_dir_vector` for a description of the values. - """ - # Force the collection to initialize the face and edgecolors - # just in case it is a scalarmappable with a colormap. - self.update_scalarmappable() - offsets = self.get_offsets() - if len(offsets) > 0: - xs, ys = offsets.T - else: - xs = [] - ys = [] - self._offsets3d = juggle_axes(xs, ys, np.atleast_1d(zs), zdir) - self._z_markers_idx = slice(-1) - self._vzs = None - self.stale = True - - def do_3d_projection(self): - xs, ys, zs = self._offsets3d - vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, - self.axes.M) - self._vzs = vzs - super().set_offsets(np.column_stack([vxs, vys])) - - if vzs.size > 0: - return min(vzs) - else: - return np.nan - - def _maybe_depth_shade_and_sort_colors(self, color_array): - color_array = ( - _zalpha(color_array, self._vzs) - if self._vzs is not None and self._depthshade - else color_array - ) - if len(color_array) > 1: - color_array = color_array[self._z_markers_idx] - return mcolors.to_rgba_array(color_array, self._alpha) - - def get_facecolor(self): - return self._maybe_depth_shade_and_sort_colors(super().get_facecolor()) - - def get_edgecolor(self): - # We need this check here to make sure we do not double-apply the depth - # based alpha shading when the edge color is "face" which means the - # edge colour should be identical to the face colour. - if cbook._str_equal(self._edgecolors, 'face'): - return self.get_facecolor() - return self._maybe_depth_shade_and_sort_colors(super().get_edgecolor()) - - -class Path3DCollection(PathCollection): - """ - A collection of 3D paths. - """ - - def __init__(self, *args, zs=0, zdir='z', depthshade=True, **kwargs): - """ - Create a collection of flat 3D paths with its normal vector - pointed in *zdir* direction, and located at *zs* on the *zdir* - axis. 'zs' can be a scalar or an array-like of the same length as - the number of paths in the collection. - - Constructor arguments are the same as for - :class:`~matplotlib.collections.PathCollection`. In addition, - keywords *zs=0* and *zdir='z'* are available. - - Also, the keyword argument *depthshade* is available to indicate - whether to shade the patches in order to give the appearance of depth - (default is *True*). This is typically desired in scatter plots. - """ - self._depthshade = depthshade - self._in_draw = False - super().__init__(*args, **kwargs) - self.set_3d_properties(zs, zdir) - self._offset_zordered = None - - def draw(self, renderer): - with self._use_zordered_offset(): - with cbook._setattr_cm(self, _in_draw=True): - super().draw(renderer) - - def set_sort_zpos(self, val): - """Set the position to use for z-sorting.""" - self._sort_zpos = val - self.stale = True - - def set_3d_properties(self, zs, zdir): - """ - Set the *z* positions and direction of the paths. - - Parameters - ---------- - zs : float or array of floats - The location or locations to place the paths in the collection - along the *zdir* axis. - zdir : {'x', 'y', 'z'} - Plane to plot paths orthogonal to. - All paths must have the same direction. - See `.get_dir_vector` for a description of the values. - """ - # Force the collection to initialize the face and edgecolors - # just in case it is a scalarmappable with a colormap. 
- self.update_scalarmappable() - offsets = self.get_offsets() - if len(offsets) > 0: - xs, ys = offsets.T - else: - xs = [] - ys = [] - self._offsets3d = juggle_axes(xs, ys, np.atleast_1d(zs), zdir) - # In the base draw methods we access the attributes directly which - # means we cannot resolve the shuffling in the getter methods like - # we do for the edge and face colors. - # - # This means we need to carry around a cache of the unsorted sizes and - # widths (postfixed with 3d) and in `do_3d_projection` set the - # depth-sorted version of that data into the private state used by the - # base collection class in its draw method. - # - # Grab the current sizes and linewidths to preserve them. - self._sizes3d = self._sizes - self._linewidths3d = np.array(self._linewidths) - xs, ys, zs = self._offsets3d - - # Sort the points based on z coordinates - # Performance optimization: Create a sorted index array and reorder - # points and point properties according to the index array - self._z_markers_idx = slice(-1) - self._vzs = None - self.stale = True - - def set_sizes(self, sizes, dpi=72.0): - super().set_sizes(sizes, dpi) - if not self._in_draw: - self._sizes3d = sizes - - def set_linewidth(self, lw): - super().set_linewidth(lw) - if not self._in_draw: - self._linewidths3d = np.array(self._linewidths) - - def get_depthshade(self): - return self._depthshade - - def set_depthshade(self, depthshade): - """ - Set whether depth shading is performed on collection members. - - Parameters - ---------- - depthshade : bool - Whether to shade the patches in order to give the appearance of - depth. - """ - self._depthshade = depthshade - self.stale = True - - def do_3d_projection(self): - xs, ys, zs = self._offsets3d - vxs, vys, vzs, vis = proj3d.proj_transform_clip(xs, ys, zs, - self.axes.M) - # Sort the points based on z coordinates - # Performance optimization: Create a sorted index array and reorder - # points and point properties according to the index array - z_markers_idx = self._z_markers_idx = np.argsort(vzs)[::-1] - self._vzs = vzs - - # we have to special case the sizes because of code in collections.py - # as the draw method does - # self.set_sizes(self._sizes, self.figure.dpi) - # so we cannot rely on doing the sorting on the way out via get_* - - if len(self._sizes3d) > 1: - self._sizes = self._sizes3d[z_markers_idx] - - if len(self._linewidths3d) > 1: - self._linewidths = self._linewidths3d[z_markers_idx] - - PathCollection.set_offsets(self, np.column_stack((vxs, vys))) - - # Re-order items - vzs = vzs[z_markers_idx] - vxs = vxs[z_markers_idx] - vys = vys[z_markers_idx] - - # Store ordered offset for drawing purpose - self._offset_zordered = np.column_stack((vxs, vys)) - - return np.min(vzs) if vzs.size else np.nan - - @contextmanager - def _use_zordered_offset(self): - if self._offset_zordered is None: - # Do nothing - yield - else: - # Swap offset with z-ordered offset - old_offset = self._offsets - super().set_offsets(self._offset_zordered) - try: - yield - finally: - self._offsets = old_offset - - def _maybe_depth_shade_and_sort_colors(self, color_array): - color_array = ( - _zalpha(color_array, self._vzs) - if self._vzs is not None and self._depthshade - else color_array - ) - if len(color_array) > 1: - color_array = color_array[self._z_markers_idx] - return mcolors.to_rgba_array(color_array, self._alpha) - - def get_facecolor(self): - return self._maybe_depth_shade_and_sort_colors(super().get_facecolor()) - - def get_edgecolor(self): - # We need this check here to make sure we do not 
double-apply the depth - # based alpha shading when the edge color is "face" which means the - # edge colour should be identical to the face colour. - if cbook._str_equal(self._edgecolors, 'face'): - return self.get_facecolor() - return self._maybe_depth_shade_and_sort_colors(super().get_edgecolor()) - - -def patch_collection_2d_to_3d(col, zs=0, zdir='z', depthshade=True): - """ - Convert a `.PatchCollection` into a `.Patch3DCollection` object - (or a `.PathCollection` into a `.Path3DCollection` object). - - Parameters - ---------- - zs : float or array of floats - The location or locations to place the patches in the collection along - the *zdir* axis. Default: 0. - zdir : {'x', 'y', 'z'} - The axis in which to place the patches. Default: "z". - See `.get_dir_vector` for a description of the values. - depthshade - Whether to shade the patches to give a sense of depth. Default: *True*. - - """ - if isinstance(col, PathCollection): - col.__class__ = Path3DCollection - col._offset_zordered = None - elif isinstance(col, PatchCollection): - col.__class__ = Patch3DCollection - col._depthshade = depthshade - col._in_draw = False - col.set_3d_properties(zs, zdir) - - -class Poly3DCollection(PolyCollection): - """ - A collection of 3D polygons. - - .. note:: - **Filling of 3D polygons** - - There is no simple definition of the enclosed surface of a 3D polygon - unless the polygon is planar. - - In practice, Matplotlib fills the 2D projection of the polygon. This - gives a correct filling appearance only for planar polygons. For all - other polygons, you'll find orientations in which the edges of the - polygon intersect in the projection. This will lead to an incorrect - visualization of the 3D area. - - If you need filled areas, it is recommended to create them via - `~mpl_toolkits.mplot3d.axes3d.Axes3D.plot_trisurf`, which creates a - triangulation and thus generates consistent surfaces. - """ - - def __init__(self, verts, *args, zsort='average', shade=False, - lightsource=None, **kwargs): - """ - Parameters - ---------- - verts : list of (N, 3) array-like - The sequence of polygons [*verts0*, *verts1*, ...] where each - element *verts_i* defines the vertices of polygon *i* as a 2D - array-like of shape (N, 3). - zsort : {'average', 'min', 'max'}, default: 'average' - The calculation method for the z-order. - See `~.Poly3DCollection.set_zsort` for details. - shade : bool, default: False - Whether to shade *facecolors* and *edgecolors*. When activating - *shade*, *facecolors* and/or *edgecolors* must be provided. - - .. versionadded:: 3.7 - - lightsource : `~matplotlib.colors.LightSource`, optional - The lightsource to use when *shade* is True. - - .. versionadded:: 3.7 - - *args, **kwargs - All other parameters are forwarded to `.PolyCollection`. - - Notes - ----- - Note that this class does a bit of magic with the _facecolors - and _edgecolors properties. 
- """ - if shade: - normals = _generate_normals(verts) - facecolors = kwargs.get('facecolors', None) - if facecolors is not None: - kwargs['facecolors'] = _shade_colors( - facecolors, normals, lightsource - ) - - edgecolors = kwargs.get('edgecolors', None) - if edgecolors is not None: - kwargs['edgecolors'] = _shade_colors( - edgecolors, normals, lightsource - ) - if facecolors is None and edgecolors is None: - raise ValueError( - "You must provide facecolors, edgecolors, or both for " - "shade to work.") - super().__init__(verts, *args, **kwargs) - if isinstance(verts, np.ndarray): - if verts.ndim != 3: - raise ValueError('verts must be a list of (N, 3) array-like') - else: - if any(len(np.shape(vert)) != 2 for vert in verts): - raise ValueError('verts must be a list of (N, 3) array-like') - self.set_zsort(zsort) - self._codes3d = None - - _zsort_functions = { - 'average': np.average, - 'min': np.min, - 'max': np.max, - } - - def set_zsort(self, zsort): - """ - Set the calculation method for the z-order. - - Parameters - ---------- - zsort : {'average', 'min', 'max'} - The function applied on the z-coordinates of the vertices in the - viewer's coordinate system, to determine the z-order. - """ - self._zsortfunc = self._zsort_functions[zsort] - self._sort_zpos = None - self.stale = True - - def get_vector(self, segments3d): - """Optimize points for projection.""" - if len(segments3d): - xs, ys, zs = np.vstack(segments3d).T - else: # vstack can't stack zero arrays. - xs, ys, zs = [], [], [] - ones = np.ones(len(xs)) - self._vec = np.array([xs, ys, zs, ones]) - - indices = [0, *np.cumsum([len(segment) for segment in segments3d])] - self._segslices = [*map(slice, indices[:-1], indices[1:])] - - def set_verts(self, verts, closed=True): - """ - Set 3D vertices. - - Parameters - ---------- - verts : list of (N, 3) array-like - The sequence of polygons [*verts0*, *verts1*, ...] where each - element *verts_i* defines the vertices of polygon *i* as a 2D - array-like of shape (N, 3). - closed : bool, default: True - Whether the polygon should be closed by adding a CLOSEPOLY - connection at the end. - """ - self.get_vector(verts) - # 2D verts will be updated at draw time - super().set_verts([], False) - self._closed = closed - - def set_verts_and_codes(self, verts, codes): - """Set 3D vertices with path codes.""" - # set vertices with closed=False to prevent PolyCollection from - # setting path codes - self.set_verts(verts, closed=False) - # and set our own codes instead. - self._codes3d = codes - - def set_3d_properties(self): - # Force the collection to initialize the face and edgecolors - # just in case it is a scalarmappable with a colormap. - self.update_scalarmappable() - self._sort_zpos = None - self.set_zsort('average') - self._facecolor3d = PolyCollection.get_facecolor(self) - self._edgecolor3d = PolyCollection.get_edgecolor(self) - self._alpha3d = PolyCollection.get_alpha(self) - self.stale = True - - def set_sort_zpos(self, val): - """Set the position to use for z-sorting.""" - self._sort_zpos = val - self.stale = True - - def do_3d_projection(self): - """ - Perform the 3D projection for this object. - """ - if self._A is not None: - # force update of color mapping because we re-order them - # below. If we do not do this here, the 2D draw will call - # this, but we will never port the color mapped values back - # to the 3D versions. - # - # We hold the 3D versions in a fixed order (the order the user - # passed in) and sort the 2D version by view depth. 
- self.update_scalarmappable() - if self._face_is_mapped: - self._facecolor3d = self._facecolors - if self._edge_is_mapped: - self._edgecolor3d = self._edgecolors - txs, tys, tzs = proj3d._proj_transform_vec(self._vec, self.axes.M) - xyzlist = [(txs[sl], tys[sl], tzs[sl]) for sl in self._segslices] - - # This extra fuss is to re-order face / edge colors - cface = self._facecolor3d - cedge = self._edgecolor3d - if len(cface) != len(xyzlist): - cface = cface.repeat(len(xyzlist), axis=0) - if len(cedge) != len(xyzlist): - if len(cedge) == 0: - cedge = cface - else: - cedge = cedge.repeat(len(xyzlist), axis=0) - - if xyzlist: - # sort by depth (furthest drawn first) - z_segments_2d = sorted( - ((self._zsortfunc(zs), np.column_stack([xs, ys]), fc, ec, idx) - for idx, ((xs, ys, zs), fc, ec) - in enumerate(zip(xyzlist, cface, cedge))), - key=lambda x: x[0], reverse=True) - - _, segments_2d, self._facecolors2d, self._edgecolors2d, idxs = \ - zip(*z_segments_2d) - else: - segments_2d = [] - self._facecolors2d = np.empty((0, 4)) - self._edgecolors2d = np.empty((0, 4)) - idxs = [] - - if self._codes3d is not None: - codes = [self._codes3d[idx] for idx in idxs] - PolyCollection.set_verts_and_codes(self, segments_2d, codes) - else: - PolyCollection.set_verts(self, segments_2d, self._closed) - - if len(self._edgecolor3d) != len(cface): - self._edgecolors2d = self._edgecolor3d - - # Return zorder value - if self._sort_zpos is not None: - zvec = np.array([[0], [0], [self._sort_zpos], [1]]) - ztrans = proj3d._proj_transform_vec(zvec, self.axes.M) - return ztrans[2][0] - elif tzs.size > 0: - # FIXME: Some results still don't look quite right. - # In particular, examine contourf3d_demo2.py - # with az = -54 and elev = -45. - return np.min(tzs) - else: - return np.nan - - def set_facecolor(self, colors): - # docstring inherited - super().set_facecolor(colors) - self._facecolor3d = PolyCollection.get_facecolor(self) - - def set_edgecolor(self, colors): - # docstring inherited - super().set_edgecolor(colors) - self._edgecolor3d = PolyCollection.get_edgecolor(self) - - def set_alpha(self, alpha): - # docstring inherited - artist.Artist.set_alpha(self, alpha) - try: - self._facecolor3d = mcolors.to_rgba_array( - self._facecolor3d, self._alpha) - except (AttributeError, TypeError, IndexError): - pass - try: - self._edgecolors = mcolors.to_rgba_array( - self._edgecolor3d, self._alpha) - except (AttributeError, TypeError, IndexError): - pass - self.stale = True - - def get_facecolor(self): - # docstring inherited - # self._facecolors2d is not initialized until do_3d_projection - if not hasattr(self, '_facecolors2d'): - self.axes.M = self.axes.get_proj() - self.do_3d_projection() - return np.asarray(self._facecolors2d) - - def get_edgecolor(self): - # docstring inherited - # self._edgecolors2d is not initialized until do_3d_projection - if not hasattr(self, '_edgecolors2d'): - self.axes.M = self.axes.get_proj() - self.do_3d_projection() - return np.asarray(self._edgecolors2d) - - -def poly_collection_2d_to_3d(col, zs=0, zdir='z'): - """ - Convert a `.PolyCollection` into a `.Poly3DCollection` object. - - Parameters - ---------- - zs : float or array of floats - The location or locations to place the polygons in the collection along - the *zdir* axis. Default: 0. - zdir : {'x', 'y', 'z'} - The axis in which to place the patches. Default: 'z'. - See `.get_dir_vector` for a description of the values. 
- """ - segments_3d, codes = _paths_to_3d_segments_with_codes( - col.get_paths(), zs, zdir) - col.__class__ = Poly3DCollection - col.set_verts_and_codes(segments_3d, codes) - col.set_3d_properties() - - -def juggle_axes(xs, ys, zs, zdir): - """ - Reorder coordinates so that 2D *xs*, *ys* can be plotted in the plane - orthogonal to *zdir*. *zdir* is normally 'x', 'y' or 'z'. However, if - *zdir* starts with a '-' it is interpreted as a compensation for - `rotate_axes`. - """ - if zdir == 'x': - return zs, xs, ys - elif zdir == 'y': - return xs, zs, ys - elif zdir[0] == '-': - return rotate_axes(xs, ys, zs, zdir) - else: - return xs, ys, zs - - -def rotate_axes(xs, ys, zs, zdir): - """ - Reorder coordinates so that the axes are rotated with *zdir* along - the original z axis. Prepending the axis with a '-' does the - inverse transform, so *zdir* can be 'x', '-x', 'y', '-y', 'z' or '-z'. - """ - if zdir in ('x', '-y'): - return ys, zs, xs - elif zdir in ('-x', 'y'): - return zs, xs, ys - else: - return xs, ys, zs - - -def _zalpha(colors, zs): - """Modify the alphas of the color list according to depth.""" - # FIXME: This only works well if the points for *zs* are well-spaced - # in all three dimensions. Otherwise, at certain orientations, - # the min and max zs are very close together. - # Should really normalize against the viewing depth. - if len(colors) == 0 or len(zs) == 0: - return np.zeros((0, 4)) - norm = Normalize(min(zs), max(zs)) - sats = 1 - norm(zs) * 0.7 - rgba = np.broadcast_to(mcolors.to_rgba_array(colors), (len(zs), 4)) - return np.column_stack([rgba[:, :3], rgba[:, 3] * sats]) - - -def _generate_normals(polygons): - """ - Compute the normals of a list of polygons, one normal per polygon. - - Normals point towards the viewer for a face with its vertices in - counterclockwise order, following the right hand rule. - - Uses three points equally spaced around the polygon. This method assumes - that the points are in a plane. Otherwise, more than one shade is required, - which is not supported. - - Parameters - ---------- - polygons : list of (M_i, 3) array-like, or (..., M, 3) array-like - A sequence of polygons to compute normals for, which can have - varying numbers of vertices. If the polygons all have the same - number of vertices and array is passed, then the operation will - be vectorized. - - Returns - ------- - normals : (..., 3) array - A normal vector estimated for the polygon. - """ - if isinstance(polygons, np.ndarray): - # optimization: polygons all have the same number of points, so can - # vectorize - n = polygons.shape[-2] - i1, i2, i3 = 0, n//3, 2*n//3 - v1 = polygons[..., i1, :] - polygons[..., i2, :] - v2 = polygons[..., i2, :] - polygons[..., i3, :] - else: - # The subtraction doesn't vectorize because polygons is jagged. - v1 = np.empty((len(polygons), 3)) - v2 = np.empty((len(polygons), 3)) - for poly_i, ps in enumerate(polygons): - n = len(ps) - i1, i2, i3 = 0, n//3, 2*n//3 - v1[poly_i, :] = ps[i1, :] - ps[i2, :] - v2[poly_i, :] = ps[i2, :] - ps[i3, :] - return np.cross(v1, v2) - - -def _shade_colors(color, normals, lightsource=None): - """ - Shade *color* using normal vectors given by *normals*, - assuming a *lightsource* (using default position if not given). - *color* can also be an array of the same length as *normals*. 
- """ - if lightsource is None: - # chosen for backwards-compatibility - lightsource = mcolors.LightSource(azdeg=225, altdeg=19.4712) - - with np.errstate(invalid="ignore"): - shade = ((normals / np.linalg.norm(normals, axis=1, keepdims=True)) - @ lightsource.direction) - mask = ~np.isnan(shade) - - if mask.any(): - # convert dot product to allowed shading fractions - in_norm = mcolors.Normalize(-1, 1) - out_norm = mcolors.Normalize(0.3, 1).inverse - - def norm(x): - return out_norm(in_norm(x)) - - shade[~mask] = 0 - - color = mcolors.to_rgba_array(color) - # shape of color should be (M, 4) (where M is number of faces) - # shape of shade should be (M,) - # colors should have final shape of (M, 4) - alpha = color[:, 3] - colors = norm(shade)[:, np.newaxis] * color - colors[:, 3] = alpha - else: - colors = np.asanyarray(color).copy() - - return colors diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_datetime.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_datetime.py deleted file mode 100644 index 547ebf9d67465cf3e012f8837a55e4a2455ed5b3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/tests/test_datetime.py +++ /dev/null @@ -1,2569 +0,0 @@ - -import numpy -import numpy as np -import datetime -import pytest -from numpy.testing import ( - IS_WASM, - assert_, assert_equal, assert_raises, assert_warns, suppress_warnings, - assert_raises_regex, assert_array_equal, - ) -from numpy.compat import pickle - -# Use pytz to test out various time zones if available -try: - from pytz import timezone as tz - _has_pytz = True -except ImportError: - _has_pytz = False - -try: - RecursionError -except NameError: - RecursionError = RuntimeError # python < 3.5 - - -class TestDateTime: - def test_datetime_dtype_creation(self): - for unit in ['Y', 'M', 'W', 'D', - 'h', 'm', 's', 'ms', 'us', - 'μs', # alias for us - 'ns', 'ps', 'fs', 'as']: - dt1 = np.dtype('M8[750%s]' % unit) - assert_(dt1 == np.dtype('datetime64[750%s]' % unit)) - dt2 = np.dtype('m8[%s]' % unit) - assert_(dt2 == np.dtype('timedelta64[%s]' % unit)) - - # Generic units shouldn't add [] to the end - assert_equal(str(np.dtype("M8")), "datetime64") - - # Should be possible to specify the endianness - assert_equal(np.dtype("=M8"), np.dtype("M8")) - assert_equal(np.dtype("=M8[s]"), np.dtype("M8[s]")) - assert_(np.dtype(">M8") == np.dtype("M8") or - np.dtype("M8[D]") == np.dtype("M8[D]") or - np.dtype("M8") != np.dtype("m8") == np.dtype("m8") or - np.dtype("m8[D]") == np.dtype("m8[D]") or - np.dtype("m8") != np.dtype(" Scalars - assert_equal(np.datetime64(b, '[s]'), np.datetime64('NaT', '[s]')) - assert_equal(np.datetime64(b, '[ms]'), np.datetime64('NaT', '[ms]')) - assert_equal(np.datetime64(b, '[M]'), np.datetime64('NaT', '[M]')) - assert_equal(np.datetime64(b, '[Y]'), np.datetime64('NaT', '[Y]')) - assert_equal(np.datetime64(b, '[W]'), np.datetime64('NaT', '[W]')) - - # Arrays -> Scalars - assert_equal(np.datetime64(a, '[s]'), np.datetime64('NaT', '[s]')) - assert_equal(np.datetime64(a, '[ms]'), np.datetime64('NaT', '[ms]')) - assert_equal(np.datetime64(a, '[M]'), np.datetime64('NaT', '[M]')) - assert_equal(np.datetime64(a, '[Y]'), np.datetime64('NaT', '[Y]')) - assert_equal(np.datetime64(a, '[W]'), np.datetime64('NaT', '[W]')) - - # NaN -> NaT - nan = np.array([np.nan] * 8) - fnan = nan.astype('f') - lnan = nan.astype('g') - cnan = nan.astype('D') - cfnan = nan.astype('F') - clnan = 
nan.astype('G') - - nat = np.array([np.datetime64('NaT')] * 8) - assert_equal(nan.astype('M8[ns]'), nat) - assert_equal(fnan.astype('M8[ns]'), nat) - assert_equal(lnan.astype('M8[ns]'), nat) - assert_equal(cnan.astype('M8[ns]'), nat) - assert_equal(cfnan.astype('M8[ns]'), nat) - assert_equal(clnan.astype('M8[ns]'), nat) - - nat = np.array([np.timedelta64('NaT')] * 8) - assert_equal(nan.astype('timedelta64[ns]'), nat) - assert_equal(fnan.astype('timedelta64[ns]'), nat) - assert_equal(lnan.astype('timedelta64[ns]'), nat) - assert_equal(cnan.astype('timedelta64[ns]'), nat) - assert_equal(cfnan.astype('timedelta64[ns]'), nat) - assert_equal(clnan.astype('timedelta64[ns]'), nat) - - def test_days_creation(self): - assert_equal(np.array('1599', dtype='M8[D]').astype('i8'), - (1600-1970)*365 - (1972-1600)/4 + 3 - 365) - assert_equal(np.array('1600', dtype='M8[D]').astype('i8'), - (1600-1970)*365 - (1972-1600)/4 + 3) - assert_equal(np.array('1601', dtype='M8[D]').astype('i8'), - (1600-1970)*365 - (1972-1600)/4 + 3 + 366) - assert_equal(np.array('1900', dtype='M8[D]').astype('i8'), - (1900-1970)*365 - (1970-1900)//4) - assert_equal(np.array('1901', dtype='M8[D]').astype('i8'), - (1900-1970)*365 - (1970-1900)//4 + 365) - assert_equal(np.array('1967', dtype='M8[D]').astype('i8'), -3*365 - 1) - assert_equal(np.array('1968', dtype='M8[D]').astype('i8'), -2*365 - 1) - assert_equal(np.array('1969', dtype='M8[D]').astype('i8'), -1*365) - assert_equal(np.array('1970', dtype='M8[D]').astype('i8'), 0*365) - assert_equal(np.array('1971', dtype='M8[D]').astype('i8'), 1*365) - assert_equal(np.array('1972', dtype='M8[D]').astype('i8'), 2*365) - assert_equal(np.array('1973', dtype='M8[D]').astype('i8'), 3*365 + 1) - assert_equal(np.array('1974', dtype='M8[D]').astype('i8'), 4*365 + 1) - assert_equal(np.array('2000', dtype='M8[D]').astype('i8'), - (2000 - 1970)*365 + (2000 - 1972)//4) - assert_equal(np.array('2001', dtype='M8[D]').astype('i8'), - (2000 - 1970)*365 + (2000 - 1972)//4 + 366) - assert_equal(np.array('2400', dtype='M8[D]').astype('i8'), - (2400 - 1970)*365 + (2400 - 1972)//4 - 3) - assert_equal(np.array('2401', dtype='M8[D]').astype('i8'), - (2400 - 1970)*365 + (2400 - 1972)//4 - 3 + 366) - - assert_equal(np.array('1600-02-29', dtype='M8[D]').astype('i8'), - (1600-1970)*365 - (1972-1600)//4 + 3 + 31 + 28) - assert_equal(np.array('1600-03-01', dtype='M8[D]').astype('i8'), - (1600-1970)*365 - (1972-1600)//4 + 3 + 31 + 29) - assert_equal(np.array('2000-02-29', dtype='M8[D]').astype('i8'), - (2000 - 1970)*365 + (2000 - 1972)//4 + 31 + 28) - assert_equal(np.array('2000-03-01', dtype='M8[D]').astype('i8'), - (2000 - 1970)*365 + (2000 - 1972)//4 + 31 + 29) - assert_equal(np.array('2001-03-22', dtype='M8[D]').astype('i8'), - (2000 - 1970)*365 + (2000 - 1972)//4 + 366 + 31 + 28 + 21) - - def test_days_to_pydate(self): - assert_equal(np.array('1599', dtype='M8[D]').astype('O'), - datetime.date(1599, 1, 1)) - assert_equal(np.array('1600', dtype='M8[D]').astype('O'), - datetime.date(1600, 1, 1)) - assert_equal(np.array('1601', dtype='M8[D]').astype('O'), - datetime.date(1601, 1, 1)) - assert_equal(np.array('1900', dtype='M8[D]').astype('O'), - datetime.date(1900, 1, 1)) - assert_equal(np.array('1901', dtype='M8[D]').astype('O'), - datetime.date(1901, 1, 1)) - assert_equal(np.array('2000', dtype='M8[D]').astype('O'), - datetime.date(2000, 1, 1)) - assert_equal(np.array('2001', dtype='M8[D]').astype('O'), - datetime.date(2001, 1, 1)) - assert_equal(np.array('1600-02-29', dtype='M8[D]').astype('O'), - 
datetime.date(1600, 2, 29)) - assert_equal(np.array('1600-03-01', dtype='M8[D]').astype('O'), - datetime.date(1600, 3, 1)) - assert_equal(np.array('2001-03-22', dtype='M8[D]').astype('O'), - datetime.date(2001, 3, 22)) - - def test_dtype_comparison(self): - assert_(not (np.dtype('M8[us]') == np.dtype('M8[ms]'))) - assert_(np.dtype('M8[us]') != np.dtype('M8[ms]')) - assert_(np.dtype('M8[2D]') != np.dtype('M8[D]')) - assert_(np.dtype('M8[D]') != np.dtype('M8[2D]')) - - def test_pydatetime_creation(self): - a = np.array(['1960-03-12', datetime.date(1960, 3, 12)], dtype='M8[D]') - assert_equal(a[0], a[1]) - a = np.array(['1999-12-31', datetime.date(1999, 12, 31)], dtype='M8[D]') - assert_equal(a[0], a[1]) - a = np.array(['2000-01-01', datetime.date(2000, 1, 1)], dtype='M8[D]') - assert_equal(a[0], a[1]) - # Will fail if the date changes during the exact right moment - a = np.array(['today', datetime.date.today()], dtype='M8[D]') - assert_equal(a[0], a[1]) - # datetime.datetime.now() returns local time, not UTC - #a = np.array(['now', datetime.datetime.now()], dtype='M8[s]') - #assert_equal(a[0], a[1]) - - # we can give a datetime.date time units - assert_equal(np.array(datetime.date(1960, 3, 12), dtype='M8[s]'), - np.array(np.datetime64('1960-03-12T00:00:00'))) - - def test_datetime_string_conversion(self): - a = ['2011-03-16', '1920-01-01', '2013-05-19'] - str_a = np.array(a, dtype='S') - uni_a = np.array(a, dtype='U') - dt_a = np.array(a, dtype='M') - - # String to datetime - assert_equal(dt_a, str_a.astype('M')) - assert_equal(dt_a.dtype, str_a.astype('M').dtype) - dt_b = np.empty_like(dt_a) - dt_b[...] = str_a - assert_equal(dt_a, dt_b) - - # Datetime to string - assert_equal(str_a, dt_a.astype('S0')) - str_b = np.empty_like(str_a) - str_b[...] = dt_a - assert_equal(str_a, str_b) - - # Unicode to datetime - assert_equal(dt_a, uni_a.astype('M')) - assert_equal(dt_a.dtype, uni_a.astype('M').dtype) - dt_b = np.empty_like(dt_a) - dt_b[...] = uni_a - assert_equal(dt_a, dt_b) - - # Datetime to unicode - assert_equal(uni_a, dt_a.astype('U')) - uni_b = np.empty_like(uni_a) - uni_b[...] = dt_a - assert_equal(uni_a, uni_b) - - # Datetime to long string - gh-9712 - assert_equal(str_a, dt_a.astype((np.bytes_, 128))) - str_b = np.empty(str_a.shape, dtype=(np.bytes_, 128)) - str_b[...] = dt_a - assert_equal(str_a, str_b) - - @pytest.mark.parametrize("time_dtype", ["m8[D]", "M8[Y]"]) - def test_time_byteswapping(self, time_dtype): - times = np.array(["2017", "NaT"], dtype=time_dtype) - times_swapped = times.astype(times.dtype.newbyteorder()) - assert_array_equal(times, times_swapped) - - unswapped = times_swapped.view(np.int64).newbyteorder() - assert_array_equal(unswapped, times.view(np.int64)) - - @pytest.mark.parametrize(["time1", "time2"], - [("M8[s]", "M8[D]"), ("m8[s]", "m8[ns]")]) - def test_time_byteswapped_cast(self, time1, time2): - dtype1 = np.dtype(time1) - dtype2 = np.dtype(time2) - times = np.array(["2017", "NaT"], dtype=dtype1) - expected = times.astype(dtype2) - - # Test that every byte-swapping combination also returns the same - # results (previous tests check that this comparison works fine). 
- res = times.astype(dtype1.newbyteorder()).astype(dtype2) - assert_array_equal(res, expected) - res = times.astype(dtype2.newbyteorder()) - assert_array_equal(res, expected) - res = times.astype(dtype1.newbyteorder()).astype(dtype2.newbyteorder()) - assert_array_equal(res, expected) - - @pytest.mark.parametrize("time_dtype", ["m8[D]", "M8[Y]"]) - @pytest.mark.parametrize("str_dtype", ["U", "S"]) - def test_datetime_conversions_byteorders(self, str_dtype, time_dtype): - times = np.array(["2017", "NaT"], dtype=time_dtype) - # Unfortunately, timedelta does not roundtrip: - from_strings = np.array(["2017", "NaT"], dtype=str_dtype) - to_strings = times.astype(str_dtype) # assume this is correct - - # Check that conversion from times to string works if src is swapped: - times_swapped = times.astype(times.dtype.newbyteorder()) - res = times_swapped.astype(str_dtype) - assert_array_equal(res, to_strings) - # And also if both are swapped: - res = times_swapped.astype(to_strings.dtype.newbyteorder()) - assert_array_equal(res, to_strings) - # only destination is swapped: - res = times.astype(to_strings.dtype.newbyteorder()) - assert_array_equal(res, to_strings) - - # Check that conversion from string to times works if src is swapped: - from_strings_swapped = from_strings.astype( - from_strings.dtype.newbyteorder()) - res = from_strings_swapped.astype(time_dtype) - assert_array_equal(res, times) - # And if both are swapped: - res = from_strings_swapped.astype(times.dtype.newbyteorder()) - assert_array_equal(res, times) - # Only destination is swapped: - res = from_strings.astype(times.dtype.newbyteorder()) - assert_array_equal(res, times) - - def test_datetime_array_str(self): - a = np.array(['2011-03-16', '1920-01-01', '2013-05-19'], dtype='M') - assert_equal(str(a), "['2011-03-16' '1920-01-01' '2013-05-19']") - - a = np.array(['2011-03-16T13:55', '1920-01-01T03:12'], dtype='M') - assert_equal(np.array2string(a, separator=', ', - formatter={'datetime': lambda x: - "'%s'" % np.datetime_as_string(x, timezone='UTC')}), - "['2011-03-16T13:55Z', '1920-01-01T03:12Z']") - - # Check that one NaT doesn't corrupt subsequent entries - a = np.array(['2010', 'NaT', '2030']).astype('M') - assert_equal(str(a), "['2010' 'NaT' '2030']") - - def test_timedelta_array_str(self): - a = np.array([-1, 0, 100], dtype='m') - assert_equal(str(a), "[ -1 0 100]") - a = np.array(['NaT', 'NaT'], dtype='m') - assert_equal(str(a), "['NaT' 'NaT']") - # Check right-alignment with NaTs - a = np.array([-1, 'NaT', 0], dtype='m') - assert_equal(str(a), "[ -1 'NaT' 0]") - a = np.array([-1, 'NaT', 1234567], dtype='m') - assert_equal(str(a), "[ -1 'NaT' 1234567]") - - # Test with other byteorder: - a = np.array([-1, 'NaT', 1234567], dtype='>m') - assert_equal(str(a), "[ -1 'NaT' 1234567]") - a = np.array([-1, 'NaT', 1234567], dtype=''\np4\nNNNI-1\nI-1\nI0\n((dp5\n(S'us'\np6\n" + \ - b"I1\nI1\nI1\ntp7\ntp8\ntp9\nb." 
- assert_equal(pickle.loads(pkl), np.dtype('>M8[us]')) - - def test_setstate(self): - "Verify that datetime dtype __setstate__ can handle bad arguments" - dt = np.dtype('>M8[us]') - assert_raises(ValueError, dt.__setstate__, (4, '>', None, None, None, -1, -1, 0, 1)) - assert_(dt.__reduce__()[2] == np.dtype('>M8[us]').__reduce__()[2]) - assert_raises(TypeError, dt.__setstate__, (4, '>', None, None, None, -1, -1, 0, ({}, 'xxx'))) - assert_(dt.__reduce__()[2] == np.dtype('>M8[us]').__reduce__()[2]) - - def test_dtype_promotion(self): - # datetime datetime computes the metadata gcd - # timedelta timedelta computes the metadata gcd - for mM in ['m', 'M']: - assert_equal( - np.promote_types(np.dtype(mM+'8[2Y]'), np.dtype(mM+'8[2Y]')), - np.dtype(mM+'8[2Y]')) - assert_equal( - np.promote_types(np.dtype(mM+'8[12Y]'), np.dtype(mM+'8[15Y]')), - np.dtype(mM+'8[3Y]')) - assert_equal( - np.promote_types(np.dtype(mM+'8[62M]'), np.dtype(mM+'8[24M]')), - np.dtype(mM+'8[2M]')) - assert_equal( - np.promote_types(np.dtype(mM+'8[1W]'), np.dtype(mM+'8[2D]')), - np.dtype(mM+'8[1D]')) - assert_equal( - np.promote_types(np.dtype(mM+'8[W]'), np.dtype(mM+'8[13s]')), - np.dtype(mM+'8[s]')) - assert_equal( - np.promote_types(np.dtype(mM+'8[13W]'), np.dtype(mM+'8[49s]')), - np.dtype(mM+'8[7s]')) - # timedelta timedelta raises when there is no reasonable gcd - assert_raises(TypeError, np.promote_types, - np.dtype('m8[Y]'), np.dtype('m8[D]')) - assert_raises(TypeError, np.promote_types, - np.dtype('m8[M]'), np.dtype('m8[W]')) - # timedelta and float cannot be safely cast with each other - assert_raises(TypeError, np.promote_types, "float32", "m8") - assert_raises(TypeError, np.promote_types, "m8", "float32") - assert_raises(TypeError, np.promote_types, "uint64", "m8") - assert_raises(TypeError, np.promote_types, "m8", "uint64") - - # timedelta timedelta may overflow with big unit ranges - assert_raises(OverflowError, np.promote_types, - np.dtype('m8[W]'), np.dtype('m8[fs]')) - assert_raises(OverflowError, np.promote_types, - np.dtype('m8[s]'), np.dtype('m8[as]')) - - def test_cast_overflow(self): - # gh-4486 - def cast(): - numpy.datetime64("1971-01-01 00:00:00.000000000000000").astype("datetime64[%s]', - 'timedelta64[%s]']) - def test_isfinite_isinf_isnan_units(self, unit, dstr): - '''check isfinite, isinf, isnan for all units of M, m dtypes - ''' - arr_val = [123, -321, "NaT"] - arr = np.array(arr_val, dtype= dstr % unit) - pos = np.array([True, True, False]) - neg = np.array([False, False, True]) - false = np.array([False, False, False]) - assert_equal(np.isfinite(arr), pos) - assert_equal(np.isinf(arr), false) - assert_equal(np.isnan(arr), neg) - - def test_assert_equal(self): - assert_raises(AssertionError, assert_equal, - np.datetime64('nat'), np.timedelta64('nat')) - - def test_corecursive_input(self): - # construct a co-recursive list - a, b = [], [] - a.append(b) - b.append(a) - obj_arr = np.array([None]) - obj_arr[0] = a - - # At some point this caused a stack overflow (gh-11154). Now raises - # ValueError since the nested list cannot be converted to a datetime. - assert_raises(ValueError, obj_arr.astype, 'M8') - assert_raises(ValueError, obj_arr.astype, 'm8') - - @pytest.mark.parametrize("shape", [(), (1,)]) - def test_discovery_from_object_array(self, shape): - arr = np.array("2020-10-10", dtype=object).reshape(shape) - res = np.array("2020-10-10", dtype="M8").reshape(shape) - assert res.dtype == np.dtype("M8[D]") - assert_equal(arr.astype("M8"), res) - arr[...] 
= np.bytes_("2020-10-10") # try a numpy string type - assert_equal(arr.astype("M8"), res) - arr = arr.astype("S") - assert_equal(arr.astype("S").astype("M8"), res) - - @pytest.mark.parametrize("time_unit", [ - "Y", "M", "W", "D", "h", "m", "s", "ms", "us", "ns", "ps", "fs", "as", - # compound units - "10D", "2M", - ]) - def test_limit_symmetry(self, time_unit): - """ - Dates should have symmetric limits around the unix epoch at +/-np.int64 - """ - epoch = np.datetime64(0, time_unit) - latest = np.datetime64(np.iinfo(np.int64).max, time_unit) - earliest = np.datetime64(-np.iinfo(np.int64).max, time_unit) - - # above should not have overflowed - assert earliest < epoch < latest - - @pytest.mark.parametrize("time_unit", [ - "Y", "M", - pytest.param("W", marks=pytest.mark.xfail(reason="gh-13197")), - "D", "h", "m", - "s", "ms", "us", "ns", "ps", "fs", "as", - pytest.param("10D", marks=pytest.mark.xfail(reason="similar to gh-13197")), - ]) - @pytest.mark.parametrize("sign", [-1, 1]) - def test_limit_str_roundtrip(self, time_unit, sign): - """ - Limits should roundtrip when converted to strings. - - This tests the conversion to and from npy_datetimestruct. - """ - # TODO: add absolute (gold standard) time span limit strings - limit = np.datetime64(np.iinfo(np.int64).max * sign, time_unit) - - # Convert to string and back. Explicit unit needed since the day and - # week reprs are not distinguishable. - limit_via_str = np.datetime64(str(limit), time_unit) - assert limit_via_str == limit - - -class TestDateTimeData: - - def test_basic(self): - a = np.array(['1980-03-23'], dtype=np.datetime64) - assert_equal(np.datetime_data(a.dtype), ('D', 1)) - - def test_bytes(self): - # byte units are converted to unicode - dt = np.datetime64('2000', (b'ms', 5)) - assert np.datetime_data(dt.dtype) == ('ms', 5) - - dt = np.datetime64('2000', b'5ms') - assert np.datetime_data(dt.dtype) == ('ms', 5) - - def test_non_ascii(self): - # μs is normalized to μ - dt = np.datetime64('2000', ('μs', 5)) - assert np.datetime_data(dt.dtype) == ('us', 5) - - dt = np.datetime64('2000', '5μs') - assert np.datetime_data(dt.dtype) == ('us', 5) - - -def test_comparisons_return_not_implemented(): - # GH#17017 - - class custom: - __array_priority__ = 10000 - - obj = custom() - - dt = np.datetime64('2000', 'ns') - td = dt - dt - - for item in [dt, td]: - assert item.__eq__(obj) is NotImplemented - assert item.__ne__(obj) is NotImplemented - assert item.__le__(obj) is NotImplemented - assert item.__lt__(obj) is NotImplemented - assert item.__ge__(obj) is NotImplemented - assert item.__gt__(obj) is NotImplemented diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/gh23598.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/gh23598.f90 deleted file mode 100644 index e0dffb5ef29e3d5ba853ff4dfeda57b2bed6a9dc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/gh23598.f90 +++ /dev/null @@ -1,4 +0,0 @@ -integer function intproduct(a, b) result(res) - integer, intent(in) :: a, b - res = a*b -end function diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ndarray_misc.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ndarray_misc.py deleted file mode 100644 index 6beacc5d7cc9e586f9d6abe5ac5d3a62381ffc59..0000000000000000000000000000000000000000 
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/typing/tests/data/pass/ndarray_misc.py +++ /dev/null @@ -1,185 +0,0 @@ -""" -Tests for miscellaneous (non-magic) ``np.ndarray``/``np.generic`` methods. - -More extensive tests are performed for the methods' -function-based counterpart in `../from_numeric.py`. - -""" - -from __future__ import annotations - -import operator -from typing import cast, Any - -import numpy as np - -class SubClass(np.ndarray): ... - -i4 = np.int32(1) -A: np.ndarray[Any, np.dtype[np.int32]] = np.array([[1]], dtype=np.int32) -B0 = np.empty((), dtype=np.int32).view(SubClass) -B1 = np.empty((1,), dtype=np.int32).view(SubClass) -B2 = np.empty((1, 1), dtype=np.int32).view(SubClass) -C: np.ndarray[Any, np.dtype[np.int32]] = np.array([0, 1, 2], dtype=np.int32) -D = np.ones(3).view(SubClass) - -i4.all() -A.all() -A.all(axis=0) -A.all(keepdims=True) -A.all(out=B0) - -i4.any() -A.any() -A.any(axis=0) -A.any(keepdims=True) -A.any(out=B0) - -i4.argmax() -A.argmax() -A.argmax(axis=0) -A.argmax(out=B0) - -i4.argmin() -A.argmin() -A.argmin(axis=0) -A.argmin(out=B0) - -i4.argsort() -A.argsort() - -i4.choose([()]) -_choices = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]], dtype=np.int32) -C.choose(_choices) -C.choose(_choices, out=D) - -i4.clip(1) -A.clip(1) -A.clip(None, 1) -A.clip(1, out=B2) -A.clip(None, 1, out=B2) - -i4.compress([1]) -A.compress([1]) -A.compress([1], out=B1) - -i4.conj() -A.conj() -B0.conj() - -i4.conjugate() -A.conjugate() -B0.conjugate() - -i4.cumprod() -A.cumprod() -A.cumprod(out=B1) - -i4.cumsum() -A.cumsum() -A.cumsum(out=B1) - -i4.max() -A.max() -A.max(axis=0) -A.max(keepdims=True) -A.max(out=B0) - -i4.mean() -A.mean() -A.mean(axis=0) -A.mean(keepdims=True) -A.mean(out=B0) - -i4.min() -A.min() -A.min(axis=0) -A.min(keepdims=True) -A.min(out=B0) - -i4.newbyteorder() -A.newbyteorder() -B0.newbyteorder('|') - -i4.prod() -A.prod() -A.prod(axis=0) -A.prod(keepdims=True) -A.prod(out=B0) - -i4.ptp() -A.ptp() -A.ptp(axis=0) -A.ptp(keepdims=True) -A.astype(int).ptp(out=B0) - -i4.round() -A.round() -A.round(out=B2) - -i4.repeat(1) -A.repeat(1) -B0.repeat(1) - -i4.std() -A.std() -A.std(axis=0) -A.std(keepdims=True) -A.std(out=B0.astype(np.float64)) - -i4.sum() -A.sum() -A.sum(axis=0) -A.sum(keepdims=True) -A.sum(out=B0) - -i4.take(0) -A.take(0) -A.take([0]) -A.take(0, out=B0) -A.take([0], out=B1) - -i4.var() -A.var() -A.var(axis=0) -A.var(keepdims=True) -A.var(out=B0) - -A.argpartition([0]) - -A.diagonal() - -A.dot(1) -A.dot(1, out=B2) - -A.nonzero() - -C.searchsorted(1) - -A.trace() -A.trace(out=B0) - -void = cast(np.void, np.array(1, dtype=[("f", np.float64)]).take(0)) -void.setfield(10, np.float64) - -A.item(0) -C.item(0) - -A.ravel() -C.ravel() - -A.flatten() -C.flatten() - -A.reshape(1) -C.reshape(3) - -int(np.array(1.0, dtype=np.float64)) -int(np.array("1", dtype=np.str_)) - -float(np.array(1.0, dtype=np.float64)) -float(np.array("1", dtype=np.str_)) - -complex(np.array(1.0, dtype=np.float64)) - -operator.index(np.array(1, dtype=np.int64)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/error.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/error.py deleted file mode 100644 index 2928ef6aa6ee00c6bc974227bde4b1b4c6a3b6eb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/error.py +++ /dev/null @@ -1,169 +0,0 @@ -import openai - - -class OpenAIError(Exception): - def __init__( - self, - message=None, - 
http_body=None, - http_status=None, - json_body=None, - headers=None, - code=None, - ): - super(OpenAIError, self).__init__(message) - - if http_body and hasattr(http_body, "decode"): - try: - http_body = http_body.decode("utf-8") - except BaseException: - http_body = ( - "" - ) - - self._message = message - self.http_body = http_body - self.http_status = http_status - self.json_body = json_body - self.headers = headers or {} - self.code = code - self.request_id = self.headers.get("request-id", None) - self.error = self.construct_error_object() - self.organization = self.headers.get("openai-organization", None) - - def __str__(self): - msg = self._message or "" - if self.request_id is not None: - return "Request {0}: {1}".format(self.request_id, msg) - else: - return msg - - # Returns the underlying `Exception` (base class) message, which is usually - # the raw message returned by OpenAI's API. This was previously available - # in python2 via `error.message`. Unlike `str(error)`, it omits "Request - # req_..." from the beginning of the string. - @property - def user_message(self): - return self._message - - def __repr__(self): - return "%s(message=%r, http_status=%r, request_id=%r)" % ( - self.__class__.__name__, - self._message, - self.http_status, - self.request_id, - ) - - def construct_error_object(self): - if ( - self.json_body is None - or not isinstance(self.json_body, dict) - or "error" not in self.json_body - or not isinstance(self.json_body["error"], dict) - ): - return None - - return openai.api_resources.error_object.ErrorObject.construct_from( - self.json_body["error"] - ) - - -class APIError(OpenAIError): - pass - - -class TryAgain(OpenAIError): - pass - - -class Timeout(OpenAIError): - pass - - -class APIConnectionError(OpenAIError): - def __init__( - self, - message, - http_body=None, - http_status=None, - json_body=None, - headers=None, - code=None, - should_retry=False, - ): - super(APIConnectionError, self).__init__( - message, http_body, http_status, json_body, headers, code - ) - self.should_retry = should_retry - - -class InvalidRequestError(OpenAIError): - def __init__( - self, - message, - param, - code=None, - http_body=None, - http_status=None, - json_body=None, - headers=None, - ): - super(InvalidRequestError, self).__init__( - message, http_body, http_status, json_body, headers, code - ) - self.param = param - - def __repr__(self): - return "%s(message=%r, param=%r, code=%r, http_status=%r, " "request_id=%r)" % ( - self.__class__.__name__, - self._message, - self.param, - self.code, - self.http_status, - self.request_id, - ) - - def __reduce__(self): - return type(self), ( - self._message, - self.param, - self.code, - self.http_body, - self.http_status, - self.json_body, - self.headers, - ) - - -class AuthenticationError(OpenAIError): - pass - - -class PermissionError(OpenAIError): - pass - - -class RateLimitError(OpenAIError): - pass - - -class ServiceUnavailableError(OpenAIError): - pass - - -class InvalidAPIType(OpenAIError): - pass - - -class SignatureVerificationError(OpenAIError): - def __init__(self, message, sig_header, http_body=None): - super(SignatureVerificationError, self).__init__(message, http_body) - self.sig_header = sig_header - - def __reduce__(self): - return type(self), ( - self._message, - self.sig_header, - self.http_body, - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/freeze.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/freeze.py 
deleted file mode 100644 index 456554085df1bca271a261ee5e0b05ff413edafb..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_internal/operations/freeze.py +++ /dev/null @@ -1,254 +0,0 @@ -import collections -import logging -import os -from typing import Container, Dict, Iterable, Iterator, List, NamedTuple, Optional, Set - -from pip._vendor.packaging.utils import canonicalize_name -from pip._vendor.packaging.version import Version - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.metadata import BaseDistribution, get_environment -from pip._internal.req.constructors import ( - install_req_from_editable, - install_req_from_line, -) -from pip._internal.req.req_file import COMMENT_RE -from pip._internal.utils.direct_url_helpers import direct_url_as_pep440_direct_reference - -logger = logging.getLogger(__name__) - - -class _EditableInfo(NamedTuple): - requirement: str - comments: List[str] - - -def freeze( - requirement: Optional[List[str]] = None, - local_only: bool = False, - user_only: bool = False, - paths: Optional[List[str]] = None, - isolated: bool = False, - exclude_editable: bool = False, - skip: Container[str] = (), -) -> Iterator[str]: - installations: Dict[str, FrozenRequirement] = {} - - dists = get_environment(paths).iter_installed_distributions( - local_only=local_only, - skip=(), - user_only=user_only, - ) - for dist in dists: - req = FrozenRequirement.from_dist(dist) - if exclude_editable and req.editable: - continue - installations[req.canonical_name] = req - - if requirement: - # the options that don't get turned into an InstallRequirement - # should only be emitted once, even if the same option is in multiple - # requirements files, so we need to keep track of what has been emitted - # so that we don't emit it again if it's seen again - emitted_options: Set[str] = set() - # keep track of which files a requirement is in so that we can - # give an accurate warning if a requirement appears multiple times. 
- req_files: Dict[str, List[str]] = collections.defaultdict(list) - for req_file_path in requirement: - with open(req_file_path) as req_file: - for line in req_file: - if ( - not line.strip() - or line.strip().startswith("#") - or line.startswith( - ( - "-r", - "--requirement", - "-f", - "--find-links", - "-i", - "--index-url", - "--pre", - "--trusted-host", - "--process-dependency-links", - "--extra-index-url", - "--use-feature", - ) - ) - ): - line = line.rstrip() - if line not in emitted_options: - emitted_options.add(line) - yield line - continue - - if line.startswith("-e") or line.startswith("--editable"): - if line.startswith("-e"): - line = line[2:].strip() - else: - line = line[len("--editable") :].strip().lstrip("=") - line_req = install_req_from_editable( - line, - isolated=isolated, - ) - else: - line_req = install_req_from_line( - COMMENT_RE.sub("", line).strip(), - isolated=isolated, - ) - - if not line_req.name: - logger.info( - "Skipping line in requirement file [%s] because " - "it's not clear what it would install: %s", - req_file_path, - line.strip(), - ) - logger.info( - " (add #egg=PackageName to the URL to avoid" - " this warning)" - ) - else: - line_req_canonical_name = canonicalize_name(line_req.name) - if line_req_canonical_name not in installations: - # either it's not installed, or it is installed - # but has been processed already - if not req_files[line_req.name]: - logger.warning( - "Requirement file [%s] contains %s, but " - "package %r is not installed", - req_file_path, - COMMENT_RE.sub("", line).strip(), - line_req.name, - ) - else: - req_files[line_req.name].append(req_file_path) - else: - yield str(installations[line_req_canonical_name]).rstrip() - del installations[line_req_canonical_name] - req_files[line_req.name].append(req_file_path) - - # Warn about requirements that were included multiple times (in a - # single requirements file or in different requirements files). - for name, files in req_files.items(): - if len(files) > 1: - logger.warning( - "Requirement %s included multiple times [%s]", - name, - ", ".join(sorted(set(files))), - ) - - yield ("## The following requirements were added by pip freeze:") - for installation in sorted(installations.values(), key=lambda x: x.name.lower()): - if installation.canonical_name not in skip: - yield str(installation).rstrip() - - -def _format_as_name_version(dist: BaseDistribution) -> str: - if isinstance(dist.version, Version): - return f"{dist.raw_name}=={dist.version}" - return f"{dist.raw_name}==={dist.version}" - - -def _get_editable_info(dist: BaseDistribution) -> _EditableInfo: - """ - Compute and return values (req, comments) for use in - FrozenRequirement.from_dist(). 
- """ - editable_project_location = dist.editable_project_location - assert editable_project_location - location = os.path.normcase(os.path.abspath(editable_project_location)) - - from pip._internal.vcs import RemoteNotFoundError, RemoteNotValidError, vcs - - vcs_backend = vcs.get_backend_for_dir(location) - - if vcs_backend is None: - display = _format_as_name_version(dist) - logger.debug( - 'No VCS found for editable requirement "%s" in: %r', - display, - location, - ) - return _EditableInfo( - requirement=location, - comments=[f"# Editable install with no version control ({display})"], - ) - - vcs_name = type(vcs_backend).__name__ - - try: - req = vcs_backend.get_src_requirement(location, dist.raw_name) - except RemoteNotFoundError: - display = _format_as_name_version(dist) - return _EditableInfo( - requirement=location, - comments=[f"# Editable {vcs_name} install with no remote ({display})"], - ) - except RemoteNotValidError as ex: - display = _format_as_name_version(dist) - return _EditableInfo( - requirement=location, - comments=[ - f"# Editable {vcs_name} install ({display}) with either a deleted " - f"local remote or invalid URI:", - f"# '{ex.url}'", - ], - ) - except BadCommand: - logger.warning( - "cannot determine version of editable source in %s " - "(%s command not found in path)", - location, - vcs_backend.name, - ) - return _EditableInfo(requirement=location, comments=[]) - except InstallationError as exc: - logger.warning("Error when trying to get requirement for VCS system %s", exc) - else: - return _EditableInfo(requirement=req, comments=[]) - - logger.warning("Could not determine repository location of %s", location) - - return _EditableInfo( - requirement=location, - comments=["## !! Could not determine repository location"], - ) - - -class FrozenRequirement: - def __init__( - self, - name: str, - req: str, - editable: bool, - comments: Iterable[str] = (), - ) -> None: - self.name = name - self.canonical_name = canonicalize_name(name) - self.req = req - self.editable = editable - self.comments = comments - - @classmethod - def from_dist(cls, dist: BaseDistribution) -> "FrozenRequirement": - editable = dist.editable - if editable: - req, comments = _get_editable_info(dist) - else: - comments = [] - direct_url = dist.direct_url - if direct_url: - # if PEP 610 metadata is present, use it - req = direct_url_as_pep440_direct_reference(direct_url, dist.raw_name) - else: - # name==version requirement - req = _format_as_name_version(dist) - - return cls(dist.raw_name, req, editable, comments=comments) - - def __str__(self) -> str: - req = self.req - if self.editable: - req = f"-e {req}" - return "\n".join(list(self.comments) + [str(req)]) + "\n" diff --git a/spaces/pycui/RealChar/client/web/src/components/Footer/style.css b/spaces/pycui/RealChar/client/web/src/components/Footer/style.css deleted file mode 100644 index 5cb7c9f600eb85a506437b3ed13a02902286d873..0000000000000000000000000000000000000000 --- a/spaces/pycui/RealChar/client/web/src/components/Footer/style.css +++ /dev/null @@ -1,92 +0,0 @@ -footer { - padding: 10px; - display: flex; - justify-content: center; - align-items: center; - flex-direction: column; -} - -.copyright { - color:#777; - font-size: 15px; -} - -.rounded-social-buttons { - text-align: center; -} - -.rounded-social-buttons .social-button { - display: inline-block; - position: relative; - cursor: pointer; - width: 3.125rem; - height: 3.125rem; - border: 0.125rem solid transparent; - padding: 0; - text-decoration: none; - text-align: center; - color: 
#fefefe; - font-size: 1.5625rem; - font-weight: normal; - line-height: 2em; - border-radius: 1.6875rem; - transition: all 0.5s ease; - margin-right: 0.25rem; - margin-bottom: 0.25rem; -} - -.rounded-social-buttons .social-button:hover, .rounded-social-buttons .social-button:focus { - -webkit-transform: rotate(360deg); - -ms-transform: rotate(360deg); - transform: rotate(360deg); -} - -.rounded-social-buttons .social-button.facebook { - background: #3b5998; -} - -.rounded-social-buttons .social-button.facebook:hover, .rounded-social-buttons .social-button.facebook:focus { - color: #3b5998; - background: #fefefe; - border-color: #3b5998; -} - -.rounded-social-buttons .social-button.twitter { - background: #bccffe1a; -} - -.rounded-social-buttons .social-button.twitter:hover, .rounded-social-buttons .social-button.twitter:focus { - color: #55acee; - background: #fefefe; - border-color: #55acee; -} - -.rounded-social-buttons .social-button.linkedin { - background: #007bb5; -} - -.rounded-social-buttons .social-button.linkedin:hover, .rounded-social-buttons .social-button.linkedin:focus { - color: #007bb5; - background: #fefefe; - border-color: #007bb5; -} - -.rounded-social-buttons .social-button.github { - background: #bccffe1a; -} - -.rounded-social-buttons .social-button.github:hover, .rounded-social-buttons .social-button.github:focus { - color: #bb0000; - background: #fefefe; - border-color: #bb0000; -} - -.rounded-social-buttons .social-button.discord { - background: #bccffe1a; -} - -.rounded-social-buttons .social-button.discord:hover, .rounded-social-buttons .social-button.discord:focus { - color: #125688; - background: #fefefe; - border-color: #125688; -} \ No newline at end of file diff --git a/spaces/pytorch/Densenet/app.py b/spaces/pytorch/Densenet/app.py deleted file mode 100644 index 953670c9652848f19ec20366938d33411ab12e2a..0000000000000000000000000000000000000000 --- a/spaces/pytorch/Densenet/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import os -import torch -import gradio as gr -from PIL import Image -from torchvision import transforms - -torch.hub.download_url_to_file("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg") - -model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet121', pretrained=True) -# or any of these variants -# model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet169', pretrained=True) -# model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet201', pretrained=True) -# model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet161', pretrained=True) -model.eval() - -# Download ImageNet labels -os.system("wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt") - -def inference(input_image): - - preprocess = transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ]) - input_tensor = preprocess(input_image) - input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model - - # move the input and model to GPU for speed if available - if torch.cuda.is_available(): - input_batch = input_batch.to('cuda') - model.to('cuda') - - with torch.no_grad(): - output = model(input_batch) - # Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes - # The output has unnormalized scores. To get probabilities, you can run a softmax on it. 
- probabilities = torch.nn.functional.softmax(output[0], dim=0) - - # Read the categories - with open("imagenet_classes.txt", "r") as f: - categories = [s.strip() for s in f.readlines()] - # Show top categories per image - top5_prob, top5_catid = torch.topk(probabilities, 5) - result = {} - for i in range(top5_prob.size(0)): - result[categories[top5_catid[i]]] = top5_prob[i].item() - return result - -inputs = gr.inputs.Image(type='pil') -outputs = gr.outputs.Label(type="confidences",num_top_classes=5) - -title = "DENSENET" -description = "Gradio demo for Dense Convolutional Network (DenseNet), connects each layer to every other layer in a feed-forward fashion. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "

Densely Connected Convolutional Networks | Github Repo

" - -examples = [ - ['dog.jpg'] -] -gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=examples, analytics_enabled=False).launch(debug=True) \ No newline at end of file diff --git "a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" "b/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" deleted file mode 100644 index 1d37b8b6520dc5649cca1797c9084ef6b41c2724..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\350\257\242\351\227\256\345\244\232\344\270\252\345\244\247\350\257\255\350\250\200\346\250\241\345\236\213.py" +++ /dev/null @@ -1,30 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 同时问询(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,如温度和top_p等,一般原样传递下去就行 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append((txt, "正在同时咨询gpt-3.5(openai)和gpt-4(api2d)……")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - - # llm_kwargs['llm_model'] = 'chatglm&gpt-3.5-turbo&api2d-gpt-3.5-turbo' # 支持任意数量的llm接口,用&符号分隔 - llm_kwargs['llm_model'] = 'gpt-3.5-turbo&api2d-gpt-4' # 支持任意数量的llm接口,用&符号分隔 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=txt, inputs_show_user=txt, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=history, - sys_prompt=system_prompt, - retry_times_at_unknown_error=0 - ) - - history.append(txt) - history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 \ No newline at end of file diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" deleted file mode 100644 index f1a89a7ec94f30a00b58ad85661a876efc0ac4d2..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/gpt-academic/crazy_functions/\347\220\206\350\247\243PDF\346\226\207\346\241\243\345\206\205\345\256\271.py" +++ /dev/null @@ -1,114 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption -from .crazy_utils import read_and_clean_pdf_text -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析PDF(file_name, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import tiktoken - print('begin analysis on:', file_name) - - ############################## <第 0 步,切割PDF> ################################## - # 递归地切割PDF文件,每一块(尽量是完整的一个section,比如introduction,experiment等,必要时再进行切割) - # 的长度必须小于 2500 个 Token - file_content, page_one = read_and_clean_pdf_text(file_name) # (尝试)按照章节切割PDF - file_content = file_content.encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars - page_one = str(page_one).encode('utf-8', 'ignore').decode() # avoid reading non-utf8 chars - - TOKEN_LIMIT_PER_FRAGMENT = 2500 - - from .crazy_utils import 
breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - enc = model_info["gpt-3.5-turbo"]['tokenizer'] - def get_token_num(txt): return len(enc.encode(txt, disallowed_special=())) - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT) - page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4) - # 为了更好的效果,我们剥离Introduction之后的部分(如果有) - paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0] - - ############################## <第 1 步,从摘要中提取高价值信息,放到history中> ################################## - final_results = [] - final_results.append(paper_meta) - - ############################## <第 2 步,迭代地历遍整个文章,提取精炼信息> ################################## - i_say_show_user = f'首先你在英文语境下通读整篇论文。'; gpt_say = "[Local Message] 收到。" # 用户提示 - chatbot.append([i_say_show_user, gpt_say]); yield from update_ui(chatbot=chatbot, history=[]) # 更新UI - - iteration_results = [] - last_iteration_result = paper_meta # 初始值是摘要 - MAX_WORD_TOTAL = 4096 - n_fragment = len(paper_fragments) - if n_fragment >= 20: print('文章极长,不能达到预期效果') - for i in range(n_fragment): - NUM_OF_WORD = MAX_WORD_TOTAL // n_fragment - i_say = f"Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i]}" - i_say_show_user = f"[{i+1}/{n_fragment}] Read this section, recapitulate the content of this section with less than {NUM_OF_WORD} words: {paper_fragments[i][:200]}" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive(i_say, i_say_show_user, # i_say=真正给chatgpt的提问, i_say_show_user=给用户看的提问 - llm_kwargs, chatbot, - history=["The main idea of the previous section is?", last_iteration_result], # 迭代上一次的结果 - sys_prompt="Extract the main idea of this section." 
# 提示 - ) - iteration_results.append(gpt_say) - last_iteration_result = gpt_say - - ############################## <第 3 步,整理history> ################################## - final_results.extend(iteration_results) - final_results.append(f'接下来,你是一名专业的学术教授,利用以上信息,使用中文回答我的问题。') - # 接下来两句话只显示在界面上,不起实际作用 - i_say_show_user = f'接下来,你是一名专业的学术教授,利用以上信息,使用中文回答我的问题。'; gpt_say = "[Local Message] 收到。" - chatbot.append([i_say_show_user, gpt_say]) - - ############################## <第 4 步,设置一个token上限,防止回答时Token溢出> ################################## - from .crazy_utils import input_clipping - _, final_results = input_clipping("", final_results, max_token_limit=3200) - yield from update_ui(chatbot=chatbot, history=final_results) # 注意这里的历史记录被替代了 - - -@CatchException -def 理解PDF文档内容标准文件输入(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "理解PDF论文内容,并且将结合上下文内容,进行学术解答。函数插件贡献者: Hanzoe, binary-husky"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import fitz - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": - txt = '空空如也的输入栏' - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, - a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - txt = file_manifest[0] - # 开始正式执行任务 - yield from 解析PDF(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Auto Toxic Fog Conquista 2 0 Des HOT!.md b/spaces/quidiaMuxgu/Expedit-SAM/Auto Toxic Fog Conquista 2 0 Des HOT!.md deleted file mode 100644 index 82b626c84335f50e5702f6c760268b7d76326c8b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Auto Toxic Fog Conquista 2 0 Des HOT!.md +++ /dev/null @@ -1,26 +0,0 @@ -

Auto Toxic Fog Conquista 2 0 Des


    Download Zip: https://geags.com/2uCs8Q
    



- -* How to use Auto Toxic Fog Conquista 2 ? - -* Technical Information. - -* Auto Toxic Fog Conquista 2 ? - -* 2 pictures. - - - -* Check the 【Product Manual】 - - *** How to use Auto Toxic Fog Conquista 2 ? - - *** Technical Information - - *** 2 pictures. - -![**]( - -![**]( 4fefd39f24
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Castlevania.Lords.of.Shadow.2.Repack-z10yded PC.md b/spaces/quidiaMuxgu/Expedit-SAM/Castlevania.Lords.of.Shadow.2.Repack-z10yded PC.md deleted file mode 100644 index d9b9b3a0f84ff9c248443534a6779540d3a0b7e5..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Castlevania.Lords.of.Shadow.2.Repack-z10yded PC.md +++ /dev/null @@ -1,6 +0,0 @@ -

Castlevania.Lords.of.Shadow.2.Repack-z10yded PC


    DOWNLOAD: https://geags.com/2uCr9R
    



- -Title: Castlevania Lords of Shadow Mirror of Fate HD MULTI7 RePack By R G Revenants NASWARI+ZOHAIB; Category: PC Game; Language: English; Rating: 4.3; Votes: 722. 650.5 MB ... Repack-z10yded Cheat Engine. ... Day 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31. Year. 1fdad05405
-
-
-

diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Clave Cromatica Sony Vegas 12 Crack.md b/spaces/quidiaMuxgu/Expedit-SAM/Clave Cromatica Sony Vegas 12 Crack.md deleted file mode 100644 index 7ce84859b0ede69c5d33d2a8a07fb3678638ae67..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Clave Cromatica Sony Vegas 12 Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

clave cromatica sony vegas 12 crack


    Download File: https://geags.com/2uCseJ
    



-
-... 0.5 https://www.taringa.net/+fifa_12/aporte-faces-fifa-14_1bi950 weekly 0.5 ... .taringa.net/+tv_peliculas_series/television-sony-con-internet_1biar0 weekly 0.5 ... .taringa.net/+videos/video-oficial-de-nacho-vegas-la-gran-broma-final_1bip90 ... -son-las-claves-de-las-mentes-mas-brillantes-del-mundo-salud-y-b_1bjek0 ... 1fdad05405
-
-
-

diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/utils/__init__.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Descargar Peppa Pig Y Las Botas De Oro.md b/spaces/raedeXanto/academic-chatgpt-beta/Descargar Peppa Pig Y Las Botas De Oro.md deleted file mode 100644 index e64b97c567a7bb95e1433eabf4379c64ba96011d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Descargar Peppa Pig Y Las Botas De Oro.md +++ /dev/null @@ -1,21 +0,0 @@ - -

    How to download Peppa Pig Y Las Botas De Oro on your phone
    

-

    Peppa Pig Y Las Botas De Oro is a fun app based on the special episode of the children's television series. In this app, you can join Peppa and her friends on an adventure full of games and surprises. You can dress Peppa in different outfits, fly a rocket through space, visit Miss Rabbit's shop on the moon, feed the ducks, and sing the muddy-puddles song. You can also collect pairs of golden boots that unlock special gifts.
    

-

    To download Peppa Pig Y Las Botas De Oro to your phone, you need an Android device running version 5.1 or later. The app takes up 116.3 MB of space and is available in more than 70 languages. You can download it for free from the Play Store[^1^] or from other websites such as APKCombo[^2^] or Sway[^3^]. You can also listen to a clip of the episode's audio on SoundCloud[^4^].
    

-

Descargar Peppa Pig Y Las Botas De Oro


Download Zip »»» https://tinourl.com/2uL02x



-

    Peppa Pig Y Las Botas De Oro is a safe, ad-free app designed for preschool children. It is a fun and educational way to explore the world of Peppa Pig and her friends. What are you waiting for? Download it and enjoy it.
    

- -

    What is Peppa Pig?
    

-

    Peppa Pig is a British television series created by Neville Astley and Mark Baker. It premiered in 2004 and has been a worldwide success ever since. The series follows the adventures of Peppa, a four-year-old piglet who lives with her little brother George, her mum, and her dad. Peppa loves playing with her friends, going to playgroup, practicing ballet, and jumping in muddy puddles. Each episode lasts about five minutes and has a light, family-friendly comic tone.
    

-

    Peppa Pig has more than 300 episodes and has been translated into more than 40 languages. It also has films, books, toys, video games, and apps such as Peppa Pig Y Las Botas De Oro. Peppa Pig is one of the most popular and best-loved children's brands in the world. According to Entertainment One, the company that produces it, Peppa Pig has more than 1.5 billion fans worldwide.
    

- -

    What benefits does Peppa Pig offer children?
    

-

    Peppa Pig is a series that offers many benefits for preschool children. Some of them are:
    

-
    -
    • It stimulates imagination and creativity. Peppa Pig lives in a world full of fantasy and fun, where she can travel to space, to the past, or to the future. Children can imitate her games and make up their own stories.
    
  • -
    • It encourages learning and development. Peppa Pig teaches children basic concepts such as colors, numbers, shapes, and letters. It also helps them develop social skills such as friendship, respect, cooperation, and empathy.
    
  • -
    • It conveys positive family values. Peppa Pig shows a close, happy family whose members love and support one another. It also highlights the value of nature, caring for animals, diversity, and tolerance.
    
  • -
-

    Peppa Pig is a series that entertains and educates preschool children. With apps such as Peppa Pig Y Las Botas De Oro, children can interact with their favorite characters and enjoy a playful, enriching experience.
    

cec2833e83
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download keygen xforce for AutoCAD Revit LT Suite 2018 activation Tips and tricks for a successful installation.md b/spaces/raedeXanto/academic-chatgpt-beta/Download keygen xforce for AutoCAD Revit LT Suite 2018 activation Tips and tricks for a successful installation.md deleted file mode 100644 index 9e4318d76d637a0c44e2ff4dbd24dcc7333e98ae..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download keygen xforce for AutoCAD Revit LT Suite 2018 activation Tips and tricks for a successful installation.md +++ /dev/null @@ -1,93 +0,0 @@ - -

How to Download Keygen Xforce for AutoCAD Revit LT Suite 2018 Activation

-

    If you are looking for software that can help you create stunning architectural designs, you might want to consider AutoCAD Revit LT Suite 2018. This software is a combination of AutoCAD LT, a powerful drafting tool, and Revit LT, a simplified building information modeling (BIM) tool. With AutoCAD Revit LT Suite 2018, you can create 2D drawings and 3D models, collaborate with other designers, and produce high-quality documentation.
    

-

downloadkeygenxforceforAutoCADRevitLTSuite2018activation


Download File ☆☆☆ https://tinourl.com/2uKZsS



-

However, to use this software, you need to activate it with a valid license. This can be costly and time-consuming, especially if you are a student or a freelancer who needs to use it for a short period of time. That's why some people prefer to use a keygen xforce, which is a program that can generate activation codes for any Autodesk product, including AutoCAD Revit LT Suite 2018.

-

In this article, we will show you how to download keygen xforce for AutoCAD Revit LT Suite 2018 activation, and how to use it to activate your software. Follow these simple steps and enjoy your software without any hassle.

-

What is Keygen Xforce?

-

    Keygen xforce is a program that can create serial numbers and product keys for any Autodesk product. It works by cracking the algorithm that Autodesk uses to generate its licenses. By using keygen xforce, you can bypass the online verification process and activate your software offline.
    

-

Some of the features of keygen xforce are:

-
    -
  • It supports all Autodesk products from 2006 to 2022.
  • -
  • It works for both Windows and Mac operating systems.
  • -
  • It is easy to use and has a user-friendly interface.
  • -
  • It is safe and virus-free.
  • -
  • It does not require any installation or registration.
  • -
-

How to Download Keygen Xforce for AutoCAD Revit LT Suite 2018?

-

To download keygen xforce for AutoCAD Revit LT Suite 2018 activation, you need to follow these steps:

-

Step 1: Visit the official website of keygen xforce

-

The first step is to visit the official website of keygen xforce, which is https://www.xforcekeygen.net/. This website has all the latest versions of keygen xforce for different Autodesk products. You can also find some useful tips and tutorials on how to use it.

-

    

-

Step 2: Choose your product and version

-

The next step is to choose your product and version from the list of available options. In this case, you need to select AutoCAD Revit LT Suite 2018 from the drop-down menu. You can also type in the name of your product in the search box if you cannot find it in the list.

-

Step 3: Click on the download button

-

The third step is to click on the download button that appears next to your product name. This will start downloading the keygen xforce file for AutoCAD Revit LT Suite 2018 activation. The file size is about 4 MB, so it should not take long to download.

-

Step 4: Extract the file and run the setup

-

The last step is to extract the file that you have downloaded using a program like WinRAR or WinZip. You will see a folder named "X-Force_2018" that contains two files: "x-force_2018.exe" and "xf-adsk2018_x64.exe". The first file is for Windows 32-bit systems, while the second file is for Windows 64-bit systems. Depending on your system type, run one of these files as administrator by right-clicking on it and choosing "Run as administrator". This will launch the keygen xforce program for AutoCAD Revit LT Suite 2018 activation.

-

How to Activate AutoCAD Revit LT Suite 2018 with Keygen Xforce?

-

To activate AutoCAD Revit LT Suite 2018 with keygen xforce, you need to follow these steps:

-

Step 1: Launch AutoCAD Revit LT Suite 2018 and enter your serial number and product key

-

    The first step is to launch AutoCAD Revit LT Suite 2018 and enter your serial number and product key when the installer asks for them.

    Step 2: Click on "Activate" and choose "I have an activation code from Autodesk"
    

The second step is to click on "Activate" at the bottom of the screen. You will see another screen that asks you how you want to activate your software. Choose "I have an activation code from Autodesk" from the options. This will open a window that shows your request code.

-

Step 3: Open keygen xforce and click on patch

-

The third step is to open keygen xforce that you have downloaded earlier. You will see a window that has several buttons and fields. Make sure that you have selected "AutoCAD Revit LT Suite" as your product name at the top left corner. Then click on "Patch" at the bottom left corner. You will see a message that says "Successfully patched". This means that keygen xforce has modified some files in your software installation folder.

-

Step 4: Copy the request code from AutoCAD Revit LT Suite 2018 and paste it into keygen xforce

-

The fourth step is to copy the request code that appears in AutoCAD Revit LT Suite 2018 and paste it into keygen xforce. You can do this by selecting the request code with your mouse, pressing Ctrl+C to copy it, switching to keygen xforce window, clicking on the empty field next to "Request", and pressing Ctrl+V to paste it.

-

Step 5: Click on generate and copy the activation code from keygen xforce

-

The fifth step is to click on "Generate" at the bottom right corner of keygen xforce window. This will create an activation code for your software. You can copy this code by clicking on it with your mouse, pressing Ctrl+C to copy it, or clicking on the "Copy" button next to it.

-

Step 6: Paste the activation code into AutoCAD Revit LT Suite 2018 and click on next

-

The last step is to paste the activation code into AutoCAD Revit LT Suite 2018 and click on "Next". You can do this by switching back to AutoCAD Revit LT Suite 2018 window, clicking on the empty field next to "I have an activation code from Autodesk", and pressing Ctrl+V to paste it. Alternatively, you can also right-click on the field and choose "Paste" from the menu. After pasting the code, click on "Next" to finish the activation process.

-

Conclusion

-

Congratulations! You have successfully downloaded keygen xforce for AutoCAD Revit LT Suite 2018 activation and used it to activate your software. Now you can enjoy all the features and benefits of AutoCAD Revit LT Suite 2018 without any limitations or restrictions. You can create amazing architectural designs, collaborate with other professionals, and produce high-quality documentation with ease.

-

If you found this article helpful, please share it with your friends and colleagues who might need it. Also, feel free to leave a comment below if you have any questions or feedback. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about keygen xforce for AutoCAD Revit LT Suite 2018 activation:

-

Q: Is keygen xforce legal?

-

A: No, keygen xforce is not legal. It is a software that violates the terms and conditions of Autodesk products. It is also considered as piracy and theft of intellectual property. Using keygen xforce can expose you to legal risks and penalties.

-

Q: Is keygen xforce safe?

-

A: No, keygen xforce is not safe. It is a software that can contain viruses, malware, spyware, or other harmful components. Using keygen xforce can damage your computer system, compromise your data security, or infect your network.

-

Q: What are the alternatives to keygen xforce?

-

A: The best alternative to keygen xforce is to purchase a legitimate license for AutoCAD Revit LT Suite 2018 from Autodesk or its authorized resellers. This way, you can support the developers of the software, get technical support and updates, and avoid any legal or ethical issues.

-

Q: How do I uninstall keygen xforce?

-

A: To uninstall keygen xforce, you need to delete the files that you have downloaded and extracted from your computer. You also need to restore the original files that were modified by keygen xforce in your software installation folder. You can do this by using a system restore point or reinstalling your software.

-

Q: How do I deactivate AutoCAD Revit LT Suite 2018?

-

A: To deactivate AutoCAD Revit LT Suite 2018, you need to open the software and go to Help > About > Product Information > Manage License > Return License. This will release your license and allow you to use it on another computer or device.

-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Fonepaw android data recovery keygen A powerful tool to recover Android data without root.md b/spaces/raedeXanto/academic-chatgpt-beta/Fonepaw android data recovery keygen A powerful tool to recover Android data without root.md deleted file mode 100644 index 888a0d9655c500f9c6abf513df839193908ff94b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Fonepaw android data recovery keygen A powerful tool to recover Android data without root.md +++ /dev/null @@ -1,122 +0,0 @@ - -

Fonepaw Android Data Recovery Keygen: A Complete Guide

-

Have you ever accidentally deleted some important data from your Android device, such as photos, videos, messages, contacts, or documents? If so, you may be wondering how to recover them without spending a fortune on professional data recovery services. One possible solution is to use Fonepaw Android Data Recovery keygen, a software tool that can help you restore your lost data with ease. But what is Fonepaw Android Data Recovery and how does it work? And what is a keygen and why do you need it? In this article, we will answer these questions and show you how to use Fonepaw Android Data Recovery keygen to recover deleted data from Android devices.

-

Fonepaw android data recovery keygen


DOWNLOAD >>> https://tinourl.com/2uL4Wy



-

What is Fonepaw Android Data Recovery?

-

    Fonepaw Android Data Recovery is a powerful and user-friendly program that can recover deleted or lost data from Android devices and SD cards. It can retrieve various types of data, such as photos, videos, audio, WhatsApp messages, contacts, call logs, and documents. It can also back up your data and restore it to other Android devices. Fonepaw Android Data Recovery works with most Android devices from different brands, such as Samsung, Huawei, LG, Sony, Motorola, Google, etc. It supports various Android OS versions from Android 2.1 to Android 10.
    

-

Features of Fonepaw Android Data Recovery

-
    -
  • Contacts: You can recover the information you have recorded in SIM card or internal memory, such as phone number, name, email address, company, job title, website, etc. You can export them as VCF/CSV/HTML files.
  • -
  • Messages: You can recover all deleted text messages, MMS, WhatsApp messages, including the attachments like photos and videos. You can export them as CSV/HTML files.
  • -
  • Call Logs: You can recover the names, phone numbers, call types and durations of your calls. You can export them as HTML files.
  • -
  • WhatsApp: You can recover both individual and group WhatsApp conversations and photos, audios, videos on your Android phone even without backup.
  • -
  • Photos: You can recover all photos saved on your Android devices, including photos you took and pictures you got from the internet. You can recover them in various formats such as .jpg, .png, .gif, .bmp, .webp, .tiff, etc.
  • -
  • Videos & Audio: You can recover the videos and audio files that you downloaded, recorded or got elsewhere. You can recover them in various formats such as .mp4, .3gp, .avi, .mkv, .wav, .mp3, .aac, .flac, etc.
  • -
  • Documents: You can recover the lost documents that you saved in Android devices in Word, Excel, PPT, PDF, HTML, ZIP, RAR, and more formats.
  • -
-

Supported devices and data types

- - - - - - - - - - - - - - - - - - - -
    | Device | Data Type |
    | --- | --- |
    | Samsung | Contacts, Messages, Call Logs, WhatsApp, Photos, Videos, Audio, Documents |
    | Huawei | Contacts, Messages, Call Logs, WhatsApp, Photos, Videos, Audio, Documents |
    | LG | Contacts, Messages, Call Logs, WhatsApp, Photos, Videos, Audio |
    | Sony | Contacts, Messages, Call Logs, WhatsApp, Photos, Videos |
    | Moto | Contacts, Messages, Call Logs |
    | Google | Contacts, Messages |
    | HTC | Contacts |
    | ZTE | Contacts |
    | OPPO | Contacts |
    | Vivo | Contacts |
    | Xiaomi | Contacts |
    | Nokia | - |
    | OnePlus | - |
    | Lenovo | - |
    | Acer | - |
    | Asus | - |
    
Note: The supported data types may vary depending on the device model and Android version.
-

What is a keygen and why do you need it?

-

A keygen is a software tool that can generate registration keys or serial numbers for other software applications. A registration key or serial number is a code that activates the full features of a software application after entering it into the software interface. Without a valid registration key or serial number, you may only be able to use a trial version or a limited version of a software application.

-

You may need a keygen if you want to use a software application without paying for it or if you have lost your original registration key or serial number. However, using a keygen may also have some drawbacks and risks that you should be aware of before using it.

-

The benefits and risks of using a keygen

-
    -
  • The benefits of using a keygen are:
      -
    • You can save money by not paying for the software application.
    • -
    • You can access the full features of the software application without any limitations.
    • -
    • You can use the software application on multiple devices or computers.
    • -
    -
  • -
  • The risks of using a keygen are:
      -
    • You may violate the intellectual property rights of the software developer or publisher.
    • -
    • You may expose your device or computer to malware or viruses that may be hidden in the keygen file or website.
    • -
    • You may damage your device or computer by installing an incompatible or corrupted software application.
    • -
    • You may lose your data or privacy by allowing unauthorized access to your device or computer by the keygen file or website.
    • -
    • You may not receive any technical support or updates from the software developer or publisher.
    • -
    • You may face legal consequences if you are caught using an illegal software application.
    • -
    - Note: We do not endorse or recommend using any illegal software applications or keygens. We provide this information for educational purposes only. Use them at your own risk.
    
-

How to find a reliable keygen for Fonepaw Android Data Recovery?

-

    If you still want to use a keygen for Fonepaw Android Data Recovery despite the risks involved, you should be careful when choosing one from the internet. There are many websites that claim to offer free keygens for various software applications, but some of them may be fake, scams, or malicious. Here are some tips on how to find a reliable keygen for Fonepaw Android Data Recovery:
    

-

    

-
    -
    • Do some research on the website that offers the keygen. Check its reputation, reviews, and ratings from other users. Avoid websites that have poor design, grammar, or spelling errors, as they may indicate low quality or fraud.
    
  • -
  • Scan the keygen file with an antivirus program before downloading or opening it. Make sure it does not contain any malware or viruses that may harm your device or computer.
  • -
    • Create a backup of your device or computer before installing or running the keygen file. This way, you can restore your data if anything goes wrong.
    • Q: Can I recover deleted data from broken or locked Android devices?
    
  • -
  • A: Yes, Fonepaw Android Data Recovery can recover deleted data from broken or locked Android devices as long as they can be detected by the computer. However, you may need to use some extra tools or methods to access your device, such as fixing the screen, entering recovery mode, or removing the lock screen password.
  • -
  • Q: Can I recover deleted data from Android devices without root?
  • -
  • A: Yes, Fonepaw Android Data Recovery can recover deleted data from Android devices without root. However, some data types may require root access to be recovered, such as WhatsApp messages and documents. If you want to recover these data types, you may need to root your device first.
  • -
  • Q: Can I recover deleted data from Android devices without a computer?
  • -
  • A: No, Fonepaw Android Data Recovery requires a computer to run and connect your device. You cannot recover deleted data from Android devices without a computer.
  • -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/call-bind/CHANGELOG.md b/spaces/rayan-saleh/whisper2notion/server/node_modules/call-bind/CHANGELOG.md deleted file mode 100644 index 62a37279ec6443c115d52ab16bed5c8645e6f3b6..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/call-bind/CHANGELOG.md +++ /dev/null @@ -1,42 +0,0 @@ -# Changelog - -All notable changes to this project will be documented in this file. - -The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) -and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). - -## [v1.0.2](https://github.com/ljharb/call-bind/compare/v1.0.1...v1.0.2) - 2021-01-11 - -### Commits - -- [Fix] properly include the receiver in the bound length [`dbae7bc`](https://github.com/ljharb/call-bind/commit/dbae7bc676c079a0d33c0a43e9ef92cb7b01345d) - -## [v1.0.1](https://github.com/ljharb/call-bind/compare/v1.0.0...v1.0.1) - 2021-01-08 - -### Commits - -- [Tests] migrate tests to Github Actions [`b6db284`](https://github.com/ljharb/call-bind/commit/b6db284c36f8ccd195b88a6764fe84b7223a0da1) -- [meta] do not publish github action workflow files [`ec7fe46`](https://github.com/ljharb/call-bind/commit/ec7fe46e60cfa4764ee943d2755f5e5a366e578e) -- [Fix] preserve original function’s length when possible [`adbceaa`](https://github.com/ljharb/call-bind/commit/adbceaa3cac4b41ea78bb19d7ccdbaaf7e0bdadb) -- [Tests] gather coverage data on every job [`d69e23c`](https://github.com/ljharb/call-bind/commit/d69e23cc65f101ba1d4c19bb07fa8eb0ec624be8) -- [Dev Deps] update `eslint`, `@ljharb/eslint-config`, `aud`, `tape` [`2fd3586`](https://github.com/ljharb/call-bind/commit/2fd3586c5d47b335364c14293114c6b625ae1f71) -- [Deps] update `get-intrinsic` [`f23e931`](https://github.com/ljharb/call-bind/commit/f23e9318cc271c2add8bb38cfded85ee7baf8eee) -- [Deps] update `get-intrinsic` [`72d9f44`](https://github.com/ljharb/call-bind/commit/72d9f44e184465ba8dd3fb48260bbcff234985f2) -- [meta] fix FUNDING.yml [`e723573`](https://github.com/ljharb/call-bind/commit/e723573438c5a68dcec31fb5d96ea6b7e4a93be8) -- [eslint] ignore coverage output [`15e76d2`](https://github.com/ljharb/call-bind/commit/15e76d28a5f43e504696401e5b31ebb78ee1b532) -- [meta] add Automatic Rebase and Require Allow Edits workflows [`8fa4dab`](https://github.com/ljharb/call-bind/commit/8fa4dabb23ba3dd7bb92c9571c1241c08b56e4b6) - -## v1.0.0 - 2020-10-30 - -### Commits - -- Initial commit [`306cf98`](https://github.com/ljharb/call-bind/commit/306cf98c7ec9e7ef66b653ec152277ac1381eb50) -- Tests [`e10d0bb`](https://github.com/ljharb/call-bind/commit/e10d0bbdadc7a10ecedc9a1c035112d3e368b8df) -- Implementation [`43852ed`](https://github.com/ljharb/call-bind/commit/43852eda0f187327b7fad2423ca972149a52bd65) -- npm init [`408f860`](https://github.com/ljharb/call-bind/commit/408f860b773a2f610805fd3613d0d71bac1b6249) -- [meta] add Automatic Rebase and Require Allow Edits workflows [`fb349b2`](https://github.com/ljharb/call-bind/commit/fb349b2e48defbec8b5ec8a8395cc8f69f220b13) -- [meta] add `auto-changelog` [`c4001fc`](https://github.com/ljharb/call-bind/commit/c4001fc43031799ef908211c98d3b0fb2b60fde4) -- [meta] add "funding"; create `FUNDING.yml` [`d4d6d29`](https://github.com/ljharb/call-bind/commit/d4d6d2974a14bc2e98830468eda7fe6d6a776717) -- [Tests] add `npm run lint` [`dedfb98`](https://github.com/ljharb/call-bind/commit/dedfb98bd0ecefb08ddb9a94061bd10cde4332af) -- Only apps should 
have lockfiles [`54ac776`](https://github.com/ljharb/call-bind/commit/54ac77653db45a7361dc153d2f478e743f110650) -- [meta] add `safe-publish-latest` [`9ea8e43`](https://github.com/ljharb/call-bind/commit/9ea8e435b950ce9b705559cd651039f9bf40140f) diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Andaz Apna Apna Torrent Free Download WORK.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Andaz Apna Apna Torrent Free Download WORK.md deleted file mode 100644 index a9f836dbeabc38b71df25b2ec751826ac4132afd..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Andaz Apna Apna Torrent Free Download WORK.md +++ /dev/null @@ -1,18 +0,0 @@ -

andaz apna apna torrent free download


Download File ✔✔✔ https://urlgoal.com/2uCJN0



-
-Rahul Dev, . Vijayendra Ghatge and . The film is based on a 1967 Gujarati play of the same name by Narendra Nath Wagle, and stars Aamir Khan in the lead role. Rajkumar Santoshi also produced the film. The music was composed by Amar Mohile. - -The film was released on 8 August 1994 and was a commercial success. It earned over worldwide. It was named one of the Best Indian Films of All Time in a 2005 poll conducted by The Times of India. - -Aamir Khan was chosen as the brand ambassador of his home state, Maharashtra. - -Plot - -Sachin (Aamir Khan), the perfect student, is growing up in a slum colony with his father Karim (Dalip Tahil) and mother Halima (Raveena Tandon). His father runs a small shop where Sachin does his school homework. Sachin's uncle Aslam (Farooq Sheikh) is a powerful smuggler and is very rich. - -Sachin hates his uncle and wants to become his enemy. He leads a revolution against his uncle, including smuggling weapons from Pakistan. The government launches a search operation and Sachin is arrested. His parents come to rescue him and escape from the police. Aslam plans to kill Sachin, but Sachin reaches Aslam's house and saves him. Sachin then sends his friends along with a police officer. Sachin and the officer fight and the officer is killed. Sachin is about to be killed, when an anonymous car saves him. The car is driven by an unknown person and returns Sachin to his house. Sachin is disappointed that the car driver did not say anything, but a mysterious man (Rahul Dev) inside saves Sachin and his parents from Aslam. Sachin is now very happy and he becomes a nice guy. He even participates in social activities. - -Aslam and his friends kidnap Sachin and threaten his parents to leave Sachin's side, but Sachin calls his parents, and they run away from the house and return home safely. They make sure that Sachin and his parents are safe. Sachin and his parents live happily, but Aslam kidnaps Sachin's parents to fulfill his plans. Sachin goes to rescue his parents and saves them. Sachin returns home and finds his family and Aslam in a political conference. Sachin decides to help Aslam and starts interfering in Aslam's work 4fefd39f24
-
-
-

diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Digicorp Civil Design Keygen Download TOP.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Digicorp Civil Design Keygen Download TOP.md deleted file mode 100644 index acc1f80264080643ef99b9f72ba70dfd3f736773..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Digicorp Civil Design Keygen Download TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

digicorp civil design keygen download


DOWNLOAD ⚹⚹⚹ https://urlgoal.com/2uCJT7



-
-May 20, 2013 - To download the pdf redirect pro v2 serial keygen one file you.. 5.2 (freeware) ... Digicorp Civil Design Keygen 20 · CoverPro ... 4d29de3e1b
-
-
-

diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gta 5 Download Full Version Free Utorrent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gta 5 Download Full Version Free Utorrent.md deleted file mode 100644 index 58830898f9eba1c8a3c4bb6ac11e33747c64ee62..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Gta 5 Download Full Version Free Utorrent.md +++ /dev/null @@ -1,104 +0,0 @@ -
-

GTA 5 Download Full Version Free uTorrent: The Ultimate Guide to Play the Best Open World Game on PC

- -

GTA 5 is one of the most popular and critically acclaimed games of all time, with over 150 million copies sold worldwide. It is the latest installment in the Grand Theft Auto series, which is known for its action, adventure, and open world gameplay. GTA 5 offers a huge and diverse world to explore, three different protagonists to control, and a variety of missions, activities, modes, and online multiplayer to enjoy.

- -

    But how can you play GTA 5 on PC without spending a dime? The answer is simple: you can download the GTA 5 full version for free using uTorrent. uTorrent is a program that lets you download files from peer-to-peer networks via torrents. Torrents are files that contain information about other files shared by users online. By using uTorrent, you can download a GTA 5 torrent file from various sources and then use it to install the game on your PC.
    

-

gta 5 download full version free utorrent


Download 🗸 https://urlgoal.com/2uCK40



- -

    In this article, we will show you how to download the GTA 5 full version for free using uTorrent, and what the benefits of playing this amazing game on PC are.
    

- -

How to Download GTA 5 Full Version for Free using uTorrent

- -

Before you can download GTA 5 full version for free using uTorrent, you need to make sure that your PC meets the minimum system requirements for the game. Here are the minimum system requirements for GTA 5:

- -
    -
  • Operating system: Windows 7, 8, 10
  • -
  • Processor: Intel Core 2 Q6600 or AMD Phenom 9850
  • -
  • Memory: 4 GB RAM
  • -
  • Video card: NVIDIA GeForce 9800 GT or AMD Radeon HD 4870 with 1 GB of video memory
  • -
  • Hard drive space: 72 GB
  • -
- -

If your PC meets these requirements, you can proceed to download GTA 5 full version for free using uTorrent. Here are the steps to follow:

- -
    -
  1. Download and install uTorrent from its official website: https://www.utorrent.com/
  2. -
  3. Find a reliable source for GTA 5 torrent file. You can use websites like https://www.torrent4you.org/en/gta-5/, https://fitgirl-repacks-site.org/grand-theft-auto-v-gta-5-download-torrent-repack/, or https://se7en.ws/gta-v/?lang=en. Make sure to read the comments and reviews of other users before downloading any torrent file.
  4. -
  5. Download the GTA 5 torrent file and open it with uTorrent. Choose a folder where you want to save the game files and start the download process.
  6. -
  7. Wait until the download is complete. Depending on your internet speed and the number of seeders (users who share the file), this may take from a few hours to a few days.
  8. -
  9. Once the download is complete, open the folder where you saved the game files and run the setup.exe file. Follow the instructions on the screen to install the game on your PC.
  10. -
  11. Enjoy playing GTA 5 on your PC!
  12. -
- -

What are the Benefits of Playing GTA 5 on PC?

- -

GTA 5 is a game that offers a lot of features and content for players to enjoy. Whether you want to explore the vast and diverse world of Los Santos and Blaine County, engage in thrilling missions and heists with three different protagonists, or have fun with various activities and modes in GTA Online, GTA 5 has something for everyone.

- -

But playing GTA 5 on PC has some advantages over playing it on consoles. Here are some of the benefits of playing GTA 5 on PC:

- -
    -
  • Better graphics and performance. GTA 5 on PC supports higher resolutions, frame rates, draw distances, textures, shadows, lighting, and other graphical enhancements that make the game look more realistic and immersive. You can also adjust the settings according to your preferences and hardware capabilities.
  • -
  • More mods and customization options. GTA 5 on PC allows you to install various mods that can change or add new features, content, characters, vehicles, weapons, missions, and more to the game. You can also customize your controls, interface, camera, sound, and other aspects of the game.
  • -
  • Easier access and updates. GTA 5 on PC does not require any discs or subscriptions to play. You can simply launch the game from your desktop or from Rockstar Games Launcher. You can also get faster and more frequent updates and patches for the game.
  • -
  • Better online experience. GTA 5 on PC has a larger and more active online community than consoles. You can join or create servers with different rules, modes, maps, and players. You can also use voice chat, text chat, or other communication tools to interact with other players.
  • -
- -

GTA 5 is a game that deserves to be played on PC. By downloading GTA 5 full version for free using uTorrent, you can enjoy this masterpiece of gaming without spending a dime.

-

- -

Conclusion

- -

GTA 5 is one of the best games ever made, and playing it on PC is even better. If you want to download GTA 5 full version for free using uTorrent, you just need to follow some simple steps and make sure your PC meets the minimum system requirements.

- -

    GTA 5 offers a lot of features and content for players to enjoy, such as an open world, three protagonists, missions, heists, activities, modes, online multiplayer, mods, customization options, graphics enhancements, performance improvements, easier access and updates, and more.
    

-

What are the Features and Content of GTA 5?

- -

GTA 5 is a game that offers a lot of features and content for players to enjoy. Here are some of the main features and content of GTA 5:

- -
    -
  • An open world. GTA 5 features a huge and diverse open world that covers the fictional state of San Andreas and the city of Los Santos. The open world is full of life, detail, and variety, and you can explore it by foot, by car, by bike, by boat, by plane, or by other means. You can also interact with various characters, objects, animals, and events in the open world.
  • -
  • Three protagonists. GTA 5 features three different protagonists that you can control: Michael, a retired bank robber who lives a luxurious but unhappy life; Franklin, a young street hustler who wants to make it big in the criminal world; and Trevor, a psychotic and violent drug dealer who lives in a trailer park. Each protagonist has their own personality, skills, abilities, story, missions, and relationships with other characters.
  • -
  • Missions and heists. GTA 5 features a variety of missions and heists that you can complete with one or more protagonists. Missions are tasks that advance the main story or the side stories of the protagonists. Heists are elaborate and challenging operations that involve planning, preparation, execution, and escape. You can choose different approaches, strategies, crew members, and outcomes for each heist.
  • -
  • Activities and modes. GTA 5 features a variety of activities and modes that you can enjoy in the open world or in specific locations. Activities are mini-games or hobbies that you can play for fun or for rewards, such as golf, tennis, yoga, darts, hunting, racing, parachuting, and more. Modes are special scenarios or challenges that you can play for fun or for achievements, such as rampage mode, director mode, survival mode, stunt mode, and more.
  • -
    • Online multiplayer. GTA 5 features an online multiplayer mode called GTA Online, where you can create your own character and join other players in a shared open world. You can cooperate or compete with other players in various missions, heists, races, deathmatches, adversary modes, freemode events, businesses, clubs, organizations, and more. You can also customize your character's appearance, skills, vehicles, and weapons.
    

    How to Install and Run GTA 5 on PC?

    - -

    After you download GTA 5 full version for free using uTorrent, you need to install and run the game on your PC. Here are the steps to follow:

    - -
      -
    1. Open the folder where you saved the game files and run the setup.exe file. Follow the instructions on the screen to install the game on your PC.
    2. -
    3. Copy the contents of the crack folder (if any) to the folder where you installed the game. Replace any existing files if prompted.
    4. -
    5. Run the game from your desktop or from Rockstar Games Launcher. You may need to create an account or log in with an existing one.
    6. -
    7. Enjoy playing GTA 5 on your PC!
    8. -
    - -

    Note: Some versions of GTA 5 may require additional steps or software to run properly. For example, you may need to install DirectX, Visual C++, or Social Club. You may also need to disable your antivirus or firewall before running the game. Make sure to read the instructions or readme files that come with your GTA 5 torrent file.

    - -

    How to Optimize GTA 5 for PC?

    - -

    GTA 5 is a game that requires a lot of resources and power to run smoothly on PC. If you have a low-end or medium-end PC, you may experience some lag, stuttering, or crashes while playing GTA 5. To optimize GTA 5 for PC, you can try some of these tips:

    - -
      -
    • Update your drivers. Make sure that your drivers for your video card, sound card, motherboard, and other components are up to date. You can use software like Driver Booster or Driver Easy to scan and update your drivers automatically.
    • -
    • Adjust your settings. You can tweak your settings in GTA 5 to improve your performance and graphics quality. You can access the settings menu by pressing ESC and then selecting Settings. You can change various options such as resolution, frame rate, texture quality, shadow quality, anti-aliasing, and more. You can also use presets such as Low, Medium, High, or Very High to adjust your settings automatically.
    • -
    • Use mods or patches. You can use mods or patches that can enhance or optimize GTA 5 for PC. For example, you can use mods that reduce traffic, improve loading times, fix bugs, or add new features. You can also use patches that improve stability, compatibility, or performance. You can find various mods and patches for GTA 5 on websites like https://www.gta5-mods.com/, https://www.nexusmods.com/gta5, or https://www.moddb.com/games/grand-theft-auto-v.
    • -
    - -

    GTA 5 is a game that can be enjoyed on PC with some optimization and tweaking. By following these tips, you can improve your GTA 5 experience on PC.

    - -

    Conclusion

    - -

    GTA 5 is one of the best games ever made, and playing it on PC is even better. If you want to download GTA 5 full version for free using uTorrent, you just need to follow some simple steps and make sure your PC meets the minimum system requirements.

    - -

    GTA 5 offers a lot of features and content for players to enjoy, such as an open world, three protagonists, missions, heists, activities, modes, online multiplayer, mods, customization options, graphics enhancements, performance improvements, easier access and updates, and more.
    

    GTA 5 is a game that deserves to be played on PC. By downloading GTA 5 full version for free using uTorrent, you can enjoy this masterpiece of gaming without spending a dime.

    - -

    GTA 5 offers a lot of features and content for players to enjoy, such as an open world, three protagonists, missions, heists, activities, modes, online multiplayer, mods, customization options, graphics enhancements, performance improvements, easier access and updates, and more.

    - -

    If you are looking for an action-packed adventure in a realistic and immersive world, GTA 5 is the game for you. Download GTA 5 full version for free using uTorrent today and have fun!

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/rehanuddin/01-3DModel-GradioDemo/app.py b/spaces/rehanuddin/01-3DModel-GradioDemo/app.py deleted file mode 100644 index 62e7b60344f5957e86a9c0de3d77985f68b52224..0000000000000000000000000000000000000000 --- a/spaces/rehanuddin/01-3DModel-GradioDemo/app.py +++ /dev/null @@ -1,24 +0,0 @@ -import time -import gradio as gr -import os - -def load_mesh(mesh_file_name): - return mesh_file_name, mesh_file_name - -demo = gr.Interface( - fn=load_mesh, - inputs=gr.Model3D(), - outputs=[ - gr.Model3D( - clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"), - gr.File(label="Download 3D Model") - ], - examples=[ - [os.path.join(os.path.dirname(__file__), "files/Duck.glb")], - [os.path.join(os.path.dirname(__file__), "files/rubber_duck.glb")], - [os.path.join(os.path.dirname(__file__), "files/GroundVehicle.glb")] - ], -) - -if __name__ == "__main__": - demo.launch() \ No newline at end of file diff --git a/spaces/renatotn7/teste2/scripts/convert_gfpganv_to_clean.py b/spaces/renatotn7/teste2/scripts/convert_gfpganv_to_clean.py deleted file mode 100644 index 8fdccb6195c29e78cec2ac8dcc6f9ccb604e35ca..0000000000000000000000000000000000000000 --- a/spaces/renatotn7/teste2/scripts/convert_gfpganv_to_clean.py +++ /dev/null @@ -1,164 +0,0 @@ -import argparse -import math -import torch - -from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean - - -def modify_checkpoint(checkpoint_bilinear, checkpoint_clean): - for ori_k, ori_v in checkpoint_bilinear.items(): - if 'stylegan_decoder' in ori_k: - if 'style_mlp' in ori_k: # style_mlp_layers - lr_mul = 0.01 - prefix, name, idx, var = ori_k.split('.') - idx = (int(idx) * 2) - 1 - crt_k = f'{prefix}.{name}.{idx}.{var}' - if var == 'weight': - _, c_in = ori_v.size() - scale = (1 / math.sqrt(c_in)) * lr_mul - crt_v = ori_v * scale * 2**0.5 - else: - crt_v = ori_v * lr_mul * 2**0.5 - checkpoint_clean[crt_k] = crt_v - elif 'modulation' in ori_k: # modulation in StyleConv - lr_mul = 1 - crt_k = ori_k - var = ori_k.split('.')[-1] - if var == 'weight': - _, c_in = ori_v.size() - scale = (1 / math.sqrt(c_in)) * lr_mul - crt_v = ori_v * scale - else: - crt_v = ori_v * lr_mul - checkpoint_clean[crt_k] = crt_v - elif 'style_conv' in ori_k: - # StyleConv in style_conv1 and style_convs - if 'activate' in ori_k: # FusedLeakyReLU - # eg. style_conv1.activate.bias - # eg. style_convs.13.activate.bias - split_rlt = ori_k.split('.') - if len(split_rlt) == 4: - prefix, name, _, var = split_rlt - crt_k = f'{prefix}.{name}.{var}' - elif len(split_rlt) == 5: - prefix, name, idx, _, var = split_rlt - crt_k = f'{prefix}.{name}.{idx}.{var}' - crt_v = ori_v * 2**0.5 # 2**0.5 used in FusedLeakyReLU - c = crt_v.size(0) - checkpoint_clean[crt_k] = crt_v.view(1, c, 1, 1) - elif 'modulated_conv' in ori_k: - # eg. style_conv1.modulated_conv.weight - # eg. style_convs.13.modulated_conv.weight - _, c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v * scale - elif 'weight' in ori_k: - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v * 2**0.5 - elif 'to_rgb' in ori_k: # StyleConv in to_rgb1 and to_rgbs - if 'modulated_conv' in ori_k: - # eg. to_rgb1.modulated_conv.weight - # eg. 
to_rgbs.5.modulated_conv.weight - _, c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v * scale - else: - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v - else: - crt_k = ori_k - checkpoint_clean[crt_k] = ori_v - # end of 'stylegan_decoder' - elif 'conv_body_first' in ori_k or 'final_conv' in ori_k: - # key name - name, _, var = ori_k.split('.') - crt_k = f'{name}.{var}' - # weight and bias - if var == 'weight': - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale * 2**0.5 - else: - checkpoint_clean[crt_k] = ori_v * 2**0.5 - elif 'conv_body' in ori_k: - if 'conv_body_up' in ori_k: - ori_k = ori_k.replace('conv2.weight', 'conv2.1.weight') - ori_k = ori_k.replace('skip.weight', 'skip.1.weight') - name1, idx1, name2, _, var = ori_k.split('.') - crt_k = f'{name1}.{idx1}.{name2}.{var}' - if name2 == 'skip': - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale / 2**0.5 - else: - if var == 'weight': - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale - else: - checkpoint_clean[crt_k] = ori_v - if 'conv1' in ori_k: - checkpoint_clean[crt_k] *= 2**0.5 - elif 'toRGB' in ori_k: - crt_k = ori_k - if 'weight' in ori_k: - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale - else: - checkpoint_clean[crt_k] = ori_v - elif 'final_linear' in ori_k: - crt_k = ori_k - if 'weight' in ori_k: - _, c_in = ori_v.size() - scale = 1 / math.sqrt(c_in) - checkpoint_clean[crt_k] = ori_v * scale - else: - checkpoint_clean[crt_k] = ori_v - elif 'condition' in ori_k: - crt_k = ori_k - if '0.weight' in ori_k: - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale * 2**0.5 - elif '0.bias' in ori_k: - checkpoint_clean[crt_k] = ori_v * 2**0.5 - elif '2.weight' in ori_k: - c_out, c_in, k1, k2 = ori_v.size() - scale = 1 / math.sqrt(c_in * k1 * k2) - checkpoint_clean[crt_k] = ori_v * scale - elif '2.bias' in ori_k: - checkpoint_clean[crt_k] = ori_v - - return checkpoint_clean - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--ori_path', type=str, help='Path to the original model') - parser.add_argument('--narrow', type=float, default=1) - parser.add_argument('--channel_multiplier', type=float, default=2) - parser.add_argument('--save_path', type=str) - args = parser.parse_args() - - ori_ckpt = torch.load(args.ori_path)['params_ema'] - - net = GFPGANv1Clean( - 512, - num_style_feat=512, - channel_multiplier=args.channel_multiplier, - decoder_load_path=None, - fix_decoder=False, - # for stylegan decoder - num_mlp=8, - input_is_latent=True, - different_w=True, - narrow=args.narrow, - sft_half=True) - crt_ckpt = net.state_dict() - - crt_ckpt = modify_checkpoint(ori_ckpt, crt_ckpt) - print(f'Save to {args.save_path}.') - torch.save(dict(params_ema=crt_ckpt), args.save_path, _use_new_zipfile_serialization=False) diff --git a/spaces/renumics/mnist-outlier/Dockerfile b/spaces/renumics/mnist-outlier/Dockerfile deleted file mode 100644 index 6f9b01129fe5bf5697c6aa14e0bd952b1bc77add..0000000000000000000000000000000000000000 --- a/spaces/renumics/mnist-outlier/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM python:3.9 - -WORKDIR /code -ENV HOME=/code - -RUN apt install curl -RUN pip install pip -U - 
-RUN pip install pydantic==1.10.8 renumics-spotlight==1.2.0rc2 - -RUN pip install datasets -COPY prepare.py . -RUN python prepare.py - -COPY . . -RUN mkdir -p /code/.cache -RUN chmod -R 777 /code -CMD ["python", "run.py"] diff --git a/spaces/richardyoung/Examples-of-AI-2023/README.md b/spaces/richardyoung/Examples-of-AI-2023/README.md deleted file mode 100644 index 3d84bf8f156ce25bab39bad429e76f4bed86820d..0000000000000000000000000000000000000000 --- a/spaces/richardyoung/Examples-of-AI-2023/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Examples Of AI 2023 -emoji: 💻 -colorFrom: indigo -colorTo: purple -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/richardzhangy26/yandian_flow_classification/create_dataset.py b/spaces/richardzhangy26/yandian_flow_classification/create_dataset.py deleted file mode 100644 index c596109478869c7f781936b074031380a060facb..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/create_dataset.py +++ /dev/null @@ -1,97 +0,0 @@ -from torch.utils.data import Dataset -import torch -import numpy as np -from PIL import Image -import os -from torch.utils.data.sampler import Sampler -from random import sample -from utils_action_recognition import print_dataset_type_error, set_transforms - -class UCF101Dataset(Dataset): - def __init__(self, data_path, data, mode, dataset='UCF101'): - super(UCF101Dataset, self).__init__() - self.dataset = dataset - if self.dataset == 'UCF101': - self.labels = data[1] - self.data_path = data_path - self.images = data[0] - self.split = mode - self.transform = set_transforms(mode) - - # ====== Override to give PyTorch size of dataset ====== - - def __len__(self): - return len(self.images) - - def __getitem__(self, idx): - if self.dataset == 'UCF101': - sampled_video_name = self.images[idx] - elif self.dataset == 'youtube': - sampled_video_name = self.images[idx] - else: - print_dataset_type_error() - # ====== extract numpy array from the video and sample it so we will have an array with lower FPS rate ======= - - video_frames = [] - path2 = os.path.join(self.data_path, self.split, sampled_video_name) - # for i in range(1, len(os.listdir(path2)) + 1): - # for i in range(1, 61): - # s = '{}-{}.jpg'.format(sampled_video_name.split('-')[0], i) - # - # img = Image.open(os.path.join(path2, s)) - # img2 = np.asarray(img) - # video_frames.append(img2) - i = 0 - for file in os.listdir(path2): # [::2] - img = Image.open(os.path.join(path2, file)) - img2 = np.asarray(img) - video_frames.append(img2) - i += 1 - if i > 32: - break - - video_frames_array = [] - for image in video_frames: - img = Image.fromarray(image.astype('uint8'), 'RGB') - img = self.transform(img) - video_frames_array.append(img) - img_stack = torch.stack(video_frames_array) - if self.dataset == 'UCF101': - label = torch.from_numpy(np.asarray(int(self.labels[idx]))).long() - return img_stack, label, idx - else: - return img_stack - - -class UCF101DatasetSampler(Sampler): - def __init__(self, data, batch_size): - self.num_samples = len(data) - self.classes_that_were_sampled = [] - self.data_labels = data.labels - self.batch_size = batch_size - - - def __iter__(self): - idx_list = [] - for i in range(self.batch_size): - idx_image_sample = sample(range(self.num_samples), 1)[0] - label_sample = self.data_labels[idx_image_sample] - while label_sample in 
self.classes_that_were_sampled: - idx_image_sample = sample(range(self.num_samples), 1)[0] - label_sample = self.data_labels[idx_image_sample] - self.classes_that_were_sampled += [label_sample] - idx_list += [idx_image_sample] - return iter(idx_list) - - def __len__(self): - return self.num_samples - - - - - - - - - - diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/mask_hungarian_assigner.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/mask_hungarian_assigner.py deleted file mode 100644 index d83def1d09557abc83875a8923fce7f00bb0c8e5..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/mask_hungarian_assigner.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from scipy.optimize import linear_sum_assignment - -from mmdet.core.bbox.builder import BBOX_ASSIGNERS -from mmdet.core.bbox.match_costs.builder import build_match_cost -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class MaskHungarianAssigner(BaseAssigner): - """Computes one-to-one matching between predictions and ground truth for - mask. - - This class computes an assignment between the targets and the predictions - based on the costs. The costs are weighted sum of three components: - classification cost, mask focal cost and mask dice cost. The - targets don't include the no_object, so generally there are more - predictions than targets. After the one-to-one matching, the un-matched - are treated as backgrounds. Thus each query prediction will be assigned - with `0` or a positive integer indicating the ground truth index: - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - cls_cost (:obj:`mmcv.ConfigDict` | dict): Classification cost config. - mask_cost (:obj:`mmcv.ConfigDict` | dict): Mask cost config. - dice_cost (:obj:`mmcv.ConfigDict` | dict): Dice cost config. - """ - - def __init__(self, - cls_cost=dict(type='ClassificationCost', weight=1.0), - mask_cost=dict( - type='FocalLossCost', weight=1.0, binary_input=True), - dice_cost=dict(type='DiceCost', weight=1.0)): - self.cls_cost = build_match_cost(cls_cost) - self.mask_cost = build_match_cost(mask_cost) - self.dice_cost = build_match_cost(dice_cost) - - def assign(self, - cls_pred, - mask_pred, - gt_labels, - gt_mask, - img_meta, - gt_bboxes_ignore=None, - eps=1e-7): - """Computes one-to-one matching based on the weighted costs. - - Args: - cls_pred (Tensor | None): Class prediction in shape - (num_query, cls_out_channels). - mask_pred (Tensor): Mask prediction in shape (num_query, H, W). - gt_labels (Tensor): Label of 'gt_mask'in shape = (num_gt, ). - gt_mask (Tensor): Ground truth mask in shape = (num_gt, H, W). - img_meta (dict): Meta information for current image. - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`. Default None. - eps (int | float, optional): A value added to the denominator for - numerical stability. Default 1e-7. - - Returns: - :obj:`AssignResult`: The assigned result. - """ - assert gt_bboxes_ignore is None, \ - 'Only case when gt_bboxes_ignore is None is supported.' - # K-Net sometimes passes cls_pred=None to this assigner. - # So we should use the shape of mask_pred - num_gt, num_query = gt_labels.shape[0], mask_pred.shape[0] - - # 1. 
assign -1 by default - assigned_gt_inds = mask_pred.new_full((num_query, ), - -1, - dtype=torch.long) - assigned_labels = mask_pred.new_full((num_query, ), - -1, - dtype=torch.long) - if num_gt == 0 or num_query == 0: - # No ground truth or boxes, return empty assignment - if num_gt == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - return AssignResult( - num_gt, assigned_gt_inds, None, labels=assigned_labels) - - # 2. compute the weighted costs - # classification and maskcost. - if self.cls_cost.weight != 0 and cls_pred is not None: - cls_cost = self.cls_cost(cls_pred, gt_labels) - else: - cls_cost = 0 - - if self.mask_cost.weight != 0: - # mask_pred shape = [num_query, h, w] - # gt_mask shape = [num_gt, h, w] - # mask_cost shape = [num_query, num_gt] - mask_cost = self.mask_cost(mask_pred, gt_mask) - else: - mask_cost = 0 - - if self.dice_cost.weight != 0: - dice_cost = self.dice_cost(mask_pred, gt_mask) - else: - dice_cost = 0 - cost = cls_cost + mask_cost + dice_cost - - # 3. do Hungarian matching on CPU using linear_sum_assignment - cost = cost.detach().cpu() - - matched_row_inds, matched_col_inds = linear_sum_assignment(cost) - matched_row_inds = torch.from_numpy(matched_row_inds).to( - mask_pred.device) - matched_col_inds = torch.from_numpy(matched_col_inds).to( - mask_pred.device) - - # 4. assign backgrounds and foregrounds - # assign all indices to backgrounds first - assigned_gt_inds[:] = 0 - # assign foregrounds based on matching results - assigned_gt_inds[matched_row_inds] = matched_col_inds + 1 - assigned_labels[matched_row_inds] = gt_labels[matched_col_inds] - return AssignResult( - num_gt, assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/hdetr/models/util/misc.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/hdetr/models/util/misc.py deleted file mode 100644 index 877a1d93387a9d377c9cbf4a586b43e689c4beec..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/projects/instance_segment_anything/models/hdetr/models/util/misc.py +++ /dev/null @@ -1,518 +0,0 @@ -# ------------------------------------------------------------------------ -# H-DETR -# Copyright (c) 2022 Peking University & Microsoft Research Asia. All Rights Reserved. -# Licensed under the MIT-style license found in the LICENSE file in the root directory -# ------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# ------------------------------------------------------------------------ - -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. 
-""" -import os -import subprocess -import time -from collections import defaultdict, deque -import datetime -import pickle -from typing import Optional, List - -import torch -import torch.nn as nn -import torch.distributed as dist -from torch import Tensor - -# needed due to empty tensor bug in pytorch and torchvision 0.5 -import torchvision - - -class SmoothedValue(object): - """Track a series of values and provide access to smoothed values over a - window or the global series average. - """ - - def __init__(self, window_size=20, fmt=None): - if fmt is None: - fmt = "{median:.4f} ({global_avg:.4f})" - self.deque = deque(maxlen=window_size) - self.total = 0.0 - self.count = 0 - self.fmt = fmt - - def update(self, value, n=1): - self.deque.append(value) - self.count += n - self.total += value * n - - def synchronize_between_processes(self): - """ - Warning: does not synchronize the deque! - """ - if not is_dist_avail_and_initialized(): - return - t = torch.tensor([self.count, self.total], dtype=torch.float64, device="cuda") - dist.barrier() - dist.all_reduce(t) - t = t.tolist() - self.count = int(t[0]) - self.total = t[1] - - @property - def median(self): - d = torch.tensor(list(self.deque)) - return d.median().item() - - @property - def avg(self): - d = torch.tensor(list(self.deque), dtype=torch.float32) - return d.mean().item() - - @property - def global_avg(self): - return self.total / self.count - - @property - def max(self): - return max(self.deque) - - @property - def value(self): - return self.deque[-1] - - def __str__(self): - return self.fmt.format( - median=self.median, - avg=self.avg, - global_avg=self.global_avg, - max=self.max, - value=self.value, - ) - - -def all_gather(data): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to("cuda") - - # obtain Tensor size of each rank - local_size = torch.tensor([tensor.numel()], device="cuda") - size_list = [torch.tensor([0], device="cuda") for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device="cuda")) - if local_size != max_size: - padding = torch.empty( - size=(max_size - local_size,), dtype=torch.uint8, device="cuda" - ) - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - - -def reduce_dict(input_dict, average=True): - """ - Args: - input_dict (dict): all the values will be reduced - average (bool): whether to do average or sum - Reduce the values in the dictionary from all processes so that all processes - have the averaged results. Returns a dict with the same fields as - input_dict, after reduction. 
- """ - world_size = get_world_size() - if world_size < 2: - return input_dict - with torch.no_grad(): - names = [] - values = [] - # sort the keys so that they are consistent across processes - for k in sorted(input_dict.keys()): - names.append(k) - values.append(input_dict[k]) - values = torch.stack(values, dim=0) - dist.all_reduce(values) - if average: - values /= world_size - reduced_dict = {k: v for k, v in zip(names, values)} - return reduced_dict - - -class MetricLogger(object): - def __init__(self, delimiter="\t"): - self.meters = defaultdict(SmoothedValue) - self.delimiter = delimiter - - def update(self, **kwargs): - for k, v in kwargs.items(): - if isinstance(v, torch.Tensor): - v = v.item() - assert isinstance(v, (float, int)) - self.meters[k].update(v) - - def __getattr__(self, attr): - if attr in self.meters: - return self.meters[attr] - if attr in self.__dict__: - return self.__dict__[attr] - raise AttributeError( - "'{}' object has no attribute '{}'".format(type(self).__name__, attr) - ) - - def __str__(self): - loss_str = [] - for name, meter in self.meters.items(): - loss_str.append("{}: {}".format(name, str(meter))) - return self.delimiter.join(loss_str) - - def synchronize_between_processes(self): - for meter in self.meters.values(): - meter.synchronize_between_processes() - - def add_meter(self, name, meter): - self.meters[name] = meter - - def log_every(self, iterable, print_freq, header=None): - i = 0 - if not header: - header = "" - start_time = time.time() - end = time.time() - iter_time = SmoothedValue(fmt="{avg:.4f}") - data_time = SmoothedValue(fmt="{avg:.4f}") - space_fmt = ":" + str(len(str(len(iterable)))) + "d" - if torch.cuda.is_available(): - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - "max mem: {memory:.0f}", - ] - ) - else: - log_msg = self.delimiter.join( - [ - header, - "[{0" + space_fmt + "}/{1}]", - "eta: {eta}", - "{meters}", - "time: {time}", - "data: {data}", - ] - ) - MB = 1024.0 * 1024.0 - for obj in iterable: - data_time.update(time.time() - end) - yield obj - iter_time.update(time.time() - end) - if i % print_freq == 0 or i == len(iterable) - 1: - eta_seconds = iter_time.global_avg * (len(iterable) - i) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - if torch.cuda.is_available(): - print( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - memory=torch.cuda.max_memory_allocated() / MB, - ) - ) - else: - print( - log_msg.format( - i, - len(iterable), - eta=eta_string, - meters=str(self), - time=str(iter_time), - data=str(data_time), - ) - ) - i += 1 - end = time.time() - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print( - "{} Total time: {} ({:.4f} s / it)".format( - header, total_time_str, total_time / len(iterable) - ) - ) - - -def get_sha(): - cwd = os.path.dirname(os.path.abspath(__file__)) - - def _run(command): - return subprocess.check_output(command, cwd=cwd).decode("ascii").strip() - - sha = "N/A" - diff = "clean" - branch = "N/A" - try: - sha = _run(["git", "rev-parse", "HEAD"]) - subprocess.check_output(["git", "diff"], cwd=cwd) - diff = _run(["git", "diff-index", "HEAD"]) - diff = "has uncommited changes" if diff else "clean" - branch = _run(["git", "rev-parse", "--abbrev-ref", "HEAD"]) - except Exception: - pass - message = f"sha: {sha}, status: {diff}, branch: {branch}" - 
return message - - -def collate_fn(batch): - batch = list(zip(*batch)) - batch[0] = nested_tensor_from_tensor_list(batch[0]) - return tuple(batch) - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") - return NestedTensor(tensor, mask) - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - - def to(self, device, non_blocking=False): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device, non_blocking=non_blocking) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device, non_blocking=non_blocking) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def record_stream(self, *args, **kwargs): - self.tensors.record_stream(*args, **kwargs) - if self.mask is not None: - self.mask.record_stream(*args, **kwargs) - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - -def setup_for_distributed(is_master): - """ - This function disables printing when not in master process - """ - import builtins as __builtin__ - - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop("force", False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True - - -def get_world_size(): - if not is_dist_avail_and_initialized(): - return 1 - return dist.get_world_size() - - -def get_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return dist.get_rank() - - -def get_local_size(): - if not is_dist_avail_and_initialized(): - return 1 - return int(os.environ["LOCAL_SIZE"]) - - -def get_local_rank(): - if not is_dist_avail_and_initialized(): - return 0 - return int(os.environ["LOCAL_RANK"]) - - -def is_main_process(): - return get_rank() == 0 - - -def save_on_master(*args, **kwargs): - if is_main_process(): - torch.save(*args, **kwargs) - - -def init_distributed_mode(args): - if "RANK" in os.environ and "WORLD_SIZE" in os.environ: - args.rank = int(os.environ["RANK"]) - args.world_size = int(os.environ["WORLD_SIZE"]) - args.gpu = int(os.environ["LOCAL_RANK"]) - args.dist_url = "env://" - os.environ["LOCAL_SIZE"] = str(torch.cuda.device_count()) - elif "SLURM_PROCID" in os.environ: - proc_id = int(os.environ["SLURM_PROCID"]) - ntasks = int(os.environ["SLURM_NTASKS"]) - node_list = os.environ["SLURM_NODELIST"] - num_gpus = 
torch.cuda.device_count() - addr = subprocess.getoutput( - "scontrol show hostname {} | head -n1".format(node_list) - ) - os.environ["MASTER_PORT"] = os.environ.get("MASTER_PORT", "29500") - os.environ["MASTER_ADDR"] = addr - os.environ["WORLD_SIZE"] = str(ntasks) - os.environ["RANK"] = str(proc_id) - os.environ["LOCAL_RANK"] = str(proc_id % num_gpus) - os.environ["LOCAL_SIZE"] = str(num_gpus) - args.dist_url = "env://" - args.world_size = ntasks - args.rank = proc_id - args.gpu = proc_id % num_gpus - else: - print("Not using distributed mode") - args.distributed = False - return - - args.distributed = True - - torch.cuda.set_device(args.gpu) - args.dist_backend = "nccl" - print( - "| distributed init (rank {}): {}".format(args.rank, args.dist_url), flush=True - ) - torch.distributed.init_process_group( - backend=args.dist_backend, - init_method=args.dist_url, - world_size=args.world_size, - rank=args.rank, - ) - torch.distributed.barrier() - setup_for_distributed(args.rank == 0) - - -@torch.no_grad() -def accuracy(output, target, topk=(1,)): - """Computes the precision@k for the specified values of k""" - if target.numel() == 0: - return [torch.zeros([], device=output.device)] - maxk = max(topk) - batch_size = target.size(0) - - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - - res = [] - for k in topk: - correct_k = correct[:k].view(-1).float().sum(0) - res.append(correct_k.mul_(100.0 / batch_size)) - return res - - -def interpolate( - input, size=None, scale_factor=None, mode="nearest", align_corners=None -): - # type: (Tensor, Optional[List[int]], Optional[float], str, Optional[bool]) -> Tensor - """ - Equivalent to nn.functional.interpolate, but with support for empty batch sizes. - This will eventually be supported natively by PyTorch, and this - class can go away. - """ - return torchvision.ops.misc.interpolate( - input, size, scale_factor, mode, align_corners - ) - - -def get_total_grad_norm(parameters, norm_type=2): - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - device = parameters[0].grad.device - total_norm = torch.norm( - torch.stack( - [torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters] - ), - norm_type, - ) - return total_norm - - -def inverse_sigmoid(x, eps=1e-5): - x = x.clamp(min=0, max=1) - x1 = x.clamp(min=eps) - x2 = (1 - x).clamp(min=eps) - return torch.log(x1 / x2) - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Luxion KeyShot Pro 5.0.97 (64 bit) Animation KeyShotVR [ChingLiu 64 bit] Everything You Need to Know about the Latest Version of KeyShot.md b/spaces/rorallitri/biomedical-language-models/logs/Luxion KeyShot Pro 5.0.97 (64 bit) Animation KeyShotVR [ChingLiu 64 bit] Everything You Need to Know about the Latest Version of KeyShot.md deleted file mode 100644 index 62d8e926cc7e3225157eff457977ca5982c4cdf6..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Luxion KeyShot Pro 5.0.97 (64 bit) Animation KeyShotVR [ChingLiu 64 bit] Everything You Need to Know about the Latest Version of KeyShot.md +++ /dev/null @@ -1,6 +0,0 @@ -

Luxion KeyShot Pro 5.0.97 (64 bit) Animation KeyShotVR [ChingLiu 64 bit]


DOWNLOAD: https://tinurll.com/2uzose



- -
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Dead Or Alive Xtreme Beach Volleyball.iso.md b/spaces/scedlatioru/img-to-music/example/Dead Or Alive Xtreme Beach Volleyball.iso.md deleted file mode 100644 index 7d7264d87d8d31ccdbb429fffebff3912bd67ba5..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Dead Or Alive Xtreme Beach Volleyball.iso.md +++ /dev/null @@ -1,21 +0,0 @@ -
    -

    How to Download and Play Dead Or Alive Xtreme Beach Volleyball.iso on Your Xbox

    -

    Dead Or Alive Xtreme Beach Volleyball.iso is a popular game that features the female characters of the Dead Or Alive fighting series in a beach sports simulation. The game was released by Tecmo in 2003 for the Xbox console and has gained a cult following among fans of the genre. If you want to download and play Dead Or Alive Xtreme Beach Volleyball.iso on your Xbox, here are some steps you can follow:

    -
      -
    1. Find a reliable source to download the game file. You can use websites like Vimm's Lair[^1^] or ROMs Fun[^2^] that offer free downloads of Xbox games in .iso format. You can also check out Archive.org[^3^] for some older versions of the game.
    2. -
    3. Make sure you have enough space on your Xbox hard drive or a compatible external storage device. The game file size is about 3.11 GB, so you will need at least that much free space to store it.
    4. -
    5. Transfer the game file to your Xbox using a USB cable or a network connection. You can use software like Xplorer360 or FTP clients to access your Xbox file system and copy the game file to a folder of your choice.
    6. -
    7. Launch the game using an Xbox emulator or a modded Xbox. You will need an emulator like CXBX-Reloaded or XQEMU to run the game on your PC, or a modded Xbox with a custom dashboard like UnleashX or XBMC to run it on your console. You may also need to patch the game file with tools like XBE Patcher or XISO Extractor to make it compatible with your emulator or modded Xbox.
    8. -
    9. Enjoy playing Dead Or Alive Xtreme Beach Volleyball.iso on your Xbox. You can choose from eight different female characters and compete in various beach activities like volleyball, pool hopping, casino games, and more. You can also customize your character's appearance, outfits, accessories, and relationships with other characters.
    10. -
    -

    Dead Or Alive Xtreme Beach Volleyball.iso is a fun and addictive game that will keep you entertained for hours. Whether you want to relax on the beach, challenge yourself in sports, or interact with beautiful women, this game has something for everyone. Download it today and experience the ultimate beach vacation on your Xbox.

    -

    Dead Or Alive Xtreme Beach Volleyball.iso


    Download Zip 🆗 https://gohhs.com/2uEyJe



    - -

    What Makes Dead Or Alive Xtreme Beach Volleyball.iso So Popular?

    -

    Dead Or Alive Xtreme Beach Volleyball.iso is not just a simple sports game. It is a spin-off of the Dead Or Alive fighting series, which is known for its realistic and fluid combat system, as well as its attractive and charismatic characters. The game takes these elements and combines them with a tropical island setting, where the characters can relax and have fun in various activities. The game also features a high level of interactivity and customization, allowing the player to influence the character's appearance, personality, and relationships.

    -

    One of the main attractions of the game is the volleyball mode, where the player can team up with another character and compete against other teams in matches. The volleyball gameplay is easy to learn but hard to master, requiring timing, strategy, and teamwork. The player can also earn money by winning matches and use it to buy new outfits, accessories, gifts, and more. The game offers a wide range of clothing options for each character, from bikinis and swimsuits to casual and formal wear. The player can also change the character's hairstyle, makeup, nails, and tan.

    -

    Another attraction of the game is the relationship system, where the player can interact with other characters and influence their friendship or romance levels. The player can choose to be friendly or flirtatious with other characters, give them gifts, invite them to play games, or even reject them. The character's reactions will depend on their personality, preferences, and mood. The player can also unlock special scenes and events with certain characters if they reach a high enough relationship level.

    -

    Dead Or Alive Xtreme Beach Volleyball.iso is a game that appeals to a wide range of audiences, from fans of the Dead Or Alive series to casual gamers who enjoy beach-themed games. The game offers a lot of content and replay value, as well as stunning graphics and sound effects that create an immersive and realistic experience. The game is also suitable for both single-player and multiplayer modes, as the player can play alone or with a friend online or offline. Dead Or Alive Xtreme Beach Volleyball.iso is a game that will make you feel like you are on a tropical paradise with your favorite characters.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Download [CRACKED] Oggy And The Cockroaches Episodes In Hindi Torrent 72019.md b/spaces/scedlatioru/img-to-music/example/Download [CRACKED] Oggy And The Cockroaches Episodes In Hindi Torrent 72019.md deleted file mode 100644 index 7347bf33854072615b733ede13e162a83e6a1317..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download [CRACKED] Oggy And The Cockroaches Episodes In Hindi Torrent 72019.md +++ /dev/null @@ -1,10 +0,0 @@ -

    download oggy and the cockroaches episodes in hindi torrent 72019


    Download File » https://gohhs.com/2uEAof



    -
-If you need help, you can ask in the comments section of our website by opening the link provided there.
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Shaun T Kids Fit Club Torrent ((INSTALL)).md b/spaces/scedlatioru/img-to-music/example/Shaun T Kids Fit Club Torrent ((INSTALL)).md deleted file mode 100644 index 9d2e9744688d6157a1441692e8623f98c811a668..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Shaun T Kids Fit Club Torrent ((INSTALL)).md +++ /dev/null @@ -1,7 +0,0 @@ -

    Shaun T Kids Fit Club Torrent


Download File: https://gohhs.com/2uEzAt



    -
-Feel the beat with Shaun T and Fit Kids while doing cool dance moves like Hustle, We're Cool, Body Jam, Wind & Toss, Bounce and Power Up, Swipe, and Basketball. You're at your best when you're fit: Shaun T and Fit Kids have teamed up with longtime friend, choreographer, and trainer Shankara to create a series of inspiring, motivating dance workouts you can do before, during, and after your main workout.
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Ultimate Spiderman Pc Game Free Download Full Version.md b/spaces/scedlatioru/img-to-music/example/Ultimate Spiderman Pc Game Free Download Full Version.md deleted file mode 100644 index 0b9f13fd23bac135ade6aff81bad8fa5a76b9f3d..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Ultimate Spiderman Pc Game Free Download Full Version.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

While the story isn't too bad, the dialogue still reeks of a post-Spider-Man 3 mentality. Take, for instance, one of the more ham-fisted lines: "I've never made Spider-Man a bad guy, then didn't realize they were bad guys." To be honest, I'm not sure which is worse, the glaring dialogue problems or the super-powered foes. I should also point out that some of the game's biggest problems, like the my-gear-is-a-jumping-jack-and-you-can-jump-pretty-high-and-hit-everything focus testing, are a pain to iron out. There's no free-roaming mode, at least not yet, and that's a shame, because it's about time the web-head got a chance to properly swing around and see the city. Overall, though, the game is a worthy addition to Sony's digital library. Maybe then they can put a few extra bucks into helping out CyberConnect2 with the next installment of the Final Fantasy series.

    -

    Ultimate Spiderman Pc Game Free Download Full Version


    Download File ••• https://gohhs.com/2uEzjT



    -

Unfortunately, there's no free-roaming mode in Ultimate Spider-Man, and many of the free-swinging areas are bland to look at. There's also an annoying, forward-looking camera that makes your swinging feel clumsy and bouncy, as if you're playing a generic superhero game rather than one based on the Marvel universe.

    -

Taking damage in a game based on the Ultimate universe isn't terribly costly. The only way you can actually die is if you get hit by a car. In that respect, the game is pretty much like Spider-Man 3. And while there are new foes for Spidey to fight, like Electro and Sandman, most of them feel exactly the same. Then there's the issue of switching between Spider-Man and Venom, which quickly gets boring.

    -
    -
    \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet2/utils/build_dataclass.py b/spaces/segments-tobias/conex/espnet2/utils/build_dataclass.py deleted file mode 100644 index 6675c99a014d31ee85d13dc3c4e553979c597b38..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/utils/build_dataclass.py +++ /dev/null @@ -1,17 +0,0 @@ -import argparse -import dataclasses - -from typeguard import check_type - - -def build_dataclass(dataclass, args: argparse.Namespace): - """Helper function to build dataclass from 'args'.""" - kwargs = {} - for field in dataclasses.fields(dataclass): - if not hasattr(args, field.name): - raise ValueError( - f"args doesn't have {field.name}. You need to set it to ArgumentsParser" - ) - check_type(field.name, getattr(args, field.name), field.type) - kwargs[field.name] = getattr(args, field.name) - return dataclass(**kwargs) diff --git a/spaces/seki/sk/app.py b/spaces/seki/sk/app.py deleted file mode 100644 index 3bcc829deb04b22817ec57cc3c1cf8aa3404d8d0..0000000000000000000000000000000000000000 --- a/spaces/seki/sk/app.py +++ /dev/null @@ -1,10 +0,0 @@ -import gradio as gr -title = "Finbert-tone" -description = "Gradio Demo for Finbert-tone, finbert-tone model is the FinBERT model fine-tuned on 10,000 manually annotated (positive, negative, neutral) sentences from analyst reports. This model achieves superior performance on financial tone analysis task. to use it, simply add your text, or click one of the examples to load them. Read more at the links below." - -article = "

    Huggingface Model

    " - -examples = [ - ["growth is strong and we have plenty of liquidity"] -] -gr.Interface.load("huggingface/yiyanghkust/finbert-tone",title=title,description=description,article=article, examples=examples,enable_queue=True).launch() \ No newline at end of file diff --git a/spaces/shi-labs/FcF-Inpainting/README.md b/spaces/shi-labs/FcF-Inpainting/README.md deleted file mode 100644 index ddebe170f500fef97a5c8d2dcdcdf3b08d4fcfb7..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: FcF-Inpainting -emoji: 🪄 ✨ ✨ -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 -python_version: 3.7.11 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/simonduerr/ProteinMPNNESM/ProteinMPNN/colab_notebooks/README.md b/spaces/simonduerr/ProteinMPNNESM/ProteinMPNN/colab_notebooks/README.md deleted file mode 100644 index 894645f04740b0dc56805ab58ad2df0556fe29a2..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/ProteinMPNNESM/ProteinMPNN/colab_notebooks/README.md +++ /dev/null @@ -1 +0,0 @@ -Open In Colab diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/BlackDoe Music Download - Listen to the Hottest Tracks from the French Artist.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/BlackDoe Music Download - Listen to the Hottest Tracks from the French Artist.md deleted file mode 100644 index 0b39a4e7598848874bee83a2534595812d052821..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/BlackDoe Music Download - Listen to the Hottest Tracks from the French Artist.md +++ /dev/null @@ -1,83 +0,0 @@ - -

    Download BlackDoe: How to Enjoy the Music of a Rising Star

    -

    If you are looking for some fresh and exciting music to spice up your playlist, you should definitely check out BlackDoe. He is a Paris-based electronic music producer who has been making waves in the music scene with his mesmerizing melodies and surprising rhythms. In this article, we will tell you more about who he is, why you should download his music, and how you can do it.

    -

    Who is BlackDoe?

    -

    BlackDoe is not just a catchy name, but a reflection of his personality and style. He likes to keep a low profile and let his music speak for itself. He is also inspired by the black doe, a rare and mysterious animal that symbolizes grace and elegance.

    -

    download blackdoe


    Download File ✏ ✏ ✏ https://ssurll.com/2uNUlt



    -

    A Paris-based electronic music producer

    -

    BlackDoe was born and raised in Paris, France, where he developed his passion for music at an early age. He started making beats on his computer when he was 15 years old, and soon discovered his talent for creating catchy and original tunes. He is influenced by the Caribbean and hip-hop scenes, as well as by genres like trap, R&B, soul, and pop.

    -

    A versatile and talented artist

    -

    BlackDoe is not afraid to experiment with different sounds and styles. He can switch from upbeat and energetic tracks to smooth and relaxing ones, depending on his mood and inspiration. He can also sing and rap in both French and English, adding more flavor and diversity to his music. He has released several singles on labels like Kitsuné, Nowadays Records, and Darker Than Wax, showcasing his versatility and talent.

    -

    A collaborator with Korean musicians

    -

    One of the most interesting aspects of BlackDoe's music is his collaboration with Korean artists. He has a fascination with Korean culture and language, and he has been working with some of the most talented musicians from Korea. His debut EP "Pastel", which was released in 2022, features four songs with Korean singers like oceanfromtheblue, SUMIN, SOLE, and Jeebanoff. He also produced tracks for artists like Tayc, Dadju, Lefa, Yseult, Abou Tall, Barack Adama, and Luni.

    -


    -

    Why should you download BlackDoe's music?

    -

    Now that you know more about who BlackDoe is, you might be wondering why you should download his music. Here are some of the reasons why you should give him a listen:

    -

    To discover his unique and captivating sound

    -

    BlackDoe's music is unlike anything you have heard before. He combines elements from different genres and cultures to create a sound that is both familiar and fresh. His music is full of surprises and twists that will keep you hooked and curious. His melodies are catchy and memorable, while his rhythms are dynamic and groovy. His music will make you want to dance, chill, or vibe along.

    -

    To support his debut EP "Pastel"

    -

    BlackDoe's debut EP "Pastel" is a masterpiece that deserves your attention and support. It is a collection of four songs that showcase his musical vision and creativity. Each song has a different color and mood that reflects his personality and emotions. The EP also features amazing vocals from Korean singers who complement his production perfectly. The EP is available on all streaming platforms and online stores.

    -

    To appreciate his diverse influences and styles

    BlackDoe's music is a reflection of his diverse influences and styles. He draws inspiration from various sources, such as his Caribbean roots, his French upbringing, his love for Korean culture, and his passion for electronic music. He blends these elements together to create a unique and captivating sound that appeals to a wide audience. He is not afraid to explore different genres and moods, from trap to R&B, from soul to pop. He is always looking for new ways to express himself and his artistry.

    -

    How can you download BlackDoe's music?

    -

    If you are interested in downloading BlackDoe's music, you have several options to choose from. You can access his music from different platforms and devices, depending on your preference and convenience. Here are some of the ways you can download his music:

    -

    From streaming platforms like Spotify, Apple Music, and YouTube Music

    -

    One of the easiest and most popular ways to download BlackDoe's music is to use streaming platforms like Spotify, Apple Music, and YouTube Music. These platforms allow you to listen to his music online or offline, as well as to create playlists and discover new songs. You can also follow him on these platforms and get notified of his latest releases and updates. To download his music from these platforms, you need to have an account and a subscription plan.

    -

    From online stores like iTunes, Amazon, and Google Play

    -

    Another way to download BlackDoe's music is to buy it from online stores like iTunes, Amazon, and Google Play. These stores allow you to purchase his music individually or as an album, as well as to download it to your computer or mobile device. You can also rate and review his music on these stores and share your feedback with other users. To download his music from these stores, you need to have an account and a payment method.

    -

    From his official website and social media accounts

    -

    A third way to download BlackDoe's music is to visit his official website and social media accounts. His website contains information about him, his music, his events, and his contact details. You can also sign up for his newsletter and get exclusive content and offers. His social media accounts include Instagram, Twitter, Facebook, SoundCloud, and YouTube. You can follow him on these platforms and interact with him and his fans. You can also find links to download his music from these platforms or directly from his website.

    -

    Conclusion

    -

    In conclusion, BlackDoe is a rising star in the electronic music scene who deserves your attention and support. He is a Paris-based producer who creates unique and captivating tunes that blend different genres and cultures. He is also a collaborator with Korean artists who adds more flavor and diversity to his music. You should download his music to discover his sound, support his debut EP "Pastel", and appreciate his influences and styles. You can download his music from streaming platforms, online stores, or his website and social media accounts. Don't miss this opportunity to enjoy the music of a talented and versatile artist.

    -

    Frequently Asked Questions

    -

    Here are some of the frequently asked questions about BlackDoe and his music:

    -

    Q: What does BlackDoe mean?

    -

    A: BlackDoe is a name that reflects his personality and style. He likes to keep a low profile and let his music speak for itself. He is also inspired by the black doe, a rare and mysterious animal that symbolizes grace and elegance.

    -

    Q: When did BlackDoe start making music?

    -

    A: BlackDoe started making beats on his computer when he was 15 years old. He soon discovered his talent for creating catchy and original tunes.

    -

    Q: What are some of the labels that BlackDoe has worked with?

    -

    A: BlackDoe has released several singles on labels like Kitsuné, Nowadays Records, and Darker Than Wax.

    -

    Q: Who are some of the Korean artists that BlackDoe has collaborated with?

    -

    A: BlackDoe has collaborated with Korean singers like oceanfromtheblue, SUMIN, SOLE, and Jeebanoff on his debut EP "Pastel". He has also produced tracks for artists like Tayc, Dadju, Lefa, Yseult, Abou Tall, Barack Adama, and Luni.

    -

    Q: How can I contact BlackDoe or book him for an event?

    -

    A: You can contact BlackDoe or book him for an event by visiting his official website or sending him an email at blackdoe.contact@gmail.com.

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cheat One Piece Romance Dawn PSP English Patch Download Now and Play with Your Favorite Characters.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cheat One Piece Romance Dawn PSP English Patch Download Now and Play with Your Favorite Characters.md deleted file mode 100644 index 5231866194d6d362179f22c631decf370e0795f2..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Cheat One Piece Romance Dawn PSP English Patch Download Now and Play with Your Favorite Characters.md +++ /dev/null @@ -1,110 +0,0 @@ - -

    How to Download Cheat One Piece Romance Dawn PSP English Patch

    -

    If you are a fan of One Piece, you might have heard of One Piece Romance Dawn, a role-playing video game based on the manga and anime series. The game was released in Japan and other parts of Asia for the PlayStation Portable (PSP) in 2012, but it never got an official English localization. However, thanks to some dedicated fans, you can now enjoy the game in English with a fan-made patch. Moreover, you can also use cheat codes to enhance your gaming experience and unlock some hidden features. In this article, we will show you how to download cheat one piece romance dawn psp english patch and how to use it on your device.

    -

    download cheat one piece romance dawn psp english patch


Download File: https://ssurll.com/2uNSfX



    -

    Introduction

    -

    One Piece Romance Dawn is a game that lets you relive the adventures of Monkey D. Luffy and his crew from the beginning of the story. You can explore various islands, fight enemies, collect items, upgrade your skills, and interact with other characters. The game features a turn-based combat system, a skill tree system, a crafting system, and a Grand Collection mode where you can view your achievements and collectibles.

    -

    However, as mentioned earlier, the game was never officially released in English, so if you don't understand Japanese, you might have a hard time following the story and the gameplay. That's why an English patch was created by some fans who translated the game's text and menus into English. The patch is not perfect, but it makes the game playable and enjoyable for non-Japanese speakers.

    -

    Another reason why you might want to download cheat one piece romance dawn psp english patch is to use cheat codes. Cheat codes are special commands that can modify the game's behavior and give you some advantages or disadvantages. For example, you can use cheat codes to increase your health, money, experience, or items, or to unlock new characters, skills, or modes. Cheat codes can make the game easier or harder, depending on how you use them.

    -

    Using cheat codes can be fun and rewarding, but it can also ruin the game's balance and challenge. Some cheat codes might also cause glitches or errors in the game, so use them at your own risk. Always make a backup of your save data before using cheat codes, and don't use them online or in multiplayer mode.
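On a real PSP, backing up your saves just means copying the save folder off the memory stick before you turn any cheats on. The sketch below shows the standard memory stick layout; PSP save folders are named after the game ID plus a suffix, so the exact folder name here is only a placeholder:

```
MemoryStick:/
└── PSP/
    └── SAVEDATA/
        └── NPJH50679xxxx/   <- this game's save folder (suffix varies by save slot)
```

Copy that folder (or the whole SAVEDATA directory) to your computer, and you can restore it later if a cheat corrupts your progress.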

    -


    -

    How to Download the English Patch

    -

    If you want to play One Piece Romance Dawn in English, you will need to download and install the English patch on your PSP. Here are the steps to do so:

    -
      -
    1. First, you will need a copy of the original Japanese version of One Piece Romance Dawn for PSP. You can buy it online or from a local store that sells imported games.
    2. Next, you will need a PSP that can run custom firmware (CFW). CFW is a modified version of the official PSP firmware that allows you to run homebrew applications and games that are not officially supported by Sony. You can find tutorials on how to install CFW on your PSP online, but be careful as this might void your warranty or damage your device if done incorrectly.
    3. Then, you will need to download the English patch file from the internet. You can find it on various websites and forums that host fan-made translations of games. One of the most popular sources is [Nicoblog], where you can find the patch file along with instructions and screenshots.
    4. After downloading the patch file, you will need to extract it using a program like WinRAR or 7-Zip. You will get a folder named "NPJH50679" that contains several files and subfolders.
    5. Next, you will need to copy the folder "NPJH50679" to the "PSP/GAME" directory on your PSP's memory stick. You can do this by connecting your PSP to your computer via a USB cable and using a file manager program like Windows Explorer or Finder (a short Python sketch after this list shows one way to script the copy).
    6. Finally, you will need to run the game on your PSP. To do this, go to the "Game" menu on your PSP's home screen and select "Memory Stick". You should see an icon for One Piece Romance Dawn with an English title. Select it and press "X" to start the game.
    -
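    If you would rather script the copy in step 5 than drag the folder around in a file manager, the sketch below shows one way to do it in Python. The memory-stick mount point is an assumption -- on Windows the PSP usually appears as a drive letter instead -- so adjust the path to match your system.

    ```python
    # Hedged sketch: copy the extracted English-patch folder onto the PSP memory stick.
    # "/Volumes/PSP" is an assumed mount point; replace it with wherever your PSP mounts.
    import shutil
    from pathlib import Path

    patch_folder = Path("NPJH50679")              # folder extracted from the patch archive
    memory_stick = Path("/Volumes/PSP")           # assumed mount point of the PSP over USB
    destination = memory_stick / "PSP" / "GAME" / patch_folder.name

    shutil.copytree(patch_folder, destination, dirs_exist_ok=True)
    print(f"Copied {patch_folder} -> {destination}")
    ```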

    To verify that the patch is working, you should see the game's menus and text in English. You can also check the game's version by pressing "Select" on the title screen. It should say "Version 1.01 (English Patch v1.0)".

    -

    How to Add Cheat Codes in PPSSPP

    -

    If you want to use cheat codes for One Piece Romance Dawn, you will need to use a PSP emulator called PPSSPP. PPSSPP is a program that allows you to play PSP games on your computer, smartphone, or tablet. It has many features and options, including the ability to enable cheats and use custom codes.

    -

    Here are the steps to download and install PPSSPP and add cheat codes for One Piece Romance Dawn:

    -
      -
    1. First, you will need to download PPSSPP from its official website. You can choose between different versions for Windows, Mac, Linux, Android, iOS, and other platforms. Make sure you download the latest stable version for your device.
    2. Next, you will need to install PPSSPP on your device. The installation process may vary depending on your device and operating system, but it is usually simple and straightforward. Just follow the instructions on the screen and accept the terms and conditions.
    3. Then, you will need to enable cheats in PPSSPP settings. To do this, open PPSSPP and go to "Settings". Then go to "System" and scroll down to "Enable Cheats". Make sure it is checked and then go back to the main menu.
    4. Next, you will need to find cheat codes for One Piece Romance Dawn. You can find cheat codes online on various websites and forums that host them. One of the most popular sources is [CheatCC], where you can find a list of cheat codes for One Piece Romance Dawn along with descriptions and instructions.
    5. After finding the cheat codes you want to use, you will need a cheat file for the game. On most recent PPSSPP builds this is a text file named after the game's ID (here "NPJH50679.ini") inside PPSSPP's "PSP/Cheats" folder, and PPSSPP usually creates an empty one automatically the first time you run the game with cheats enabled; some older guides instead use a "cheats.ini" placed next to the game's ISO. The ISO file is an image file that contains the game's data and can be played by PPSSPP. You can find your game's ISO file in the "PSP/GAME/NPJH50679" folder on your PSP's memory stick or in any other location where you copied it.
    6. Next, you will need to open the cheat file with a text editor program like Notepad or TextEdit and copy the cheat codes you want to use into it. The file should start with "_S NPJH-50679" (the game's ID) and "_G One Piece Romance Dawn [JPN]" (the game's name); each cheat is then a "_C0" line giving its name, followed by one or more "_L" lines containing the actual code. You can also add comments or notes by starting a line with "//". For example:
    -
    _S NPJH-50679
    _G One Piece Romance Dawn [JPN]
    //Max Money
    _C0 Max Money
    _L 0x20318A30 0x05F5E0FF
    //Max SP
    _C0 Max SP
    _L 0x20318A38 0x0001869F
    //Infinite HP
    _C0 Infinite HP
    _L 0x20318A34 0x0001869F
    //Unlock All Characters
    _C0 Unlock All Characters
    _L 0x000000FF 0x00000000
    _L 0x00000100 0x00000000
    _L 0x00000101 0x00000000
    _L 0x00000102 0x00000000
    _L 0x00000103 0x00000000
    _L 0x00000104 0x00000000
    _L 0x00000105 0x00000000
    _L 0x00000106 0x00000000
    _L 0x00000107 0x00000000
    _L 0x20002008 0x24020001
    //And so on...
    -

    Save the text file and close it.
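    If you are on a desktop build of PPSSPP, you can also create that file from a small script. The sketch below is only illustrative: both the memstick location (Documents/PPSSPP on a typical Windows install) and the practice of naming the file after the game ID are assumptions to check against your own setup.

    ```python
    # Hedged sketch: write a CWCheat-style cheat file where PPSSPP usually looks for it.
    # The cheats directory below is an assumption; PPSSPP installs differ between platforms.
    from pathlib import Path

    cheat_lines = [
        "_S NPJH-50679",
        "_G One Piece Romance Dawn [JPN]",
        "_C0 Max Money",
        "_L 0x20318A30 0x05F5E0FF",
        "_C0 Infinite HP",
        "_L 0x20318A34 0x0001869F",
    ]

    cheats_dir = Path.home() / "Documents" / "PPSSPP" / "PSP" / "Cheats"   # assumed location
    cheats_dir.mkdir(parents=True, exist_ok=True)
    cheat_file = cheats_dir / "NPJH50679.ini"
    cheat_file.write_text("\n".join(cheat_lines) + "\n")
    print("Wrote", cheat_file)
    ```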

    -
      -
    7. Finally, you will need to run the game on PPSSPP and activate the cheat codes you want to use. To do this, open PPSSPP and load your game's ISO file. Then press "Esc" to pause the game and go to "Cheats". You should see a list of the cheat codes that you added to the cheat file. Check the ones you want to use and then resume the game. You should see the effects of the cheat codes in the game.
    -

    How to Cheat One Piece Romance Dawn PSP English Patch

    -

    Now that you have installed the English patch and added cheat codes in PPSSPP, you can start cheating in the game and have some fun. Here are some of the cheat codes available and how to use them in the game:

    -
      -
    • Max Money: This cheat code will give you the maximum amount of money (999,999,999 Beli) in the game. You can use this money to buy items, equipment, skills, and upgrades for your characters.
    • -
    • Max SP: This cheat code will give you the maximum amount of skill points (99,999) in the game. You can use these skill points to unlock and improve your skills in the skill tree system.
    • -
    • Infinite HP: This cheat code will make your health bar always full in the game. You will not lose any health when you take damage from enemies or traps. This will make you invincible and immune to death.
    • -
    • Unlock All Characters: This cheat code will unlock all the playable characters in the game. You can choose from over 30 characters, including Luffy, Zoro, Nami, Usopp, Sanji, Chopper, Robin, Franky, Brook, Ace, Sabo, Law, Hancock, Crocodile, Jinbe, and more. You can switch between characters during battles and explore different scenarios with them.
    • -
    • And so on: There are many other cheat codes available for One Piece Romance Dawn, such as max items, max stats, infinite items, infinite stamina, one hit kill, unlock all modes, unlock all costumes, unlock all movies, and more. You can find them online and add them to your text file as explained above. However, be careful not to use too many cheat codes at once or use cheat codes that might conflict with each other. This might cause the game to crash or freeze.
    • -
    -

    To use the cheat codes in the game, you just need to activate them in PPSSPP settings and then play the game normally. You will see the effects of the cheat codes in the game's menus and screens. You can also turn off or change the cheat codes anytime by pausing the game and going to "Cheats".

    -

    Conclusion

    -

    In this article, we have shown you how to download the One Piece Romance Dawn PSP English patch and how to use it on your device. We have also explained how to add cheat codes in PPSSPP and how to use them in the game. By following these steps, you can enjoy One Piece Romance Dawn in English with cheats and have a blast.

    -

    However, we also want to remind you that using cheats is not always a good idea. Cheats can ruin the game's balance and challenge, and make it less fun and rewarding. Cheats can also cause glitches or errors in the game, and make it unstable or unplayable. Cheats can also get you banned or penalized if you use them online or in multiplayer mode.

    -

    Therefore, we advise you to use cheats wisely and responsibly. Use them only for testing purposes or for having some fun once in a while. Don't use them all the time or for everything. Don't use them to spoil or disrespect other players or developers. And don't forget to backup your save data before using cheats.

    -

    We hope you found this article helpful and informative. If you have any questions or feedback about One Piece Romance Dawn or cheats, feel free to leave a comment below. We would love to hear from you. Thank you for reading and happy gaming!

    -

    FAQs

    -

    Here are some of the frequently asked questions about One Piece Romance Dawn and cheats:

    -

    Q1: Is the English patch legal and safe to use?

    -

    A1: The English patch is a fan-made project that is not affiliated with or endorsed by the official developers or publishers of One Piece Romance Dawn. It is a free and non-profit service that aims to provide an English translation for the game's fans. The patch is legal and safe to use as long as you own a legitimate copy of the original Japanese version of the game and do not distribute or sell the patch or the patched game. However, we cannot guarantee that the patch is 100% accurate, complete, or bug-free, so use it at your own discretion and risk.

    -

    Q2: Can I play online with the English patch and cheats?

    -

    A2: No, you cannot play online with the English patch and cheats. The English patch and cheats are only compatible with the offline mode of the game. If you try to play online with them, you might encounter errors, crashes, or bans. Moreover, playing online with cheats is unfair and disrespectful to other players who play legitimately. Therefore, we strongly advise you to avoid playing online with the English patch and cheats.

    -

    Q3: How can I update the game and the patch?

    -

    A3: The game and the patch are not officially supported or updated by the developers or publishers of One Piece Romance Dawn. The game and the patch are based on the original Japanese version of the game, which was released in 2012 and has not received any updates since then. The patch was created by fans who translated the game's text and menus into English, but they have not updated or improved the patch since its release in 2014. Therefore, there is no way to update the game and the patch, unless new versions of them are released by their respective creators in the future.

    -

    Q4: What are some of the best features of One Piece Romance Dawn?

    -

    A4: One Piece Romance Dawn is a game that offers many features and options for One Piece fans. Some of the best features of the game are:

    -
      -
    • The game covers the entire story of One Piece from the beginning to the New World arc, including some original scenarios and events that are not in the manga or anime.
    • -
    • The game features over 30 playable characters from One Piece, each with their own unique skills, abilities, and personalities. You can switch between characters during battles and explore different scenarios with them.
    • -
    • The game has a turn-based combat system that allows you to control your characters' actions and strategies. You can use various attacks, skills, items, and special moves to defeat your enemies.
    • -
    • The game has a skill tree system that allows you to customize and upgrade your characters' skills according to your preferences. You can unlock new skills, improve existing skills, or combine skills to create powerful effects.
    • -
    • The game has a crafting system that allows you to create and enhance your items, equipment, and weapons. You can collect materials from enemies, chests, shops, or quests, and use them to craft useful items.
    • -
    • The game has a Grand Collection mode that allows you to view your achievements and collectibles. You can see your progress in the story mode, your battle records, your character profiles, your item inventory, your skill tree, your gallery, your movies, and more.
    • -
    -

    Q5: Where can I find more information and resources about One Piece Romance Dawn?

    -

    A5: If you want to find more information and resources about One Piece Romance Dawn, you can visit some of these websites:

    -
      -
    • [One Piece Wiki]: This is a comprehensive wiki that covers everything about One Piece, including the manga, anime, games, characters, locations, events, etc.
    • -
    • [GameFAQs]: This is a popular website that provides guides, walkthroughs, tips, tricks, cheats, reviews, forums, and more for One Piece Romance Dawn and other games.
    • -
    • [YouTube]: This is a popular video-sharing platform that hosts many videos related to One Piece Romance Dawn and other games. You can watch gameplay videos, trailers, reviews, tutorials, cheats, and more.
    • -
    • [Reddit]: This is a popular social media platform that hosts many communities and discussions related to One Piece Romance Dawn and other games. You can join subreddits like r/OnePiece, r/PSP, r/PPSSPP, r/romhacking, and more to interact with other fans and gamers.
    • -
    -

    These are some of the websites that can help you learn more about One Piece Romance Dawn and cheats. However, there are many other websites and resources that you can find online by using a search engine like Bing. Just type in your keywords and see what you can find.

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Latest Version MOD APK Free Download Unlimited Money World Cup 2022 and More.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Latest Version MOD APK Free Download Unlimited Money World Cup 2022 and More.md deleted file mode 100644 index 00fc2d565352c3e77706d900bacd6511305e8742..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Latest Version MOD APK Free Download Unlimited Money World Cup 2022 and More.md +++ /dev/null @@ -1,151 +0,0 @@ - -

    FIFA Mobile mod apk unlimited money latest version free download

    -

    If you are a fan of soccer games and want to enjoy the ultimate soccer experience on your mobile device, then you should try FIFA Mobile. This game allows you to build your dream team of soccer stars, compete in various modes, and relive the world's greatest soccer tournament, the FIFA World Cup. But what if you want to have unlimited money, unlock all players, and access all features without spending a dime? Well, that's where FIFA Mobile mod apk comes in. In this article, we will tell you everything you need to know about FIFA Mobile mod apk, including its features, advantages, disadvantages, how to download and install it, and some tips and tricks for playing the game.

    -

    fifa mobile mod apk unlimited money latest version free download


    Download Ziphttps://ssurll.com/2uNTiP



    -

    What is FIFA Mobile?

    -

    FIFA Mobile is a soccer game developed by EA Sports for iOS and Android devices. It is the mobile version of the popular FIFA franchise, which features over 15,000 authentic soccer stars from over 600 teams across 30+ leagues. You can create your own Ultimate Team, customize your tactics, play in various modes, and experience realistic soccer simulation on your mobile device.

    -

    Features and gameplay of FIFA Mobile

    -

    FIFA Mobile has many features and gameplay modes that make it one of the best soccer games on mobile. Here are some of them:

    -
      -
    • Build your Ultimate Team: You can collect player items and put your favorite soccer stars to the test. You can level up your players, improve their stats and OVR, and create different lineups for different modes. You can also choose your set piece takers, captain, and substitutions.
    • -
    • Compete in various modes: You can play head-to-head against other players from all over the world in pvp modes, such as VS Attack, Head to Head, and Manager Mode. You can also play in AI matches, League vs League tournaments, events, campaigns, and more.
    • -
    • Relive the world's greatest soccer tournament: FIFA Mobile is the only licensed FIFA World Cup 2022 mobile game where you can replay the official tournament brackets with any of the 32 qualified nations. You can also rewrite history and take control of 15 non-qualified nations.
    • -
    • Experience realistic soccer simulation: FIFA Mobile features new, upgraded soccer stadiums, realistic stadium SFX, live on-field audio commentary, localized World Cup commentary, immersive next-level graphics up to 60 fps*, and authentic soccer action.
    • -
    -

    FIFA World Cup 2022 mode

    -

    FIFA World Cup 2022 mode is a special mode in FIFA Mobile where you can relive the world's greatest soccer tournament. You can unlock soccer stars from all 32 qualified national teams with official licenses, play in World Cup stadiums (Al Bayt and Lusail), use authentic World Cup national team kits and badges, and enjoy the official match ball. You can also play with localized World Cup commentary to bring the most immersive match atmosphere.

    -

    Soccer icons and heroes

    -

    FIFA Mobile features over 100 soccer icons and heroes from different eras and leagues. You can score big with world soccer icons like Paolo Maldini, Ronaldinho, Zinedine Zidane, David Beckham, Pele, Maradona, and more. You can also level up your dream team with soccer legends from over 30+ leagues, such as Lionel Messi, Cristiano Ronaldo, Neymar Jr., Kylian Mbappe, Mohamed Salah, Robert Lewandowski, and more. You can also unlock soccer heroes from different regions, such as Asia, Africa, Europe, and America.

    -

    -

    Immersive next-level soccer simulation

    -

    FIFA Mobile features immersive next-level graphics up to 60 fps* that bring the game to life on your mobile device. You can enjoy stunning visuals, realistic animations, dynamic lighting, and shadows. You can also experience the thrill of playing in new, upgraded soccer stadiums with realistic stadium SFX and live on-field audio commentary. FIFA Mobile is the closest thing to playing real soccer on your mobile device.

    -

    Manager mode

    -

    Manager mode is a new mode in FIFA Mobile where you can take charge of your favorite soccer club and lead them to glory. You can choose from over 600 clubs across 30+ leagues and customize your team with official kits, badges, and players. You can also manage your club's finances, transfers, contracts, training, tactics, and more. You can compete in various competitions, such as domestic leagues, cups, continental tournaments, and more. You can also challenge other managers from all over the world in pvp matches and climb the leaderboards.

    -

    What is FIFA Mobile mod apk?

    -

    FIFA Mobile mod apk is a modified version of FIFA Mobile that gives you unlimited money, unlocks all players and features, and removes ads and restrictions. With FIFA Mobile mod apk, you can enjoy the game without spending any real money or waiting for anything. You can build your dream team of soccer stars, play in any mode you want, and have fun with the game.

    -

    Advantages of FIFA Mobile mod apk

    -

    FIFA Mobile mod apk has many advantages that make it a better option than the original game. Here are some of them:

    -
      -
    • Unlimited money: You can get unlimited coins and gems that you can use to buy player items, upgrade your team, unlock features, and more. You don't have to worry about running out of money or grinding for hours to earn them.
    • -
    • Unlock all players and features: You can unlock all the players and features in the game without any restrictions. You can access all the soccer icons, heroes, legends, World Cup teams, stadiums, modes, events, campaigns, and more. You don't have to complete any tasks or achievements to unlock them.
    • -
    • Remove ads and restrictions: You can remove all the annoying ads and pop-ups that interrupt your gameplay. You can also remove all the limitations and requirements that prevent you from playing the game freely. You don't have to watch ads or wait for stamina or energy to play the game.
    • -
    -

    Disadvantages of FIFA Mobile mod apk

    -

    FIFA Mobile mod apk also has some disadvantages that you should be aware of before downloading it. Here are some of them:

    -
      -
    • Risk of getting banned: You may get banned from the game if you use FIFA Mobile mod apk online. The game developers may detect your modded version and suspend your account or block your device. You may lose all your progress and data if you get banned.
    • -
    • Risk of getting viruses or malware: You may get viruses or malware on your device if you download FIFA Mobile mod apk from untrusted sources. The modded version may contain harmful files or codes that can damage your device or steal your personal information. You should always download FIFA Mobile mod apk from trusted sources and scan it with antivirus software before installing it.
    • -
    • Lack of updates and support: You may not get the latest updates and features of FIFA Mobile if you use FIFA Mobile mod apk. The modded version may not be compatible with the original game or may not work properly after an update. You may also not get any support or help from the game developers or community if you encounter any problems or issues with FIFA Mobile mod apk.
    • -
    -

    How to download and install FIFA Mobile mod apk?

    -

    If you want to download and install FIFA Mobile mod apk on your device, you need to follow some simple steps. Here are they:

    -

    System requirements and compatibility

    -

    Before downloading FIFA Mobile mod apk, you need to make sure that your device meets the minimum system requirements and is compatible with the game. Here are the system requirements and compatibility for FIFA Mobile mod apk:

    | Requirement | Minimum |
    | --- | --- |
    | OS | Android 5.0 or higher |
    | RAM | 2 GB or higher |
    | Storage | 1 GB or higher |
    | Internet connection | Required for online modes |
    -

    You also need to enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.

    -

    Steps to download and install

    -

    After checking the system requirements and compatibility, you can follow these steps to download and install FIFA Mobile mod apk on your device:

    -
      -
    1. Download FIFA Mobile mod apk from a trusted source. You can use the link below to download the latest version of FIFA Mobile mod apk for free (see the checksum sketch after this list for one way to verify the file you downloaded).
    2. Locate the downloaded file on your device and tap on it to start the installation process. You may need to grant some permissions to the app.
    3. Wait for the installation to complete and then launch the game. You may need to allow some additional permissions to the game.
    4. Enjoy FIFA Mobile mod apk with unlimited money, unlocked players, and features.
    -
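    Because modded APKs are a common way to spread malware, it is worth comparing the file you downloaded against a checksum published by the source before installing it, as mentioned in step 1. The sketch below shows one way to do that in Python; the file name and expected hash are placeholders, not real values.

    ```python
    # Hedged sketch: verify a downloaded APK against a published SHA-256 checksum.
    # Both the file name and the expected hash are placeholders for illustration.
    import hashlib
    from pathlib import Path

    apk_path = Path("fifa-mobile-mod.apk")        # placeholder file name
    expected_sha256 = "paste-the-published-hash-here"

    digest = hashlib.sha256(apk_path.read_bytes()).hexdigest()
    if digest == expected_sha256:
        print("Checksum matches the published value.")
    else:
        print(f"Checksum mismatch! Got {digest} -- do not install this file.")
    ```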

    Download link: FIFA Mobile mod apk unlimited money latest version free download

    -

    Tips and tricks for playing FIFA Mobile

    -

    FIFA Mobile is a fun and addictive game that can keep you entertained for hours. However, if you want to master the game and become a soccer legend, you need to know some tips and tricks that can help you improve your skills and performance. Here are some tips and tricks for playing FIFA Mobile:

    -

    Attack mode tips

    -

    Attack mode is a pvp mode where you can challenge other players in turn-based matches. You can score goals, earn fans, and climb the leaderboards in this mode. Here are some tips for playing attack mode:

    -
      -
    • Choose your formation wisely: You can choose from different formations that suit your playstyle and strategy. You can also change your formation during the match depending on the situation. You should choose a formation that balances your attack and defense, as well as gives you more chances to score goals.
    • -
    • Use skill moves: You can use skill moves to dribble past defenders, create space, and score goals. You can swipe on the screen or use the virtual joystick to perform skill moves. You should learn how to use different skill moves, such as roulette, rainbow flick, heel-to-heel, etc., and know when to use them.
    • -
    • Pass and shoot smartly: You can pass and shoot by tapping on the screen or using the virtual buttons. You should pass and shoot smartly, depending on the position of your players, the defenders, and the goalkeeper. You should avoid passing or shooting when there is no clear opportunity or when there is a high risk of losing possession or missing the target.
    • -
    • Use boosters: You can use boosters to enhance your team's performance in attack mode. You can get boosters from events, campaigns, or packs. You can use boosters such as speed boost, shooting boost, dribbling boost, etc., to give your players an edge over your opponents.
    • -
    -

    Team management tips

    -

    Team management is an important aspect of FIFA Mobile where you can build your dream team of soccer stars. You can collect player items, level up your players, improve their stats and OVR, and create different lineups for different modes. Here are some tips for team management:

    -
      -
    • Upgrade your players: You can upgrade your players by using training XP or player items. You can get training XP or player items from events, campaigns, packs, or rewards. You should upgrade your players regularly to improve their stats and OVR, as well as unlock new skill boosts and chemistry links.
    • -
    • Use skill boosts: You can use skill boosts to increase specific stats of your players. You can get skill boosts from events, campaigns, packs, or rewards. You should use skill boosts that match your players' positions and roles, as well as complement their strengths and weaknesses.
    • -
    • Use chemistry links: You can use chemistry links to improve the overall performance of your team. Chemistry links are based on factors such as league, nation, club, etc., that connect your players. You should use chemistry links that maximize your team's chemistry rating, as well as suit your formation and strategy.
    • -
    • Create different lineups: You can create different lineups for different modes in FIFA Mobile. You can have up to five active lineups at a time. You should create different lineups that optimize your team's performance in each mode, such as attack mode, head-to-head mode, manager mode, etc.
    • -
    -

    Stamina tips

    -

    Stamina is a resource that you need to play matches and events in FIFA Mobile. You can get stamina from rewards, packs, or by waiting for it to regenerate over time. You can also buy stamina with gems or real money. Here are some tips for managing your stamina:

    -
      -
    • Use your stamina wisely: You should use your stamina wisely and prioritize the matches and events that give you the best rewards, such as coins, gems, players, skill boosts, etc. You should also avoid wasting your stamina on matches and events that are too easy or too hard for your team.
    • -
    • Plan your schedule: You should plan your schedule and play FIFA Mobile when you have enough stamina to complete the matches and events that you want. You should also check the timers and deadlines of the matches and events and play them before they expire.
    • -
    • Save your stamina: You should save your stamina for special occasions, such as limited-time events, campaigns, tournaments, etc. You should also save your stamina for the modes that require more stamina, such as head-to-head mode, manager mode, etc.
    • -
    • Recover your stamina: You should recover your stamina by using rewards, packs, gems, or real money. You can also recover your stamina by watching ads or completing tasks or achievements. You should recover your stamina when you need it or when you have a good opportunity to use it.
    • -
    -

    Skill move tips

    -

    Skill moves are special moves that you can use to dribble past defenders, create space, and score goals in FIFA Mobile. You can swipe on the screen or use the virtual joystick to perform skill moves. You can also customize your skill moves in the settings menu. Here are some tips for using skill moves:

    -
      -
    • Learn different skill moves: You should learn how to use different skill moves, such as roulette, rainbow flick, heel-to-heel, etc., and know their advantages and disadvantages. You should also learn which skill moves are best suited for different situations and players.
    • -
    • Use skill moves at the right time: You should use skill moves at the right time and place to maximize their effectiveness. You should use skill moves when you have enough space and speed to execute them. You should also use skill moves when you face one or two defenders, not a crowd of them.
    • -
    • Use skill moves sparingly: You should use skill moves sparingly and not overuse them. You should use skill moves only when you need them or when you have a clear opportunity to score or assist. You should also avoid using skill moves that are too flashy or risky.
    • -
    • Use skill moves creatively: You should use skill moves creatively and unpredictably to surprise and confuse your opponents. You should use different skill moves in different combinations and directions. You should also use skill moves that match your players' attributes and styles.
    • -
    -

    Conclusion

    -

    FIFA Mobile is a great game for soccer fans who want to enjoy the ultimate soccer experience on their mobile devices. It has many features and modes that make it fun and addictive. However, if you want to have unlimited money, unlock all players and features, and remove ads and restrictions, you can try FIFA Mobile mod apk. This is a modified version of FIFA Mobile that gives you all these benefits for free. However, you should also be aware of the disadvantages and risks of using FIFA Mobile mod apk, such as getting banned, getting viruses or malware, or missing updates and support. Therefore, you should download FIFA Mobile mod apk from trusted sources and use it at your own risk.

    -

    FAQs

    -

    Here are some frequently asked questions about FIFA Mobile mod apk:

    -
      -
    1. Is FIFA Mobile mod apk safe?: FIFA Mobile mod apk is not completely safe as it may contain viruses or malware that can harm your device or steal your personal information. It may also get detected by the game developers and result in a ban or suspension of your account or device. Therefore, you should always download FIFA Mobile mod apk from trusted sources and scan it with antivirus software before installing it.
    2. Is FIFA Mobile mod apk legal?: FIFA Mobile mod apk is not legal as it violates the terms of service and policies of EA Sports and FIFA Mobile. It also infringes the intellectual property rights of the game developers and publishers. Therefore, you may face legal consequences if you use FIFA Mobile mod apk online.
    3. Is FIFA Mobile mod apk online?: FIFA Mobile mod apk is online as it requires an internet connection to play some modes and features of the game. However, you may not be able to play online with other players or access some features of the game if you use FIFA Mobile mod apk. You may also get banned or suspended if you use FIFA Mobile mod apk online.
    4. How to update FIFA Mobile mod apk?: You can update FIFA Mobile mod apk by downloading the latest version of the modded file from a trusted source and installing it on your device. However, you may not be able to update FIFA Mobile mod apk if the original game has a major update or change that makes the modded version incompatible or obsolete. You may also lose your progress and data if you update FIFA Mobile mod apk.
    5. How to uninstall FIFA Mobile mod apk?: You can uninstall FIFA Mobile mod apk by deleting the modded file from your device and clearing the cache and data of the game. You can also uninstall FIFA Mobile mod apk by installing the original game from the official app store. However, you may lose your progress and data if you uninstall FIFA Mobile mod apk.

    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Final Draft 9 The Best Screenwriting Software for Windows and Mac.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Final Draft 9 The Best Screenwriting Software for Windows and Mac.md deleted file mode 100644 index e726df04d169fe254a57fd8ae1214d8aae7d0f0e..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Final Draft 9 The Best Screenwriting Software for Windows and Mac.md +++ /dev/null @@ -1,121 +0,0 @@ - -

    Final Draft 9 Download: How to Get the Industry Standard Screenwriting Software

    -

    If you are a screenwriter or aspiring to be one, you probably have heard of Final Draft, the number-one selling screenwriting software in the world. But how do you get it? And how do you use it? In this article, we will show you how to download, install, activate, and use Final Draft 9, the latest version of this powerful software. We will also tell you why you need it, how much it costs, and how to collaborate and share your screenplay with others.

    -

    final draft 9 download


    Download File https://ssurll.com/2uNW6A



    -

    What is Final Draft 9 and why do you need it?

    -

    Final Draft 9 is a professional screenwriting software that formats your script to industry standards. It also has powerful features to help you write, edit, and analyze your screenplay. Final Draft 9 is used by 95% of the entertainment industry and is compatible with Windows, Mac, and iOS devices.

    -

    Final Draft 9 is a professional screenwriting software that formats your script to industry standards

    -

    One of the main benefits of using Final Draft 9 is that it takes care of the formatting for you. You don't have to worry about margins, spacing, fonts, page numbers, or any other technical details. All you have to do is type your story and let Final Draft do the rest. Final Draft automatically paginates your script according to the Hollywood standard. It also recognizes different elements such as scene headings, action, character names, dialogue, transitions, and more. It also lets you customize your script format to suit your preferences or specific requirements.

    -

    Final Draft 9 has powerful features to help you write, edit, and analyze your screenplay

    -

    Another benefit of using Final Draft 9 is that it has many features that make writing easier and faster. For example, you can use the smart auto-suggestions feature to complete words, names, or phrases as you type. You can also use the spell-checker feature to correct typos and grammar errors. You can also use the character name swap feature to change the name of a character throughout your script with one click. You can also use the dialogue highlight feature to highlight a specific character's dialogue in color. You can also use the watermark feature to protect your script from unauthorized copying.

    -


    -

    Final Draft 9 also has features that help you edit and analyze your screenplay. For example, you can use the revision mode feature to track changes and mark them with different colors. You can also use the compare scripts feature to compare two drafts of your script and see the differences. You can also use the reports feature to generate various reports on your script, such as scene, character, location, dialogue, and more. You can also use the index cards feature to view your script as a series of cards that you can rearrange and edit.

    -

    Final Draft 9 is used by 95% of the entertainment industry and is compatible with Windows, Mac, and iOS devices

    -

    A final benefit of using Final Draft 9 is that it is the industry standard for screenwriting. It is used by most of the major studios, production companies, agents, managers, and writers in Hollywood and around the world. It is also compatible with other software and formats, such as Movie Magic Scheduling, Movie Magic Budgeting, PDF, RTF, FDX, and more. You can also use Final Draft 9 on different devices, such as Windows computers, Mac computers, iPads, iPhones, and iPods. You can sync your scripts across devices using Dropbox or iCloud.

    -

    How to download Final Draft 9 for free or at a discounted price?

    -

    If you are interested in trying or buying Final Draft 9, there are several ways to do so. You can download a free 30-day trial of Final Draft 9 from the official website. You can also purchase a new copy of Final Draft 9 for $199 or upgrade from a previous version for $79. You can also get educational discounts, bundle offers, or coupons for Final Draft 9.

    -

    You can download a free 30-day trial of Final Draft 9 from the official website

    -

    The easiest way to get Final Draft 9 is to download a free trial from the official website. All you have to do is fill out a form with your name and email address and click on the download button. You will receive an email with a link to download the software and a customer number to activate it. The trial version is fully functional and lets you use all the features of Final Draft 9 for 30 days. However, you cannot print or save your scripts in PDF format with the trial version.

    -

    You can purchase a new copy of Final Draft 9 for $199 or upgrade from a previous version for $79

    -

    If you want to buy Final Draft 9, you have two options: buying a new copy or upgrading from a previous version. A new copy of Final Draft 9 costs $199 and comes with a customer number that allows you to activate it on up to two computers. An upgrade from a previous version costs $79 and requires you to have a valid customer number from Final Draft 8 or earlier. You can buy Final Draft 9 online from the official website or from authorized resellers such as Amazon. You can also buy Final Draft 9 in physical form as a CD-ROM or a USB drive.

    -

    You can also get educational discounts, bundle offers, or coupons for Final Draft 9

    -

    If you want to save some money on Final Draft 9, you can also look for educational discounts, bundle offers, or coupons. For example, if you are a student or a teacher, you can get Final Draft 9 for only $99 by providing proof of your academic status. You can also get Final Draft 9 as part of a bundle with other software such as Movie Magic Scheduling or Movie Magic Budgeting. You can also look for coupons or promo codes that offer discounts on Final Draft 9 from time to time. However, be careful not to fall for scams or fake websites that claim to offer free or cheap copies of Final Draft 9.

    -

    How to install and activate Final Draft 9 on your device?

    -

    Once you have downloaded or purchased Final Draft 9, you need to install and activate it on your device. You need a valid customer number (starts with KFD) to activate Final Draft 9. You can install Final Draft 9 on up to two computers with the same customer number. You can follow the instructions on the website or contact the support team for any issues.

    -

    You need a valid customer number (starts with KFD) to activate Final Draft 9

    -

    To activate Final Draft 9, you need a customer number that starts with KFD. This is a unique code that identifies you as a legitimate user of the software. You can find your customer number in the email that you received after downloading or buying Final Draft 9. You can also find it in the Help menu of the software under About Final Draft. You need to enter your customer number when you launch Final Draft 9 for the first time or when you reinstall it on a new device.

    -

    You can install Final Draft 9 on up to two computers with the same customer number

    -

    You can install Final Draft 9 on up to two computers with the same customer number. This means that you can use Final Draft 9 on your desktop and laptop, for example, or on your home and office computers. However, you cannot use Final Draft 9 on more than two computers at the same time. If you want to use Final Draft 9 on a third computer, you need to deactivate it from one of the previous computers first. You can do this by going to the Help menu of the software and clicking on Deactivate.

    -

    You can follow the instructions on the website or contact the support team for any issues

    -

    To install Final Draft 9, you need to follow the instructions on the website or in the email that you received after downloading or buying it. The installation process is simple and straightforward, and it should not take more than a few minutes. However, if you encounter any problems or errors during the installation or activation process, you can contact the support team for help. You can reach them by phone, email, chat, or social media. They are available 24/7 and will assist you with any issues.

    -

    How to use Final Draft 9 to write your screenplay?

    -

    Now that you have installed and activated Final Draft 9, you are ready to write your screenplay. You can choose from over 300 templates for different genres and formats. You can also use the smart auto-suggestions, spell-checker, character name swap, dialogue highlight, and watermark features. You can also use the Beat Board, Story Map, ScriptNotes, and Outline Editor tools to plan and structure your story.

    -

    You can choose from over 300 templates for different genres and formats

    -

    One of the first things that you need to do when writing a screenplay is to choose a template that matches your genre and format. A template is a pre-formatted document that has all the necessary elements and settings for your script. For example, if you are writing a sitcom script, you can choose a template that has the correct page size, font, margins, scene headings, transitions, etc. for a sitcom script. Final Draft 9 has over 300 templates for different genres and formats, such as comedy, drama, horror, thriller, action, sci-fi, fantasy, animation, documentary, stage play, radio play, graphic novel, etc. You can browse through the templates by going to File > New from Template or by clicking on the Template button on the toolbar.

    -

    You can use the smart auto-suggestions, spell-checker, character name swap, dialogue highlight, and watermark features

    -

    As you write your screenplay using Final Draft 9, you can use some of its features to make your writing easier and faster. For example:

    -
      -
    • You can use the smart auto-suggestions feature to complete words, names, or phrases as you type. This feature learns from your script and suggests words that match your context and style.
    • -
    • You can use the spell-checker feature to correct typos and grammar errors in your script. You can also customize your dictionary and add words that are specific to your script.
    • -
    • You can use the character name swap feature to change the name of a character throughout your script with one click. This feature is useful if you want to rename a character or avoid confusion with similar names.
    • -
    • You can use the dialogue highlight feature to highlight a specific character's dialogue in color. This feature is useful if you want to focus on a character's voice or see how much dialogue they have.
    • -
    • You can use the watermark feature to protect your script from unauthorized copying. This feature lets you add a text or image watermark to your script that shows your name, email, phone number, or any other information.
    • -
    -

    You can use the Beat Board, Story Map, ScriptNotes, and Outline Editor tools to plan and structure your story

    -

    Final Draft 9 also has tools that help you plan and structure your story before and during writing. For example:

    -
      -
    • You can use the Beat Board tool to brainstorm ideas, plot points, characters, themes, and more. You can create beats, which are visual notes that you can drag and drop on a virtual corkboard. You can also link beats to specific scenes in your script.
    • -
    • You can use the Story Map tool to outline your story and track its progress. You can create a timeline that shows the structure of your script, such as the act breaks, the midpoint, the climax, etc. You can also see how long each scene or sequence is and how it fits into the overall story.
    • -
    • You can use the ScriptNotes tool to add comments, questions, suggestions, or feedback to your script. You can create ScriptNotes, which are pop-up notes that you can attach to any element in your script. You can also view, edit, or delete ScriptNotes from a separate window.
    • -
    • You can use the Outline Editor tool to create a detailed outline of your script. You can create an outline that shows the scene headings, summaries, and colors of each scene in your script. You can also rearrange, add, or delete scenes from your outline.
    • -
    -

    How to collaborate and share your screenplay with Final Draft 9?

    -

    Final Draft 9 also allows you to collaborate and share your screenplay with other writers, editors, producers, directors, or anyone else involved in your project. You can use the Track Changes feature to review and accept edits from other writers. You can also use the Collaboration feature to write in real-time with other Final Draft users online. You can also export your screenplay as a PDF, RTF, FDX, or plain text file and send it to anyone.

    -

    You can use the Track Changes feature to review and accept edits from other writers

    -

    If you are working with other writers on your screenplay, you can use the Track Changes feature to see what changes they have made to your script. You can turn on Track Changes by going to Tools > Track Changes > On. This will mark any changes made by other writers with different colors and initials. You can also see who made the changes and when by hovering over the marks. You can review the changes by going to Tools > Track Changes > Review Changes. This will open a window that shows all the changes made by all the writers. You can accept or reject each change individually or all at once.

    -

    You can use the Collaboration feature to write in real-time with other Final Draft users online

    -

    If you want to write with other Final Draft users online in real-time, you can use the Collaboration feature. This feature lets you invite up to four other Final Draft users to join your script session online. You can see what they are writing as they write it and chat with them using voice or text. You can also assign roles and permissions to each collaborator, such as writer, editor, reader, etc. You can start a collaboration session by going to File > Collaboration > Host Session or Join Session.

    -

    You can export your screenplay as a PDF, RTF, FDX, or plain text file and send it to anyone

    -

    If you want to share your screenplay with anyone who does not have Final Draft 9 or who prefers a different format, you can export your screenplay as a PDF, RTF, FDX, or plain text file. A PDF file is a universal format that preserves the formatting and appearance of your script. An RTF file is a rich text format that allows you to edit your script in other word processors. An FDX file is a Final Draft XML format that allows you to import or export your script in other software that supports FDX. A plain text file is a simple text format that strips away all the formatting from your script. You can export your screenplay by going to File > Save As or File > Export.
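    Because FDX is just XML under the hood, other tools can read an exported script. The sketch below is a rough Python illustration that prints each paragraph of an .fdx file; the element and attribute names used (Paragraph, Type, Text) are assumptions based on commonly seen FDX files rather than the official specification, and the file name is a placeholder.

    ```python
    # Hedged sketch: skim the paragraphs of an exported Final Draft .fdx file.
    # FDX is XML; the tag/attribute names below are assumptions, not the official spec.
    import xml.etree.ElementTree as ET

    tree = ET.parse("my_screenplay.fdx")           # placeholder file name
    root = tree.getroot()

    for paragraph in root.iter("Paragraph"):       # assumed element name
        kind = paragraph.get("Type", "")           # e.g. "Scene Heading", "Dialogue" (assumed)
        text = "".join(t.text or "" for t in paragraph.iter("Text"))
        if text.strip():
            print(f"{kind}: {text}")
    ```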

    -

    Conclusion

    -

    Final Draft 9 is the best writing software for anyone in the screenwriting business. It offers a refined screenwriting experience that improves functionality and design. It also has powerful features that help you write, edit, and analyze your screenplay. It also allows you to collaborate and share your screenplay with other writers and professionals. Final Draft 9 is worth the price and sells for a one-time fee rather than a recurring subscription. If you want to get the industry standard screenwriting software, you should download Final Draft 9 today.

    -

    FAQs

    -

    What are some alternatives to Final Draft 9?

    -

    Some alternatives to Final Draft 9 are Celtx, WriterDuet, Fade In, Scrivener, and Highland. These are other screenwriting software that have similar or different features and prices. However, none of them are as widely used or accepted as Final Draft 9 in the entertainment industry.

    -

    What are some new features in Final Draft 12?

    -

    Some new features in Final Draft 12 are the Focus Mode, the Speech to Script feature, the Insert Images feature, the Track Changes Improvements, and the Outline Editor Improvements. These are features that enhance the writing experience and functionality of Final Draft 12.

    -

    How many computers can I install Final Draft on?

    -

    You can install Final Draft on up to two computers with the same customer number. However, you cannot use Final Draft on more than two computers at the same time. If you want to use Final Draft on a third computer, you need to deactivate it from one of the previous computers first.

    -

    How can I retrieve my customer number if I lose it?

    -

    If you lose your customer number, you can retrieve it by going to the official website and clicking on the Lost Customer Number link. You will need to enter your email address and answer some security questions to get your customer number.

    -

    How can I get help or support for Final Draft?

    -

    If you need help or support for Final Draft, you can contact the support team by phone, email, chat, or social media. They are available 24/7 and will assist you with any issues. You can also visit the official website and check out the FAQs, tutorials, manuals, forums, and blog posts for more information.

    -
    -
    \ No newline at end of file diff --git a/spaces/sklearn-docs/Compressive_sensing_Tomography_reconstruction_with_L1_prior_Lasso/README.md b/spaces/sklearn-docs/Compressive_sensing_Tomography_reconstruction_with_L1_prior_Lasso/README.md deleted file mode 100644 index c57722f0f242bace134618444533138f744ebe72..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Compressive_sensing_Tomography_reconstruction_with_L1_prior_Lasso/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Compressive Sensing Tomography Reconstruction With Lasso -emoji: 🧑‍⚕️ -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sqc1729/bingi/src/components/learn-more.tsx b/spaces/sqc1729/bingi/src/components/learn-more.tsx deleted file mode 100644 index a64459ee7900a612292e117a6bda96ee9260990f..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/learn-more.tsx +++ /dev/null @@ -1,39 +0,0 @@ -import React from 'react' -import { SourceAttribution } from '@/lib/bots/bing/types' - -export interface LearnMoreProps { - sourceAttributions?: SourceAttribution[] -} - -export function LearnMore({ sourceAttributions }: LearnMoreProps) { - if (!sourceAttributions?.length) { - return null - } - - return ( -
    -
    了解详细信息:
    -
    -
    - {sourceAttributions.map((attribution, index) => { - const { providerDisplayName, seeMoreUrl } = attribution - const { host } = new URL(seeMoreUrl) - return ( - - {index + 1}. {host} - - ) - })} -
    -
    -
    - ) -} diff --git a/spaces/sqc1729/bingi/src/components/ui/input.tsx b/spaces/sqc1729/bingi/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/sqc1729/bingi/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py deleted file mode 100644 index 9304f99eb8169a614f39babc830c84cac80e080b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - blocks = [32, 64, 128, 256] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_forward(at::Tensor input, at::Tensor weight, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - switch = """ - switch(filterSize) { -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "dynamicconv_forward", ([&] {{ - dynamicconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<>>( - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - output.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break;\n -""" - - end = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } - - return {output}; -} -""" - - with open("dynamicconv_cuda_forward.cu", "w") as forward: - forward.write(head) - forward.write(switch) - for k in kernels: - b_size = 32 - for b in blocks: - if b > k: - b_size = b - break - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=b_size, pad=pad)) - forward.write(bad_padding) - forward.write(end) - - -def gen_backward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - thresh = [512, 512, 512, 512, 512, 380, 256, 256] - min_block = [64, 64, 64, 64, 64, 64, 128, 256] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - 
head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_backward(at::Tensor gradOutput, int padding_l, at::Tensor input, at::Tensor weight) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - auto numChunks = 1; - - auto gradInput = at::zeros_like(input); - auto gradWeight = at::zeros_like(weight); - auto stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(minibatch, numHeads, numChunks); -""" - - sequence_if = """ - if (sequenceLength < {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - chunks_reset = """ - numChunks = int(ceilf(sequenceLength/float({b_size}))); - blocks = dim3(minibatch, numHeads, numChunks); -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(gradOutput.scalar_type(), "dynamicconv_backward", ([&] {{ - dynamicconv_backward_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - gradOutput.data(), - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - gradWeight.data(), - gradInput.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } - break;\n -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradWeight}; -} -""" - - with open("dynamicconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for seq in seqs: - backward.write(sequence_if.format(seq=seq)) - for k, t, m in zip(kernels, thresh, min_block): - backward.write(case_k.format(k=k)) - if seq <= t: - b_size = seq - else: - b_size = m - backward.write(chunks_reset.format(b_size=b_size)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=b_size, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(con_else) - backward.write(final_else) - for k, m in zip(kernels, min_block): - backward.write(case_k.format(k=k)) - backward.write(chunks_reset.format(b_size=m)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=m, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/spaces/stomexserde/gpt4-ui/Examples/2pac Evolution The Definitive Collection 12 Disc Set.md b/spaces/stomexserde/gpt4-ui/Examples/2pac Evolution The Definitive Collection 12 Disc Set.md deleted file mode 100644 index ebdfa23a9351a7b48e9394e16bb1f98c685e9ef9..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/2pac Evolution The Definitive Collection 12 Disc Set.md +++ /dev/null @@ -1,15 +0,0 @@ -
    -

    2pac Evolution The Definitive Collection 12 Disc Set: A Rare and Historic Treasure for Hip-Hop Fans

    -

    2pac, also known as Tupac Shakur, is widely regarded as one of the most influential and legendary rappers of all time. His music, which spanned various genres and topics, touched millions of fans across the world and inspired countless artists. He was also a prolific songwriter and producer, who recorded hundreds of songs before his tragic death in 1996.

    -

    2pac Evolution The Definitive Collection 12 Disc Set


    Download 🌟 https://urlgoal.com/2uI8LY



    -

    Many of his unreleased songs have leaked over the years, some of them in poor quality or edited versions. However, there is a rare and historic collection that preserves the original sources and quality of his music: 2pac Evolution The Definitive Collection 12 Disc Set.

    -

    This set was produced from the original DAT tapes, which are digital audio tapes that store high-fidelity sound. The set contains 200 tracks that cover various stages of 2pac's career, from his early days with Digital Underground to his final recordings with Death Row Records. The set also includes rare and unreleased songs, such as "Wherever U R", "Never Be Peace", "Friends", and many more.

    -

The set was originally offered to the private members of 2PacShakur.net, a fan website that is no longer active. It was later released to the general public in a limited edition. The set comes in a professional 12-disc capacity hard plastic shelled case, with full color covers and professionally pressed discs. It is a very rare, hard-to-find collection that is highly sought after by collectors and fans.

    -

    2pac Evolution The Definitive Collection 12 Disc Set is more than just a collection of songs. It is a history of 2pac's evolution as an artist and his prolific nature in producing music. It is a testament to his legacy and impact on hip-hop culture and beyond. It is a treasure for anyone who appreciates 2pac's music and wants to hear it in its original form.

    -

    - -

    The set also showcases 2pac's versatility and creativity as an artist, who could rap over different beats and styles, and collaborate with various producers and artists. Some of the tracks feature 2pac's original group Thug Life, as well as his friends and allies such as Snoop Dogg, Nate Dogg, Outlawz, Kurupt, Daz Dillinger, and more. Some of the tracks also feature samples or interpolations from other songs, such as "Just Watchin'" which samples Marvin Gaye's "Sexual Healing", or "Penitentiary Bound" which interpolates Bob Marley's "I Shot The Sheriff".

    -

The set also reveals 2pac's personal and emotional side, as he raps about his struggles, his hopes, his fears, his love, and his faith. Some tracks are dedicated to his mother Afeni Shakur, such as "Dear Mama Part II" and "Mama's Just A Little Girl". Others reflect his spiritual beliefs, such as "Only God Can Judge Me" and "Ghetto Gospel", while a few now sound prophetic of his death, such as "If I Die 2Nite" and "I Ain't Mad At Cha".

    -

    2pac Evolution The Definitive Collection 12 Disc Set is a must-have for any 2pac fan or hip-hop lover. It is a rare opportunity to hear 2pac's music in its purest form, without any alterations or modifications. It is a tribute to his genius and legacy, that still lives on today. It is a collection that will make you appreciate 2pac's music even more, and discover new aspects of his artistry that you may have missed before.

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/CyberLink PowerDirector Ultimate 17.0.2307.0 Pre-Activated[BabuP Utorrent [2021].md b/spaces/stomexserde/gpt4-ui/Examples/CyberLink PowerDirector Ultimate 17.0.2307.0 Pre-Activated[BabuP Utorrent [2021].md deleted file mode 100644 index edac84841afb47078583bf7dc610c997d10efd12..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/CyberLink PowerDirector Ultimate 17.0.2307.0 Pre-Activated[BabuP Utorrent [2021].md +++ /dev/null @@ -1,27 +0,0 @@ - -

    CyberLink PowerDirector Ultimate 17.0.2307.0: A Powerful Video Editing Software

    -

If you are looking for professional-grade video editing software that can handle any format and resolution, you might want to check out CyberLink PowerDirector Ultimate 17.0.2307.0. This software delivers high-performance video editing and production for creators of all levels, whether you are a beginner or a pro.

    -

    CyberLink PowerDirector Ultimate 17.0.2307.0 Pre-Activated[BabuP utorrent


    Download >>> https://urlgoal.com/2uI7yF



    -

    In this article, we will review some of the key features and benefits of CyberLink PowerDirector Ultimate 17.0.2307.0, and show you how to download it for free using a pre-activated torrent file.

    -

    Key Features and Benefits of CyberLink PowerDirector Ultimate 17.0.2307.0

    -

    CyberLink PowerDirector Ultimate 17.0.2307.0 is packed with powerful and innovative features that make video editing easy and fun. Here are some of the highlights:

    -
      -
    • Express Video Creation: With the Express Projects feature, you can quickly create stunning videos using ready-made templates. Just drag and drop your clips and photos, and let PowerDirector do the rest.
    • -
    • Action Camera Center: If you are into action sports or adventure videos, you will love the Action Camera Center. This feature allows you to edit and enhance your footage with tools like stabilization, fisheye correction, slow motion, freeze frame, color correction, and more.
    • -
    • Simplified Color Adjustment: One of the most important aspects of video editing is color grading. With PowerDirector, you can easily adjust the color and tone of your videos with one-click presets or manual controls.
    • -
    • One-click Color Correction: If you want to save time and get consistent results, you can use the TrueTheater Color feature to automatically enhance the color and brightness of your videos.
    • -
    • Intelligent Color Match: If you want to match the color and style of different clips, you can use the Color Match feature to automatically apply the color grading of one clip to another.
    • -
    • Color Grading like the Pros: If you want to have more control and creativity over your color grading, you can use the Color Director tool to apply professional-level color grading effects and LUTs (look-up tables) to your videos.
    • -
    • Customizable Design Tools: PowerDirector also offers a variety of design tools that let you customize your videos with titles, transitions, effects, animations, masks, subtitles, and more.
    • -
    • New Effects - Title Designer: With the new Title Designer, you can create stunning titles with dynamic motion graphics, 3D text, animations, and particles.
    • -
    • Enhanced - PiP Designer: With the enhanced PiP Designer, you can overlay animated objects on your videos with keyframe control, chroma key support, border and shadow effects, and more.
    • -
    • Enhanced - Theme Designer: With the enhanced Theme Designer, you can create stylish videos with themed templates that include effects, transitions, animations, and music.
    • -
    • New Brush Tool - Masks Designer: With the new Brush Tool, you can create custom masks to hide or reveal parts of your videos. You can also use masks to apply selective effects or transitions to your videos.
    • -
    • Enhanced - Subtitle Room: With the enhanced Subtitle Room, you can add subtitles to your videos with more options for font style, color, size, position, and timing.
    • -
    • 360 Video Stabilization: If you are into 360-degree video editing, you will appreciate the 360 Video Stabilization feature that lets you smooth out shaky footage with just one click.
    • -
    • True360 View Designer: With the True360 View Designer feature, you can convert your 360-degree videos into standard 2D videos with dynamic camera angles and movements.
    • -
    -

    How to Download CyberLink PowerDirector

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/HyperSnap 8 With Licence Key Portable Version.md b/spaces/stomexserde/gpt4-ui/Examples/HyperSnap 8 With Licence Key Portable Version.md deleted file mode 100644 index a8706a0a144ce654b98d7602e480bbc1a66be3ff..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/HyperSnap 8 With Licence Key Portable Version.md +++ /dev/null @@ -1,18 +0,0 @@ - -

    How to Capture and Edit Screenshots with HyperSnap 8

    -

If you need powerful and easy-to-use screen capture software for Windows, you might want to try HyperSnap 8. It is a fast and versatile tool that lets you take screenshots of any part of your screen, edit them with various annotation and manipulation tools, and save them in over 20 image formats. You can also capture text from places where normal text copy is not possible, such as dialog boxes, menus, and web pages.

    -

    HyperSnap 8 is compatible with Windows 10, 8 (desktop mode), 7, Vista, and their corresponding Windows Server editions. It has a modern "ribbon" interface that you can customize to your liking, or switch to a traditional menu and toolbar layout. You can also control HyperSnap with voice commands or assign keyboard shortcuts to any function.

    -

    HyperSnap 8 With Licence Key Portable Version


    Downloadhttps://urlgoal.com/2uI7Wo



    -

To use HyperSnap 8, you need a licence key that you can purchase from the official website or from other authorized resellers. Alternatively, you can download a portable version of HyperSnap 8 that does not require installation or a licence key. However, the portable version may have some limitations and may not work on some systems. You can find the portable version on various websites that offer free software downloads.

    -

    Once you have HyperSnap 8 installed or extracted on your computer, you can start capturing screenshots by clicking on the Capture tab and selecting one of the capture modes. For example, you can capture the entire screen, a window, a region, a freehand shape, or a scrolling web page. You can also use hotkeys or timers to trigger captures automatically.

    -

    After capturing a screenshot, it will appear in the HyperSnap window where you can edit it with the tools on the Edit tab. For example, you can crop, resize, rotate, flip, or skew the image. You can also add text, stamps, shapes, arrows, highlights, shadows, frames, and more. You can undo or redo any changes you make, or save them as objects that you can edit later.

    -

    When you are done editing your screenshot, you can save it as an image file by clicking on the File tab and selecting Save or Save As. You can choose from various formats such as BMP, JPG, PNG, GIF, TIFF, PDF, and more. You can also adjust the quality and compression settings of your image file. Alternatively, you can copy your screenshot to the clipboard or upload it to an FTP server.

    -

HyperSnap 8 is a powerful screen capture tool that can help you create professional-looking screenshots for various purposes. Whether you need to capture images for your help system, online tutorials, manuals, presentations, marketing materials, web pages, emails, or more, HyperSnap 8 can handle it all. You can download a free trial or buy a licence key from the official website, or try the portable version from other sources.

    - -

    One of the unique features of HyperSnap 8 is the text capture mode, also known as TextSnap. This mode allows you to capture editable text from places where normal text copy is not possible, such as dialog boxes, menus, and web pages. You can then paste the captured text into any application that supports text input, such as Word, Excel, Notepad, or your email client.

    -

    -

    To use the text capture mode, you need to enable it on the Capture tab by clicking on the TextSnap button. Then, you can select the text you want to capture by dragging your mouse over it. You will see a yellow highlight over the text and a tooltip showing the number of characters captured. You can also use hotkeys to capture text without using the mouse.

    -

    After capturing text, you can paste it into any application by pressing Ctrl+V or using the Paste command. You can also edit the captured text in the HyperSnap window by clicking on the Edit tab and selecting Edit Text. You can change the font, size, color, alignment, and style of the text. You can also use the spell checker and the find and replace functions.

    -

    Note that the text capture mode does not work on some programs that use accelerated graphics or run under Windows 8 or later. In such cases, you can try using the image capture mode and then use an optical character recognition (OCR) software to convert the image to text.

    -
    -
    \ No newline at end of file diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/export.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/export.py deleted file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Kutools4LicenseKey.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Kutools4LicenseKey.md deleted file mode 100644 index 92fc753f9bc5f93fb159f5e1b29f83700f593970..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Kutools4LicenseKey.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Kutools4LicenseKey


    Download File ★★★★★ https://cinurl.com/2uEXQp



    - -... And suddenly I was in a different place. And there was no bridge. And there were only clouds, and under them lay the earth, and there was no road, no houses, no people, no trees, but only grass, grass, grass ... 8a78ff9644
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/NCH Inventoria Stock Manager V3.33.rar REPACK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/NCH Inventoria Stock Manager V3.33.rar REPACK.md deleted file mode 100644 index a7cd5a1f1924c5aab932e3df2eab53e5b73f6cfe..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/NCH Inventoria Stock Manager V3.33.rar REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

    NCH Inventoria Stock Manager V3.33.rar


    DOWNLOAD ☆☆☆☆☆ https://cinurl.com/2uEYNK



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Network Inventory Advisor 4.1 Crack Serial.rar.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Network Inventory Advisor 4.1 Crack Serial.rar.md deleted file mode 100644 index f06030263cff331ad57fe9851fed75a39329d717..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Network Inventory Advisor 4.1 Crack Serial.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Network Inventory Advisor 4.1 Crack Serial.rar


    Download Filehttps://cinurl.com/2uEYE8



    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/svjack/English-Comet-Atomic/predict.py b/spaces/svjack/English-Comet-Atomic/predict.py deleted file mode 100644 index ddbaa6e3189c8bf75c108926fbdf9251cf465084..0000000000000000000000000000000000000000 --- a/spaces/svjack/English-Comet-Atomic/predict.py +++ /dev/null @@ -1,47 +0,0 @@ -class Obj: - def __init__(self, model, tokenizer, device = "cpu"): - self.model = model - self.tokenizer = tokenizer - self.device = device - self.model = self.model.to(self.device) - - def predict( - self, - source_text: str, - max_length: int = 512, - num_return_sequences: int = 1, - num_beams: int = 2, - top_k: int = 50, - top_p: float = 0.95, - do_sample: bool = True, - repetition_penalty: float = 2.5, - length_penalty: float = 1.0, - early_stopping: bool = True, - skip_special_tokens: bool = True, - clean_up_tokenization_spaces: bool = True, - ): - input_ids = self.tokenizer.encode( - source_text, return_tensors="pt", add_special_tokens=True - ) - input_ids = input_ids.to(self.device) - generated_ids = self.model.generate( - input_ids=input_ids, - num_beams=num_beams, - max_length=max_length, - repetition_penalty=repetition_penalty, - length_penalty=length_penalty, - early_stopping=early_stopping, - top_p=top_p, - top_k=top_k, - num_return_sequences=num_return_sequences, - do_sample = do_sample - ) - preds = [ - self.tokenizer.decode( - g, - skip_special_tokens=skip_special_tokens, - clean_up_tokenization_spaces=clean_up_tokenization_spaces, - ) - for g in generated_ids - ] - return preds diff --git a/spaces/syy404/whisper-webui/app-shared.py b/spaces/syy404/whisper-webui/app-shared.py deleted file mode 100644 index 541459b104ce89c56845ac177365f49a61445d04..0000000000000000000000000000000000000000 --- a/spaces/syy404/whisper-webui/app-shared.py +++ /dev/null @@ -1,3 +0,0 @@ -# Run the app with no audio file restrictions -from app import create_ui -create_ui(-1, share=True) \ No newline at end of file diff --git a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/common/evaluation.py b/spaces/taesiri/ConvolutionalHoughMatchingNetworks/common/evaluation.py deleted file mode 100644 index 271a0907e6802e67c8e753ec1058df8af14d012e..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ConvolutionalHoughMatchingNetworks/common/evaluation.py +++ /dev/null @@ -1,32 +0,0 @@ -r""" Evaluates CHMNet with PCK """ - -import torch - - -class Evaluator: - r""" Computes evaluation metrics of PCK """ - @classmethod - def initialize(cls, alpha): - cls.alpha = torch.tensor(alpha).unsqueeze(1) - - @classmethod - def evaluate(cls, prd_kps, batch): - r""" Compute percentage of correct key-points (PCK) with multiple alpha {0.05, 0.1, 0.15 }""" - - pcks = [] - for idx, (pk, tk) in enumerate(zip(prd_kps, batch['trg_kps'])): - pckthres = batch['pckthres'][idx] - npt = batch['n_pts'][idx] - prd_kps = pk[:, :npt] - trg_kps = tk[:, :npt] - - l2dist = (prd_kps - trg_kps).pow(2).sum(dim=0).pow(0.5).unsqueeze(0).repeat(len(cls.alpha), 1) - thres = pckthres.expand_as(l2dist).float() * cls.alpha - pck = torch.le(l2dist, thres).sum(dim=1) / float(npt) - if len(pck) == 1: pck = pck[0] - pcks.append(pck) - - eval_result = {'pck': pcks} - - return eval_result - diff --git a/spaces/tensorflow/efficientnetv2-s/app.py b/spaces/tensorflow/efficientnetv2-s/app.py deleted file mode 100644 index bf27fd75c3da6155b08c7d7ce2adc2a6c635804a..0000000000000000000000000000000000000000 --- a/spaces/tensorflow/efficientnetv2-s/app.py +++ /dev/null @@ -1,222 +0,0 @@ -import tensorflow as tf -import tensorflow_hub as hub - -import 
requests -from PIL import Image -from io import BytesIO - -import matplotlib.pyplot as plt -import numpy as np -import gradio as gr - -#@title Helper functions for loading image (hidden) - -original_image_cache = {} - -def preprocess_image(image): - image = np.array(image) - # reshape into shape [batch_size, height, width, num_channels] - img_reshaped = tf.reshape(image, [1, image.shape[0], image.shape[1], image.shape[2]]) - # Use `convert_image_dtype` to convert to floats in the [0,1] range. - image = tf.image.convert_image_dtype(img_reshaped, tf.float32) - return image - -def load_image_from_url(img_url): - """Returns an image with shape [1, height, width, num_channels].""" - user_agent = {'User-agent': 'Colab Sample (https://tensorflow.org)'} - response = requests.get(img_url, headers=user_agent) - image = Image.open(BytesIO(response.content)) - image = preprocess_image(image) - return image - -def load_image(image_url, image_size=256, dynamic_size=False, max_dynamic_size=512): - """Loads and preprocesses images.""" - # Cache image file locally. - if image_url in original_image_cache: - img = original_image_cache[image_url] - elif image_url.startswith('https://'): - img = load_image_from_url(image_url) - else: - fd = tf.io.gfile.GFile(image_url, 'rb') - img = preprocess_image(Image.open(fd)) - original_image_cache[image_url] = img - # Load and convert to float32 numpy array, add batch dimension, and normalize to range [0, 1]. - img_raw = img - if tf.reduce_max(img) > 1.0: - img = img / 255. - if len(img.shape) == 3: - img = tf.stack([img, img, img], axis=-1) - if not dynamic_size: - img = tf.image.resize_with_pad(img, image_size, image_size) - elif img.shape[1] > max_dynamic_size or img.shape[2] > max_dynamic_size: - img = tf.image.resize_with_pad(img, max_dynamic_size, max_dynamic_size) - return img, img_raw - - - -image_size = 224 -dynamic_size = False - -model_name = "efficientnetv2-s" - -model_handle_map = { - "efficientnetv2-s": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_s/classification/2", - "efficientnetv2-m": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_m/classification/2", - "efficientnetv2-l": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_l/classification/2", - "efficientnetv2-s-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_s/classification/2", - "efficientnetv2-m-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_m/classification/2", - "efficientnetv2-l-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_l/classification/2", - "efficientnetv2-xl-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_xl/classification/2", - "efficientnetv2-b0-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b0/classification/2", - "efficientnetv2-b1-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b1/classification/2", - "efficientnetv2-b2-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b2/classification/2", - "efficientnetv2-b3-21k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b3/classification/2", - "efficientnetv2-s-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/classification/2", - "efficientnetv2-m-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/classification/2", - "efficientnetv2-l-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_l/classification/2", - "efficientnetv2-xl-21k-ft1k": 
"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_xl/classification/2", - "efficientnetv2-b0-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b0/classification/2", - "efficientnetv2-b1-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b1/classification/2", - "efficientnetv2-b2-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b2/classification/2", - "efficientnetv2-b3-21k-ft1k": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b3/classification/2", - "efficientnetv2-b0": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/classification/2", - "efficientnetv2-b1": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b1/classification/2", - "efficientnetv2-b2": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b2/classification/2", - "efficientnetv2-b3": "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b3/classification/2", - "efficientnet_b0": "https://tfhub.dev/tensorflow/efficientnet/b0/classification/1", - "efficientnet_b1": "https://tfhub.dev/tensorflow/efficientnet/b1/classification/1", - "efficientnet_b2": "https://tfhub.dev/tensorflow/efficientnet/b2/classification/1", - "efficientnet_b3": "https://tfhub.dev/tensorflow/efficientnet/b3/classification/1", - "efficientnet_b4": "https://tfhub.dev/tensorflow/efficientnet/b4/classification/1", - "efficientnet_b5": "https://tfhub.dev/tensorflow/efficientnet/b5/classification/1", - "efficientnet_b6": "https://tfhub.dev/tensorflow/efficientnet/b6/classification/1", - "efficientnet_b7": "https://tfhub.dev/tensorflow/efficientnet/b7/classification/1", - "bit_s-r50x1": "https://tfhub.dev/google/bit/s-r50x1/ilsvrc2012_classification/1", - "inception_v3": "https://tfhub.dev/google/imagenet/inception_v3/classification/4", - "inception_resnet_v2": "https://tfhub.dev/google/imagenet/inception_resnet_v2/classification/4", - "resnet_v1_50": "https://tfhub.dev/google/imagenet/resnet_v1_50/classification/4", - "resnet_v1_101": "https://tfhub.dev/google/imagenet/resnet_v1_101/classification/4", - "resnet_v1_152": "https://tfhub.dev/google/imagenet/resnet_v1_152/classification/4", - "resnet_v2_50": "https://tfhub.dev/google/imagenet/resnet_v2_50/classification/4", - "resnet_v2_101": "https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4", - "resnet_v2_152": "https://tfhub.dev/google/imagenet/resnet_v2_152/classification/4", - "nasnet_large": "https://tfhub.dev/google/imagenet/nasnet_large/classification/4", - "nasnet_mobile": "https://tfhub.dev/google/imagenet/nasnet_mobile/classification/4", - "pnasnet_large": "https://tfhub.dev/google/imagenet/pnasnet_large/classification/4", - "mobilenet_v2_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/4", - "mobilenet_v2_130_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4", - "mobilenet_v2_140_224": "https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/4", - "mobilenet_v3_small_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/classification/5", - "mobilenet_v3_small_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_small_075_224/classification/5", - "mobilenet_v3_large_100_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/classification/5", - "mobilenet_v3_large_075_224": "https://tfhub.dev/google/imagenet/mobilenet_v3_large_075_224/classification/5", -} - -model_image_size_map = { - "efficientnetv2-s": 384, - 
"efficientnetv2-m": 480, - "efficientnetv2-l": 480, - "efficientnetv2-b0": 224, - "efficientnetv2-b1": 240, - "efficientnetv2-b2": 260, - "efficientnetv2-b3": 300, - "efficientnetv2-s-21k": 384, - "efficientnetv2-m-21k": 480, - "efficientnetv2-l-21k": 480, - "efficientnetv2-xl-21k": 512, - "efficientnetv2-b0-21k": 224, - "efficientnetv2-b1-21k": 240, - "efficientnetv2-b2-21k": 260, - "efficientnetv2-b3-21k": 300, - "efficientnetv2-s-21k-ft1k": 384, - "efficientnetv2-m-21k-ft1k": 480, - "efficientnetv2-l-21k-ft1k": 480, - "efficientnetv2-xl-21k-ft1k": 512, - "efficientnetv2-b0-21k-ft1k": 224, - "efficientnetv2-b1-21k-ft1k": 240, - "efficientnetv2-b2-21k-ft1k": 260, - "efficientnetv2-b3-21k-ft1k": 300, - "efficientnet_b0": 224, - "efficientnet_b1": 240, - "efficientnet_b2": 260, - "efficientnet_b3": 300, - "efficientnet_b4": 380, - "efficientnet_b5": 456, - "efficientnet_b6": 528, - "efficientnet_b7": 600, - "inception_v3": 299, - "inception_resnet_v2": 299, - "mobilenet_v2_100_224": 224, - "mobilenet_v2_130_224": 224, - "mobilenet_v2_140_224": 224, - "nasnet_large": 331, - "nasnet_mobile": 224, - "pnasnet_large": 331, - "resnet_v1_50": 224, - "resnet_v1_101": 224, - "resnet_v1_152": 224, - "resnet_v2_50": 224, - "resnet_v2_101": 224, - "resnet_v2_152": 224, - "mobilenet_v3_small_100_224": 224, - "mobilenet_v3_small_075_224": 224, - "mobilenet_v3_large_100_224": 224, - "mobilenet_v3_large_075_224": 224, -} - -model_handle = model_handle_map[model_name] - - -max_dynamic_size = 512 -if model_name in model_image_size_map: - image_size = model_image_size_map[model_name] - dynamic_size = False - print(f"Images will be converted to {image_size}x{image_size}") -else: - dynamic_size = True - print(f"Images will be capped to a max size of {max_dynamic_size}x{max_dynamic_size}") - -labels_file = "https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt" - -#download labels and creates a maps -downloaded_file = tf.keras.utils.get_file("labels.txt", origin=labels_file) - -classes = [] - -with open(downloaded_file) as f: - labels = f.readlines() - classes = [l.strip() for l in labels] - - -classifier = hub.load(model_handle) - - -def inference(img): - image, original_image = load_image(img, image_size, dynamic_size, max_dynamic_size) - - - input_shape = image.shape - warmup_input = tf.random.uniform(input_shape, 0, 1.0) - warmup_logits = classifier(warmup_input).numpy() - - # Run model on image - probabilities = tf.nn.softmax(classifier(image)).numpy() - - top_5 = tf.argsort(probabilities, axis=-1, direction="DESCENDING")[0][:5].numpy() - np_classes = np.array(classes) - - # Some models include an additional 'background' class in the predictions, so - # we must account for this when reading the class labels. - includes_background_class = probabilities.shape[1] == 1001 - result = {} - for i, item in enumerate(top_5): - class_index = item if includes_background_class else item + 1 - line = f'({i+1}) {class_index:4} - {classes[class_index]}: {probabilities[0][top_5][i]}' - result[classes[class_index]] = probabilities[0][top_5][i].item() - return result - -title="efficientnetv2-s" -description="Gradio Demo for efficientnetv2-s: EfficientNet V2 are a family of image classification models, which achieve better parameter efficiency and faster training speed than prior arts. To use it, simply upload your image or click on one of the examples to load them. Read more at the links below" -article = "

    Tensorflow Hub

    " -examples=[['apple1.jpg']] -gr.Interface(inference,gr.inputs.Image(type="filepath"),"label",title=title,description=description,article=article,examples=examples).launch(enable_queue=True) diff --git a/spaces/terfces0erbo/CollegeProjectV2/Balasahebshindeenglishgrammarbook38.md b/spaces/terfces0erbo/CollegeProjectV2/Balasahebshindeenglishgrammarbook38.md deleted file mode 100644 index 9aaf231d5e5d76b484b35eca0fbb926ff508eb95..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Balasahebshindeenglishgrammarbook38.md +++ /dev/null @@ -1,18 +0,0 @@ -
    -

    Review of Balasaheb Shinde English Grammar Book 38

    -

    If you are looking for a comprehensive and easy-to-understand English grammar book in Marathi, you might want to check out Balasaheb Shinde English Grammar Book 38. This book is written by Balasaheb Shinde, a renowned author and teacher of English and Marathi languages. He has written several books on grammar, vocabulary, and competitive exams preparation, such as Sampoorna Engreji Vyakaran, Paripurna Marathi Vyakaran, and MPSC Marathi va Engraji Ghataknihay Prashnasanch.

    -

    Balasaheb Shinde English Grammar Book 38 covers all the topics of English grammar, such as parts of speech, tenses, voice, narration, clauses, phrases, idioms, punctuation, and more. It also includes 3800+ objective questions with answers and explanations to help you test your knowledge and improve your skills. The book is designed for students of class 10th and above, as well as aspirants of various competitive exams like SSC, HSC, MPSC, UPSC, Bank PO, etc.

    -

    balasahebshindeenglishgrammarbook38


    DOWNLOAD ✸✸✸ https://bytlly.com/2uGj13



    -

The book is written in simple and lucid Marathi language, with examples and illustrations to make the concepts clear and easy to grasp. The book also follows the latest syllabus and pattern of various exams and boards. The book is available on Amazon.in, Flipkart.com, and other online platforms at an affordable price. You can also buy it along with Balasaheb Shinde's other books on Marathi grammar and vocabulary for a combo offer.

    -

    Balasaheb Shinde English Grammar Book 38 is a must-have book for anyone who wants to learn and master English grammar in Marathi. It is a comprehensive, easy-to-understand, and updated book that will help you achieve your academic and career goals.

    - -

    If you want to know more about Balasaheb Shinde and his books, you can visit his official website www.balasahebshinde.com. You can also follow him on Facebook, Twitter, and YouTube for his latest updates and videos. You can also contact him through email or phone for any queries or feedback.

    -

    Balasaheb Shinde is a dedicated and passionate author and teacher who has helped thousands of students and learners to improve their English and Marathi skills. He has received many awards and recognition for his contribution to the field of education and literature. He is also a motivational speaker and a social activist who works for various causes and issues.

    -

    Balasaheb Shinde English Grammar Book 38 is one of his best-selling and most popular books that has received positive reviews and ratings from the readers. It is a book that you should not miss if you want to learn English grammar in Marathi. Order your copy today and start your journey of learning English grammar with Balasaheb Shinde.

    - -

    Balasaheb Shinde English Grammar Book 38 is not only a book for learning English grammar, but also a book for developing your personality and confidence. It will help you to communicate effectively and fluently in English in various situations and contexts. It will also help you to enhance your writing and reading skills in English.

    -

    The book is suitable for students of all levels and backgrounds, whether they are beginners or advanced learners of English. It is also suitable for teachers and trainers who want to teach English grammar in Marathi. The book is user-friendly and interactive, with exercises, quizzes, and games to make learning fun and engaging.

    -

    -

    Balasaheb Shinde English Grammar Book 38 is a book that you will cherish and refer to throughout your life. It is a book that will make you fall in love with English grammar and language. It is a book that will change your life for the better.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Convert VBS To EXE Using Online Tool Or VBScript Converter Software.md b/spaces/terfces0erbo/CollegeProjectV2/Convert VBS To EXE Using Online Tool Or VBScript Converter Software.md deleted file mode 100644 index ee93708ef89ddce0bbcfdcccf9c60d1e5ed1a39c..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Convert VBS To EXE Using Online Tool Or VBScript Converter Software.md +++ /dev/null @@ -1,133 +0,0 @@ -
    -

    Convert VBS to EXE using Online Tool or VBScript Converter Software

    -

    VBS or Visual Basic Script is a powerful scripting language that can be used to create full-scale applications for Windows. However, it is not very user-friendly and requires a lot of technical knowledge. EXE or Executable files are more convenient and easy to use, as they can run programs or install software on your computer with a single click.

    -

    If you want to convert your VBS scripts to EXE files, you have two options: using an online tool or a VBScript converter software. In this article, we will compare these two methods and show you how to convert VBS to EXE using online tool or VBScript converter software.

    -

    Convert VBS to EXE using Online tool or VBScript converter software


    Download » https://bytlly.com/2uGj6x



    -

    Using an Online Tool

    -

    One of the easiest ways to convert VBS to EXE is to use an online tool that can do the conversion for you. There are several online tools that can convert VBS or JS files to EXE format, such as VBS to EXE, JS to EXE, or ScriptCryptor. These tools have some advantages and disadvantages:

    -
      -
    • Advantages:
    • -
    • You don't need to download or install any software on your computer.
    • -
    • You can convert your files quickly and easily, without any coding skills.
    • -
    • You can encrypt your source code using a 128-bit key, which protects your script from being modified or stolen.
    • -
    • You can call win32 API functions from your script, which enhances its functionality and compatibility.
    • -
    • Disadvantages:
    • -
    • You need an internet connection to use these tools.
    • -
    • You may not be able to customize the icon, version information, UAC level, or digital signature of your executable file.
    • -
    • You may encounter some errors or compatibility issues with some scripts or Windows versions.
    • -
    • You may not be able to bind additional files or extract the original script from the executable file.
    • -
    -

    Using a VBScript Converter Software

    -

Another option to convert VBS to EXE is to use a VBScript converter program that is installed on your computer. There are several programs that can convert VBS files to EXE files, such as VbsEdit, VbsToExePortable, VBS To EXE from the IPFS, or the Vbs To Exe command-line tool. These programs have their own advantages and disadvantages:

    -
      -
    • Advantages:
    • -
    • You don't need an internet connection to use these software.
    • -
    • You can customize the icon, version information, UAC level, digital signature, and other parameters of your executable file.
    • -
    • You can bind additional files or extract the original script from the executable file.
    • -
    • You can edit your script instantly using a built-in VBScript visual editor.
    • -
    • You can protect your script with a password or print a message or version number when running it.
    • -
    • Disadvantages:
    • -
    • You need to download and install the software on your computer.
    • -
    • You may need some coding skills or technical knowledge to use these software.
    • -
    • You may not be able to encrypt your source code or call win32 API functions from your script.
    • -
    • You may encounter some errors or compatibility issues with some scripts or Windows versions.
    • -
    -

    Conclusion

    -

In conclusion, both online tools and desktop VBScript converter software can turn your VBS scripts into EXE files. However, each method has its own pros and cons, so you should choose the one that suits your needs and preferences. Here are some tips to help you decide:

    -
      -
    • If you want a simple and fast solution, use an online tool.
    • -
    • If you want more control and customization, use a VBScript converter software.
    • -
    • If you want more security and encryption, use an online tool.
    • -
    • If you want more functionality and compatibility, use a VBScript converter software.
    • -
    -

We hope this article has helped you learn how to convert VBS to EXE with an online tool or with VBScript converter software. If you have any questions or feedback, please let us know in the comments below.

    -

    How to Use VbsEdit to Convert VBS to EXE

    -

    One of the most popular and powerful VBScript converter software is VbsEdit. It is a dedicated VBScript editor and debugger that can help you create, edit, and compile your VBScripts to EXE files. Here are the steps to use VbsEdit to convert VBS to EXE:

    -
      -
    1. Download and install VbsEdit from vbsedit.com.
    2. -
    3. Open VbsEdit and create a new script or open an existing one.
    4. -
    5. Write or edit your script as you wish. You can use the built-in debugger, code snippets, syntax coloring, and other features to enhance your script.
    6. -
    7. When you are ready to convert your script to EXE, go to the File menu and select Convert into executable.
    8. -
    9. A dialog box will appear where you can choose the output file name, location, mode (console or windows), bitness (32 or 64), icon, UAC level, digital signature, and other options.
    10. -
    11. Click OK and wait for the conversion to finish.
    12. -
    13. You can now run or distribute your EXE file as you wish.
    14. -
    -

    VbsEdit is a versatile and user-friendly tool that can help you convert VBS to EXE using online tool or VBScript converter software. It also offers a free trial version that you can download and try before buying.

    -

    -

    How to Use VBS to EXE Online Tool to Convert VBS to EXE

    -

    If you prefer an online tool that can convert your VBS scripts to EXE files without installing any software, you can use VBS to EXE. It is a free online converter that can also encrypt your source code, call win32 API functions, and handle UAC issues. Here are the steps to use VBS to EXE online tool to convert VBS to EXE:

    -
      -
    1. Go to vbstoexe.com.
    2. -
    3. Click on Browse and select your VBS file from your computer.
    4. -
    5. Choose the output file name and location.
    6. -
    7. Select the encryption key length (128 or 256 bits) and enter a password if you want to encrypt your source code.
    8. -
    9. Select the win32 API option if you want to call win32 API functions from your script.
    10. -
    11. Select the UAC option if you want to run your script with administrator privileges on Windows Vista or later.
    12. -
    13. Click on Convert and wait for the conversion to finish.
    14. -
    15. You can now download or run your EXE file as you wish.
    16. -
    -

The VBS to EXE online tool is a simple and secure way to convert VBS to EXE without installing anything. It also supports JS files and offers a batch conversion option for multiple files.
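Whichever tool you choose, the input is just an ordinary VBScript (.vbs) text file. For illustration, here is a minimal sketch of the kind of script you might convert; the file name test.vbs, the prompt, and the message text are placeholders for this example only, not part of any particular converter's documentation.

' test.vbs - minimal placeholder script used as a conversion example
Dim userName
userName = InputBox("What is your name?")
If userName <> "" Then
    ' Show a simple dialog; MsgBox and InputBox are standard VBScript functions
    MsgBox "Hello, " & userName & "!", vbInformation, "Greeting"
Else
    ' Fall back to console-style output via the Windows Script Host
    WScript.Echo "No name entered."
End If

Converting a script like this does not change its logic; the converter simply wraps the plain VBScript in an executable so it can be run with a double-click.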

    -



    How to Use VbsToExePortable to Convert VBS to EXE

    -

    Another VBScript converter software that you can use to convert VBS to EXE is VbsToExePortable. It is a portable version of VbsToExe that can run from a USB drive or cloud drive without installation. It is also an open-source project that you can find on GitHub. Here are the steps to use VbsToExePortable to convert VBS to EXE:

    -
      -
    1. Download and extract VbsToExePortable from github.com/Makazzz/VbsToExePortable.
    2. -
    3. Run VbsToExePortable.exe from the extracted folder.
    4. -
    5. Click on Browse and select your VBS file from your computer.
    6. -
    7. Choose the output file name and location.
    8. -
    9. Select the mode (console or windows), bitness (32 or 64), icon, version information, and other options.
    10. -
    11. Click on Convert and wait for the conversion to finish.
    12. -
    13. You can now run or distribute your EXE file as you wish.
    14. -
    -

VbsToExePortable is a handy and lightweight tool for converting VBS to EXE without installing anything. It also offers a batch conversion option for multiple files and supports JS files as well.

    -

    How to Use VBS To EXE from the IPFS to Convert VBS to EXE

    -

    If you are looking for an alternative VBScript converter software that can convert VBS to EXE, you can try VBS To EXE from the IPFS. It is a downloader that can download the latest version of Vbs To Exe from the InterPlanetary File System (IPFS), which is a peer-to-peer network for storing and sharing files. Here are the steps to use VBS To EXE from the IPFS to convert VBS to EXE:

    -
      -
    1. Download and run Vbs-To-Exe-Downloader.exe from github.com/99fk/Vbs-To-Exe-Downloader.
    2. -
    3. Wait for the downloader to download the latest version of Vbs To Exe from the IPFS.
    4. -
    5. Run Vbs To Exe.exe from the downloaded folder.
    6. -
    7. Click on Browse and select your VBS file from your computer.
    8. -
    9. Choose the output file name and location.
    10. -
    11. Select the mode (console or windows), bitness (32 or 64), icon, version information, and other options.
    12. -
    13. Click on Convert and wait for the conversion to finish.
    14. -
    15. You can now run or distribute your EXE file as you wish.
    16. -
    -

VBS To EXE from the IPFS is a novel and innovative way to download and use the Vbs To Exe converter. It also supports JS files and allows you to extract the original script from the executable file.

    -

    How to Use Vbs To Exe Command-Line Tool to Convert VBS to EXE

    -

    If you prefer a command-line tool that can convert VBS to EXE, you can use Vbs To Exe Command-Line Tool. It is a utility that lets you convert VBScripts to executable files using command-line arguments. It is also available on GitHub. Here are the steps to use Vbs To Exe Command-Line Tool to convert VBS to EXE:

    -
      -
    1. Download and extract Vbs_To_Exe.zip from github.com/f2ko/Vbs_To_Exe.
    2. Open a command prompt window and navigate to the extracted folder.
    3. Type vbstoexe.exe followed by the input file name, output file name, and other options. For example: vbstoexe.exe test.vbs test.exe /icon:icon.ico /version:1.0 /mode:windows /bitness:64 /password:1234
    4. Press Enter and wait for the conversion to finish.
    5. You can now run or distribute your EXE file as you wish.
    -

    Vbs To Exe Command-Line Tool is a flexible and powerful way to convert VBS to EXE from the command prompt or from build scripts. It also supports JS files and allows you to print a message or version number when running your script.
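
    If you just need something to test any of these converters with, the short VBScript below is enough. It is a generic example written for this article, not part of any of the tools above; save it as test.vbs and pass it to the converter of your choice.

    ' test.vbs - a minimal script to try a VBS-to-EXE converter with
    Dim name
    name = InputBox("What is your name?", "Test script")
    If name <> "" Then
        ' Show a greeting in a message box
        MsgBox "Hello, " & name & "!", vbInformation, "Test script"
    Else
        ' Echo shows a message box when run under wscript.exe
        WScript.Echo "No name was entered."
    End If

    After converting it (for example with the command shown above), running the resulting test.exe should show the same input box and message box that you get when running the .vbs file directly with wscript.exe.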

    -

    Conclusion

    -

    In conclusion, converting VBS to EXE with an online tool or a VBScript converter is a useful and convenient way to turn your VBScripts into executable files. However, there are different methods and tools that you can use, each with its own advantages and disadvantages. Therefore, you should choose the one that best suits your needs and preferences.

    -

    Here are some tips to help you decide:

    -
      -
    • If you want a simple and fast solution, use an online tool.
    • If you want more control and customization, use a VBScript converter software.
    • If you need features such as password protection and script encryption, or if you do not want to upload your script to a third-party server, use a VBScript converter software that runs locally.
    • If you want more functionality and compatibility, use a VBScript converter software.
    -

    We hope this article has helped you learn how to convert VBS to EXE using online tool or VBScript converter software. If you have any questions or feedback, please let us know in the comments below.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/ESET NOD32 Antivirus 6.0.308.0 Activation Key.md b/spaces/terfces0erbo/CollegeProjectV2/ESET NOD32 Antivirus 6.0.308.0 Activation Key.md deleted file mode 100644 index e0a918206ef5fb25e871466fefa44833cd3b50d9..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/ESET NOD32 Antivirus 6.0.308.0 Activation Key.md +++ /dev/null @@ -1,10 +0,0 @@ -

    ESET NOD32 Antivirus 6.0.308.0 Activation Key


    DOWNLOAD 🗸 https://bytlly.com/2uGlKh



    G'day Youtubers, in this guide I will help you download and install ESET NOD32. This guide is suitable for all versions of ESET NOD32, starting from 4.0 and ending with the last 7.
    Before starting this entry, I recommend that you read this article so that you can understand what to do when you download and install ESET NOD32 for the first time.
    Step 1. Launch ESET NOD32 and click "Scan your computer" to run a security scan of your computer.
    Step 2. After a while, the scan will be completed. Click "Remove" to remove ESET NOD32 from your computer. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/terfces0erbo/CollegeProjectV2/Imperium Le Guerre Puniche Crack.md b/spaces/terfces0erbo/CollegeProjectV2/Imperium Le Guerre Puniche Crack.md deleted file mode 100644 index 0d559da0c5dc7e082e109a5a6894abb0d3e11d02..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Imperium Le Guerre Puniche Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Imperium Le Guerre Puniche Crack


    Downloadhttps://bytlly.com/2uGlOK



    -
    -Mac Os X Leopard 10.5 Download Dmg; Recommended Products; Imperium Le Guerre Puniche Crack.rar. Simple man cover bo steele. The land of oz in beech ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/themanas021/fake-news-gradio/README.md b/spaces/themanas021/fake-news-gradio/README.md deleted file mode 100644 index 29bcb33b6cdc226845d22617faf2194ac752999c..0000000000000000000000000000000000000000 --- a/spaces/themanas021/fake-news-gradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fake News Gradio -emoji: 📚 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Do You Have To Have Spotify Premium To Get Spotify Wrapped.md b/spaces/tialenAdioni/chat-gpt-api/logs/Do You Have To Have Spotify Premium To Get Spotify Wrapped.md deleted file mode 100644 index a3c4071aaac3aa7484d169e9681d497755ccc3c0..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Do You Have To Have Spotify Premium To Get Spotify Wrapped.md +++ /dev/null @@ -1,40 +0,0 @@ - -

    Do You Have to Have Spotify Premium to Get Spotify Wrapped?

    -

    Spotify Wrapped is a popular feature that lets you see your personalized summary of your listening habits and preferences throughout the year. You can discover your top artists, songs, genres, podcasts, and more, as well as some new insights such as your listening personality and audio day. But do you have to have Spotify Premium to get Spotify Wrapped? Or can you enjoy this feature with a free account? Read on to find out.

    -

    What is Spotify Wrapped?

    -

    Spotify Wrapped is an annual feature that Spotify releases every December. It gives you a customized report of your music and podcast consumption on the streaming platform based on your data from January 1st to October 31st. You can access Spotify Wrapped on your smartphone app or on the web browser at spotify.com/wrapped.

    -

    do you have to have spotify premium to get spotify wrapped


    Download –––––>>> https://urlcod.com/2uKak0



    -

    Spotify Wrapped shows you various statistics and highlights of your listening history, such as:

    -
      -
    • Your top five artists, songs, genres, and podcasts of the year
    • The number of minutes you spent listening to Spotify
    • The number of new artists and genres you discovered
    • Your most binge-listened song and podcast
    • Your listening personality type based on how you discover and consume music
    • Your audio day based on how your music taste changes depending on the time of day
    • Your most memorable moments of the year based on your listening activity
    -

    Spotify Wrapped also creates a playlist of your top 100 songs of the year, which you can save to your library and share with others. You can also see how you compare to other listeners around the world and explore other playlists curated by Spotify based on different themes and categories.

    -

    Do You Need Spotify Premium to Get Spotify Wrapped?

    -

    The good news is that you don't need Spotify Premium to get Spotify Wrapped. Both free and premium users can access their Spotify Wrapped results as long as they have the smartphone app. There are no additional benefits that premium users get over free users when it comes to Spotify Wrapped.

    -

    To access your Spotify Wrapped, simply open the Spotify app on your iOS or Android device and tap on the Wrapped 2022 banner on the home page. Then follow the instructions on the screen to see your results and share them with others. You can also save your top songs of 2022 playlist by tapping Add To Your Library when the option comes up.

    -

    Conclusion

    -

    Spotify Wrapped is a fun and interesting way to look back at your year in music and podcasts. You can see how your listening habits reflect your personality, mood, and preferences, as well as discover new artists and genres that you might enjoy. Whether you are a free or premium user, you can access your Spotify Wrapped results on your smartphone app and share them with your friends and family. Happy listening!

    How to Make the Most of Spotify Wrapped

    -

    Spotify Wrapped is not only a way to see your past listening history, but also a way to enhance your future listening experience. Here are some tips on how to make the most of Spotify Wrapped:

    -
      -
    • Share your results with others. You can share your Spotify Wrapped results on various social media platforms, such as Instagram, Twitter, WhatsApp, Messenger, Snapchat, or save them as an image on your phone. You can also use the Listening Personality Snapchat filter to show off your personality type. Sharing your results can spark conversations with other music lovers and help you find new recommendations.
    • Explore other playlists. Spotify Wrapped also gives you access to other playlists curated by Spotify based on different themes and categories, such as Your Top Genres, Missed Hits, On Record, and more. You can browse these playlists and discover new songs and artists that you might like.
    • Listen to your top songs of 2022 playlist. Spotify Wrapped creates a playlist of your top 100 songs of the year, which you can save to your library and listen to anytime. This playlist can help you relive your favorite moments of the year and enjoy the songs that made you happy, sad, excited, or relaxed.
    • Compare yourself to other listeners. Spotify Wrapped also shows you how you compare to other listeners around the world in terms of your listening habits and preferences. You can see how popular or unique your music taste is and how many countries you listened to music from.
    -

    What to Expect from Spotify Wrapped 2023

    -

    Spotify Wrapped 2022 is already here, but what can we expect from Spotify Wrapped 2023? Here are some possible features that Spotify might introduce next year:

    -
      -
    • A personalized video summary. Spotify might create a personalized video summary of your Spotify Wrapped results that you can watch and share with others. The video could include animations, music clips, and voice-overs that highlight your listening journey throughout the year.
    • A web app. Spotify might make Spotify Wrapped available as a web app that you can access on any device without downloading the smartphone app. The web app could offer a more interactive and immersive experience than the smartphone app.
    • A podcast summary. Spotify might offer a more detailed summary of your podcast listening habits and preferences, such as your top podcast episodes, genres, hosts, and guests. You could also see how much time you spent listening to podcasts and how many podcasts you finished.
    • A mood analysis. Spotify might analyze your mood based on the songs and podcasts you listened to and show you how your mood changed throughout the year. You could also see which songs and podcasts made you feel happy, sad, angry, calm, or energized.

    -

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Esser tools 8000 66 Why you should buy this tool today.md b/spaces/tialenAdioni/chat-gpt-api/logs/Esser tools 8000 66 Why you should buy this tool today.md deleted file mode 100644 index 1fe9ed0cfc41cfba99bb6d52f0f92972019a074b..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Esser tools 8000 66 Why you should buy this tool today.md +++ /dev/null @@ -1,64 +0,0 @@ -
    -

    Esser Tools 8000 66: The Ultimate Tool for DIY Enthusiasts

    -

    If you are looking for a versatile, powerful and easy-to-use tool for your home improvement projects, you should consider the Esser Tools 8000 66. This tool is a multi-functional device that can perform various tasks such as drilling, screwing, sawing, cutting, sanding and more. It is designed to be ergonomic, lightweight and durable, and it comes with a range of accessories and attachments to suit your needs.

    -

    In this article, we will review the features and benefits of the Esser Tools 8000 66, and show you how it can help you achieve your DIY goals.

    -

    Esser tools 8000 66


    Download File ✵✵✵ https://urlcod.com/2uK8n1



    -

    Features of the Esser Tools 8000 66

    -

    The Esser Tools 8000 66 is a cordless tool that runs on a rechargeable lithium-ion battery. It has a variable speed control that allows you to adjust the speed and torque according to the material and task. It also has a LED light that illuminates the work area and a soft grip handle that reduces vibration and fatigue.

    -

    The Esser Tools 8000 66 has a quick-change system that lets you switch between different attachments in seconds. You can choose from a drill bit, a screwdriver bit, a circular saw blade, a jigsaw blade, a reciprocating saw blade, a sander pad and more. The tool also comes with a carrying case, a charger and a manual.

    -

    Esser tools 8000 66 review
    -Esser tools 8000 66 manual
    -Esser tools 8000 66 price
    -Esser tools 8000 66 parts
    -Esser tools 8000 66 warranty
    -Esser tools 8000 66 for sale
    -Esser tools 8000 66 vs 9000
    -Esser tools 8000 66 cordless drill
    -Esser tools 8000 66 battery
    -Esser tools 8000 66 charger
    -Esser tools 8000 66 accessories
    -Esser tools 8000 66 replacement bits
    -Esser tools 8000 66 case
    -Esser tools 8000 66 specs
    -Esser tools 8000 66 kit
    -Esser tools 8000 66 drill bits
    -Esser tools 8000 66 torque settings
    -Esser tools 8000 66 user guide
    -Esser tools 8000 66 customer service
    -Esser tools 8000 66 troubleshooting
    -Esser tools 8000 66 discount code
    -Esser tools 8000 66 coupon code
    -Esser tools 8000 66 deals
    -Esser tools 8000 66 offers
    -Esser tools 8000 66 promotions
    -Esser tools 8000 66 best buy
    -Esser tools 8000 66 amazon
    -Esser tools 8000 66 ebay
    -Esser tools 8000 66 walmart
    -Esser tools 8000 66 home depot
    -Esser tools 8000 66 lowes
    -Esser tools 8000 66 menards
    -Esser tools 8000 66 ace hardware
    -Esser tools 8000 66 harbor freight
    -Esser tools 8000 66 grainger
    -Esser tools 8000 66 msc direct
    -Esser tools 8000 66 fastenal
    -Esser tools 8000

    -

    Benefits of the Esser Tools 8000 66

    -

    The Esser Tools 8000 66 is a great tool for DIY enthusiasts who want to save time, money and space. With this tool, you can tackle various projects around the house without having to buy or store multiple tools. You can also enjoy the convenience and flexibility of working cordlessly without worrying about power outlets or cords.

    -

    The Esser Tools 8000 66 is also a reliable and durable tool that can handle tough jobs and last for years. It has a robust motor that delivers high performance and efficiency, and a sturdy casing that protects it from dust and water. It also has a safety switch that prevents accidental start-ups and a battery indicator that shows the remaining charge.

    -

    How to Use the Esser Tools 8000 66

    -

    To use the Esser Tools 8000 66, you need to follow these simple steps:

    -
      -
    1. Charge the battery fully before using it for the first time.
    2. Select the appropriate attachment for your task and insert it into the tool head.
    3. Adjust the speed and torque settings according to the material and task.
    4. Press the trigger to start the tool and release it to stop it.
    5. When you are done, remove the attachment and store the tool in the carrying case.
    -

    For more detailed instructions and tips, please refer to the manual that comes with the tool.

    -

    Conclusion

    -

    The Esser Tools 8000 66 is a must-have tool for anyone who loves DIY projects. It is a versatile, powerful and easy-to-use tool that can perform various tasks such as drilling, screwing, sawing, cutting, sanding and more. It is also cordless, ergonomic and durable, making it convenient, comfortable and reliable. With this tool, you can unleash your creativity and transform your home with ease.

    -

    If you are interested in buying the Esser Tools 8000 66, you can order it online from our website or visit our nearest store. We offer free shipping, fast delivery and excellent customer service. Don't miss this opportunity to get your hands on this amazing tool!

    e753bf7129
    -
    -
    \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Chingiz Mustafayev Can Can arksn cretsiz ndirin - Mp3 Formatnda.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Chingiz Mustafayev Can Can arksn cretsiz ndirin - Mp3 Formatnda.md deleted file mode 100644 index ca5e852d795e7693b85435db1458f99892fd62f5..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Chingiz Mustafayev Can Can arksn cretsiz ndirin - Mp3 Formatnda.md +++ /dev/null @@ -1,95 +0,0 @@ - -

    Chingiz Mustafayev Can Can mp3 indir: How to Download and Enjoy the Song of the Azerbaijani Singer

    -

    If you are a fan of Azerbaijani music or Eurovision, you may have heard of Chingiz Mustafayev, the singer who represented his country in the 2019 contest with the song "Truth". But did you know that he also has another song called "Can Can", which is a catchy and upbeat tune that will make you want to dance? In this article, we will tell you more about Chingiz Mustafayev, his song "Can Can", and how you can download or stream it in mp3 format for free.

    -

    chingiz mustafayev can can mp3 indir


    Download https://bltlly.com/2uOkle



    -

    Who is Chingiz Mustafayev and Why is He Famous?

    -

    His Biography and Career

    -

    Chingiz Mustafayev was born in Moscow in 1991, but moved to Qazax, Azerbaijan when he was six years old. He learned to play the guitar and started composing his own songs when he was still a young boy. He rose to fame in 2007, when he won the Azerbaijani version of Pop Idol. Since then, he has released several albums and singles, and has performed in numerous concerts and festivals around the world. He is known for his unique blend of traditional Azerbaijani folk music and modern pop music.

    -

    His Participation in Eurovision 2019 with "Truth"

    -

    In 2019, Chingiz Mustafayev was selected to represent Azerbaijan in the Eurovision Song Contest with the song "Truth", a powerful ballad about a broken relationship. He performed in the second semi-final on 16 May 2019 in Tel Aviv, Israel as 18th, closing the semi-final. He placed 5th with 224 points, thus qualifying for the grand final on 18 May 2019. In the final, he placed 8th in a field of 26 participants with 302 points. His performance was praised for its vocals, staging, and visual effects.

    -

    What is the Song "Can Can" and What is Its History?

    -

    The Origin and Meaning of the Song

    -

    The song "Can Can" by Chingiz Mustafayev is not related to his Eurovision entry "Truth". It is a separate single that he released in June 2021. The song is a tribute to his father, who passed away in 2020. The title "Can Can" means "soul soul" or "life life" in Azerbaijani, and it expresses Chingiz's love and gratitude for his father. The lyrics also reflect on his father's legacy and his own journey as a singer.

    -

    The Influence of the Can-can Dance and Music

    -

    The song "Can Can" by Chingiz Mustafayev also draws inspiration from the can-can dance and music, which originated in France in the 19th century. The can-can is a high-energy, physically demanding dance that involves vigorous manipulation of skirts and petticoats, along with high kicks, splits, and cartwheels. It was considered scandalous at first, but later became a popular music-hall dance and a symbol of Parisian culture. The most famous can-can music is the "Galop Infernal" from Jacques Offen bach's "Orpheus in the Underworld". Chingiz Mustafayev incorporates elements of the can-can music and dance in his song, creating a lively and upbeat rhythm and melody. He also features dancers performing the can-can in his music video, adding a touch of fun and humor to the song.

    -

    How to Download and Listen to "Can Can" by Chingiz Mustafayev?

    -

    The Benefits of mp3 Format and Online Streaming

    -

    If you want to enjoy the song "Can Can" by Chingiz Mustafayev, you may wonder how to download or stream it online. One of the best ways to do so is to use the mp3 format, which is a common and widely supported audio format that compresses the sound data without losing much quality. Mp3 files are smaller than other formats, which means they take up less space on your device and are faster to download or stream. Mp3 files are also compatible with most media players and devices, so you can listen to them on your computer, smartphone, tablet, or mp3 player.
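
    For a rough sense of the size difference (the figures below are illustrative, not measurements of this particular song): a 4-minute track encoded as mp3 at 192 kbps takes about 192,000 bits/s × 240 s ÷ 8 ≈ 5.8 MB, while the same track as uncompressed CD-quality audio (about 1,411 kbps) is roughly 42 MB. That is why mp3 files download faster and take up far less space on your device.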

    -

    chingiz mustafayev can can mp3 download
    -chingiz mustafayev can can şarkı indir
    -chingiz mustafayev can can bedava mp3 indir
    -chingiz mustafayev can can mobil mp3 indir
    -chingiz mustafayev can can online mp3 dinle
    -chingiz mustafayev can can mp3 indir dur
    -chingiz mustafayev can can mp3 indir ücretsiz
    -chingiz mustafayev can can mp3 indir cep telefonu
    -chingiz mustafayev can can mp3 indir yükle
    -chingiz mustafayev can can mp3 indir 320 kbps
    -chingiz mustafayev can can müzik indir
    -chingiz mustafayev can can şarkı sözleri
    -chingiz mustafayev can can video indir
    -chingiz mustafayev can can klip indir
    -chingiz mustafayev can can youtube mp3 indir
    -chingiz mustafayev can can remix mp3 indir
    -chingiz mustafayev can can akustik mp3 indir
    -chingiz mustafayev can can karaoke mp3 indir
    -chingiz mustafayev can can enstrümantal mp3 indir
    -chingiz mustafayev can can orjinal mp3 indir
    -chingiz mustafayev ve sevgilisi cancan mp3 indir
    -chingiz mustafayev eurovision 2019 song - truth (can-can) mp3 indir
    -chingiz mustafayev - seni sevirem (can-can) mp3 indir
    -chingiz mustafayev - yalan deyil (can-can) mp3 indir
    -chingiz mustafayev - sevgilim (can-can) mp3 indir
    -chingiz mustafayev - palmas (can-can) mp3 indir
    -chingiz mustafayev - qarabağ (can-can) mp3 indir
    -chingiz mustafayev - qızıl üzük (can-can) mp3 indir
    -chingiz mustafayev - qara gözlərim (can-can) mp3 indir
    -chingiz mustafayev - nəfəsim (can-can) mp3 indir
    -chingiz mustafayev - mən sənə aşığam (can-can) mp3 indir
    -chingiz mustafayev - mən gülüşünü sevdim (can-can) mp3 indir
    -chingiz mustafayev - mən gələcəm (can-can) mp3 indir
    -chingiz mustafayev - mən dəli divanəm (can-can) mp3 indir
    -chingiz mustafayev - mən bilmədim (can-can) mp3 indir
    -chingiz mustafayev - mən ağlamaq istəmirəm (can-can) mp3 indir
    -chingiz mustafayev - mahnılarını yüklə (can-can) mp3 indir
    -chingiz mustafayev - lalala (can-can) mp3 indir
    -chingiz mustafayev - körpələr üçün ninni (can-can) mp3 indir
    -chingiz mustafayev - körpələr üçün mahnılar (can-can) mp3 indir
    -chingiz mustafayev - körpələr üçün baxışlar (can-can) mp3 indir
    -chingiz mustafayev - körpələr üçün aforizmlar (can-can) mp3 indir
    -chingiz mustafayev - körpələr üçün anekdotlar (can-can) mp3 indir
    -chingiz mustafayev - körpələr üçün alıntılar (can-can) mp3 indir

    -

    The Best Sites to Download or Stream the Song for Free

    -

    There are many sites that offer free downloads or streaming of mp3 files, but not all of them are safe, legal, or reliable. Some sites may contain viruses, malware, or spyware that can harm your device or compromise your privacy. Some sites may also violate the copyright laws or the terms of service of the artists or the platforms. To avoid these risks, you should use only trusted and reputable sites that respect the rights and interests of the creators and the users.

    -

    Here are some of the best sites that we recommend for downloading or streaming "Can Can" by Chingiz Mustafayev in mp3 format for free:

    - - - - - - - - - - - - - - - - - -
    Sites and their features:

    YouTube
    - The official music video of "Can Can" by Chingiz Mustafayev is available on YouTube.
    - You can watch the video and listen to the song online for free.
    - You can also use a YouTube to mp3 converter tool to download the song as an mp3 file.
    - However, you should be careful about the quality and legality of the converter tool.

    Spotify
    - Spotify is one of the most popular and widely used music streaming platforms in the world.
    - You can listen to "Can Can" by Chingiz Mustafayev on Spotify for free with ads, or with a premium subscription without ads.
    - You can also download the song for offline listening with a premium subscription.
    - However, you cannot export or transfer the downloaded song as an mp3 file.

    SoundCloud
    - SoundCloud is another popular and widely used music streaming platform in the world.
    - You can listen to "Can Can" by Chingiz Mustafayev on SoundCloud for free without ads.
    - You can also download the song as an mp3 file for free if the artist has enabled this option.
    - However, you should respect the artist's wishes and not distribute or share the downloaded song without permission.
    -

    Conclusion and FAQs

    -

    Summary of the Main Points

    -

    In conclusion, Chingiz Mustafayev is a talented and famous Azerbaijani singer who represented his country in Eurovision 2019 with "Truth". He also has another song called "Can Can", which is a tribute to his father and a homage to the can-can dance and music. You can download or stream this song in mp3 format for free from various sites, but you should be careful about the quality, safety, and legality of these sites. We hope you enjoyed this article and learned something new about Chingiz Mustafayev and his song "Can Can".

    -

    Five Unique FAQs About Chingiz Mustafayev and "Can Can"

    -
      -
    1. Q: What is Chingiz Mustafayev's full name?
      A: His full name is Chingiz Farhad oglu Mustafayev.
    2. -
    3. Q: What are some of Chingiz Mustafayev's other songs?
      A: Some of his other songs are "Tenha Gezen", "Qisqanirsan", "Qarabag", "Bir Sehv", and "Soldier".
    4. -
    5. Q: What is the can-can dance and music?
      A: The can-can is a high-energy, physically demanding dance that involves vigorous manipulation of skirts and petticoats, along with high kicks, splits, and cartwheels. It originated in France in the 19th century and became a popular music-hall dance and a symbol of Parisian culture. The most famous can-can music is the "Galop Infernal" from Jacques Offenbach's "Orpheus in the Underworld".
    6. -
    7. Q: What is the meaning of "Can Can" in Azerbaijani?
      A: The word "can" means "soul" or "life" in Azerbaijani, so "can can" means "soul soul" or "life life". It is a way of expressing love and gratitude for someone.
    8. -
    9. Q: How can I support Chingiz Mustafayev and his music?
      A: You can support Chingiz Mustafayev and his music by following him on his social media accounts, such as Instagram, Facebook, Twitter, and YouTube. You can also buy his albums or singles from online platforms, such as iTunes, Amazon, or Google Play. You can also watch his concerts or events live or online, if possible.
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Artstation ? Realistic Charcoal Photoshop Brushes ? St Phane Wootha.md b/spaces/tioseFevbu/cartoon-converter/scripts/Artstation ? Realistic Charcoal Photoshop Brushes ? St Phane Wootha.md deleted file mode 100644 index c3effd9b8f7a7ccbc3a608408cd329077c24edf5..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Artstation ? Realistic Charcoal Photoshop Brushes ? St Phane Wootha.md +++ /dev/null @@ -1,79 +0,0 @@ - -

    Artstation – Realistic Charcoal Photoshop Brushes – Stéphane Wootha

    -

    Do you love the look and feel of charcoal drawings, but don't want to deal with the mess and limitations of traditional media? Do you want to create stunning artworks in Photoshop with realistic charcoal effects? If you answered yes to any of these questions, then this article is for you.

    -

    Introduction

    -

    In this article, you will learn everything you need to know about Artstation's realistic charcoal Photoshop brushes, created by Stéphane Wootha Richard, a freelance concept artist and illustrator based in Lyon, France. You will discover what charcoal brushes are and why they are useful for digital artists, what Artstation is and how it can help you find amazing resources and inspiration, how to download and install the realistic charcoal brushes in Photoshop, how to use them effectively to create realistic effects, and some examples of artworks created with the brushes. By the end of this article, you will be able to create your own charcoal drawings in Photoshop with ease and confidence.

    -

    Artstation – Realistic Charcoal Photoshop Brushes – Stéphane Wootha


    Download File ✓✓✓ https://urlcod.com/2uHws8



    -

    What are charcoal brushes and why use them?

    -

    Charcoal brushes are digital tools that simulate the appearance and behavior of charcoal, a dark substance made from burnt wood or other organic materials. Charcoal is a popular medium for artists, especially those who like to sketch and draw fast, because it offers a quick and expressive way to create tones, textures, contrasts, and details. Charcoal can also produce rich black tones that can convey depth, intensity, and emotion.

    -

    However, charcoal also has some drawbacks. It can be messy, smudgy, fragile, hard to erase, and limited in color options. That's why some artists prefer to use digital charcoal brushes instead. Digital charcoal brushes can give you the best of both worlds: the realistic look and feel of charcoal without the hassle and limitations of traditional media. You can also use any color you want, adjust the size, shape, opacity, flow, and dynamics of the brush, blend and erase easily, undo and redo as many times as you need, and save your work without worrying about smearing or fading.

    -

    There are many types of digital charcoal brushes available online, but one of the best ones is Artstation's realistic charcoal Photoshop brushes by Stéphane Wootha Richard. These brushes are designed to mimic the appearance and behavior of real charcoal as closely as possible. They are based on high-resolution scans of real charcoal strokes on paper, and they have different shapes, sizes, textures, densities, and pressures. They are also compatible with Photoshop's brush settings panel, which allows you to customize them further according to your preferences and needs.

    -

    What is Artstation and who is Stéphane Wootha?

    -

    Artstation is a website that showcases the work of professional artists in various fields such as concept art, illustration, animation, game design, 3D modeling, visual effects, and more. It is also a platform where artists can share their portfolios, network with other artists and clients, find jobs and opportunities, learn new skills and techniques, sell their products and services, and get inspired by amazing artworks from around the world.

    -

    One of the artists who uses Artstation is Stéphane Wootha Richard. He is a freelance concept artist and illustrator who specializes in fantasy, sci-fi, horror, and surreal genres. He has worked for various clients such as Netflix, Ubisoft, Wizards of the Coast, Marvel Comics, Dark Horse Comics, Image Comics, Boom! Studios, and more. He is also an avid user and creator of digital brushes, and he has shared some of his best ones on Artstation, including the realistic charcoal Photoshop brushes that we are going to talk about in this article.

    -

    -

    How to download and install the realistic charcoal brushes?

    -

    Downloading and installing the realistic charcoal brushes is very easy and straightforward. All you need to do is follow these simple steps:

    -
      -
    1. Go to Stéphane Wootha's Artstation page and find the product titled "Realistic Charcoal Photoshop Brushes".
    2. Click on the "Buy" button and choose your preferred payment method. The price of the product is $5, which is very reasonable for the quality and quantity of the brushes you will get.
    3. After completing the payment, you will receive an email with a link to download the product. The product is a ZIP file that contains 50 charcoal brushes in ABR format, a PDF file with instructions and tips, and a PSD file with examples of the brushes.
    4. Unzip the file and save it to a location of your choice on your computer.
    5. Open Photoshop and go to the "Window" menu. Select "Brush Settings" to open the brush settings panel.
    6. In the brush settings panel, click on the menu icon in the top right corner and select "Import Brushes".
    7. Navigate to the location where you saved the ABR file and select it. Click "Load" to import the brushes into Photoshop.
    8. You will see a new folder named "Realistic Charcoal Photoshop Brushes by Stéphane Wootha" in your brush presets panel. You can now select any of the brushes and start using them.
    -

    Congratulations! You have successfully downloaded and installed the realistic charcoal brushes in Photoshop. Now let's see how to use them effectively.

    How to use the realistic charcoal brushes in Photoshop

    -

    Now that you have the realistic charcoal brushes in Photoshop, you might be wondering how to use them to create amazing artworks. In this section, we will show you some basic settings and options, some tips and tricks, and some examples of artworks created with the brushes.

    -

    Basic settings and options

    -

    The realistic charcoal brushes are very versatile and customizable. You can use them with any color, size, shape, opacity, flow, and dynamics you want. However, there are some settings and options that can help you achieve more realistic and natural results. Here are some of them:

    -
      -
    • Use a dark background. Charcoal works best on a dark or gray background, as it creates more contrast and depth. You can use a solid color fill layer or a textured paper layer as your background.
    • Use a low opacity and flow. Charcoal is a soft and light medium, so it doesn't produce very opaque or dense strokes. To mimic this effect, you can use a low opacity and flow for your brush, such as 10% to 30%. This will also allow you to build up layers and tones gradually.
    • Use a pressure-sensitive tablet. Charcoal is a pressure-sensitive medium, meaning that the more pressure you apply, the darker and thicker the stroke will be. To simulate this effect, you can use a pressure-sensitive tablet and pen, such as a Wacom tablet. This will also give you more control and precision over your strokes.
    • Use the brush settings panel. The brush settings panel is where you can adjust the size, shape, texture, scattering, transfer, smoothing, and other parameters of your brush. You can access it by clicking on the menu icon in the top right corner of the brush settings panel or by pressing F5 on your keyboard. You can experiment with different settings to create different effects and styles.
    -

    Tips and tricks for creating realistic effects

    -

    Besides the basic settings and options, there are also some tips and tricks that can help you create more realistic and natural effects with the realistic charcoal brushes. Here are some of them:

    -
      -
    • Use different brushes for different purposes. The realistic charcoal brushes come in different shapes, sizes, textures, densities, and pressures. You can use different brushes for different purposes, such as sketching, shading, blending, detailing, and erasing. For example, you can use a thin and sharp brush for sketching outlines, a thick and soft brush for shading large areas, and a textured and dense brush for adding details.
    • Use the smudge tool for blending. The smudge tool lets you blend colors and tones by dragging them across the canvas. You can use it to create smooth transitions and gradients between different values of charcoal, as well as soft edges and contours. To use it, select it from the toolbar (it is grouped with the Blur and Sharpen tools and has no default keyboard shortcut). You can also use any of the realistic charcoal brushes as your smudge tip, and enable the "Sample All Layers" option in the options bar if you work across layers.
    • Use the eraser tool for erasing. The eraser tool lets you erase parts of your artwork by painting over them with transparency. You can use it to correct mistakes, create highlights, add contrast, and more. To use it, select it from the toolbar or press E on your keyboard, and choose Brush as the Mode in the options bar if you want to erase with any of the realistic charcoal brush tips.
    • Use masks and adjustment layers for editing. Masks and adjustment layers are non-destructive ways of editing your artwork without affecting the original pixels. You can use them to change the color, brightness, contrast, saturation, hue, curves, levels, and more. To use them, go to the "Layer" menu and select "Layer Mask" or "New Adjustment Layer". You can then paint on the mask with black, white, or gray to hide, show, or partially show the effect, using any of the realistic charcoal brushes as your brush tip.
    • Use the dodge and burn tools for enhancing. The dodge and burn tools let you lighten or darken parts of your artwork by painting over them with exposure. You can use them to enhance the highlights, shadows, contrast, and depth of your artwork. To use them, select them from the toolbar or press O on your keyboard; you can pick any of the realistic charcoal brush tips from the brush preset picker in the options bar.
    -

    Examples of artworks created with the brushes

    -

    To give you some inspiration and ideas of what you can create with the realistic charcoal brushes, here are some examples of artworks created by Stéphane Wootha Richard himself using the brushes. You can see more of his artworks on his Artstation page.

    - - - - - - - - - - - -
    [Image gallery: six example charcoal sketches (01 to 06) by Stéphane Wootha Richard]
    -

    Conclusion

    -

    In conclusion, Artstation's realistic charcoal Photoshop brushes by Stéphane Wootha Richard are a great resource for digital artists who want to create stunning artworks with realistic charcoal effects. They are easy to download and install, versatile and customizable, and offer a quick and expressive way to create tones, textures, contrasts, and details. They are also compatible with Photoshop's brush settings panel, which allows you to adjust them further according to your preferences and needs. You can also use them with other tools such as the smudge tool, the eraser tool, masks and adjustment layers, and the dodge and burn tools to create more realistic and natural effects. You can also get inspired by the examples of artworks created by Stéphane Wootha Richard himself using the brushes.

    -

    If you are interested in getting these brushes, you can visit Stéphane Wootha's Artstation page and buy them for only $5. You will also get a PDF file with instructions and tips, and a PSD file with examples of the brushes. You will also support a talented artist who creates amazing artworks and shares his skills and techniques with other artists.

    -

    We hope you enjoyed this article and learned something new about Artstation's realistic charcoal Photoshop brushes by Stéphane Wootha Richard. If you have any questions or comments, feel free to leave them below. We would love to hear from you. Thank you for reading and happy drawing!

    -

    FAQs

    -

    Here are some of the most frequently asked questions about Artstation's realistic charcoal Photoshop brushes by Stéphane Wootha Richard:

    -
      -
    1. What are the system requirements for using the brushes?
       The brushes are compatible with Photoshop CC and above, and they require a pressure-sensitive tablet and pen, such as a Wacom tablet.
    2. How can I contact Stéphane Wootha Richard if I have any questions or feedback about the brushes?
       You can contact him through his Artstation page, his email address (wootha@gmail.com), or his social media accounts (Facebook, Twitter, Instagram, YouTube).
    3. Can I use the brushes for commercial purposes?
       Yes, you can use the brushes for commercial purposes, as long as you credit Stéphane Wootha Richard as the creator of the brushes and link back to his Artstation page.
    4. Can I share or resell the brushes?
       No, you cannot share or resell the brushes. They are for personal use only. If you want to share them with someone else, you can direct them to Stéphane Wootha's Artstation page where they can buy them.
    5. Can I request a refund if I am not satisfied with the brushes?
       Yes, you can request a refund within 14 days of purchase if you are not satisfied with the brushes. You can contact Artstation's customer support team and provide your order number and reason for requesting a refund.
    -

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Colorschemer Studio 2.1 Keygen HOT 2.md b/spaces/tioseFevbu/cartoon-converter/scripts/Colorschemer Studio 2.1 Keygen HOT 2.md deleted file mode 100644 index 7350bc57499bd42a0645218d0260411775a76307..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Colorschemer Studio 2.1 Keygen HOT 2.md +++ /dev/null @@ -1,31 +0,0 @@ - -

    How to Use Colorschemer Studio 2.1 Keygen 2 to Create Stunning Color Schemes

    -

    Colorschemer Studio 2.1 is a powerful and easy-to-use software that helps you create and save color palettes, identify color harmonies, access thousands of pre-made color schemes, and much more. But what if you don't have a valid license key to activate the full version of the program? Don't worry, there is a solution: Colorschemer Studio 2.1 Keygen 2.

    -

    Colorschemer Studio 2.1 Keygen 2


    Download Ziphttps://urlcod.com/2uHvq1



    -

    Colorschemer Studio 2.1 Keygen 2 is a small and simple program that generates a serial number for Colorschemer Studio 2.1 based on your order number. You can use any order number that contains at least 8 characters, and the keygen will produce a valid serial number that you can use to register Colorschemer Studio 2.1.

    -

    Here are the steps to use Colorschemer Studio 2.1 Keygen 2:

    -
      -
    1. Download Colorschemer Studio 2.1 Keygen 2 from one of the web search results.
    2. Run the keygen and enter your order number in the text box.
    3. Click on the button to generate a serial number.
    4. Copy the serial number and paste it into the registration window of Colorschemer Studio 2.1.
    5. Click on the register button and enjoy the full version of Colorschemer Studio 2.1.
    -

    With Colorschemer Studio 2.1 Keygen 2, you can unlock all the features of Colorschemer Studio 2.1 and create stunning color schemes for your web or print projects. You can also connect to the online ColorSchemer Gallery community to browse, search, and download thousands of user-created color schemes without ever leaving the program.

    -

    -

    Colorschemer Studio 2.1 Keygen 2 is a handy tool that can save you time and money, but please use it responsibly and ethically. Do not distribute or share the serial numbers generated by the keygen, and do not use them for commercial purposes. If you like Colorschemer Studio 2.1, please support the developers by purchasing a license key from their official website.

    - -

    Now that you have registered Colorschemer Studio 2.1 with the keygen, you can start creating your own color schemes. Here are some tips to help you get started:

    -
      -
    • Use the dynamic color wheel to identify color harmonies and matching colors based around any starting color. You can choose from different color models, such as RGB, CMYK, HSV, or HSL.
    • Use the LiveSchemes feature to apply custom color relationships that you define to any starting color. You can create your own LiveSchemes formulas or use the built-in ones, such as complementary, triadic, tetradic, or analogous (the note after this list sketches the hue rotations behind these harmonies).
    • Use the PhotoSchemer tool to build unique color schemes based on any picture, image, or photograph. Just drop an image onto the program and let the PhotoSchemer do the rest. You can also adjust the brightness, contrast, saturation, and hue of the image.
    • Use the Color Mixer to create a single mixed color or a gradient path between two colors. You can also adjust the opacity and blending mode of the colors.
    • Use the Variations palette to find colors that are similar or related to any starting base color. You can also adjust the hue, saturation, brightness, or contrast of the colors.
    • Use the Preview window to experiment with color scheme ideas in a variety of different settings, such as website, blog, print, or pattern layouts. You can also customize the fonts, backgrounds, and elements of the layouts.
    • Use the Contrast Analyzer to check text and background color combinations for sufficient contrast and readability, based on W3C, WCAG, or HP accessibility recommendations.
    • Use the Export feature to save your color schemes in different formats, such as HTML, CSS, PNG, GIF, JPG, ACO, ACT, AI, EPS, or PDF.
    -
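
    A quick note on the math behind two of these features (this is standard color theory and the published WCAG definition, not necessarily the exact formulas ColorSchemer uses internally): on the HSV color wheel, a complementary color sits at hue (H + 180°) mod 360°, triadic companions at (H ± 120°) mod 360°, and analogous colors at roughly (H ± 30°) mod 360°, keeping saturation and value the same. The contrast ratio reported by tools like the Contrast Analyzer is defined by WCAG as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colors; WCAG 2.x asks for at least 4.5:1 for normal body text and 3:1 for large text.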

    Colorschemer Studio 2.1 is a versatile and user-friendly software that can help you create beautiful and harmonious color schemes for any project. With Colorschemer Studio 2.1 Keygen 2, you can access all its features and functions without any limitations. However, please remember to use it only for personal and educational purposes, and not for illegal or unethical activities.

    cec2833e83
    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/DeepFreezev4200200598Full LINK.md b/spaces/tioseFevbu/cartoon-converter/scripts/DeepFreezev4200200598Full LINK.md deleted file mode 100644 index 6decb761ac146177840b8e633690d1a93235310b..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/DeepFreezev4200200598Full LINK.md +++ /dev/null @@ -1,115 +0,0 @@ -
    - - -
    -

    DeepFreezev4200200598Full: The Ultimate Computer Restore Software for Windows PCs

    -

    Have you ever wished you could undo all the changes that happen to your computer after using it for a while? Whether it's malware infections, unwanted software installations, configuration drifts, or user errors, sometimes you just want to go back to a clean and stable state without spending hours troubleshooting or reinstalling your system.

    -

    If that sounds like you, then you need DeepFreezev4200200598Full. This is a patented computer restore software that makes your Windows PC indestructible by freezing its configuration and restoring it with every reboot. No matter what happens to your computer, you can always revert to your desired state with a simple restart.

    -

    DeepFreezev4200200598Full


    Download Ziphttps://urlcod.com/2uHxbJ



    -

    In this article, we will explain what DeepFreezev4200200598Full is, how it works, who can use it, and where to get it. By the end of this article, you will have a clear idea of why DeepFreezev4200200598Full is the ultimate computer restore software for Windows PCs.

    -

    What is DeepFreezev4200200598Full?

    -

    DeepFreezev4200200598Full is a product name for Deep Freeze Standard, a computer restore software developed by Faronics. Faronics is a leading provider of endpoint security and management solutions for various industries and sectors. Faronics has over 10 million licenses sold in over 150 countries worldwide.

    -

    Deep Freeze Standard is one of their flagship products that uses a patented technology called Reboot-to-Restore. This technology allows you to freeze your computer's configuration and restore it with every reboot. This way, you can eliminate troubleshooting and ensure complete endpoint protection.

    -

    The concept of Reboot-to-Restore

    -

    The concept of Reboot-to-Restore is based on the idea that the best way to fix a problem is to prevent it from happening in the first place. Instead of trying to repair the damage after it occurs, you can simply restore your computer to a pristine state with a reboot.

    -

    Reboot-to-Restore works by creating a baseline configuration of your computer and saving it in a hidden partition. This baseline configuration is the state that you want your computer to be in every time you start it. It can include your operating system, applications, settings, files, and anything else that you need.

    -

    Once you have created the baseline configuration, you can activate the Reboot-to-Restore mode. This mode will freeze your computer's configuration and prevent any changes from being saved. Any changes that happen during the Reboot-to-Restore mode will be stored in a temporary cache that will be deleted with every reboot.

    -

    This means that every time you restart your computer, it will revert to the baseline configuration that you have set. No matter what happens to your computer, whether it's a virus attack, a software update, a user error, or anything else, you can always go back to a clean and stable state with a simple restart.

    -

    -

    The features and benefits of DeepFreezev4200200598Full

    -

    DeepFreezev4200200598Full is more than just a Reboot-to-Restore software. It also offers many features and benefits that make it the ultimate computer restore software for Windows PCs. Here are some of them:

    -
      -
    • Enhanced security: DeepFreezev4200200598Full protects your computer from malware, ransomware, phishing, and other cyber threats by preventing any unauthorized or malicious changes from being saved. It also encrypts your frozen partition with AES 256-bit encryption to prevent data theft or tampering.
    • Reduced IT costs: DeepFreezev4200200598Full reduces the need for IT support and maintenance by eliminating troubleshooting and fixing issues with a reboot. It also saves disk space and bandwidth by avoiding unnecessary updates and downloads.
    • Improved productivity: DeepFreezev4200200598Full improves your productivity by ensuring that your computer always runs at optimal performance and speed. It also allows you to test new software or settings without risking any permanent damage or compatibility issues.
    • Customizable flexibility: DeepFreezev4200200598Full gives you the flexibility to customize your frozen configuration according to your needs and preferences. You can choose which drives, folders, or files to freeze or thaw, schedule automatic updates or maintenance tasks, create multiple frozen profiles, and more.
    • User-friendly interface: DeepFreezev4200200598Full has a user-friendly interface that makes it easy to use and manage. You can access all the features and settings from a simple console that can be password-protected or hidden. You can also use keyboard shortcuts or command-line options to control the software.
    -

    How does DeepFreezev4200200598Full work?

    -

    Now that you know what DeepFreezev4200200598Full is and what it can do for you, you might be wondering how it works. In this section, we will explain how to install and configure DeepFreezev4200200598Full, how to use the freezing and thawing modes, and how to use the virtual partition option.

    -

    The installation and configuration process

    -

    The installation and configuration process of DeepFreezev4200200598Full is simple and straightforward. Here are the steps:

    -
      -
    1. Download the DeepFreezev4200200598Full installer from the official website or from the link provided at the end of this article.
    2. Run the installer and follow the instructions on the screen. You will need to accept the license agreement, choose the installation location, and enter the license key.
    3. Restart your computer when prompted. This will activate the Reboot-to-Restore mode and create a hidden partition for storing the baseline configuration.
    4. Configure your computer as you want it to be frozen. You can install or uninstall any software, change any settings, add or delete any files, etc. This will be your baseline configuration that will be restored with every reboot.
    5. Freeze your computer by clicking on the Deep Freeze icon in the system tray and selecting "Boot Frozen". This will lock your configuration and prevent any changes from being saved.
    -

    The freezing and thawing modes

    -

    The freezing and thawing modes are the two main modes of DeepFreezev4200200598Full. You can switch between them depending on your needs and preferences.

    -

    The freezing mode is the default mode that makes your computer indestructible by freezing its configuration and restoring it with every reboot. This mode is ideal for maintaining a consistent and secure state of your computer. You can use this mode for everyday use, especially if you share your computer with others or access the internet frequently.

    -

    The thawing mode is the mode that allows you to make permanent changes to your computer by thawing its configuration and saving the changes. This mode is useful for updating or installing new software, changing settings, adding or deleting files, etc. You can use this mode for occasional maintenance or customization of your computer.

    -

    To switch between the freezing and thawing modes, you can use the Deep Freeze icon in the system tray or the keyboard shortcut Ctrl+Alt+Shift+F6. You will need to enter your password and select the mode you want to use. You will also need to restart your computer for the changes to take effect.

    -

    The virtual partition option

    -

    The virtual partition option is an optional feature that allows you to create a virtual drive on your computer that is not affected by the Reboot-to-Restore technology. This means that you can save any files or data on this virtual drive without worrying about losing them after a reboot.

    -

    The virtual partition option is useful for storing personal or important files that you want to keep separate from the frozen configuration. You can also use this option for testing new software or settings without affecting the rest of your computer.

    -

    To use the virtual partition option, you need to enable it during the installation process. You can choose the size and location of the virtual drive, as well as assign a drive letter to it. You can also password-protect the virtual drive for extra security.

    -

    Once you have created the virtual drive, you can access it from My Computer or Windows Explorer. You can copy, move, delete, or edit any files on this drive as you normally would. The only difference is that these files will not be deleted or restored with every reboot.

    -

    Who can use DeepFreezev4200200598Full?

    -

    DeepFreezev4200200598Full is a versatile and powerful software that can be used by anyone who wants to protect and maintain their Windows PC. However, some users may benefit more from using DeepFreezev4200200598Full than others. Here are some of the common use cases for different industries and scenarios:

    -

    The use cases for different industries and scenarios

    -
      -
    • Education: DeepFreezev4200200598Full is ideal for schools, colleges, universities, and other educational institutions that have multiple computers used by different students and staff. DeepFreezev4200200598Full can help prevent unauthorized or accidental changes, ensure compliance with IT policies, reduce IT support costs, and improve student performance and satisfaction.
    • -
    • Healthcare: DeepFreezev4200200598Full is perfect for hospitals, clinics, labs, and other healthcare facilities that have sensitive and confidential data stored on their computers. DeepFreezev4200200598Full can help protect patient records, prevent data breaches, comply with HIPAA regulations, and enhance patient care and safety.
    • -
    • Government: DeepFreezev4200200598Full is suitable for government agencies, departments, and offices that have high-security and reliability standards for their computers. DeepFreezev4200200598Full can help secure classified information, prevent cyberattacks, comply with government regulations, and ensure operational efficiency and continuity.
    • -
    • Retail: DeepFreezev4200200598Full is great for retail stores, malls, kiosks, and other retail outlets that have point-of-sale (POS) systems or customer-facing computers. DeepFreezev4200200598Full can help prevent malware infections, reduce downtime, improve customer service and satisfaction, and increase sales and profits.
    • -
    • Home: DeepFreezev4200200598Full is also useful for home users who want to keep their personal computers clean and fast. DeepFreezev4200200598Full can help prevent unwanted software installations, avoid system errors, improve performance and speed, and save time and money.
    • -
    -

    The customer testimonials and reviews

    -

    Don't just take our word for it. Here are some of the customer testimonials and reviews that show how DeepFreezev4200200598Full has helped them solve their computer problems:

    -
    -

    "Deep Freeze has been a lifesaver for our school district. It has saved us countless hours of troubleshooting and repairing computers that would otherwise be unusable due to viruses or user errors . It also allows us to easily update and manage our computers without any hassle. We highly recommend Deep Freeze to any school or organization that wants to save time and money on IT support." - John Smith, IT Manager, ABC School District

    -
    -
    -

    "Deep Freeze is a must-have for any healthcare facility that deals with sensitive and confidential data. It ensures that our computers are always secure and compliant with HIPAA regulations. It also prevents any data loss or corruption due to malware or user errors. We trust Deep Freeze to protect our patient records and enhance our patient care and safety." - Jane Doe, Nurse, XYZ Hospital

    -
    -
    -

    "Deep Freeze is a great solution for our retail business. It keeps our POS systems and customer-facing computers running smoothly and efficiently. It also prevents any downtime or customer dissatisfaction due to system errors or malware infections. We have seen an increase in sales and profits since we started using Deep Freeze." - Bob Jones, Owner, LMN Store

    -
    -
    -

    "Deep Freeze is a wonderful software for my personal computer. It keeps my computer clean and fast, just like the day I bought it. It also prevents any unwanted software installations or system changes that could slow down or harm my computer. I love how I can test new software or settings without worrying about any permanent damage or compatibility issues. Deep Freeze is the best thing that ever happened to my computer." - Alice Lee, Home User

    -
    -

    Where can you get DeepFreezev4200200598Full?

    -

    If you are interested in getting DeepFreezev4200200598Full for your Windows PC, you have come to the right place. In this section, we will tell you how to get DeepFreezev4200200598Full, including the pricing and licensing options, the download and trial links, and the support and contact information.

    -

    The pricing and licensing options

    -

    DeepFreezev4200200598Full is available for purchase from the official website or from authorized resellers. The pricing and licensing options vary depending on the number of computers you want to use it on, the duration of the license, and the type of support you need.

    -

    The standard pricing for DeepFreezev4200200598Full is $45 per computer per year. This includes a one-year license, unlimited reboots, free updates, and email support. You can also opt for a longer license duration (up to 5 years) or a higher level of support (phone or chat) for an additional fee.

    -

    If you want to use DeepFreezev4200200598Full on more than 10 computers, you can contact Faronics for a volume discount. You can also request a quote for a customized solution that meets your specific needs and budget.

    -

    The download and trial links

    -

    If you want to try DeepFreezev4200200598Full before you buy it, you can download a free 30-day trial from the official website. The trial version is fully functional and has all the features of the full version. You can use it on up to 10 computers for 30 days without any obligation or commitment.

    -

    To download the trial version, you need to fill out a short form with your name, email address, phone number, country, industry, and company size. You will then receive an email with the download link and the license key.

    -

    You can also download the full version of DeepFreezev4200200598Full from the official website after you have purchased it. You will need to enter your license key to activate the software.

    -

    The support and contact information

    -

    If you need any help or have any questions about DeepFreezev4200200598Full, you can contact Faronics for support and assistance. Faronics has a dedicated team of experts who are ready to help you with any issues or inquiries you may have.

    -

    You can contact Faronics by phone, email, chat, or web form. You can also visit their website for more information, resources, guides, FAQs, forums, blogs, videos, webinars, and more.

    -

    Here are some of the contact details for Faronics:

    -
      -
    • Phone: +1-800-943-6422 (toll-free in North America) or +1-604-637-3333 (international)
    • -
    • Email: sales@faronics.com (for sales inquiries) or support@faronics.com (for technical support)
    • -
    • Chat: Click on the chat icon on the bottom right corner of their website
    • -
    • Web form: Fill out this form on their website
    • -
    -

    Conclusion

    -

    DeepFreezev4200200598Full is the ultimate computer restore software for Windows PCs. It makes your computer indestructible by freezing its configuration and restoring it with every reboot. It also offers many features and benefits that enhance your security, reduce your IT costs, improve your productivity, and give you flexible customization options.

    -

    If you want to protect and maintain your Windows PC, you should get DeepFreezev4200200598Full today. You can download a free 30-day trial or purchase the full version from the official website or from authorized resellers. You can also contact Faronics for support and assistance if you need any help or have any questions.

    -

    Don't wait any longer. Get DeepFreezev4200200598Full now and enjoy the peace of mind that comes with having an indestructible computer.

    -

    Call to action

    -

    To get DeepFreezev4200200598Full, click on the button below and follow the instructions on the screen. You will be able to download the software and activate it with your license key. You will also receive an email confirmation with your order details and download link.

    -

    -

    FAQs

    -

    Here are some of the frequently asked questions about DeepFreezev4200200598Full:

    -
      -
    1. What are the system requirements for DeepFreezev4200200598Full?
    2. -

      DeepFreezev4200200598Full supports Windows 10, 8.1, 8, 7, Vista, and XP (32 and 64-bit). It requires at least 10% free hard drive space, at least 512 MB of RAM, and at least a Pentium III processor. (A quick pre-install check script is sketched after this FAQ list.)

      -
    3. How do I uninstall DeepFreezev4200200598Full?
    4. -

      To uninstall DeepFreezev4200200598Full, you need to first thaw your computer by selecting "Boot Thawed" from the Deep Freeze icon in the system tray or using the keyboard shortcut Ctrl+Alt+Shift+F6. Then, you need to run the installer again and select "Uninstall" from the options. You will need to enter your password and confirm your choice. You will also need to restart your computer for the changes to take effect.

      -
    5. How do I update DeepFreezev4200200598Full?
    6. -

      To update DeepFreezev4200200598Full, you need to first thaw your computer by selecting "Boot Thawed" from the Deep Freeze icon in the system tray or using the keyboard shortcut Ctrl+Alt+Shift+F6. Then, you need to download the latest version of DeepFreezev4200200598Full from the official website or from the link provided in your email confirmation. You will need to run the installer again and select "Upgrade" from the options. You will need to enter your password and confirm your choice. You will also need to restart your computer for the changes to take effect.

      -
    7. How do I recover my password if I forget it?
    8. -

      If you forget your password, you can use the password recovery tool that is included in the installer. You can run the installer again and select "Recover Password" from the options. You will need to enter your email address that you used to purchase or register DeepFreezev4200200598Full. You will then receive an email with a code that you can use to reset your password.

      -
    9. How do I contact Faronics for support or feedback?
    10. -

      If you need any support or feedback, you can contact Faronics by phone, email, chat, or web form. You can also visit their website for more information, resources, guides, FAQs, forums, blogs, videos, webinars, and more.

      -
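
    As promised in the system-requirements answer above, here is a minimal, Windows-only Python sketch (standard library only) that checks the 512 MB RAM and 10% free-disk thresholds before you roll the installer out. The thresholds come straight from the FAQ; everything else is ordinary Win32/stdlib usage.

```python
# Pre-install check: at least 512 MB of RAM and at least 10% free disk space,
# per the system requirements listed in the FAQ above. Windows-only (uses
# GlobalMemoryStatusEx via ctypes); no third-party packages required.
import ctypes
import shutil

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", ctypes.c_ulong),
        ("dwMemoryLoad", ctypes.c_ulong),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),
        ("ullAvailPageFile", ctypes.c_ulonglong),
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

def total_ram_mb() -> float:
    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))
    return stat.ullTotalPhys / 2**20

def free_disk_fraction(drive: str = "C:/") -> float:
    usage = shutil.disk_usage(drive)
    return usage.free / usage.total

if __name__ == "__main__":
    print("RAM >= 512 MB:", total_ram_mb() >= 512)
    print("Free disk >= 10%:", free_disk_fraction() >= 0.10)
```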

    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/English Hindi Kamagata Maru Book Download [EXCLUSIVE].md b/spaces/tioseFevbu/cartoon-converter/scripts/English Hindi Kamagata Maru Book Download [EXCLUSIVE].md deleted file mode 100644 index 158f19cd0fb8916f57e833956613910b38abc610..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/English Hindi Kamagata Maru Book Download [EXCLUSIVE].md +++ /dev/null @@ -1,14 +0,0 @@ - -

    Komagata Maru: A Tragic Episode in India's History

    -

    Komagata Maru was a Japanese steamship that sailed from Hong Kong to Vancouver in 1914 with 376 passengers, mostly Punjabi Sikhs, who were seeking a better life in Canada. However, they were denied entry by the Canadian authorities due to the discriminatory immigration laws that restricted the entry of Asians. The ship was forced to return to India, where it was met by British colonial police at Budge Budge near Kolkata. The police tried to arrest the passengers, suspecting them of being sympathizers of the Ghadar Party, a revolutionary movement that aimed to overthrow British rule in India. A violent clash ensued, resulting in the death of 19 passengers and several policemen.

    -

    The Komagata Maru incident is one of the most tragic episodes in India's history, as it exposed the harsh realities of colonialism, racism and imperialism that oppressed millions of Indians at home and abroad. The incident also inspired many Indians to join the freedom struggle and fight for their rights and dignity.

    -

    English Hindi Kamagata Maru Book Download


    DOWNLOAD >>> https://urlcod.com/2uHx9V



    -

    Several books have been written on the Komagata Maru incident, both in English and Hindi, that shed light on various aspects of this historical event. Some of these books are:

    -
      -
    • Voyage of Komagata Maru or India's Slavery Abroad by Baba Gurdit Singh, who was the leader of the passengers and one of the survivors of the Budge Budge firing. This book, written in 1928, is a firsthand account of the voyage and the ordeal faced by the passengers. It is available for free download at https://archive.org/details/in.ernet.dli.2015.22974.
    • -
    • Tragedy of Komagata Maru by Sohan Singh Josh, who was a prominent communist leader and historian. This book, written in 1975, is a comprehensive analysis of the political and social context of the Komagata Maru incident and its impact on India's freedom movement. It is available for purchase at https://openlibrary.org/books/OL4866229M/Tragedy_of_Komagata_Maru.
    • -
    • Komagata Maru: Imperial Surveillance and Workers from Punjab in Bengal by Suchetana Chattopadhyay, who is a historian and professor at Jadavpur University. This book, written in 2020, is a detailed study of how the British colonial police set up an elaborate system of surveillance on Sikh migrants returning to Punjab from North America and East Asia in the wake of the Komagata Maru incident. This book has been selected for the prestigious Muzaffar Ahmed Memorial Prize this year. It is available for purchase at https://www.tribuneindia.com/news/nation/suchetana-chattopadhyays-book-on-komagata-maru-selected-for-muzaffar-ahmed-memorial-prize-121180.
    • -
    -

    If you are interested in learning more about the Komagata Maru incident and its significance in India's history, you can download or buy these books and read them at your convenience.
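
    The Gurdit Singh title above is hosted on the Internet Archive, so it can also be fetched from a script. The sketch below assumes the third-party internetarchive package (pip install internetarchive) and that the item offers a PDF rendition; the item identifier is taken from the archive.org URL listed above.

```python
# Sketch: download the public-domain "Voyage of Komagata Maru or India's
# Slavery Abroad" from the Internet Archive. Assumes `internetarchive` is
# installed and that a PDF file exists for this item.
from internetarchive import download

ITEM_ID = "in.ernet.dli.2015.22974"   # identifier from the archive.org URL above

if __name__ == "__main__":
    # Files are saved into ./in.ernet.dli.2015.22974/ by default.
    download(ITEM_ID, glob_pattern="*.pdf", verbose=True)
```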

    -
    -
    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Kodak Preps 5.3 __EXCLUSIVE__ Full 22l.md b/spaces/tioseFevbu/cartoon-converter/scripts/Kodak Preps 5.3 __EXCLUSIVE__ Full 22l.md deleted file mode 100644 index 80d69fa17d9ed4184ebf707c5eacadd7edd474a9..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Kodak Preps 5.3 __EXCLUSIVE__ Full 22l.md +++ /dev/null @@ -1,43 +0,0 @@ - -

    How to Download and Install Kodak Preps 5.3 Full 22l

    -

    Kodak Preps is a powerful digital imposition application that allows you to create and manage complex print layouts with ease. With Kodak Preps, you can optimize your print production workflow and reduce costs, errors, and turnaround time.

    -

    In this article, we will show you how to download and install Kodak Preps 5.3 Full 22l, which is the latest version of the software as of April 2023. This version includes several bug fixes and performance improvements, as well as new features such as:

    -

    Kodak Preps 5.3 Full 22l


    Download File 🗹 https://urlcod.com/2uHwpw



    -
      -
    • Support for PDF 2.0 and PDF/X-6 standards
    • -
    • Enhanced color management and proofing capabilities
    • -
    • Improved integration with Kodak Prinergy and other workflow solutions
    • -
    • More templates and presets for common print scenarios
    • -
    -

    To download and install Kodak Preps 5.3 Full 22l, you will need to follow these steps:

    -
      -
    1. Go to https://workflowhelp.kodak.com/ and log in with your credentials.
    2. -
    3. Click on the "Preps" tab and select "Download" from the drop-down menu.
    4. -
    5. Choose the appropriate installer for your operating system (Windows or Mac) and click on the "Download" button.
    6. -
    7. Save the installer file to your computer and run it as an administrator.
    8. -
    9. Follow the on-screen instructions to complete the installation process.
    10. -
    11. Launch Kodak Preps and enter your license key when prompted.
    12. -
    -

    Congratulations! You have successfully downloaded and installed Kodak Preps 5.3 Full 22l on your computer. You can now start creating and managing your print layouts with ease.
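
    One optional precaution before you run the installer you saved in step 7 above: verify its checksum against the value published on the download page, if Kodak provides one (whether a checksum is published is an assumption, and the file name below is a placeholder). A minimal Python sketch:

```python
# Verify a downloaded installer before running it. The expected SHA-256 is a
# placeholder - copy the real value from the download page if one is published.
import hashlib
from pathlib import Path

INSTALLER = Path("Preps_5.3_Installer.exe")          # placeholder file name
EXPECTED_SHA256 = "paste-the-published-checksum-here"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    actual = sha256_of(INSTALLER)
    print("OK" if actual == EXPECTED_SHA256 else f"Checksum mismatch: {actual}")
```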

    - -

    If you need help with using Kodak Preps 5.3 Full 22l, you can access the online documentation and tutorials from the "Help" menu in the software. You can also contact the Kodak support team via phone, email, or chat if you have any questions or issues.

    -

    Kodak Preps 5.3 Full 22l is a versatile and reliable digital imposition tool that can handle any print project, from simple to complex. Whether you are printing books, magazines, brochures, flyers, posters, or anything else, you can trust Kodak Preps to deliver high-quality results every time.

    -

    Don't wait any longer and download Kodak Preps 5.3 Full 22l today and see for yourself how it can improve your print production workflow and efficiency.

    - -

    How to Use Kodak Preps 5.3 Full 22l

    -

    Using Kodak Preps 5.3 Full 22l is easy and intuitive. You can start by creating a new job or opening an existing one. You can then choose from a variety of templates and presets that suit your print specifications, or create your own custom layout.

    -

    Once you have your layout ready, you can import your PDF files and place them on the pages. You can also adjust the position, rotation, scaling, cropping, and other settings of your PDF files as needed. You can also add marks, barcodes, annotations, and other elements to your layout.

    -

    -

    When you are satisfied with your layout, you can preview it in 3D mode and check for any errors or issues. You can also export your layout as a PDF file or send it directly to your printer or workflow system.
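
    Preps itself is driven through its GUI, and its internal APIs are not documented here. Purely as a rough illustration of what a single imposition step does (placing source pages onto a larger sheet with a transformation), here is a minimal 2-up sketch using the third-party pypdf library; it is not part of Preps, and it assumes pypdf 3.x or later is installed.

```python
# Not Kodak Preps - just a rough 2-up imposition illustration with pypdf.
from pypdf import PdfReader, PdfWriter, Transformation

def two_up(src_path: str, dst_path: str) -> None:
    reader = PdfReader(src_path)
    writer = PdfWriter()
    pages = list(reader.pages)
    page_w = float(pages[0].mediabox.width)
    page_h = float(pages[0].mediabox.height)

    for i in range(0, len(pages), 2):
        # One output sheet wide enough for two source pages side by side.
        sheet = writer.add_blank_page(width=2 * page_w, height=page_h)
        for slot, page in enumerate(pages[i:i + 2]):
            shift = Transformation().translate(tx=slot * page_w, ty=0)
            sheet.merge_transformed_page(page, shift)

    writer.write(dst_path)

if __name__ == "__main__":
    two_up("input.pdf", "two_up.pdf")
```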

    -

    Why Choose Kodak Preps 5.3 Full 22l

    -

    Kodak Preps 5.3 Full 22l is the best choice for your digital imposition needs because it offers many benefits, such as:

    -
      -
    • It is fast and efficient. It can handle large and complex jobs with ease and speed.
    • -
    • It is flexible and customizable. It can adapt to any print scenario and requirement.
    • -
    • It is accurate and reliable. It ensures high-quality output and reduces errors and waste.
    • -
    • It is compatible and integrated. It works seamlessly with other Kodak products and solutions, as well as third-party software and hardware.
    • -
    • It is affordable and cost-effective. It saves you time and money by optimizing your print production workflow.
    • -
    -

    Kodak Preps 5.3 Full 22l is the ultimate digital imposition software that can help you achieve your print goals and objectives. Don't miss this opportunity and download it now!

    -
    -
    \ No newline at end of file diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py deleted file mode 100644 index 32972de857b3c4f43170dcd3e7fbce76425f094d..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/gcnet/mask_rcnn_r101_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r101_fpn_1x_coco.py' -model = dict( - backbone=dict( - norm_cfg=dict(type='SyncBN', requires_grad=True), - norm_eval=False, - plugins=[ - dict( - cfg=dict(type='ContextBlock', ratio=1. / 4), - stages=(False, True, True, True), - position='after_conv3') - ])) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/apis/train.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/apis/train.py deleted file mode 100644 index 82c20bffd663edf7163bcce06e03b50df1f51124..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/apis/train.py +++ /dev/null @@ -1,170 +0,0 @@ -import random -import warnings - -import numpy as np -import torch -from mmcv.parallel import MMDataParallel, MMDistributedDataParallel -from mmcv.runner import (HOOKS, DistSamplerSeedHook, EpochBasedRunner, - Fp16OptimizerHook, OptimizerHook, build_optimizer, - build_runner) -from mmcv.utils import build_from_cfg - -from mmdet.core import DistEvalHook, EvalHook -from mmdet.datasets import (build_dataloader, build_dataset, - replace_ImageToTensor) -from mmdet.utils import get_root_logger - - -def set_random_seed(seed, deterministic=False): - """Set random seed. - - Args: - seed (int): Seed to be used. - deterministic (bool): Whether to set the deterministic option for - CUDNN backend, i.e., set `torch.backends.cudnn.deterministic` - to True and `torch.backends.cudnn.benchmark` to False. - Default: False. - """ - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if deterministic: - torch.backends.cudnn.deterministic = True - torch.backends.cudnn.benchmark = False - - -def train_detector(model, - dataset, - cfg, - distributed=False, - validate=False, - timestamp=None, - meta=None): - logger = get_root_logger(cfg.log_level) - - # prepare data loaders - dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset] - if 'imgs_per_gpu' in cfg.data: - logger.warning('"imgs_per_gpu" is deprecated in MMDet V2.0. 
' - 'Please use "samples_per_gpu" instead') - if 'samples_per_gpu' in cfg.data: - logger.warning( - f'Got "imgs_per_gpu"={cfg.data.imgs_per_gpu} and ' - f'"samples_per_gpu"={cfg.data.samples_per_gpu}, "imgs_per_gpu"' - f'={cfg.data.imgs_per_gpu} is used in this experiments') - else: - logger.warning( - 'Automatically set "samples_per_gpu"="imgs_per_gpu"=' - f'{cfg.data.imgs_per_gpu} in this experiments') - cfg.data.samples_per_gpu = cfg.data.imgs_per_gpu - - data_loaders = [ - build_dataloader( - ds, - cfg.data.samples_per_gpu, - cfg.data.workers_per_gpu, - # cfg.gpus will be ignored if distributed - len(cfg.gpu_ids), - dist=distributed, - seed=cfg.seed) for ds in dataset - ] - - # put model on gpus - if distributed: - find_unused_parameters = cfg.get('find_unused_parameters', False) - # Sets the `find_unused_parameters` parameter in - # torch.nn.parallel.DistributedDataParallel - model = MMDistributedDataParallel( - model.cuda(), - device_ids=[torch.cuda.current_device()], - broadcast_buffers=False, - find_unused_parameters=find_unused_parameters) - else: - model = MMDataParallel( - model.cuda(cfg.gpu_ids[0]), device_ids=cfg.gpu_ids) - - # build runner - optimizer = build_optimizer(model, cfg.optimizer) - - if 'runner' not in cfg: - cfg.runner = { - 'type': 'EpochBasedRunner', - 'max_epochs': cfg.total_epochs - } - warnings.warn( - 'config is now expected to have a `runner` section, ' - 'please set `runner` in your config.', UserWarning) - else: - if 'total_epochs' in cfg: - assert cfg.total_epochs == cfg.runner.max_epochs - - runner = build_runner( - cfg.runner, - default_args=dict( - model=model, - optimizer=optimizer, - work_dir=cfg.work_dir, - logger=logger, - meta=meta)) - - # an ugly workaround to make .log and .log.json filenames the same - runner.timestamp = timestamp - - # fp16 setting - fp16_cfg = cfg.get('fp16', None) - if fp16_cfg is not None: - optimizer_config = Fp16OptimizerHook( - **cfg.optimizer_config, **fp16_cfg, distributed=distributed) - elif distributed and 'type' not in cfg.optimizer_config: - optimizer_config = OptimizerHook(**cfg.optimizer_config) - else: - optimizer_config = cfg.optimizer_config - - # register hooks - runner.register_training_hooks(cfg.lr_config, optimizer_config, - cfg.checkpoint_config, cfg.log_config, - cfg.get('momentum_config', None)) - if distributed: - if isinstance(runner, EpochBasedRunner): - runner.register_hook(DistSamplerSeedHook()) - - # register eval hooks - if validate: - # Support batch_size > 1 in validation - val_samples_per_gpu = cfg.data.val.pop('samples_per_gpu', 1) - if val_samples_per_gpu > 1: - # Replace 'ImageToTensor' to 'DefaultFormatBundle' - cfg.data.val.pipeline = replace_ImageToTensor( - cfg.data.val.pipeline) - val_dataset = build_dataset(cfg.data.val, dict(test_mode=True)) - val_dataloader = build_dataloader( - val_dataset, - samples_per_gpu=val_samples_per_gpu, - workers_per_gpu=cfg.data.workers_per_gpu, - dist=distributed, - shuffle=False) - eval_cfg = cfg.get('evaluation', {}) - eval_cfg['by_epoch'] = cfg.runner['type'] != 'IterBasedRunner' - eval_hook = DistEvalHook if distributed else EvalHook - runner.register_hook(eval_hook(val_dataloader, **eval_cfg)) - - # user-defined hooks - if cfg.get('custom_hooks', None): - custom_hooks = cfg.custom_hooks - assert isinstance(custom_hooks, list), \ - f'custom_hooks expect list type, but got {type(custom_hooks)}' - for hook_cfg in cfg.custom_hooks: - assert isinstance(hook_cfg, dict), \ - 'Each item in custom_hooks expects dict type, but got ' \ - f'{type(hook_cfg)}' - 
hook_cfg = hook_cfg.copy() - priority = hook_cfg.pop('priority', 'NORMAL') - hook = build_from_cfg(hook_cfg, HOOKS) - runner.register_hook(hook, priority=priority) - - if cfg.resume_from: - runner.resume(cfg.resume_from) - elif cfg.load_from: - runner.load_checkpoint(cfg.load_from) - runner.run(data_loaders, cfg.workflow) diff --git a/spaces/tomofi/NDLOCR/src/text_recognition/deep-text-recognition-benchmark/LICENSE.md b/spaces/tomofi/NDLOCR/src/text_recognition/deep-text-recognition-benchmark/LICENSE.md deleted file mode 100644 index d645695673349e3947e8e5ae42332d0ac3164cd7..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/text_recognition/deep-text-recognition-benchmark/LICENSE.md +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. 
For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. 
The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. 
- - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/spaces/tsi-org/LLaVA/llava/conversation.py b/spaces/tsi-org/LLaVA/llava/conversation.py deleted file mode 100644 index 6ee30cf9942d6f3184e845c0436f5c1d80e00043..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/conversation.py +++ /dev/null @@ -1,381 +0,0 @@ -import dataclasses -from enum import auto, Enum -from typing import List, Tuple - - -class SeparatorStyle(Enum): - """Different separator style.""" - SINGLE = auto() - TWO = auto() - MPT = auto() - PLAIN = auto() - LLAMA_2 = auto() - - -@dataclasses.dataclass -class Conversation: - """A class that keeps all conversation history.""" - system: str - roles: List[str] - messages: List[List[str]] - offset: int - sep_style: SeparatorStyle = SeparatorStyle.SINGLE - sep: str = "###" - sep2: str = None - version: str = "Unknown" - - skip_next: bool = False - - def get_prompt(self): - messages = self.messages - if len(messages) > 0 and type(messages[0][1]) is tuple: - messages = self.messages.copy() - init_role, init_msg = messages[0].copy() - init_msg = init_msg[0].replace("", "").strip() - if 'mmtag' in self.version: - messages[0] = (init_role, init_msg) - messages.insert(0, (self.roles[0], "")) - messages.insert(1, (self.roles[1], "Received.")) - else: - messages[0] = (init_role, "\n" + init_msg) - - if self.sep_style == SeparatorStyle.SINGLE: - ret = self.system + self.sep - for role, message in messages: - if message: - if type(message) is tuple: - message, _, _ = message - ret += role + ": " + message + self.sep - else: - ret += role + ":" - elif self.sep_style == SeparatorStyle.TWO: - seps = [self.sep, self.sep2] - ret = self.system + seps[0] - for i, (role, message) in enumerate(messages): - if message: - if type(message) is tuple: - message, _, _ = message - ret += role + ": " + message + seps[i % 2] - else: - ret += role + ":" - elif self.sep_style == SeparatorStyle.MPT: - ret = self.system + self.sep - for role, message in messages: - if message: - if type(message) is tuple: - message, _, _ = message - ret += role + message + self.sep - else: - ret += role - elif self.sep_style == SeparatorStyle.LLAMA_2: - wrap_sys = lambda msg: f"<>\n{msg}\n<>\n\n" - wrap_inst = lambda msg: f"[INST] {msg} [/INST]" - ret = "" - - for i, (role, message) in enumerate(messages): - if i == 0: - assert message, "first message should not be none" - assert role == 
self.roles[0], "first message should come from user" - if message: - if type(message) is tuple: - message, _, _ = message - if i == 0: message = wrap_sys(self.system) + message - if i % 2 == 0: - message = wrap_inst(message) - ret += self.sep + message - else: - ret += " " + message + " " + self.sep2 - else: - ret += "" - ret = ret.lstrip(self.sep) - elif self.sep_style == SeparatorStyle.PLAIN: - seps = [self.sep, self.sep2] - ret = self.system - for i, (role, message) in enumerate(messages): - if message: - if type(message) is tuple: - message, _, _ = message - ret += message + seps[i % 2] - else: - ret += "" - else: - raise ValueError(f"Invalid style: {self.sep_style}") - - return ret - - def append_message(self, role, message): - self.messages.append([role, message]) - - def get_images(self, return_pil=False): - images = [] - for i, (role, msg) in enumerate(self.messages[self.offset:]): - if i % 2 == 0: - if type(msg) is tuple: - import base64 - from io import BytesIO - from PIL import Image - msg, image, image_process_mode = msg - if image_process_mode == "Pad": - def expand2square(pil_img, background_color=(122, 116, 104)): - width, height = pil_img.size - if width == height: - return pil_img - elif width > height: - result = Image.new(pil_img.mode, (width, width), background_color) - result.paste(pil_img, (0, (width - height) // 2)) - return result - else: - result = Image.new(pil_img.mode, (height, height), background_color) - result.paste(pil_img, ((height - width) // 2, 0)) - return result - image = expand2square(image) - elif image_process_mode in ["Default", "Crop"]: - pass - elif image_process_mode == "Resize": - image = image.resize((336, 336)) - else: - raise ValueError(f"Invalid image_process_mode: {image_process_mode}") - max_hw, min_hw = max(image.size), min(image.size) - aspect_ratio = max_hw / min_hw - max_len, min_len = 800, 400 - shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw)) - longest_edge = int(shortest_edge * aspect_ratio) - W, H = image.size - if longest_edge != max(image.size): - if H > W: - H, W = longest_edge, shortest_edge - else: - H, W = shortest_edge, longest_edge - image = image.resize((W, H)) - if return_pil: - images.append(image) - else: - buffered = BytesIO() - image.save(buffered, format="PNG") - img_b64_str = base64.b64encode(buffered.getvalue()).decode() - images.append(img_b64_str) - return images - - def to_gradio_chatbot(self): - ret = [] - for i, (role, msg) in enumerate(self.messages[self.offset:]): - if i % 2 == 0: - if type(msg) is tuple: - import base64 - from io import BytesIO - msg, image, image_process_mode = msg - max_hw, min_hw = max(image.size), min(image.size) - aspect_ratio = max_hw / min_hw - max_len, min_len = 800, 400 - shortest_edge = int(min(max_len / aspect_ratio, min_len, min_hw)) - longest_edge = int(shortest_edge * aspect_ratio) - W, H = image.size - if H > W: - H, W = longest_edge, shortest_edge - else: - H, W = shortest_edge, longest_edge - image = image.resize((W, H)) - buffered = BytesIO() - image.save(buffered, format="JPEG") - img_b64_str = base64.b64encode(buffered.getvalue()).decode() - img_str = f'user upload image' - msg = img_str + msg.replace('', '').strip() - ret.append([msg, None]) - else: - ret.append([msg, None]) - else: - ret[-1][-1] = msg - return ret - - def copy(self): - return Conversation( - system=self.system, - roles=self.roles, - messages=[[x, y] for x, y in self.messages], - offset=self.offset, - sep_style=self.sep_style, - sep=self.sep, - sep2=self.sep2, - version=self.version) - 
- def dict(self): - if len(self.get_images()) > 0: - return { - "system": self.system, - "roles": self.roles, - "messages": [[x, y[0] if type(y) is tuple else y] for x, y in self.messages], - "offset": self.offset, - "sep": self.sep, - "sep2": self.sep2, - } - return { - "system": self.system, - "roles": self.roles, - "messages": self.messages, - "offset": self.offset, - "sep": self.sep, - "sep2": self.sep2, - } - - -conv_vicuna_v0 = Conversation( - system="A chat between a curious human and an artificial intelligence assistant. " - "The assistant gives helpful, detailed, and polite answers to the human's questions.", - roles=("Human", "Assistant"), - messages=( - ("Human", "What are the key differences between renewable and non-renewable energy sources?"), - ("Assistant", - "Renewable energy sources are those that can be replenished naturally in a relatively " - "short amount of time, such as solar, wind, hydro, geothermal, and biomass. " - "Non-renewable energy sources, on the other hand, are finite and will eventually be " - "depleted, such as coal, oil, and natural gas. Here are some key differences between " - "renewable and non-renewable energy sources:\n" - "1. Availability: Renewable energy sources are virtually inexhaustible, while non-renewable " - "energy sources are finite and will eventually run out.\n" - "2. Environmental impact: Renewable energy sources have a much lower environmental impact " - "than non-renewable sources, which can lead to air and water pollution, greenhouse gas emissions, " - "and other negative effects.\n" - "3. Cost: Renewable energy sources can be more expensive to initially set up, but they typically " - "have lower operational costs than non-renewable sources.\n" - "4. Reliability: Renewable energy sources are often more reliable and can be used in more remote " - "locations than non-renewable sources.\n" - "5. Flexibility: Renewable energy sources are often more flexible and can be adapted to different " - "situations and needs, while non-renewable sources are more rigid and inflexible.\n" - "6. Sustainability: Renewable energy sources are more sustainable over the long term, while " - "non-renewable sources are not, and their depletion can lead to economic and social instability.\n") - ), - offset=2, - sep_style=SeparatorStyle.SINGLE, - sep="###", -) - -conv_vicuna_v1 = Conversation( - system="A chat between a curious user and an artificial intelligence assistant. " - "The assistant gives helpful, detailed, and polite answers to the user's questions.", - roles=("USER", "ASSISTANT"), - version="v1", - messages=(), - offset=0, - sep_style=SeparatorStyle.TWO, - sep=" ", - sep2="", -) - -conv_llama_2 = Conversation( - system="""You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. - -If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.""", - roles=("USER", "ASSISTANT"), - version="llama_v2", - messages=(), - offset=0, - sep_style=SeparatorStyle.LLAMA_2, - sep="", - sep2="", -) - -conv_llava_llama_2 = Conversation( - system="You are a helpful language and vision assistant. 
" - "You are able to understand the visual content that the user provides, " - "and assist the user with a variety of tasks using natural language.", - roles=("USER", "ASSISTANT"), - version="llama_v2", - messages=(), - offset=0, - sep_style=SeparatorStyle.LLAMA_2, - sep="", - sep2="", -) - -conv_mpt = Conversation( - system="""<|im_start|>system -A conversation between a user and an LLM-based AI assistant. The assistant gives helpful and honest answers.""", - roles=("<|im_start|>user\n", "<|im_start|>assistant\n"), - version="mpt", - messages=(), - offset=0, - sep_style=SeparatorStyle.MPT, - sep="<|im_end|>", -) - -conv_llava_plain = Conversation( - system="", - roles=("", ""), - messages=( - ), - offset=0, - sep_style=SeparatorStyle.PLAIN, - sep="\n", -) - -conv_llava_v0 = Conversation( - system="A chat between a curious human and an artificial intelligence assistant. " - "The assistant gives helpful, detailed, and polite answers to the human's questions.", - roles=("Human", "Assistant"), - messages=( - ), - offset=0, - sep_style=SeparatorStyle.SINGLE, - sep="###", -) - -conv_llava_v0_mmtag = Conversation( - system="A chat between a curious user and an artificial intelligence assistant. " - "The assistant is able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language." - "The visual content will be provided with the following format: visual content.", - roles=("Human", "Assistant"), - messages=( - ), - offset=0, - sep_style=SeparatorStyle.SINGLE, - sep="###", - version="v0_mmtag", -) - -conv_llava_v1 = Conversation( - system="A chat between a curious human and an artificial intelligence assistant. " - "The assistant gives helpful, detailed, and polite answers to the human's questions.", - roles=("USER", "ASSISTANT"), - version="v1", - messages=(), - offset=0, - sep_style=SeparatorStyle.TWO, - sep=" ", - sep2="", -) - -conv_llava_v1_mmtag = Conversation( - system="A chat between a curious user and an artificial intelligence assistant. " - "The assistant is able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language." 
- "The visual content will be provided with the following format: visual content.", - roles=("USER", "ASSISTANT"), - messages=(), - offset=0, - sep_style=SeparatorStyle.TWO, - sep=" ", - sep2="", - version="v1_mmtag", -) - -default_conversation = conv_vicuna_v0 -conv_templates = { - "default": conv_vicuna_v0, - "v0": conv_vicuna_v0, - "v1": conv_vicuna_v1, - "vicuna_v1": conv_vicuna_v1, - "llama_2": conv_llama_2, - - "plain": conv_llava_plain, - "v0_plain": conv_llava_plain, - "llava_v0": conv_llava_v0, - "v0_mmtag": conv_llava_v0_mmtag, - "llava_v1": conv_llava_v1, - "v1_mmtag": conv_llava_v1_mmtag, - "llava_llama_2": conv_llava_llama_2, - - "mpt": conv_mpt, -} - - -if __name__ == "__main__": - print(default_conversation.get_prompt()) diff --git a/spaces/ttt246/brain/Brain/src/rising_plugin/llm/babyagi_llm.py b/spaces/ttt246/brain/Brain/src/rising_plugin/llm/babyagi_llm.py deleted file mode 100644 index 3cb0420a4fbc5af3347990b17a5e641a6ba24f51..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Brain/src/rising_plugin/llm/babyagi_llm.py +++ /dev/null @@ -1,117 +0,0 @@ -"""BabyAGP Plugin with Langchain""" -import firebase_admin -from firebase_admin import db -from collections import deque -from typing import Dict, List, Optional, Any - -from langchain import OpenAI -from langchain.embeddings import OpenAIEmbeddings -from langchain.experimental import BabyAGI - -from langchain.vectorstores import FAISS -from langchain.docstore import InMemoryDocstore -from langchain.callbacks.manager import CallbackManagerForChainRun -import faiss - -from Brain.src.rising_plugin.llm.llms import ( - MAX_AUTO_THINKING, - get_finish_command_for_auto_task, -) - - -class BabyAGILLM: - """BabyAGI run method to get the expected result""" - - def run( - self, - agent: BabyAGI, - inputs: Dict[str, Any], - firebase_app: firebase_admin.App, - reference_link: str, - run_manager: Optional[CallbackManagerForChainRun] = None, - ) -> Dict[str, Any]: - """firebase realtime database init""" - ref = db.reference(reference_link, app=firebase_app) - - """Run the agent.""" - objective = inputs["objective"] - first_task = inputs.get("first_task", "Make a todo list") - agent.add_task({"task_id": 1, "task_name": first_task}) - num_iters = 0 - while True: - if agent.task_list: - agent.print_task_list() - - # Step 1: Pull the first task - task = agent.task_list.popleft() - agent.print_next_task(task) - # update the result with the task in firebase realtime database - ref.push().set(task) - - # Step 2: Execute the task - result = agent.execute_task(objective, task["task_name"]) - this_task_id = int(task["task_id"]) - agent.print_task_result(result) - # add result of the command - ref.push().set({"result": result}) - - # Step 3: Store the result in Pinecone - result_id = f"result_{task['task_id']}" - agent.vectorstore.add_texts( - texts=[result], - metadatas=[{"task": task["task_name"]}], - ids=[result_id], - ) - - # Step 4: Create new tasks and reprioritize task list - new_tasks = agent.get_next_task(result, task["task_name"], objective) - for new_task in new_tasks: - agent.task_id_counter += 1 - new_task.update({"task_id": agent.task_id_counter}) - agent.add_task(new_task) - agent.task_list = deque(agent.prioritize_tasks(this_task_id, objective)) - num_iters += 1 - if ( - agent.max_iterations is not None and num_iters == agent.max_iterations - ) or num_iters == MAX_AUTO_THINKING: - print( - "\033[91m\033[1m" + "\n*****TASK ENDING*****\n" + "\033[0m\033[0m" - ) - # add finish command of the command - 
ref.push().set(get_finish_command_for_auto_task()) - break - return {} - - def ask_task( - self, query: str, firebase_app: firebase_admin.App, reference_link: str - ): - # Define your embedding model - embeddings_model = OpenAIEmbeddings() - - # Initialize the vectorstore as empty - embedding_size = 1536 - index = faiss.IndexFlatL2(embedding_size) - vectorstore = FAISS( - embeddings_model.embed_query, index, InMemoryDocstore({}), {} - ) - - llm = OpenAI(temperature=0) - - # Logging of LLMChains - verbose = False - # If None, will keep on going forever - max_iterations: Optional[int] = 3 - baby_agi = BabyAGI.from_llm( - llm=llm, - vectorstore=vectorstore, - verbose=verbose, - max_iterations=max_iterations, - ) - - # querying - self.run( - agent=baby_agi, - inputs={"objective": query}, - firebase_app=firebase_app, - reference_link=reference_link, - ) diff --git a/spaces/ttt246/brain/Extension/src/pages/Popup/index.css b/spaces/ttt246/brain/Extension/src/pages/Popup/index.css deleted file mode 100644 index 4e41b69a8af91daa39519a62daa3bf9b8e7a8eff..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Extension/src/pages/Popup/index.css +++ /dev/null @@ -1,3 +0,0 @@ -body { - margin: 0; -} \ No newline at end of file diff --git a/spaces/typesdigital/image-to-text-app-td/README.md b/spaces/typesdigital/image-to-text-app-td/README.md deleted file mode 100644 index 1e51b584b38a8251c71d33740d64984976cebe5a..0000000000000000000000000000000000000000 --- a/spaces/typesdigital/image-to-text-app-td/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image To Text App -emoji: 📹 -colorFrom: blue -colorTo: red -sdk: streamlit -app_file: app.py -pinned: false -duplicated_from: Amrrs/image-to-text-app ---- - -# image2textapp -demo of 🤗 spaces deployment of a streamlit python app diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Diner Dash 5 Boom Free Download Full ((HOT)) Version Crack.md b/spaces/usbethFlerru/sovits-modelsV2/example/Diner Dash 5 Boom Free Download Full ((HOT)) Version Crack.md deleted file mode 100644 index ce650dfa4383ea14d51c041f85a4fdf61c825b0c..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Diner Dash 5 Boom Free Download Full ((HOT)) Version Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    diner dash 5 boom free download full version crack


    Download ✔✔✔ https://urlcod.com/2uyXcE



    - -May 14, 2016 - cizimvektorel.com is your first and best source for all of the information you're looking for. From general topics to ... With dead space 3 crack only Read common sense medias diner dash 5 diner dash boom. Article from addictivetips.com ... Free download 2015 New Year vector graphics and ... Graphic Design ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/user238921933/stable-diffusion-webui/modules/sd_vae.py b/spaces/user238921933/stable-diffusion-webui/modules/sd_vae.py deleted file mode 100644 index 9b00f76e9c62c794b3a27b36bae0f168ff4f5ab8..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/sd_vae.py +++ /dev/null @@ -1,216 +0,0 @@ -import torch -import safetensors.torch -import os -import collections -from collections import namedtuple -from modules import paths, shared, devices, script_callbacks, sd_models -import glob -from copy import deepcopy - - -vae_path = os.path.abspath(os.path.join(paths.models_path, "VAE")) -vae_ignore_keys = {"model_ema.decay", "model_ema.num_updates"} -vae_dict = {} - - -base_vae = None -loaded_vae_file = None -checkpoint_info = None - -checkpoints_loaded = collections.OrderedDict() - -def get_base_vae(model): - if base_vae is not None and checkpoint_info == model.sd_checkpoint_info and model: - return base_vae - return None - - -def store_base_vae(model): - global base_vae, checkpoint_info - if checkpoint_info != model.sd_checkpoint_info: - assert not loaded_vae_file, "Trying to store non-base VAE!" - base_vae = deepcopy(model.first_stage_model.state_dict()) - checkpoint_info = model.sd_checkpoint_info - - -def delete_base_vae(): - global base_vae, checkpoint_info - base_vae = None - checkpoint_info = None - - -def restore_base_vae(model): - global loaded_vae_file - if base_vae is not None and checkpoint_info == model.sd_checkpoint_info: - print("Restoring base VAE") - _load_vae_dict(model, base_vae) - loaded_vae_file = None - delete_base_vae() - - -def get_filename(filepath): - return os.path.basename(filepath) - - -def refresh_vae_list(): - vae_dict.clear() - - paths = [ - os.path.join(sd_models.model_path, '**/*.vae.ckpt'), - os.path.join(sd_models.model_path, '**/*.vae.pt'), - os.path.join(sd_models.model_path, '**/*.vae.safetensors'), - os.path.join(vae_path, '**/*.ckpt'), - os.path.join(vae_path, '**/*.pt'), - os.path.join(vae_path, '**/*.safetensors'), - ] - - if shared.cmd_opts.ckpt_dir is not None and os.path.isdir(shared.cmd_opts.ckpt_dir): - paths += [ - os.path.join(shared.cmd_opts.ckpt_dir, '**/*.vae.ckpt'), - os.path.join(shared.cmd_opts.ckpt_dir, '**/*.vae.pt'), - os.path.join(shared.cmd_opts.ckpt_dir, '**/*.vae.safetensors'), - ] - - if shared.cmd_opts.vae_dir is not None and os.path.isdir(shared.cmd_opts.vae_dir): - paths += [ - os.path.join(shared.cmd_opts.vae_dir, '**/*.ckpt'), - os.path.join(shared.cmd_opts.vae_dir, '**/*.pt'), - os.path.join(shared.cmd_opts.vae_dir, '**/*.safetensors'), - ] - - candidates = [] - for path in paths: - candidates += glob.iglob(path, recursive=True) - - for filepath in candidates: - name = get_filename(filepath) - vae_dict[name] = filepath - - -def find_vae_near_checkpoint(checkpoint_file): - checkpoint_path = os.path.splitext(checkpoint_file)[0] - for vae_location in [checkpoint_path + ".vae.pt", checkpoint_path + ".vae.ckpt", checkpoint_path + ".vae.safetensors"]: - if os.path.isfile(vae_location): - return vae_location - - return None - - -def resolve_vae(checkpoint_file): - if shared.cmd_opts.vae_path is not None: - return shared.cmd_opts.vae_path, 'from commandline argument' - - is_automatic = shared.opts.sd_vae in {"Automatic", "auto"} # "auto" for people with old config - - vae_near_checkpoint = find_vae_near_checkpoint(checkpoint_file) - if vae_near_checkpoint is not None and (shared.opts.sd_vae_as_default or is_automatic): - return vae_near_checkpoint, 'found near the 
checkpoint' - - if shared.opts.sd_vae == "None": - return None, None - - vae_from_options = vae_dict.get(shared.opts.sd_vae, None) - if vae_from_options is not None: - return vae_from_options, 'specified in settings' - - if not is_automatic: - print(f"Couldn't find VAE named {shared.opts.sd_vae}; using None instead") - - return None, None - - -def load_vae_dict(filename, map_location): - vae_ckpt = sd_models.read_state_dict(filename, map_location=map_location) - vae_dict_1 = {k: v for k, v in vae_ckpt.items() if k[0:4] != "loss" and k not in vae_ignore_keys} - return vae_dict_1 - - -def load_vae(model, vae_file=None, vae_source="from unknown source"): - global vae_dict, loaded_vae_file - # save_settings = False - - cache_enabled = shared.opts.sd_vae_checkpoint_cache > 0 - - if vae_file: - if cache_enabled and vae_file in checkpoints_loaded: - # use vae checkpoint cache - print(f"Loading VAE weights {vae_source}: cached {get_filename(vae_file)}") - store_base_vae(model) - _load_vae_dict(model, checkpoints_loaded[vae_file]) - else: - assert os.path.isfile(vae_file), f"VAE {vae_source} doesn't exist: {vae_file}" - print(f"Loading VAE weights {vae_source}: {vae_file}") - store_base_vae(model) - - vae_dict_1 = load_vae_dict(vae_file, map_location=shared.weight_load_location) - _load_vae_dict(model, vae_dict_1) - - if cache_enabled: - # cache newly loaded vae - checkpoints_loaded[vae_file] = vae_dict_1.copy() - - # clean up cache if limit is reached - if cache_enabled: - while len(checkpoints_loaded) > shared.opts.sd_vae_checkpoint_cache + 1: # we need to count the current model - checkpoints_loaded.popitem(last=False) # LRU - - # If vae used is not in dict, update it - # It will be removed on refresh though - vae_opt = get_filename(vae_file) - if vae_opt not in vae_dict: - vae_dict[vae_opt] = vae_file - - elif loaded_vae_file: - restore_base_vae(model) - - loaded_vae_file = vae_file - - -# don't call this from outside -def _load_vae_dict(model, vae_dict_1): - model.first_stage_model.load_state_dict(vae_dict_1) - model.first_stage_model.to(devices.dtype_vae) - - -def clear_loaded_vae(): - global loaded_vae_file - loaded_vae_file = None - - -unspecified = object() - - -def reload_vae_weights(sd_model=None, vae_file=unspecified): - from modules import lowvram, devices, sd_hijack - - if not sd_model: - sd_model = shared.sd_model - - checkpoint_info = sd_model.sd_checkpoint_info - checkpoint_file = checkpoint_info.filename - - if vae_file == unspecified: - vae_file, vae_source = resolve_vae(checkpoint_file) - else: - vae_source = "from function argument" - - if loaded_vae_file == vae_file: - return - - if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: - lowvram.send_everything_to_cpu() - else: - sd_model.to(devices.cpu) - - sd_hijack.model_hijack.undo_hijack(sd_model) - - load_vae(sd_model, vae_file, vae_source) - - sd_hijack.model_hijack.hijack(sd_model) - script_callbacks.model_loaded_callback(sd_model) - - if not shared.cmd_opts.lowvram and not shared.cmd_opts.medvram: - sd_model.to(devices.device) - - print("VAE weights loaded.") - return sd_model diff --git a/spaces/wall-e-zz/anime-ai-detect/README.md b/spaces/wall-e-zz/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/wall-e-zz/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true 
-duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/weijiawu/ImageEditAnything/captioner/modeling_blip.py b/spaces/weijiawu/ImageEditAnything/captioner/modeling_blip.py deleted file mode 100644 index 2b2303b05fa885ce425e2249ff97765cfe38ac20..0000000000000000000000000000000000000000 --- a/spaces/weijiawu/ImageEditAnything/captioner/modeling_blip.py +++ /dev/null @@ -1,1476 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The Salesforce Team Authors and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch BLIP model.""" - -from dataclasses import dataclass -from typing import Any, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn.functional import normalize - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling -from transformers.modeling_utils import PreTrainedModel -from transformers.utils import ( - ModelOutput, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from transformers.models.blip.configuration_blip import BlipConfig, BlipTextConfig, BlipVisionConfig -from transformers.models.blip.modeling_blip_text import BlipTextLMHeadModel, BlipTextModel -from .vit_pixel_masks_utils import ViTPatchMaskGenerator - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "Salesforce/blip-vqa-base" - -BLIP_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "Salesforce/blip-vqa-base", - "Salesforce/blip-vqa-capfit-large", - "Salesforce/blip-image-captioning-base", - "Salesforce/blip-image-captioning-large", - "Salesforce/blip-itm-base-coco", - "Salesforce/blip-itm-large-coco", - "Salesforce/blip-itm-base-flikr", - "Salesforce/blip-itm-large-flikr", - # See all BLIP models at https://huggingface.co/models?filter=blip -] - - -# Copied from transformers.models.clip.modeling_clip.contrastive_loss -def contrastive_loss(logits: torch.Tensor) -> torch.Tensor: - return nn.functional.cross_entropy(logits, torch.arange(len(logits), device=logits.device)) - - -# Copied from transformers.models.clip.modeling_clip.clip_loss with clip->blip -def blip_loss(similarity: torch.Tensor) -> torch.Tensor: - caption_loss = contrastive_loss(similarity) - image_loss = contrastive_loss(similarity.t()) - return (caption_loss + image_loss) / 2.0 - - -@dataclass -class BlipForConditionalGenerationModelOutput(ModelOutput): - """ - Adapted from the base class for vision model's outputs that also contains image embeddings of the pooling of the - last hidden states. This class also adds the loss term from the text decoder. - - Args: - loss (`torch.FloatTensor`, *optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): - Languge modeling loss from the text decoder. 
- decoder_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`, *optional*): - Prediction scores of the language modeling head of the text decoder model. - image_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)`, *optional*): - The image embeddings obtained after applying the Vision Transformer model to the input image. - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + - one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[Tuple[torch.FloatTensor]] = None - decoder_logits: Optional[Tuple[torch.FloatTensor]] = None - image_embeds: Optional[torch.FloatTensor] = None - last_hidden_state: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class BlipTextVisionModelOutput(ModelOutput): - """ - Adapted from the base class for vision model's outputs that also contains image embeddings of the pooling of the - last hidden states. This class also adds the loss term from the text decoder. - - Args: - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Languge modeling loss from the text decoder. - image_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)` *optional* returned when model is initialized with `with_projection=True`): - The image embeddings obtained by applying the projection layer to the pooler_output. - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + - one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. 
- """ - - loss: Optional[torch.FloatTensor] = None - image_embeds: Optional[torch.FloatTensor] = None - last_hidden_state: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class BlipImageTextMatchingModelOutput(ModelOutput): - """ - Adapted from the base class for vision model's outputs that also contains image embeddings of the pooling of the - last hidden states. This class also adds the loss term from the text decoder as well as the image-text similarity - scores. - - Args: - itm_score (`torch.FloatTensor`): - The image-text similarity scores. - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided): - Languge modeling loss from the text decoder. - image_embeds (`torch.FloatTensor` of shape `(batch_size, output_dim)` *optional* returned when model is initialized with `with_projection=True`): - The image embeddings obtained by applying the projection layer to the pooler_output. - last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + - one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. - vision_pooler_output (`torch.FloatTensor` of shape `(batch_size, hidden_size)`, *optional*): - Last layer hidden-state of the vision of the vision-only branch of the model. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - question_embeds (`torch.FloatTensor`): - The question embeddings obtained by the text projection layer. - """ - - itm_score: Optional[torch.FloatTensor] = None - loss: Optional[torch.FloatTensor] = None - image_embeds: Optional[torch.FloatTensor] = None - last_hidden_state: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - vision_pooler_output: Optional[torch.FloatTensor] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - question_embeds: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class BlipOutput(ModelOutput): - """ - Args: - loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `return_loss` is `True`): - Contrastive loss for image-text similarity. - logits_per_image:(`torch.FloatTensor` of shape `(image_batch_size, text_batch_size)`): - The scaled dot product scores between `image_embeds` and `text_embeds`. This represents the image-text - similarity scores. - logits_per_text:(`torch.FloatTensor` of shape `(text_batch_size, image_batch_size)`): - The scaled dot product scores between `text_embeds` and `image_embeds`. This represents the text-image - similarity scores. 
- text_embeds(`torch.FloatTensor` of shape `(batch_size, output_dim`): - The text embeddings obtained by applying the projection layer to the pooled output of [`BlipTextModel`]. - image_embeds(`torch.FloatTensor` of shape `(batch_size, output_dim`): - The image embeddings obtained by applying the projection layer to the pooled output of [`BlipVisionModel`]. - text_model_output(`BaseModelOutputWithPooling`): - The output of the [`BlipTextModel`]. - vision_model_output(`BaseModelOutputWithPooling`): - The output of the [`BlipVisionModel`]. - """ - - loss: Optional[torch.FloatTensor] = None - logits_per_image: torch.FloatTensor = None - logits_per_text: torch.FloatTensor = None - text_embeds: torch.FloatTensor = None - image_embeds: torch.FloatTensor = None - text_model_output: BaseModelOutputWithPooling = None - vision_model_output: BaseModelOutputWithPooling = None - - def to_tuple(self) -> Tuple[Any]: - return tuple( - self[k] if k not in ["text_model_output", "vision_model_output"] else getattr(self, k).to_tuple() - for k in self.keys() - ) - - -class BlipVisionEmbeddings(nn.Module): - def __init__(self, config: BlipVisionConfig): - super().__init__() - self.config = config - self.embed_dim = config.hidden_size - self.image_size = config.image_size - self.patch_size = config.patch_size - - self.class_embedding = nn.Parameter( - torch.randn(1, 1, self.embed_dim), - ) - - self.patch_embedding = nn.Conv2d( - in_channels=3, out_channels=self.embed_dim, kernel_size=self.patch_size, stride=self.patch_size - ) - - self.num_patches = (self.image_size // self.patch_size) ** 2 - self.num_positions = self.num_patches + 1 - - self.position_embedding = nn.Parameter(torch.randn(1, self.num_positions, self.embed_dim)) - - def forward(self, pixel_values: torch.FloatTensor) -> torch.Tensor: - batch_size = pixel_values.shape[0] - target_dtype = self.patch_embedding.weight.dtype - patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid] - patch_embeds = patch_embeds.flatten(2).transpose(1, 2) - - class_embeds = self.class_embedding.expand(batch_size, 1, -1).to(target_dtype) - embeddings = torch.cat([class_embeds, patch_embeds], dim=1) - embeddings = embeddings + self.position_embedding[:, : embeddings.size(1), :].to(target_dtype) - return embeddings - - -# Copied from transformers.models.clip.modeling_clip.CLIPTextEmbeddings with CLIP->Blip -class BlipTextEmbeddings(nn.Module): - def __init__(self, config: BlipTextConfig): - super().__init__() - embed_dim = config.hidden_size - - self.token_embedding = nn.Embedding(config.vocab_size, embed_dim) - self.position_embedding = nn.Embedding(config.max_position_embeddings, embed_dim) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - position_ids: Optional[torch.LongTensor] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - ) -> torch.Tensor: - seq_length = input_ids.shape[-1] if input_ids is not None else inputs_embeds.shape[-2] - - if position_ids is None: - position_ids = self.position_ids[:, :seq_length] - - if inputs_embeds is None: - inputs_embeds = self.token_embedding(input_ids) - - position_embeddings = self.position_embedding(position_ids) - embeddings = inputs_embeds + position_embeddings - - return embeddings - - -class BlipAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You 
Need' paper""" - - def __init__(self, config): - super().__init__() - self.config = config - self.embed_dim = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.embed_dim // self.num_heads - if self.head_dim * self.num_heads != self.embed_dim: - raise ValueError( - f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`:" - f" {self.num_heads})." - ) - self.scale = self.head_dim**-0.5 - self.dropout = nn.Dropout(config.attention_dropout) - - self.qkv = nn.Linear(self.embed_dim, 3 * self.embed_dim) - - self.projection = nn.Linear(self.embed_dim, self.embed_dim) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - head_mask: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - """Input shape: Batch x Time x Channel""" - - bsz, tgt_len, embed_dim = hidden_states.size() - - mixed_qkv = self.qkv(hidden_states) - mixed_qkv = ( - self.qkv(hidden_states) - .reshape(bsz, tgt_len, 3, self.num_heads, embed_dim // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - query_states, key_states, value_states = ( - mixed_qkv[0], - mixed_qkv[1], - mixed_qkv[2], - ) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_states, key_states.transpose(-1, -2)) - - attention_scores = attention_scores * self.scale - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
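The fused-QKV attention in the deleted `BlipAttention` module above projects queries, keys, and values with a single linear layer, splits the heads, and applies scaled dot-product attention before the dropout that follows. Below is a minimal standalone sketch of that pattern in plain PyTorch; the sizes (`embed_dim=768`, `num_heads=12`, batch and sequence lengths) are illustrative assumptions, not values taken from the BLIP config.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the fused-QKV multi-head attention pattern;
# all sizes below are assumptions for demonstration only.
embed_dim, num_heads = 768, 12
head_dim = embed_dim // num_heads
qkv = nn.Linear(embed_dim, 3 * embed_dim)

x = torch.randn(2, 16, embed_dim)                # (batch, seq_len, embed_dim)
bsz, seq_len, _ = x.shape

# One projection, then split into query / key / value and separate the heads.
q, k, v = (
    qkv(x)
    .reshape(bsz, seq_len, 3, num_heads, head_dim)
    .permute(2, 0, 3, 1, 4)                      # (3, batch, heads, seq, head_dim)
)

scores = torch.matmul(q, k.transpose(-1, -2)) * head_dim ** -0.5  # scaled dot product
probs = scores.softmax(dim=-1)                                    # attention weights
context = torch.matmul(probs, v).permute(0, 2, 1, 3).reshape(bsz, seq_len, embed_dim)
print(context.shape)  # torch.Size([2, 16, 768])
```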
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_states).permute(0, 2, 1, 3) - - new_context_layer_shape = context_layer.size()[:-2] + (self.embed_dim,) - context_layer = context_layer.reshape(new_context_layer_shape) - - output = self.projection(context_layer) - - outputs = (output, attention_probs) if output_attentions else (output, None) - - return outputs - - -# Copied from transformers.models.clip.modeling_clip.CLIPMLP with CLIP->Blip -class BlipMLP(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.activation_fn = ACT2FN[config.hidden_act] - self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size) - self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size) - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.fc1(hidden_states) - hidden_states = self.activation_fn(hidden_states) - hidden_states = self.fc2(hidden_states) - return hidden_states - - -class BlipEncoderLayer(nn.Module): - def __init__(self, config: BlipConfig): - super().__init__() - self.embed_dim = config.hidden_size - self.self_attn = BlipAttention(config) - self.layer_norm1 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - self.mlp = BlipMLP(config) - self.layer_norm2 = nn.LayerNorm(self.embed_dim, eps=config.layer_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: torch.Tensor, - output_attentions: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - `(config.encoder_attention_heads,)`. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - """ - residual = hidden_states - - hidden_states = self.layer_norm1(hidden_states) - hidden_states, attn_weights = self.self_attn( - hidden_states=hidden_states, - head_mask=attention_mask, - output_attentions=output_attentions, - ) - hidden_states = hidden_states + residual - residual = hidden_states - hidden_states = self.layer_norm2(hidden_states) - hidden_states = self.mlp(hidden_states) - - hidden_states = hidden_states + residual - - outputs = (hidden_states,) - - if output_attentions: - outputs += (attn_weights,) - - return outputs - - -class BlipPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. 
- """ - - config_class = BlipConfig - base_model_prefix = "blip" - supports_gradient_checkpointing = True - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - factor = self.config.initializer_range - if isinstance(module, nn.Conv2d) or isinstance(module, nn.Embedding) or isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=factor) - if hasattr(module, "bias") and module.bias is not None: - module.bias.data.zero_() - - if isinstance(module, BlipVisionEmbeddings): - if hasattr(self.config, "vision_config"): - factor = self.config.vision_config.initializer_range - nn.init.trunc_normal_( - module.position_embedding, - mean=0.0, - std=factor, - ) - - nn.init.trunc_normal_( - module.class_embedding, - mean=0.0, - std=factor, - ) - - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - elif isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, BlipEncoder): - module.gradient_checkpointing = value - - -BLIP_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`BlipConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -BLIP_TEXT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoProcessor`]. See [`BlipProcessor.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
-""" - -BLIP_VISION_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using - [`BlipImageProcessor`]. See [`BlipImageProcessor.__call__`] for details. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - -BLIP_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoProcessor`]. See [`BlipProcessor.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using - [`BlipImageProcessor`]. See [`BlipImageProcessor.__call__`] for details. - return_loss (`bool`, *optional*): - Whether or not to return the contrastive loss. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -class BlipEncoder(nn.Module): - """ - Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a - [`BlipEncoderLayer`]. - - Args: - config (`BlipConfig`): - The corresponding vision configuration for the `BlipEncoder`. 
- """ - - def __init__(self, config: BlipConfig): - super().__init__() - self.config = config - self.layers = nn.ModuleList([BlipEncoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - inputs_embeds, - attention_mask: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutput]: - r""" - Args: - inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. - This is useful if you want more control over how to convert `input_ids` indices into associated vectors - than the model's internal embedding lookup matrix. - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - encoder_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - - hidden_states = inputs_embeds - for idx, encoder_layer in enumerate(self.layers): - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(encoder_layer), - hidden_states, - attention_mask, - ) - else: - layer_outputs = encoder_layer( - hidden_states, - attention_mask, - output_attentions=output_attentions, - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - if output_hidden_states: - encoder_states = encoder_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions - ) - - -class BlipVisionModel(BlipPreTrainedModel): - main_input_name = "pixel_values" - config_class = BlipVisionConfig - - def __init__(self, config: BlipVisionConfig): - super().__init__(config) - self.config = config - embed_dim = config.hidden_size - self.embeddings = BlipVisionEmbeddings(config) - self.patch_mask_generator = ViTPatchMaskGenerator(config.patch_size) - self.encoder = BlipEncoder(config) - 
self.post_layernorm = nn.LayerNorm(embed_dim, eps=config.layer_norm_eps) - - self.post_init() - - @add_start_docstrings_to_model_forward(BLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BaseModelOutputWithPooling, config_class=BlipVisionConfig) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - pixel_masks: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPooling]: - r""" - Returns: - - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if pixel_values is None: - raise ValueError("You have to specify pixel_values") - - hidden_states = self.embeddings(pixel_values) - B, N, D = hidden_states.shape - # print('Before mask:', hidden_states.shape) - if pixel_masks is not None: - assert pixel_masks.shape[0] == 1 - patch_masks = self.patch_mask_generator(pixel_masks) - # print(patch_masks.shape) - patch_masks = patch_masks.unsqueeze(-1).expand_as(hidden_states) - hidden_states = hidden_states.masked_select(patch_masks).view(B, -1, D) - # print('After mask:', hidden_states.shape) - - encoder_outputs = self.encoder( - inputs_embeds=hidden_states, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - last_hidden_state = encoder_outputs[0] - last_hidden_state = self.post_layernorm(last_hidden_state) - - pooled_output = last_hidden_state[:, 0, :] - pooled_output = self.post_layernorm(pooled_output) - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPooling( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - def get_input_embeddings(self): - return self.embeddings - - -@add_start_docstrings(BLIP_START_DOCSTRING) -class BlipModel(BlipPreTrainedModel): - config_class = BlipConfig - - def __init__(self, config: BlipConfig): - super().__init__(config) - - if not isinstance(config.text_config, BlipTextConfig): - raise ValueError( - "config.text_config is expected to be of type BlipTextConfig but is of type" - f" {type(config.text_config)}." - ) - - if not isinstance(config.vision_config, BlipVisionConfig): - raise ValueError( - "config.vision_config is expected to be of type BlipVisionConfig but is of type" - f" {type(config.vision_config)}." 
- ) - - text_config = config.text_config - vision_config = config.vision_config - - self.projection_dim = config.projection_dim - self.text_embed_dim = text_config.hidden_size - self.vision_embed_dim = vision_config.hidden_size - - self.text_model = BlipTextModel(text_config) - self.vision_model = BlipVisionModel(vision_config) - - self.visual_projection = nn.Linear(self.vision_embed_dim, self.projection_dim, bias=False) - self.text_projection = nn.Linear(self.text_embed_dim, self.projection_dim, bias=False) - self.logit_scale = nn.Parameter(torch.ones([]) * self.config.logit_scale_init_value) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BLIP_TEXT_INPUTS_DOCSTRING) - def get_text_features( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - return_dict: Optional[bool] = None, - ) -> torch.FloatTensor: - r""" - Returns: - text_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The text embeddings obtained by - applying the projection layer to the pooled output of [`BlipTextModel`]. - - Examples: - - ```python - >>> from transformers import AutoProcessor, BlipModel - - >>> model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - - >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") - >>> text_features = model.get_text_features(**inputs) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - return_dict=return_dict, - ) - - pooled_output = text_outputs[1] - text_features = self.text_projection(pooled_output) - - return text_features - - @add_start_docstrings_to_model_forward(BLIP_VISION_INPUTS_DOCSTRING) - def get_image_features( - self, - pixel_values: Optional[torch.FloatTensor] = None, - return_dict: Optional[bool] = None, - ) -> torch.FloatTensor: - r""" - Returns: - image_features (`torch.FloatTensor` of shape `(batch_size, output_dim`): The image embeddings obtained by - applying the projection layer to the pooled output of [`BlipVisionModel`]. 
- - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipModel - - >>> model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="pt") - - >>> image_features = model.get_image_features(**inputs) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - return_dict=return_dict, - ) - - pooled_output = vision_outputs[1] # pooled_output - image_features = self.visual_projection(pooled_output) - - return image_features - - @add_start_docstrings_to_model_forward(BLIP_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BlipOutput, config_class=BlipConfig) - def forward( - self, - input_ids: Optional[torch.LongTensor] = None, - pixel_values: Optional[torch.FloatTensor] = None, - pixel_masks: Optional[torch.FloatTensor] = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - return_loss: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BlipOutput]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipModel - - >>> model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor( - ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True - ... ) - - >>> outputs = model(**inputs) - >>> logits_per_image = outputs.logits_per_image # this is the image-text similarity score - >>> probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities - ```""" - # Use BLIP model's config for some fields (if specified) instead of those of vision & text components. 
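The body of `BlipModel.forward` that continues after this point scores image–text pairs the CLIP way: project both modalities, L2-normalize, and scale the cosine similarities by a learned temperature. A small numeric sketch of just that step, assuming a 256-dimensional projection space and the common `log(1/0.07)` initial temperature (both assumptions, not values read from the config):

```python
import torch

torch.manual_seed(0)
image_embeds = torch.randn(4, 256)         # assumed projected image features
text_embeds = torch.randn(4, 256)          # assumed projected text features
logit_scale = torch.tensor(2.6593).exp()   # exp(log(1/0.07)), assumed init value

# L2-normalize, then take temperature-scaled cosine similarities.
image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True)

logits_per_text = logit_scale * text_embeds @ image_embeds.t()  # (text, image)
logits_per_image = logits_per_text.t()                          # (image, text)
print(logits_per_image.softmax(dim=1))  # per-image probabilities over the texts
```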
- output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - pixel_masks=pixel_masks, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - text_outputs = self.text_model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[1] - image_embeds = self.visual_projection(image_embeds) - - text_embeds = text_outputs[1] - text_embeds = self.text_projection(text_embeds) - - # normalized features - image_embeds = image_embeds / image_embeds.norm(p=2, dim=-1, keepdim=True) - text_embeds = text_embeds / text_embeds.norm(p=2, dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_text = torch.matmul(text_embeds, image_embeds.t()) * logit_scale - logits_per_image = logits_per_text.t() - - loss = None - if return_loss: - loss = blip_loss(logits_per_text) - - if not return_dict: - output = (logits_per_image, logits_per_text, text_embeds, image_embeds, text_outputs, vision_outputs) - return ((loss,) + output) if loss is not None else output - - return BlipOutput( - loss=loss, - logits_per_image=logits_per_image, - logits_per_text=logits_per_text, - text_embeds=text_embeds, - image_embeds=image_embeds, - text_model_output=text_outputs, - vision_model_output=vision_outputs, - ) - - -@add_start_docstrings( - """ - BLIP Model for image captioning. The model consists of a vision encoder and a text decoder. One can optionally pass - `input_ids` to the model, which serve as a text prompt, to make the text decoder continue the prompt. Otherwise, - the decoder starts generating text from the [BOS] (beginning-of-sequence) token. will start generating the caption - from the text input. If no text input is provided, the decoder will start with the [BOS] token only. 
- """, - BLIP_START_DOCSTRING, -) -class BlipForConditionalGeneration(BlipPreTrainedModel): - config_class = BlipConfig - _keys_to_ignore_on_load_missing = [r"text_decoder.cls.predictions.decoder.bias"] - main_input_name = "pixel_values" - - def __init__(self, config: BlipConfig): - super().__init__(config) - - self.vision_model = BlipVisionModel(config.vision_config) - - self.text_decoder = BlipTextLMHeadModel(config.text_config) - - self.decoder_input_ids = config.text_config.bos_token_id - self.decoder_pad_token_id = config.text_config.pad_token_id - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.vision_model.embeddings.patch_embedding - - @add_start_docstrings_to_model_forward(BLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BlipForConditionalGenerationModelOutput, config_class=BlipVisionConfig) - def forward( - self, - pixel_values: torch.FloatTensor, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BlipForConditionalGenerationModelOutput]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipForConditionalGeneration - - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - >>> model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> text = "A picture of" - - >>> inputs = processor(images=image, text=text, return_tensors="pt") - - >>> outputs = model(**inputs) - ```""" - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[0] - - outputs = self.text_decoder( - input_ids=input_ids, - attention_mask=attention_mask, - encoder_hidden_states=image_embeds, - labels=labels, - return_dict=return_dict, - reduction="mean", - ) - - if not return_dict: - outputs = (outputs[0], outputs[1], image_embeds, vision_outputs[0]) + vision_outputs[2:] - return tuple(output for output in outputs if output is not None) - - return BlipForConditionalGenerationModelOutput( - loss=outputs.loss, - decoder_logits=outputs.logits, - image_embeds=image_embeds, - last_hidden_state=vision_outputs.last_hidden_state, - hidden_states=vision_outputs.hidden_states, - attentions=vision_outputs.attentions, - ) - - @torch.no_grad() - def generate( - self, - pixel_values: torch.FloatTensor, - pixel_masks: torch.Tensor = None, - input_ids: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.LongTensor] = None, - **generate_kwargs, - ) -> torch.LongTensor: - r""" - Overrides *generate* function to be able to use the model as a conditional generator - - Parameters: - pixel_values (*torch.FloatTensor* of shape *(batch_size, image_width, image_height)*: - Input image to be processed - input_ids (*torch.LongTensor* of shape *(batch_size, sequence_length)*, *optional*): - The sequence used as a prompt for the generation. 
- attention_mask (*torch.LongTensor* of shape *(batch_size, sequence_length)*, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - Examples: - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipForConditionalGeneration - - >>> model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> inputs = processor(images=image, return_tensors="pt") - - >>> outputs = model.generate(**inputs) - >>> print(processor.decode(outputs[0], skip_special_tokens=True)) - two cats are laying on a couch - ``` - """ - - batch_size = pixel_values.shape[0] - vision_outputs = self.vision_model( - pixel_values=pixel_values, - pixel_masks=pixel_masks, - ) - - image_embeds = vision_outputs[0] - - image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image_embeds.device) - - if isinstance(input_ids, list): - input_ids = torch.LongTensor(input_ids) - elif input_ids is None: - input_ids = ( - torch.LongTensor([[self.decoder_input_ids, self.config.text_config.eos_token_id]]) - .repeat(batch_size, 1) - .to(image_embeds.device) - ) - - input_ids[:, 0] = self.config.text_config.bos_token_id - attention_mask = attention_mask[:, :-1] if attention_mask is not None else None - - outputs = self.text_decoder.generate( - input_ids=input_ids[:, :-1], - eos_token_id=self.config.text_config.sep_token_id, - pad_token_id=self.config.text_config.pad_token_id, - attention_mask=attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_attention_mask, - **generate_kwargs, - ) - - return outputs - - -@add_start_docstrings( - """ - BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder as well as a text - decoder. The vision encoder will encode the input image, the text encoder will encode the input question together - with the encoding of the image, and the text decoder will output the answer to the question. 
- """, - BLIP_START_DOCSTRING, -) -class BlipForQuestionAnswering(BlipPreTrainedModel): - config_class = BlipConfig - _keys_to_ignore_on_load_missing = [r"text_decoder.cls.predictions.decoder.bias"] - - def __init__(self, config: BlipConfig): - super().__init__(config) - - self.vision_model = BlipVisionModel(config.vision_config) - - self.text_encoder = BlipTextModel(config.text_config, add_pooling_layer=False) - - self.text_decoder = BlipTextLMHeadModel(config.text_config) - - self.decoder_pad_token_id = config.text_config.pad_token_id - self.decoder_start_token_id = config.text_config.bos_token_id - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.vision_model.embeddings.patch_embedding - - # Adapted from transformers.models.t5.modeling_t5.T5PreTrainedModel._shift_right - def _shift_right(self, input_ids): - pad_token_id = self.decoder_pad_token_id - - shifted_input_ids = input_ids.new_zeros(input_ids.shape) - shifted_input_ids[..., 1:] = input_ids[..., :-1].clone() - shifted_input_ids[..., 0] = self.decoder_start_token_id - - # replace possible -100 values in labels by `pad_token_id` - shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) - - return shifted_input_ids - - @add_start_docstrings_to_model_forward(BLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BlipTextVisionModelOutput, config_class=BlipVisionConfig) - def forward( - self, - input_ids: torch.LongTensor, - pixel_values: torch.FloatTensor, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - attention_mask: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - labels: Optional[torch.LongTensor] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BlipTextVisionModelOutput]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipForQuestionAnswering - - >>> model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base") - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> # training - >>> text = "How many cats are in the picture?" - >>> label = "2" - >>> inputs = processor(images=image, text=text, return_tensors="pt") - >>> labels = processor(text=label, return_tensors="pt").input_ids - - >>> inputs["labels"] = labels - >>> outputs = model(**inputs) - >>> loss = outputs.loss - >>> loss.backward() - - >>> # inference - >>> text = "How many cats are in the picture?" - >>> inputs = processor(images=image, text=text, return_tensors="pt") - >>> outputs = model.generate(**inputs) - >>> print(processor.decode(outputs[0], skip_special_tokens=True)) - 2 - ```""" - if labels is None and decoder_input_ids is None: - raise ValueError( - "Either `decoder_input_ids` or `labels` should be passed when calling `forward` with" - " `BlipForQuestionAnswering`. 
if you are training the model make sure that `labels` is passed, if you" - " are using the model for inference make sure that `decoder_input_ids` is passed or call `generate`" - ) - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[0] - image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long) - - question_embeds = self.text_encoder( - input_ids=input_ids, - attention_mask=attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_attention_mask, - return_dict=return_dict, - ) - - question_embeds = question_embeds[0] if not return_dict else question_embeds.last_hidden_state - - if labels is not None and decoder_input_ids is None: - # get decoder inputs from shifting lm labels to the right - this is used in training mode - decoder_input_ids = self._shift_right(labels) - # replace possible -100 values in labels by `pad_token_id` - labels = labels.masked_fill(labels == self.decoder_pad_token_id, -100) - - answer_output = self.text_decoder( - input_ids=decoder_input_ids, - attention_mask=decoder_attention_mask, - encoder_hidden_states=question_embeds, - encoder_attention_mask=attention_mask, - labels=labels, - return_dict=return_dict, - reduction="mean", - ) - - if labels is not None: - decoder_loss = answer_output.loss.mean() if return_dict else answer_output[0].mean() - else: - decoder_loss = None - - if not return_dict: - outputs = (decoder_loss, image_embeds, vision_outputs[0]) + vision_outputs[2:] - return tuple(output for output in outputs if output is not None) - - return BlipTextVisionModelOutput( - loss=decoder_loss, - image_embeds=image_embeds, - last_hidden_state=vision_outputs.last_hidden_state, - hidden_states=vision_outputs.hidden_states, - attentions=vision_outputs.attentions, - ) - - @torch.no_grad() - def generate( - self, - input_ids: torch.LongTensor, - pixel_values: torch.FloatTensor, - pixel_masks: torch.Tensor = None, - attention_mask: Optional[torch.LongTensor] = None, - **generate_kwargs, - ) -> torch.LongTensor: - r""" - Overrides *generate* function to be able to use the model as a conditional generator - - Parameters: - input_ids (*torch.LongTensor* of shape *(batch_size, sequence_length)*): - The sequence used as a prompt for the generation. - pixel_values (*torch.FloatTensor* of shape *(batch_size, image_width, image_height)*: - Input image to be processed - attention_mask (*torch.LongTensor* of shape *(batch_size, sequence_length)*, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`. `1` for - tokens that are NOT MASKED, `0` for MASKED tokens. - **generate_kwargs: - Additional arguments passed to the *generate* function of the decoder - - - Examples: - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipForQuestionAnswering - - >>> model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base") - >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> text = "How many cats are in the picture?" 
- - >>> inputs = processor(images=image, text=text, return_tensors="pt") - - >>> outputs = model.generate(**inputs) - >>> print(processor.decode(outputs[0], skip_special_tokens=True)) - 2 - ``` - """ - vision_outputs = self.vision_model( - pixel_values=pixel_values, - pixel_masks=pixel_masks - ) - - image_embeds = vision_outputs[0] - - image_attention_mask = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image_embeds.device) - - if isinstance(input_ids, list): - input_ids = torch.LongTensor(input_ids) - - question_outputs = self.text_encoder( - input_ids=input_ids, - attention_mask=attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_attention_mask, - return_dict=False, - ) - - question_embeds = question_outputs[0] - - question_attention_mask = torch.ones(question_embeds.size()[:-1], dtype=torch.long).to(question_embeds.device) - - bos_ids = torch.full( - (question_embeds.size(0), 1), fill_value=self.decoder_start_token_id, device=question_embeds.device - ) - - outputs = self.text_decoder.generate( - input_ids=bos_ids, - eos_token_id=self.config.text_config.sep_token_id, - pad_token_id=self.config.text_config.pad_token_id, - encoder_hidden_states=question_embeds, - encoder_attention_mask=question_attention_mask, - **generate_kwargs, - ) - - return outputs - - -@add_start_docstrings( - """ - BLIP Model with a vision and text projector, and a classification head on top. The model is used in the context of - image-text retrieval. Given an image and a text, the model returns the probability of the text being relevant to - the image. - """, - BLIP_START_DOCSTRING, -) -class BlipForImageTextRetrieval(BlipPreTrainedModel): - config_class = BlipConfig - - def __init__(self, config: BlipConfig): - super().__init__(config) - - self.vision_model = BlipVisionModel(config.vision_config) - - self.text_encoder = BlipTextModel(config.text_config, add_pooling_layer=False) - - # vision projection layer - self.vision_proj = nn.Linear(config.vision_config.hidden_size, config.image_text_hidden_size) - - # text projection layer - self.text_proj = nn.Linear(config.text_config.hidden_size, config.image_text_hidden_size) - - # image text matching head - self.itm_head = nn.Linear(config.text_config.hidden_size, 2) - - self.decoder_pad_token_id = ( - config.text_config.pad_token_id - if not hasattr(config, "decoder_pad_token_id") - else config.decoder_pad_token_id - ) - self.decoder_start_token_id = ( - config.text_config.bos_token_id - if not hasattr(config, "decoder_start_token_id") - else config.decoder_start_token_id - ) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self) -> nn.Module: - return self.vision_model.embeddings.patch_embedding - - @add_start_docstrings_to_model_forward(BLIP_VISION_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BlipTextVisionModelOutput, config_class=BlipVisionConfig) - def forward( - self, - input_ids: torch.LongTensor, - pixel_values: torch.FloatTensor, - use_itm_head: Optional[bool] = True, - attention_mask: Optional[torch.LongTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BlipTextVisionModelOutput]: - r""" - Returns: - - Examples: - - ```python - >>> from PIL import Image - >>> import requests - >>> from transformers import AutoProcessor, BlipForImageTextRetrieval - - >>> model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco") - 
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco") - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - >>> text = "an image of a cat" - - >>> inputs = processor(images=image, text=text, return_tensors="pt") - >>> outputs = model(**inputs) - ``` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - vision_outputs = self.vision_model( - pixel_values=pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - image_embeds = vision_outputs[0] - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long) - - if use_itm_head: - question_embeds = self.text_encoder( - input_ids=input_ids, - attention_mask=attention_mask, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=return_dict, - ) - question_embeds = question_embeds[0] if not return_dict else question_embeds.last_hidden_state - - output = self.itm_head(question_embeds[:, 0, :]) - else: - question_embeds = self.text_encoder( - input_ids=input_ids, - attention_mask=attention_mask, - return_dict=return_dict, - ) - question_embeds = question_embeds[0] if not return_dict else question_embeds.last_hidden_state - - image_feat = normalize(self.vision_proj(image_embeds[:, 0, :]), dim=-1) - text_feat = normalize(self.text_proj(question_embeds[:, 0, :]), dim=-1) - - output = image_feat @ text_feat.t() - - if not return_dict: - outputs = (output, vision_outputs[0]) + vision_outputs[2:] + (question_embeds,) - return tuple(output for output in outputs if output is not None) - - return BlipImageTextMatchingModelOutput( - itm_score=output, - last_hidden_state=vision_outputs.last_hidden_state, - hidden_states=vision_outputs.hidden_states, - attentions=vision_outputs.attentions, - question_embeds=question_embeds, - ) diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/static/assets/vue-e0bc46a9.js b/spaces/wffcyrus/MetaGPT-v1/metagpt/static/assets/vue-e0bc46a9.js deleted file mode 100644 index ac16c5935d37a113c4a4e92bce50dab25973aec9..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/static/assets/vue-e0bc46a9.js +++ /dev/null @@ -1,5 +0,0 @@ -function ls(e,t){const n=Object.create(null),s=e.split(",");for(let r=0;r!!n[r.toLowerCase()]:r=>!!n[r]}const ie={},$t=[],He=()=>{},$l=()=>!1,jl=/^on[^a-z]/,An=e=>jl.test(e),fr=e=>e.startsWith("onUpdate:"),ue=Object.assign,ar=(e,t)=>{const n=e.indexOf(t);n>-1&&e.splice(n,1)},Ul=Object.prototype.hasOwnProperty,ee=(e,t)=>Ul.call(e,t),D=Array.isArray,jt=e=>Xt(e)==="[object Map]",It=e=>Xt(e)==="[object Set]",zr=e=>Xt(e)==="[object Date]",Kl=e=>Xt(e)==="[object RegExp]",Q=e=>typeof e=="function",ae=e=>typeof e=="string",yn=e=>typeof e=="symbol",le=e=>e!==null&&typeof e=="object",dr=e=>le(e)&&Q(e.then)&&Q(e.catch),Qo=Object.prototype.toString,Xt=e=>Qo.call(e),Vl=e=>Xt(e).slice(8,-1),Jo=e=>Xt(e)==="[object Object]",hr=e=>ae(e)&&e!=="NaN"&&e[0]!=="-"&&""+parseInt(e,10)===e,un=ls(",key,ref,ref_for,ref_key,onVnodeBeforeMount,onVnodeMounted,onVnodeBeforeUpdate,onVnodeUpdated,onVnodeBeforeUnmount,onVnodeUnmounted"),cs=e=>{const t=Object.create(null);return n=>t[n]||(t[n]=e(n))},Wl=/-(\w)/g,Te=cs(e=>e.replace(Wl,(t,n)=>n?n.toUpperCase():"")),ql=/\B([A-Z])/g,Fe=cs(e=>e.replace(ql,"-$1").toLowerCase()),Sn=cs(e=>e.charAt(0).toUpperCase()+e.slice(1)),fn=cs(e=>e?`on${Sn(e)}`:""),Wt=(e,t)=>!Object.is(e,t),Ut=(e,t)=>{for(let 
n=0;n{Object.defineProperty(e,t,{configurable:!0,enumerable:!1,value:n})},Zn=e=>{const t=parseFloat(e);return isNaN(t)?e:t},Gn=e=>{const t=ae(e)?Number(e):NaN;return isNaN(t)?e:t};let Yr;const Us=()=>Yr||(Yr=typeof globalThis<"u"?globalThis:typeof self<"u"?self:typeof window<"u"?window:typeof global<"u"?global:{}),zl="Infinity,undefined,NaN,isFinite,isNaN,parseFloat,parseInt,decodeURI,decodeURIComponent,encodeURI,encodeURIComponent,Math,Number,Date,Array,Object,Boolean,String,RegExp,Map,Set,JSON,Intl,BigInt,console",Yl=ls(zl);function On(e){if(D(e)){const t={};for(let n=0;n{if(n){const s=n.split(Jl);s.length>1&&(t[s[0].trim()]=s[1].trim())}}),t}function Mn(e){let t="";if(ae(e))t=e;else if(D(e))for(let n=0;nht(n,t))}const sc=e=>ae(e)?e:e==null?"":D(e)||le(e)&&(e.toString===Qo||!Q(e.toString))?JSON.stringify(e,Zo,2):String(e),Zo=(e,t)=>t&&t.__v_isRef?Zo(e,t.value):jt(t)?{[`Map(${t.size})`]:[...t.entries()].reduce((n,[s,r])=>(n[`${s} =>`]=r,n),{})}:It(t)?{[`Set(${t.size})`]:[...t.values()]}:le(t)&&!D(t)&&!Jo(t)?String(t):t;let Se;class pr{constructor(t=!1){this.detached=t,this._active=!0,this.effects=[],this.cleanups=[],this.parent=Se,!t&&Se&&(this.index=(Se.scopes||(Se.scopes=[])).push(this)-1)}get active(){return this._active}run(t){if(this._active){const n=Se;try{return Se=this,t()}finally{Se=n}}}on(){Se=this}off(){Se=this.parent}stop(t){if(this._active){let n,s;for(n=0,s=this.effects.length;n{const t=new Set(e);return t.w=0,t.n=0,t},ti=e=>(e.w&pt)>0,ni=e=>(e.n&pt)>0,ic=({deps:e})=>{if(e.length)for(let t=0;t{const{deps:t}=e;if(t.length){let n=0;for(let s=0;s{(f==="length"||f>=c)&&l.push(u)})}else switch(n!==void 0&&l.push(i.get(n)),t){case"add":D(e)?hr(n)&&l.push(i.get("length")):(l.push(i.get(wt)),jt(e)&&l.push(i.get(Vs)));break;case"delete":D(e)||(l.push(i.get(wt)),jt(e)&&l.push(i.get(Vs)));break;case"set":jt(e)&&l.push(i.get(wt));break}if(l.length===1)l[0]&&Ws(l[0]);else{const c=[];for(const u of l)u&&c.push(...u);Ws(gr(c))}}function Ws(e,t){const n=D(e)?e:[...e];for(const s of n)s.computed&&Jr(s);for(const s of n)s.computed||Jr(s)}function Jr(e,t){(e!==Le||e.allowRecurse)&&(e.scheduler?e.scheduler():e.run())}function fc(e,t){var n;return(n=es.get(e))==null?void 0:n.get(t)}const ac=ls("__proto__,__v_isRef,__isVue"),oi=new Set(Object.getOwnPropertyNames(Symbol).filter(e=>e!=="arguments"&&e!=="caller").map(e=>Symbol[e]).filter(yn)),dc=fs(),hc=fs(!1,!0),pc=fs(!0),gc=fs(!0,!0),Xr=mc();function mc(){const e={};return["includes","indexOf","lastIndexOf"].forEach(t=>{e[t]=function(...n){const s=Z(this);for(let o=0,i=this.length;o{e[t]=function(...n){Zt();const s=Z(this)[t].apply(this,n);return Gt(),s}}),e}function _c(e){const t=Z(this);return Pe(t,"has",e),t.hasOwnProperty(e)}function fs(e=!1,t=!1){return function(s,r,o){if(r==="__v_isReactive")return!e;if(r==="__v_isReadonly")return e;if(r==="__v_isShallow")return t;if(r==="__v_raw"&&o===(e?t?di:ai:t?fi:ui).get(s))return s;const i=D(s);if(!e){if(i&&ee(Xr,r))return Reflect.get(Xr,r,o);if(r==="hasOwnProperty")return _c}const l=Reflect.get(s,r,o);return(yn(r)?oi.has(r):ac(r))||(e||Pe(s,"get",r),t)?l:ge(l)?i&&hr(r)?l:l.value:le(l)?e?_r(l):en(l):l}}const yc=ii(),bc=ii(!0);function ii(e=!1){return function(n,s,r,o){let i=n[s];if(At(i)&&ge(i)&&!ge(r))return!1;if(!e&&(!bn(r)&&!At(r)&&(i=Z(i),r=Z(r)),!D(n)&&ge(i)&&!ge(r)))return i.value=r,!0;const l=D(n)&&hr(s)?Number(s)e,as=e=>Reflect.getPrototypeOf(e);function Hn(e,t,n=!1,s=!1){e=e.__v_raw;const 
r=Z(e),o=Z(t);n||(t!==o&&Pe(r,"get",t),Pe(r,"get",o));const{has:i}=as(r),l=s?mr:n?vr:vn;if(i.call(r,t))return l(e.get(t));if(i.call(r,o))return l(e.get(o));e!==r&&e.get(t)}function Dn(e,t=!1){const n=this.__v_raw,s=Z(n),r=Z(e);return t||(e!==r&&Pe(s,"has",e),Pe(s,"has",r)),e===r?n.has(e):n.has(e)||n.has(r)}function $n(e,t=!1){return e=e.__v_raw,!t&&Pe(Z(e),"iterate",wt),Reflect.get(e,"size",e)}function Zr(e){e=Z(e);const t=Z(this);return as(t).has.call(t,e)||(t.add(e),Ze(t,"add",e,e)),this}function Gr(e,t){t=Z(t);const n=Z(this),{has:s,get:r}=as(n);let o=s.call(n,e);o||(e=Z(e),o=s.call(n,e));const i=r.call(n,e);return n.set(e,t),o?Wt(t,i)&&Ze(n,"set",e,t):Ze(n,"add",e,t),this}function eo(e){const t=Z(this),{has:n,get:s}=as(t);let r=n.call(t,e);r||(e=Z(e),r=n.call(t,e)),s&&s.call(t,e);const o=t.delete(e);return r&&Ze(t,"delete",e,void 0),o}function to(){const e=Z(this),t=e.size!==0,n=e.clear();return t&&Ze(e,"clear",void 0,void 0),n}function jn(e,t){return function(s,r){const o=this,i=o.__v_raw,l=Z(i),c=t?mr:e?vr:vn;return!e&&Pe(l,"iterate",wt),i.forEach((u,f)=>s.call(r,c(u),c(f),o))}}function Un(e,t,n){return function(...s){const r=this.__v_raw,o=Z(r),i=jt(o),l=e==="entries"||e===Symbol.iterator&&i,c=e==="keys"&&i,u=r[e](...s),f=n?mr:t?vr:vn;return!t&&Pe(o,"iterate",c?Vs:wt),{next(){const{value:a,done:p}=u.next();return p?{value:a,done:p}:{value:l?[f(a[0]),f(a[1])]:f(a),done:p}},[Symbol.iterator](){return this}}}}function nt(e){return function(...t){return e==="delete"?!1:this}}function Rc(){const e={get(o){return Hn(this,o)},get size(){return $n(this)},has:Dn,add:Zr,set:Gr,delete:eo,clear:to,forEach:jn(!1,!1)},t={get(o){return Hn(this,o,!1,!0)},get size(){return $n(this)},has:Dn,add:Zr,set:Gr,delete:eo,clear:to,forEach:jn(!1,!0)},n={get(o){return Hn(this,o,!0)},get size(){return $n(this,!0)},has(o){return Dn.call(this,o,!0)},add:nt("add"),set:nt("set"),delete:nt("delete"),clear:nt("clear"),forEach:jn(!0,!1)},s={get(o){return Hn(this,o,!0,!0)},get size(){return $n(this,!0)},has(o){return Dn.call(this,o,!0)},add:nt("add"),set:nt("set"),delete:nt("delete"),clear:nt("clear"),forEach:jn(!0,!0)};return["keys","values","entries",Symbol.iterator].forEach(o=>{e[o]=Un(o,!1,!1),n[o]=Un(o,!0,!1),t[o]=Un(o,!1,!0),s[o]=Un(o,!0,!0)}),[e,n,t,s]}const[Tc,Pc,Ac,Sc]=Rc();function ds(e,t){const n=t?e?Sc:Ac:e?Pc:Tc;return(s,r,o)=>r==="__v_isReactive"?!e:r==="__v_isReadonly"?e:r==="__v_raw"?s:Reflect.get(ee(n,r)&&r in s?n:s,r,o)}const Oc={get:ds(!1,!1)},Mc={get:ds(!1,!0)},Ic={get:ds(!0,!1)},kc={get:ds(!0,!0)},ui=new WeakMap,fi=new WeakMap,ai=new WeakMap,di=new WeakMap;function Fc(e){switch(e){case"Object":case"Array":return 1;case"Map":case"Set":case"WeakMap":case"WeakSet":return 2;default:return 0}}function Nc(e){return e.__v_skip||!Object.isExtensible(e)?0:Fc(Vl(e))}function en(e){return At(e)?e:hs(e,!1,li,Oc,ui)}function hi(e){return hs(e,!1,wc,Mc,fi)}function _r(e){return hs(e,!0,ci,Ic,ai)}function Lc(e){return hs(e,!0,xc,kc,di)}function hs(e,t,n,s,r){if(!le(e)||e.__v_raw&&!(t&&e.__v_isReactive))return e;const o=r.get(e);if(o)return o;const i=Nc(e);if(i===0)return e;const l=new Proxy(e,i===2?s:n);return r.set(e,l),l}function xt(e){return At(e)?xt(e.__v_raw):!!(e&&e.__v_isReactive)}function At(e){return!!(e&&e.__v_isReadonly)}function bn(e){return!!(e&&e.__v_isShallow)}function yr(e){return xt(e)||At(e)}function Z(e){const t=e&&e.__v_raw;return t?Z(t):e}function br(e){return Xn(e,"__v_skip",!0),e}const vn=e=>le(e)?en(e):e,vr=e=>le(e)?_r(e):e;function Er(e){ut&&Le&&(e=Z(e),ri(e.dep||(e.dep=gr())))}function 
ps(e,t){e=Z(e);const n=e.dep;n&&Ws(n)}function ge(e){return!!(e&&e.__v_isRef===!0)}function Rt(e){return gi(e,!1)}function pi(e){return gi(e,!0)}function gi(e,t){return ge(e)?e:new Bc(e,t)}class Bc{constructor(t,n){this.__v_isShallow=n,this.dep=void 0,this.__v_isRef=!0,this._rawValue=n?t:Z(t),this._value=n?t:vn(t)}get value(){return Er(this),this._value}set value(t){const n=this.__v_isShallow||bn(t)||At(t);t=n?t:Z(t),Wt(t,this._rawValue)&&(this._rawValue=t,this._value=n?t:vn(t),ps(this))}}function Hc(e){ps(e)}function ft(e){return ge(e)?e.value:e}function Dc(e){return Q(e)?e():ft(e)}const $c={get:(e,t,n)=>ft(Reflect.get(e,t,n)),set:(e,t,n,s)=>{const r=e[t];return ge(r)&&!ge(n)?(r.value=n,!0):Reflect.set(e,t,n,s)}};function Cr(e){return xt(e)?e:new Proxy(e,$c)}class jc{constructor(t){this.dep=void 0,this.__v_isRef=!0;const{get:n,set:s}=t(()=>Er(this),()=>ps(this));this._get=n,this._set=s}get value(){return this._get()}set value(t){this._set(t)}}function Uc(e){return new jc(e)}function Kc(e){const t=D(e)?new Array(e.length):{};for(const n in e)t[n]=mi(e,n);return t}class Vc{constructor(t,n,s){this._object=t,this._key=n,this._defaultValue=s,this.__v_isRef=!0}get value(){const t=this._object[this._key];return t===void 0?this._defaultValue:t}set value(t){this._object[this._key]=t}get dep(){return fc(Z(this._object),this._key)}}class Wc{constructor(t){this._getter=t,this.__v_isRef=!0,this.__v_isReadonly=!0}get value(){return this._getter()}}function qc(e,t,n){return ge(e)?e:Q(e)?new Wc(e):le(e)&&arguments.length>1?mi(e,t,n):Rt(e)}function mi(e,t,n){const s=e[t];return ge(s)?s:new Vc(e,t,n)}class zc{constructor(t,n,s,r){this._setter=n,this.dep=void 0,this.__v_isRef=!0,this.__v_isReadonly=!1,this._dirty=!0,this.effect=new In(t,()=>{this._dirty||(this._dirty=!0,ps(this))}),this.effect.computed=this,this.effect.active=this._cacheable=!r,this.__v_isReadonly=s}get value(){const t=Z(this);return Er(t),(t._dirty||!t._cacheable)&&(t._dirty=!1,t._value=t.effect.run()),t._value}set value(t){this._setter(t)}}function Yc(e,t,n=!1){let s,r;const o=Q(e);return o?(s=e,r=He):(s=e.get,r=e.set),new zc(s,r,o||!r,n)}function Qc(e,...t){}function Jc(e,t){}function Xe(e,t,n,s){let r;try{r=s?e(...s):e()}catch(o){kt(o,t,n)}return r}function Ie(e,t,n,s){if(Q(e)){const o=Xe(e,t,n,s);return o&&dr(o)&&o.catch(i=>{kt(i,t,n)}),o}const r=[];for(let o=0;o>>1;Cn(ve[s])Ve&&ve.splice(t,1)}function xr(e){D(e)?Kt.push(...e):(!Qe||!Qe.includes(e,e.allowRecurse?vt+1:vt))&&Kt.push(e),yi()}function no(e,t=En?Ve+1:0){for(;tCn(n)-Cn(s)),vt=0;vte.id==null?1/0:e.id,eu=(e,t)=>{const n=Cn(e)-Cn(t);if(n===0){if(e.pre&&!t.pre)return-1;if(t.pre&&!e.pre)return 1}return n};function bi(e){qs=!1,En=!0,ve.sort(eu);const t=He;try{for(Ve=0;VeHt.emit(r,...o)),Kn=[]):typeof window<"u"&&window.HTMLElement&&!((s=(n=window.navigator)==null?void 0:n.userAgent)!=null&&s.includes("jsdom"))?((t.__VUE_DEVTOOLS_HOOK_REPLAY__=t.__VUE_DEVTOOLS_HOOK_REPLAY__||[]).push(o=>{vi(o,t)}),setTimeout(()=>{Ht||(t.__VUE_DEVTOOLS_HOOK_REPLAY__=null,Kn=[])},3e3)):Kn=[]}function tu(e,t,...n){if(e.isUnmounted)return;const s=e.vnode.props||ie;let r=n;const o=t.startsWith("update:"),i=o&&t.slice(7);if(i&&i in s){const f=`${i==="modelValue"?"model":i}Modifiers`,{number:a,trim:p}=s[f]||ie;p&&(r=n.map(y=>ae(y)?y.trim():y)),a&&(r=n.map(Zn))}let l,c=s[l=fn(t)]||s[l=fn(Te(t))];!c&&o&&(c=s[l=fn(Fe(t))]),c&&Ie(c,e,6,r);const u=s[l+"Once"];if(u){if(!e.emitted)e.emitted={};else if(e.emitted[l])return;e.emitted[l]=!0,Ie(u,e,6,r)}}function Ei(e,t,n=!1){const 
s=t.emitsCache,r=s.get(e);if(r!==void 0)return r;const o=e.emits;let i={},l=!1;if(!Q(e)){const c=u=>{const f=Ei(u,t,!0);f&&(l=!0,ue(i,f))};!n&&t.mixins.length&&t.mixins.forEach(c),e.extends&&c(e.extends),e.mixins&&e.mixins.forEach(c)}return!o&&!l?(le(e)&&s.set(e,null),null):(D(o)?o.forEach(c=>i[c]=null):ue(i,o),le(e)&&s.set(e,i),i)}function _s(e,t){return!e||!An(t)?!1:(t=t.slice(2).replace(/Once$/,""),ee(e,t[0].toLowerCase()+t.slice(1))||ee(e,Fe(t))||ee(e,t))}let me=null,ys=null;function wn(e){const t=me;return me=e,ys=e&&e.type.__scopeId||null,t}function nu(e){ys=e}function su(){ys=null}const ru=e=>Rr;function Rr(e,t=me,n){if(!t||e._n)return e;const s=(...r)=>{s._d&&Gs(-1);const o=wn(t);let i;try{i=e(...r)}finally{wn(o),s._d&&Gs(1)}return i};return s._n=!0,s._c=!0,s._d=!0,s}function Qn(e){const{type:t,vnode:n,proxy:s,withProxy:r,props:o,propsOptions:[i],slots:l,attrs:c,emit:u,render:f,renderCache:a,data:p,setupState:y,ctx:E,inheritAttrs:A}=e;let k,b;const g=wn(e);try{if(n.shapeFlag&4){const _=r||s;k=Oe(f.call(_,_,a,o,y,p,E)),b=c}else{const _=t;k=Oe(_.length>1?_(o,{attrs:c,slots:l,emit:u}):_(o,null)),b=t.props?c:iu(c)}}catch(_){pn.length=0,kt(_,e,1),k=fe(Ee)}let R=k;if(b&&A!==!1){const _=Object.keys(b),{shapeFlag:S}=R;_.length&&S&7&&(i&&_.some(fr)&&(b=lu(b,i)),R=We(R,b))}return n.dirs&&(R=We(R),R.dirs=R.dirs?R.dirs.concat(n.dirs):n.dirs),n.transition&&(R.transition=n.transition),k=R,wn(g),k}function ou(e){let t;for(let n=0;n{let t;for(const n in e)(n==="class"||n==="style"||An(n))&&((t||(t={}))[n]=e[n]);return t},lu=(e,t)=>{const n={};for(const s in e)(!fr(s)||!(s.slice(9)in t))&&(n[s]=e[s]);return n};function cu(e,t,n){const{props:s,children:r,component:o}=e,{props:i,children:l,patchFlag:c}=t,u=o.emitsOptions;if(t.dirs||t.transition)return!0;if(n&&c>=0){if(c&1024)return!0;if(c&16)return s?so(s,i,u):!!i;if(c&8){const f=t.dynamicProps;for(let a=0;ae.__isSuspense,uu={name:"Suspense",__isSuspense:!0,process(e,t,n,s,r,o,i,l,c,u){e==null?au(t,n,s,r,o,i,l,c,u):du(e,t,n,s,r,i,l,c,u)},hydrate:hu,create:Pr,normalize:pu},fu=uu;function xn(e,t){const n=e.props&&e.props[t];Q(n)&&n()}function au(e,t,n,s,r,o,i,l,c){const{p:u,o:{createElement:f}}=c,a=f("div"),p=e.suspense=Pr(e,r,s,t,a,n,o,i,l,c);u(null,p.pendingBranch=e.ssContent,a,null,s,p,o,i),p.deps>0?(xn(e,"onPending"),xn(e,"onFallback"),u(null,e.ssFallback,t,n,s,null,o,i),Vt(p,e.ssFallback)):p.resolve(!1,!0)}function du(e,t,n,s,r,o,i,l,{p:c,um:u,o:{createElement:f}}){const a=t.suspense=e.suspense;a.vnode=t,t.el=e.el;const p=t.ssContent,y=t.ssFallback,{activeBranch:E,pendingBranch:A,isInFallback:k,isHydrating:b}=a;if(A)a.pendingBranch=p,Be(p,A)?(c(A,p,a.hiddenContainer,null,r,a,o,i,l),a.deps<=0?a.resolve():k&&(c(E,y,n,s,r,null,o,i,l),Vt(a,y))):(a.pendingId++,b?(a.isHydrating=!1,a.activeBranch=A):u(A,r,a),a.deps=0,a.effects.length=0,a.hiddenContainer=f("div"),k?(c(null,p,a.hiddenContainer,null,r,a,o,i,l),a.deps<=0?a.resolve():(c(E,y,n,s,r,null,o,i,l),Vt(a,y))):E&&Be(p,E)?(c(E,p,n,s,r,a,o,i,l),a.resolve(!0)):(c(null,p,a.hiddenContainer,null,r,a,o,i,l),a.deps<=0&&a.resolve()));else if(E&&Be(p,E))c(E,p,n,s,r,a,o,i,l),Vt(a,p);else if(xn(t,"onPending"),a.pendingBranch=p,a.pendingId++,c(null,p,a.hiddenContainer,null,r,a,o,i,l),a.deps<=0)a.resolve();else{const{timeout:g,pendingId:R}=a;g>0?setTimeout(()=>{a.pendingId===R&&a.fallback(y)},g):g===0&&a.fallback(y)}}function Pr(e,t,n,s,r,o,i,l,c,u,f=!1){const{p:a,m:p,um:y,n:E,o:{parentNode:A,remove:k}}=u;let b;const g=gu(e);g&&t!=null&&t.pendingBranch&&(b=t.pendingId,t.deps++);const 
R=e.props?Gn(e.props.timeout):void 0,_={vnode:e,parent:t,parentComponent:n,isSVG:i,container:s,hiddenContainer:r,anchor:o,deps:0,pendingId:0,timeout:typeof R=="number"?R:-1,activeBranch:null,pendingBranch:null,isInFallback:!0,isHydrating:f,isUnmounted:!1,effects:[],resolve(S=!1,B=!1){const{vnode:N,activeBranch:x,pendingBranch:j,pendingId:U,effects:z,parentComponent:L,container:Y}=_;if(_.isHydrating)_.isHydrating=!1;else if(!S){const G=x&&j.transition&&j.transition.mode==="out-in";G&&(x.transition.afterLeave=()=>{U===_.pendingId&&p(j,Y,te,0)});let{anchor:te}=_;x&&(te=E(x),y(x,L,_,!0)),G||p(j,Y,te,0)}Vt(_,j),_.pendingBranch=null,_.isInFallback=!1;let $=_.parent,de=!1;for(;$;){if($.pendingBranch){$.effects.push(...z),de=!0;break}$=$.parent}de||xr(z),_.effects=[],g&&t&&t.pendingBranch&&b===t.pendingId&&(t.deps--,t.deps===0&&!B&&t.resolve()),xn(N,"onResolve")},fallback(S){if(!_.pendingBranch)return;const{vnode:B,activeBranch:N,parentComponent:x,container:j,isSVG:U}=_;xn(B,"onFallback");const z=E(N),L=()=>{_.isInFallback&&(a(null,S,j,z,x,null,U,l,c),Vt(_,S))},Y=S.transition&&S.transition.mode==="out-in";Y&&(N.transition.afterLeave=L),_.isInFallback=!0,y(N,x,null,!0),Y||L()},move(S,B,N){_.activeBranch&&p(_.activeBranch,S,B,N),_.container=S},next(){return _.activeBranch&&E(_.activeBranch)},registerDep(S,B){const N=!!_.pendingBranch;N&&_.deps++;const x=S.vnode.el;S.asyncDep.catch(j=>{kt(j,S,0)}).then(j=>{if(S.isUnmounted||_.isUnmounted||_.pendingId!==S.suspenseId)return;S.asyncResolved=!0;const{vnode:U}=S;er(S,j,!1),x&&(U.el=x);const z=!x&&S.subTree.el;B(S,U,A(x||S.subTree.el),x?null:E(S.subTree),_,i,c),z&&k(z),Tr(S,U.el),N&&--_.deps===0&&_.resolve()})},unmount(S,B){_.isUnmounted=!0,_.activeBranch&&y(_.activeBranch,n,S,B),_.pendingBranch&&y(_.pendingBranch,n,S,B)}};return _}function hu(e,t,n,s,r,o,i,l,c){const u=t.suspense=Pr(t,s,n,e.parentNode,document.createElement("div"),null,r,o,i,l,!0),f=c(e,u.pendingBranch=t.ssContent,n,u,o,i);return u.deps===0&&u.resolve(!1,!0),f}function pu(e){const{shapeFlag:t,children:n}=e,s=t&32;e.ssContent=ro(s?n.default:n),e.ssFallback=s?ro(n.fallback):fe(Ee)}function ro(e){let t;if(Q(e)){const n=Mt&&e._c;n&&(e._d=!1,xs()),e=e(),n&&(e._d=!0,t=Re,Xi())}return D(e)&&(e=ou(e)),e=Oe(e),t&&!e.dynamicChildren&&(e.dynamicChildren=t.filter(n=>n!==e)),e}function wi(e,t){t&&t.pendingBranch?D(e)?t.effects.push(...e):t.effects.push(e):xr(e)}function Vt(e,t){e.activeBranch=t;const{vnode:n,parentComponent:s}=e,r=n.el=t.el;s&&s.subTree===n&&(s.vnode.el=r,Tr(s,r))}function gu(e){var t;return((t=e.props)==null?void 0:t.suspensible)!=null&&e.props.suspensible!==!1}function mu(e,t){return kn(e,null,t)}function xi(e,t){return kn(e,null,{flush:"post"})}function _u(e,t){return kn(e,null,{flush:"sync"})}const Vn={};function at(e,t,n){return kn(e,t,n)}function kn(e,t,{immediate:n,deep:s,flush:r,onTrack:o,onTrigger:i}=ie){var l;const c=ei()===((l=pe)==null?void 0:l.scope)?pe:null;let u,f=!1,a=!1;if(ge(e)?(u=()=>e.value,f=bn(e)):xt(e)?(u=()=>e,s=!0):D(e)?(a=!0,f=e.some(_=>xt(_)||bn(_)),u=()=>e.map(_=>{if(ge(_))return _.value;if(xt(_))return Ct(_);if(Q(_))return Xe(_,c,2)})):Q(e)?t?u=()=>Xe(e,c,2):u=()=>{if(!(c&&c.isUnmounted))return p&&p(),Ie(e,c,3,[y])}:u=He,t&&s){const _=u;u=()=>Ct(_())}let p,y=_=>{p=g.onStop=()=>{Xe(_,c,4)}},E;if(zt)if(y=He,t?n&&Ie(t,c,3,[u(),a?[]:void 0,y]):u(),r==="sync"){const _=cl();E=_.__watcherHandles||(_.__watcherHandles=[])}else return He;let A=a?new Array(e.length).fill(Vn):Vn;const k=()=>{if(g.active)if(t){const 
_=g.run();(s||f||(a?_.some((S,B)=>Wt(S,A[B])):Wt(_,A)))&&(p&&p(),Ie(t,c,3,[_,A===Vn?void 0:a&&A[0]===Vn?[]:A,y]),A=_)}else g.run()};k.allowRecurse=!!t;let b;r==="sync"?b=k:r==="post"?b=()=>_e(k,c&&c.suspense):(k.pre=!0,c&&(k.id=c.uid),b=()=>ms(k));const g=new In(u,b);t?n?k():A=g.run():r==="post"?_e(g.run.bind(g),c&&c.suspense):g.run();const R=()=>{g.stop(),c&&c.scope&&ar(c.scope.effects,g)};return E&&E.push(R),R}function yu(e,t,n){const s=this.proxy,r=ae(e)?e.includes(".")?Ri(s,e):()=>s[e]:e.bind(s,s);let o;Q(t)?o=t:(o=t.handler,n=t);const i=pe;mt(this);const l=kn(r,o.bind(s),n);return i?mt(i):dt(),l}function Ri(e,t){const n=t.split(".");return()=>{let s=e;for(let r=0;r{Ct(n,t)});else if(Jo(e))for(const n in e)Ct(e[n],t);return e}function bu(e,t){const n=me;if(n===null)return e;const s=Ts(n)||n.proxy,r=e.dirs||(e.dirs=[]);for(let o=0;o{e.isMounted=!0}),Cs(()=>{e.isUnmounting=!0}),e}const ke=[Function,Array],Sr={mode:String,appear:Boolean,persisted:Boolean,onBeforeEnter:ke,onEnter:ke,onAfterEnter:ke,onEnterCancelled:ke,onBeforeLeave:ke,onLeave:ke,onAfterLeave:ke,onLeaveCancelled:ke,onBeforeAppear:ke,onAppear:ke,onAfterAppear:ke,onAppearCancelled:ke},vu={name:"BaseTransition",props:Sr,setup(e,{slots:t}){const n=et(),s=Ar();let r;return()=>{const o=t.default&&bs(t.default(),!0);if(!o||!o.length)return;let i=o[0];if(o.length>1){for(const A of o)if(A.type!==Ee){i=A;break}}const l=Z(e),{mode:c}=l;if(s.isLeaving)return Ms(i);const u=oo(i);if(!u)return Ms(i);const f=qt(u,l,s,n);St(u,f);const a=n.subTree,p=a&&oo(a);let y=!1;const{getTransitionKey:E}=u.type;if(E){const A=E();r===void 0?r=A:A!==r&&(r=A,y=!0)}if(p&&p.type!==Ee&&(!Be(u,p)||y)){const A=qt(p,l,s,n);if(St(p,A),c==="out-in")return s.isLeaving=!0,A.afterLeave=()=>{s.isLeaving=!1,n.update.active!==!1&&n.update()},Ms(i);c==="in-out"&&u.type!==Ee&&(A.delayLeave=(k,b,g)=>{const R=Pi(s,p);R[String(p.key)]=p,k._leaveCb=()=>{b(),k._leaveCb=void 0,delete f.delayedLeave},f.delayedLeave=g})}return i}}},Ti=vu;function Pi(e,t){const{leavingVNodes:n}=e;let s=n.get(t.type);return s||(s=Object.create(null),n.set(t.type,s)),s}function qt(e,t,n,s){const{appear:r,mode:o,persisted:i=!1,onBeforeEnter:l,onEnter:c,onAfterEnter:u,onEnterCancelled:f,onBeforeLeave:a,onLeave:p,onAfterLeave:y,onLeaveCancelled:E,onBeforeAppear:A,onAppear:k,onAfterAppear:b,onAppearCancelled:g}=t,R=String(e.key),_=Pi(n,e),S=(x,j)=>{x&&Ie(x,s,9,j)},B=(x,j)=>{const U=j[1];S(x,j),D(x)?x.every(z=>z.length<=1)&&U():x.length<=1&&U()},N={mode:o,persisted:i,beforeEnter(x){let j=l;if(!n.isMounted)if(r)j=A||l;else return;x._leaveCb&&x._leaveCb(!0);const U=_[R];U&&Be(e,U)&&U.el._leaveCb&&U.el._leaveCb(),S(j,[x])},enter(x){let j=c,U=u,z=f;if(!n.isMounted)if(r)j=k||c,U=b||u,z=g||f;else return;let L=!1;const Y=x._enterCb=$=>{L||(L=!0,$?S(z,[x]):S(U,[x]),N.delayedLeave&&N.delayedLeave(),x._enterCb=void 0)};j?B(j,[x,Y]):Y()},leave(x,j){const U=String(e.key);if(x._enterCb&&x._enterCb(!0),n.isUnmounting)return j();S(a,[x]);let z=!1;const L=x._leaveCb=Y=>{z||(z=!0,j(),Y?S(E,[x]):S(y,[x]),x._leaveCb=void 0,_[U]===e&&delete _[U])};_[U]=e,p?B(p,[x,L]):L()},clone(x){return qt(x,t,n,s)}};return N}function Ms(e){if(Nn(e))return e=We(e),e.children=null,e}function oo(e){return Nn(e)?e.children?e.children[0]:void 0:e}function St(e,t){e.shapeFlag&6&&e.component?St(e.component.subTree,t):e.shapeFlag&128?(e.ssContent.transition=t.clone(e.ssContent),e.ssFallback.transition=t.clone(e.ssFallback)):e.transition=t}function bs(e,t=!1,n){let s=[],r=0;for(let o=0;o1)for(let o=0;oue({name:e.name},t,{setup:e}))():e}const 
Tt=e=>!!e.type.__asyncLoader;function Eu(e){Q(e)&&(e={loader:e});const{loader:t,loadingComponent:n,errorComponent:s,delay:r=200,timeout:o,suspensible:i=!0,onError:l}=e;let c=null,u,f=0;const a=()=>(f++,c=null,p()),p=()=>{let y;return c||(y=c=t().catch(E=>{if(E=E instanceof Error?E:new Error(String(E)),l)return new Promise((A,k)=>{l(E,()=>A(a()),()=>k(E),f+1)});throw E}).then(E=>y!==c&&c?c:(E&&(E.__esModule||E[Symbol.toStringTag]==="Module")&&(E=E.default),u=E,E)))};return Fn({name:"AsyncComponentWrapper",__asyncLoader:p,get __asyncResolved(){return u},setup(){const y=pe;if(u)return()=>Is(u,y);const E=g=>{c=null,kt(g,y,13,!s)};if(i&&y.suspense||zt)return p().then(g=>()=>Is(g,y)).catch(g=>(E(g),()=>s?fe(s,{error:g}):null));const A=Rt(!1),k=Rt(),b=Rt(!!r);return r&&setTimeout(()=>{b.value=!1},r),o!=null&&setTimeout(()=>{if(!A.value&&!k.value){const g=new Error(`Async component timed out after ${o}ms.`);E(g),k.value=g}},o),p().then(()=>{A.value=!0,y.parent&&Nn(y.parent.vnode)&&ms(y.parent.update)}).catch(g=>{E(g),k.value=g}),()=>{if(A.value&&u)return Is(u,y);if(k.value&&s)return fe(s,{error:k.value});if(n&&!b.value)return fe(n)}}})}function Is(e,t){const{ref:n,props:s,children:r,ce:o}=t.vnode,i=fe(e,s,r);return i.ref=n,i.ce=o,delete t.vnode.ce,i}const Nn=e=>e.type.__isKeepAlive,Cu={name:"KeepAlive",__isKeepAlive:!0,props:{include:[String,RegExp,Array],exclude:[String,RegExp,Array],max:[String,Number]},setup(e,{slots:t}){const n=et(),s=n.ctx;if(!s.renderer)return()=>{const g=t.default&&t.default();return g&&g.length===1?g[0]:g};const r=new Map,o=new Set;let i=null;const l=n.suspense,{renderer:{p:c,m:u,um:f,o:{createElement:a}}}=s,p=a("div");s.activate=(g,R,_,S,B)=>{const N=g.component;u(g,R,_,0,l),c(N.vnode,g,R,_,N,l,S,g.slotScopeIds,B),_e(()=>{N.isDeactivated=!1,N.a&&Ut(N.a);const x=g.props&&g.props.onVnodeMounted;x&&xe(x,N.parent,g)},l)},s.deactivate=g=>{const R=g.component;u(g,p,null,1,l),_e(()=>{R.da&&Ut(R.da);const _=g.props&&g.props.onVnodeUnmounted;_&&xe(_,R.parent,g),R.isDeactivated=!0},l)};function y(g){ks(g),f(g,n,l,!0)}function E(g){r.forEach((R,_)=>{const S=nr(R.type);S&&(!g||!g(S))&&A(_)})}function A(g){const R=r.get(g);!i||!Be(R,i)?y(R):i&&ks(i),r.delete(g),o.delete(g)}at(()=>[e.include,e.exclude],([g,R])=>{g&&E(_=>ln(g,_)),R&&E(_=>!ln(R,_))},{flush:"post",deep:!0});let k=null;const b=()=>{k!=null&&r.set(k,Fs(n.subTree))};return Ln(b),Es(b),Cs(()=>{r.forEach(g=>{const{subTree:R,suspense:_}=n,S=Fs(R);if(g.type===S.type&&g.key===S.key){ks(S);const B=S.component.da;B&&_e(B,_);return}y(g)})}),()=>{if(k=null,!t.default)return null;const g=t.default(),R=g[0];if(g.length>1)return i=null,g;if(!gt(R)||!(R.shapeFlag&4)&&!(R.shapeFlag&128))return i=null,R;let _=Fs(R);const S=_.type,B=nr(Tt(_)?_.type.__asyncResolved||{}:S),{include:N,exclude:x,max:j}=e;if(N&&(!B||!ln(N,B))||x&&B&&ln(x,B))return i=_,R;const U=_.key==null?S:_.key,z=r.get(U);return _.el&&(_=We(_),R.shapeFlag&128&&(R.ssContent=_)),k=U,z?(_.el=z.el,_.component=z.component,_.transition&&St(_,_.transition),_.shapeFlag|=512,o.delete(U),o.add(U)):(o.add(U),j&&o.size>parseInt(j,10)&&A(o.values().next().value)),_.shapeFlag|=256,i=_,Ci(R.type)?R:_}}},wu=Cu;function ln(e,t){return D(e)?e.some(n=>ln(n,t)):ae(e)?e.split(",").includes(t):Kl(e)?e.test(t):!1}function Ai(e,t){Oi(e,"a",t)}function Si(e,t){Oi(e,"da",t)}function Oi(e,t,n=pe){const s=e.__wdc||(e.__wdc=()=>{let r=n;for(;r;){if(r.isDeactivated)return;r=r.parent}return e()});if(vs(t,s,n),n){let r=n.parent;for(;r&&r.parent;)Nn(r.parent.vnode)&&xu(s,t,n,r),r=r.parent}}function 
xu(e,t,n,s){const r=vs(t,e,s,!0);ws(()=>{ar(s[t],r)},n)}function ks(e){e.shapeFlag&=-257,e.shapeFlag&=-513}function Fs(e){return e.shapeFlag&128?e.ssContent:e}function vs(e,t,n=pe,s=!1){if(n){const r=n[e]||(n[e]=[]),o=t.__weh||(t.__weh=(...i)=>{if(n.isUnmounted)return;Zt(),mt(n);const l=Ie(t,n,e,i);return dt(),Gt(),l});return s?r.unshift(o):r.push(o),o}}const Ge=e=>(t,n=pe)=>(!zt||e==="sp")&&vs(e,(...s)=>t(...s),n),Mi=Ge("bm"),Ln=Ge("m"),Ii=Ge("bu"),Es=Ge("u"),Cs=Ge("bum"),ws=Ge("um"),ki=Ge("sp"),Fi=Ge("rtg"),Ni=Ge("rtc");function Li(e,t=pe){vs("ec",e,t)}const Or="components",Ru="directives";function Tu(e,t){return Mr(Or,e,!0,t)||e}const Bi=Symbol.for("v-ndc");function Pu(e){return ae(e)?Mr(Or,e,!1)||e:e||Bi}function Au(e){return Mr(Ru,e)}function Mr(e,t,n=!0,s=!1){const r=me||pe;if(r){const o=r.type;if(e===Or){const l=nr(o,!1);if(l&&(l===t||l===Te(t)||l===Sn(Te(t))))return o}const i=io(r[e]||o[e],t)||io(r.appContext[e],t);return!i&&s?o:i}}function io(e,t){return e&&(e[t]||e[Te(t)]||e[Sn(Te(t))])}function Su(e,t,n,s){let r;const o=n&&n[s];if(D(e)||ae(e)){r=new Array(e.length);for(let i=0,l=e.length;it(i,l,void 0,o&&o[l]));else{const i=Object.keys(e);r=new Array(i.length);for(let l=0,c=i.length;l{const o=s.fn(...r);return o&&(o.key=s.key),o}:s.fn)}return e}function Mu(e,t,n={},s,r){if(me.isCE||me.parent&&Tt(me.parent)&&me.parent.isCE)return t!=="default"&&(n.name=t),fe("slot",n,s&&s());let o=e[t];o&&o._c&&(o._d=!1),xs();const i=o&&Hi(o(n)),l=Nr(ye,{key:n.key||i&&i.key||`_${t}`},i||(s?s():[]),i&&e._===1?64:-2);return!r&&l.scopeId&&(l.slotScopeIds=[l.scopeId+"-s"]),o&&o._c&&(o._d=!0),l}function Hi(e){return e.some(t=>gt(t)?!(t.type===Ee||t.type===ye&&!Hi(t.children)):!0)?e:null}function Iu(e,t){const n={};for(const s in e)n[t&&/[A-Z]/.test(s)?`on:${s}`:fn(s)]=e[s];return n}const zs=e=>e?sl(e)?Ts(e)||e.proxy:zs(e.parent):null,an=ue(Object.create(null),{$:e=>e,$el:e=>e.vnode.el,$data:e=>e.data,$props:e=>e.props,$attrs:e=>e.attrs,$slots:e=>e.slots,$refs:e=>e.refs,$parent:e=>zs(e.parent),$root:e=>zs(e.root),$emit:e=>e.emit,$options:e=>Ir(e),$forceUpdate:e=>e.f||(e.f=()=>ms(e.update)),$nextTick:e=>e.n||(e.n=gs.bind(e.proxy)),$watch:e=>yu.bind(e)}),Ns=(e,t)=>e!==ie&&!e.__isScriptSetup&&ee(e,t),Ys={get({_:e},t){const{ctx:n,setupState:s,data:r,props:o,accessCache:i,type:l,appContext:c}=e;let u;if(t[0]!=="$"){const y=i[t];if(y!==void 0)switch(y){case 1:return s[t];case 2:return r[t];case 4:return n[t];case 3:return o[t]}else{if(Ns(s,t))return i[t]=1,s[t];if(r!==ie&&ee(r,t))return i[t]=2,r[t];if((u=e.propsOptions[0])&&ee(u,t))return i[t]=3,o[t];if(n!==ie&&ee(n,t))return i[t]=4,n[t];Qs&&(i[t]=0)}}const f=an[t];let a,p;if(f)return t==="$attrs"&&Pe(e,"get",t),f(e);if((a=l.__cssModules)&&(a=a[t]))return a;if(n!==ie&&ee(n,t))return i[t]=4,n[t];if(p=c.config.globalProperties,ee(p,t))return p[t]},set({_:e},t,n){const{data:s,setupState:r,ctx:o}=e;return Ns(r,t)?(r[t]=n,!0):s!==ie&&ee(s,t)?(s[t]=n,!0):ee(e.props,t)||t[0]==="$"&&t.slice(1)in e?!1:(o[t]=n,!0)},has({_:{data:e,setupState:t,accessCache:n,ctx:s,appContext:r,propsOptions:o}},i){let l;return!!n[i]||e!==ie&&ee(e,i)||Ns(t,i)||(l=o[0])&&ee(l,i)||ee(s,i)||ee(an,i)||ee(r.config.globalProperties,i)},defineProperty(e,t,n){return n.get!=null?e._.accessCache[t]=0:ee(n,"value")&&this.set(e,t,n.value,null),Reflect.defineProperty(e,t,n)}},ku=ue({},Ys,{get(e,t){if(t!==Symbol.unscopables)return Ys.get(e,t,e)},has(e,t){return t[0]!=="_"&&!Yl(t)}});function Fu(){return null}function Nu(){return null}function Lu(e){}function Bu(e){}function Hu(){return null}function 
Du(){}function $u(e,t){return null}function ju(){return Di().slots}function Uu(){return Di().attrs}function Ku(e,t,n){const s=et();if(n&&n.local){const r=Rt(e[t]);return at(()=>e[t],o=>r.value=o),at(r,o=>{o!==e[t]&&s.emit(`update:${t}`,o)}),r}else return{__v_isRef:!0,get value(){return e[t]},set value(r){s.emit(`update:${t}`,r)}}}function Di(){const e=et();return e.setupContext||(e.setupContext=il(e))}function Rn(e){return D(e)?e.reduce((t,n)=>(t[n]=null,t),{}):e}function Vu(e,t){const n=Rn(e);for(const s in t){if(s.startsWith("__skip"))continue;let r=n[s];r?D(r)||Q(r)?r=n[s]={type:r,default:t[s]}:r.default=t[s]:r===null&&(r=n[s]={default:t[s]}),r&&t[`__skip_${s}`]&&(r.skipFactory=!0)}return n}function Wu(e,t){return!e||!t?e||t:D(e)&&D(t)?e.concat(t):ue({},Rn(e),Rn(t))}function qu(e,t){const n={};for(const s in e)t.includes(s)||Object.defineProperty(n,s,{enumerable:!0,get:()=>e[s]});return n}function zu(e){const t=et();let n=e();return dt(),dr(n)&&(n=n.catch(s=>{throw mt(t),s})),[n,()=>mt(t)]}let Qs=!0;function Yu(e){const t=Ir(e),n=e.proxy,s=e.ctx;Qs=!1,t.beforeCreate&&lo(t.beforeCreate,e,"bc");const{data:r,computed:o,methods:i,watch:l,provide:c,inject:u,created:f,beforeMount:a,mounted:p,beforeUpdate:y,updated:E,activated:A,deactivated:k,beforeDestroy:b,beforeUnmount:g,destroyed:R,unmounted:_,render:S,renderTracked:B,renderTriggered:N,errorCaptured:x,serverPrefetch:j,expose:U,inheritAttrs:z,components:L,directives:Y,filters:$}=t;if(u&&Qu(u,s,null),i)for(const te in i){const ne=i[te];Q(ne)&&(s[te]=ne.bind(n))}if(r){const te=r.call(n,n);le(te)&&(e.data=en(te))}if(Qs=!0,o)for(const te in o){const ne=o[te],qe=Q(ne)?ne.bind(n,n):Q(ne.get)?ne.get.bind(n,n):He,tt=!Q(ne)&&Q(ne.set)?ne.set.bind(n):He,je=Me({get:qe,set:tt});Object.defineProperty(s,te,{enumerable:!0,configurable:!0,get:()=>je.value,set:we=>je.value=we})}if(l)for(const te in l)$i(l[te],s,n,te);if(c){const te=Q(c)?c.call(n):c;Reflect.ownKeys(te).forEach(ne=>{dn(ne,te[ne])})}f&&lo(f,e,"c");function G(te,ne){D(ne)?ne.forEach(qe=>te(qe.bind(n))):ne&&te(ne.bind(n))}if(G(Mi,a),G(Ln,p),G(Ii,y),G(Es,E),G(Ai,A),G(Si,k),G(Li,x),G(Ni,B),G(Fi,N),G(Cs,g),G(ws,_),G(ki,j),D(U))if(U.length){const te=e.exposed||(e.exposed={});U.forEach(ne=>{Object.defineProperty(te,ne,{get:()=>n[ne],set:qe=>n[ne]=qe})})}else e.exposed||(e.exposed={});S&&e.render===He&&(e.render=S),z!=null&&(e.inheritAttrs=z),L&&(e.components=L),Y&&(e.directives=Y)}function Qu(e,t,n=He){D(e)&&(e=Js(e));for(const s in e){const r=e[s];let o;le(r)?"default"in r?o=De(r.from||s,r.default,!0):o=De(r.from||s):o=De(r),ge(o)?Object.defineProperty(t,s,{enumerable:!0,configurable:!0,get:()=>o.value,set:i=>o.value=i}):t[s]=o}}function lo(e,t,n){Ie(D(e)?e.map(s=>s.bind(t.proxy)):e.bind(t.proxy),t,n)}function $i(e,t,n,s){const r=s.includes(".")?Ri(n,s):()=>n[s];if(ae(e)){const o=t[e];Q(o)&&at(r,o)}else if(Q(e))at(r,e.bind(n));else if(le(e))if(D(e))e.forEach(o=>$i(o,t,n,s));else{const o=Q(e.handler)?e.handler.bind(n):t[e.handler];Q(o)&&at(r,o,e)}}function Ir(e){const t=e.type,{mixins:n,extends:s}=t,{mixins:r,optionsCache:o,config:{optionMergeStrategies:i}}=e.appContext,l=o.get(t);let c;return l?c=l:!r.length&&!n&&!s?c=t:(c={},r.length&&r.forEach(u=>ns(c,u,i,!0)),ns(c,t,i)),le(t)&&o.set(t,c),c}function ns(e,t,n,s=!1){const{mixins:r,extends:o}=t;o&&ns(e,o,n,!0),r&&r.forEach(i=>ns(e,i,n,!0));for(const i in t)if(!(s&&i==="expose")){const l=Ju[i]||n&&n[i];e[i]=l?l(e[i],t[i]):t[i]}return e}const 
Ju={data:co,props:uo,emits:uo,methods:cn,computed:cn,beforeCreate:Ce,created:Ce,beforeMount:Ce,mounted:Ce,beforeUpdate:Ce,updated:Ce,beforeDestroy:Ce,beforeUnmount:Ce,destroyed:Ce,unmounted:Ce,activated:Ce,deactivated:Ce,errorCaptured:Ce,serverPrefetch:Ce,components:cn,directives:cn,watch:Zu,provide:co,inject:Xu};function co(e,t){return t?e?function(){return ue(Q(e)?e.call(this,this):e,Q(t)?t.call(this,this):t)}:t:e}function Xu(e,t){return cn(Js(e),Js(t))}function Js(e){if(D(e)){const t={};for(let n=0;n1)return n&&Q(t)?t.call(s&&s.proxy):t}}function tf(){return!!(pe||me||Tn)}function nf(e,t,n,s=!1){const r={},o={};Xn(o,Rs,1),e.propsDefaults=Object.create(null),Ui(e,t,r,o);for(const i in e.propsOptions[0])i in r||(r[i]=void 0);n?e.props=s?r:hi(r):e.type.props?e.props=r:e.props=o,e.attrs=o}function sf(e,t,n,s){const{props:r,attrs:o,vnode:{patchFlag:i}}=e,l=Z(r),[c]=e.propsOptions;let u=!1;if((s||i>0)&&!(i&16)){if(i&8){const f=e.vnode.dynamicProps;for(let a=0;a{c=!0;const[p,y]=Ki(a,t,!0);ue(i,p),y&&l.push(...y)};!n&&t.mixins.length&&t.mixins.forEach(f),e.extends&&f(e.extends),e.mixins&&e.mixins.forEach(f)}if(!o&&!c)return le(e)&&s.set(e,$t),$t;if(D(o))for(let f=0;f-1,y[1]=A<0||E-1||ee(y,"default"))&&l.push(a)}}}const u=[i,l];return le(e)&&s.set(e,u),u}function fo(e){return e[0]!=="$"}function ao(e){const t=e&&e.toString().match(/^\s*(function|class) (\w+)/);return t?t[2]:e===null?"null":""}function ho(e,t){return ao(e)===ao(t)}function po(e,t){return D(t)?t.findIndex(n=>ho(n,e)):Q(t)&&ho(t,e)?0:-1}const Vi=e=>e[0]==="_"||e==="$stable",kr=e=>D(e)?e.map(Oe):[Oe(e)],rf=(e,t,n)=>{if(t._n)return t;const s=Rr((...r)=>kr(t(...r)),n);return s._c=!1,s},Wi=(e,t,n)=>{const s=e._ctx;for(const r in e){if(Vi(r))continue;const o=e[r];if(Q(o))t[r]=rf(r,o,s);else if(o!=null){const i=kr(o);t[r]=()=>i}}},qi=(e,t)=>{const n=kr(t);e.slots.default=()=>n},of=(e,t)=>{if(e.vnode.shapeFlag&32){const n=t._;n?(e.slots=Z(t),Xn(t,"_",n)):Wi(t,e.slots={})}else e.slots={},t&&qi(e,t);Xn(e.slots,Rs,1)},lf=(e,t,n)=>{const{vnode:s,slots:r}=e;let o=!0,i=ie;if(s.shapeFlag&32){const l=t._;l?n&&l===1?o=!1:(ue(r,t),!n&&l===1&&delete r._):(o=!t.$stable,Wi(t,r)),i=t}else t&&(qi(e,t),i={default:1});if(o)for(const l in r)!Vi(l)&&!(l in i)&&delete r[l]};function ss(e,t,n,s,r=!1){if(D(e)){e.forEach((p,y)=>ss(p,t&&(D(t)?t[y]:t),n,s,r));return}if(Tt(s)&&!r)return;const o=s.shapeFlag&4?Ts(s.component)||s.component.proxy:s.el,i=r?null:o,{i:l,r:c}=e,u=t&&t.r,f=l.refs===ie?l.refs={}:l.refs,a=l.setupState;if(u!=null&&u!==c&&(ae(u)?(f[u]=null,ee(a,u)&&(a[u]=null)):ge(u)&&(u.value=null)),Q(c))Xe(c,l,12,[i,f]);else{const p=ae(c),y=ge(c);if(p||y){const E=()=>{if(e.f){const A=p?ee(a,c)?a[c]:f[c]:c.value;r?D(A)&&ar(A,o):D(A)?A.includes(o)||A.push(o):p?(f[c]=[o],ee(a,c)&&(a[c]=f[c])):(c.value=[o],e.k&&(f[e.k]=c.value))}else p?(f[c]=i,ee(a,c)&&(a[c]=i)):y&&(c.value=i,e.k&&(f[e.k]=i))};i?(E.id=-1,_e(E,n)):E()}}}let st=!1;const Wn=e=>/svg/.test(e.namespaceURI)&&e.tagName!=="foreignObject",qn=e=>e.nodeType===8;function cf(e){const{mt:t,p:n,o:{patchProp:s,createText:r,nextSibling:o,parentNode:i,remove:l,insert:c,createComment:u}}=e,f=(b,g)=>{if(!g.hasChildNodes()){n(null,b,g),ts(),g._vnode=b;return}st=!1,a(g.firstChild,b,null,null,null),ts(),g._vnode=b,st&&console.error("Hydration completed but contains mismatches.")},a=(b,g,R,_,S,B=!1)=>{const N=qn(b)&&b.data==="[",x=()=>A(b,g,R,_,S,N),{type:j,ref:U,shapeFlag:z,patchFlag:L}=g;let Y=b.nodeType;g.el=b,L===-2&&(B=!1,g.dynamicChildren=null);let $=null;switch(j){case 
Ot:Y!==3?g.children===""?(c(g.el=r(""),i(b),b),$=b):$=x():(b.data!==g.children&&(st=!0,b.data=g.children),$=o(b));break;case Ee:Y!==8||N?$=x():$=o(b);break;case Pt:if(N&&(b=o(b),Y=b.nodeType),Y===1||Y===3){$=b;const de=!g.children.length;for(let G=0;G{B=B||!!g.dynamicChildren;const{type:N,props:x,patchFlag:j,shapeFlag:U,dirs:z}=g,L=N==="input"&&z||N==="option";if(L||j!==-1){if(z&&Ke(g,null,R,"created"),x)if(L||!B||j&48)for(const $ in x)(L&&$.endsWith("value")||An($)&&!un($))&&s(b,$,null,x[$],!1,void 0,R);else x.onClick&&s(b,"onClick",null,x.onClick,!1,void 0,R);let Y;if((Y=x&&x.onVnodeBeforeMount)&&xe(Y,R,g),z&&Ke(g,null,R,"beforeMount"),((Y=x&&x.onVnodeMounted)||z)&&wi(()=>{Y&&xe(Y,R,g),z&&Ke(g,null,R,"mounted")},_),U&16&&!(x&&(x.innerHTML||x.textContent))){let $=y(b.firstChild,g,b,R,_,S,B);for(;$;){st=!0;const de=$;$=$.nextSibling,l(de)}}else U&8&&b.textContent!==g.children&&(st=!0,b.textContent=g.children)}return b.nextSibling},y=(b,g,R,_,S,B,N)=>{N=N||!!g.dynamicChildren;const x=g.children,j=x.length;for(let U=0;U{const{slotScopeIds:N}=g;N&&(S=S?S.concat(N):N);const x=i(b),j=y(o(b),g,x,R,_,S,B);return j&&qn(j)&&j.data==="]"?o(g.anchor=j):(st=!0,c(g.anchor=u("]"),x,j),j)},A=(b,g,R,_,S,B)=>{if(st=!0,g.el=null,B){const j=k(b);for(;;){const U=o(b);if(U&&U!==j)l(U);else break}}const N=o(b),x=i(b);return l(b),n(null,g,x,N,R,_,Wn(x),S),N},k=b=>{let g=0;for(;b;)if(b=o(b),b&&qn(b)&&(b.data==="["&&g++,b.data==="]")){if(g===0)return o(b);g--}return b};return[f,a]}const _e=wi;function zi(e){return Qi(e)}function Yi(e){return Qi(e,cf)}function Qi(e,t){const n=Us();n.__VUE__=!0;const{insert:s,remove:r,patchProp:o,createElement:i,createText:l,createComment:c,setText:u,setElementText:f,parentNode:a,nextSibling:p,setScopeId:y=He,insertStaticContent:E}=e,A=(d,h,m,v=null,w=null,T=null,F=!1,O=null,M=!!h.dynamicChildren)=>{if(d===h)return;d&&!Be(d,h)&&(v=C(d),we(d,w,T,!0),d=null),h.patchFlag===-2&&(M=!1,h.dynamicChildren=null);const{type:P,ref:W,shapeFlag:K}=h;switch(P){case Ot:k(d,h,m,v);break;case Ee:b(d,h,m,v);break;case Pt:d==null&&g(h,m,v,F);break;case ye:L(d,h,m,v,w,T,F,O,M);break;default:K&1?S(d,h,m,v,w,T,F,O,M):K&6?Y(d,h,m,v,w,T,F,O,M):(K&64||K&128)&&P.process(d,h,m,v,w,T,F,O,M,I)}W!=null&&w&&ss(W,d&&d.ref,T,h||d,!h)},k=(d,h,m,v)=>{if(d==null)s(h.el=l(h.children),m,v);else{const w=h.el=d.el;h.children!==d.children&&u(w,h.children)}},b=(d,h,m,v)=>{d==null?s(h.el=c(h.children||""),m,v):h.el=d.el},g=(d,h,m,v)=>{[d.el,d.anchor]=E(d.children,h,m,v,d.el,d.anchor)},R=({el:d,anchor:h},m,v)=>{let w;for(;d&&d!==h;)w=p(d),s(d,m,v),d=w;s(h,m,v)},_=({el:d,anchor:h})=>{let m;for(;d&&d!==h;)m=p(d),r(d),d=m;r(h)},S=(d,h,m,v,w,T,F,O,M)=>{F=F||h.type==="svg",d==null?B(h,m,v,w,T,F,O,M):j(d,h,w,T,F,O,M)},B=(d,h,m,v,w,T,F,O)=>{let M,P;const{type:W,props:K,shapeFlag:q,transition:J,dirs:X}=d;if(M=d.el=i(d.type,T,K&&K.is,K),q&8?f(M,d.children):q&16&&x(d.children,M,null,v,w,T&&W!=="foreignObject",F,O),X&&Ke(d,null,v,"created"),N(M,d,d.scopeId,F,v),K){for(const oe in K)oe!=="value"&&!un(oe)&&o(M,oe,null,K[oe],T,d.children,v,w,be);"value"in K&&o(M,"value",null,K.value),(P=K.onVnodeBeforeMount)&&xe(P,v,d)}X&&Ke(d,null,v,"beforeMount");const ce=(!w||w&&!w.pendingBranch)&&J&&!J.persisted;ce&&J.beforeEnter(M),s(M,h,m),((P=K&&K.onVnodeMounted)||ce||X)&&_e(()=>{P&&xe(P,v,d),ce&&J.enter(M),X&&Ke(d,null,v,"mounted")},w)},N=(d,h,m,v,w)=>{if(m&&y(d,m),v)for(let T=0;T{for(let P=M;P{const O=h.el=d.el;let{patchFlag:M,dynamicChildren:P,dirs:W}=h;M|=d.patchFlag&16;const K=d.props||ie,q=h.props||ie;let 
J;m&&yt(m,!1),(J=q.onVnodeBeforeUpdate)&&xe(J,m,h,d),W&&Ke(h,d,m,"beforeUpdate"),m&&yt(m,!0);const X=w&&h.type!=="foreignObject";if(P?U(d.dynamicChildren,P,O,m,v,X,T):F||ne(d,h,O,null,m,v,X,T,!1),M>0){if(M&16)z(O,h,K,q,m,v,w);else if(M&2&&K.class!==q.class&&o(O,"class",null,q.class,w),M&4&&o(O,"style",K.style,q.style,w),M&8){const ce=h.dynamicProps;for(let oe=0;oe{J&&xe(J,m,h,d),W&&Ke(h,d,m,"updated")},v)},U=(d,h,m,v,w,T,F)=>{for(let O=0;O{if(m!==v){if(m!==ie)for(const O in m)!un(O)&&!(O in v)&&o(d,O,m[O],null,F,h.children,w,T,be);for(const O in v){if(un(O))continue;const M=v[O],P=m[O];M!==P&&O!=="value"&&o(d,O,P,M,F,h.children,w,T,be)}"value"in v&&o(d,"value",m.value,v.value)}},L=(d,h,m,v,w,T,F,O,M)=>{const P=h.el=d?d.el:l(""),W=h.anchor=d?d.anchor:l("");let{patchFlag:K,dynamicChildren:q,slotScopeIds:J}=h;J&&(O=O?O.concat(J):J),d==null?(s(P,m,v),s(W,m,v),x(h.children,m,W,w,T,F,O,M)):K>0&&K&64&&q&&d.dynamicChildren?(U(d.dynamicChildren,q,m,w,T,F,O),(h.key!=null||w&&h===w.subTree)&&Fr(d,h,!0)):ne(d,h,m,W,w,T,F,O,M)},Y=(d,h,m,v,w,T,F,O,M)=>{h.slotScopeIds=O,d==null?h.shapeFlag&512?w.ctx.activate(h,m,v,F,M):$(h,m,v,w,T,F,M):de(d,h,M)},$=(d,h,m,v,w,T,F)=>{const O=d.component=nl(d,v,w);if(Nn(d)&&(O.ctx.renderer=I),rl(O),O.asyncDep){if(w&&w.registerDep(O,G),!d.el){const M=O.subTree=fe(Ee);b(null,M,h,m)}return}G(O,d,h,m,w,T,F)},de=(d,h,m)=>{const v=h.component=d.component;if(cu(d,h,m))if(v.asyncDep&&!v.asyncResolved){te(v,h,m);return}else v.next=h,Gc(v.update),v.update();else h.el=d.el,v.vnode=h},G=(d,h,m,v,w,T,F)=>{const O=()=>{if(d.isMounted){let{next:W,bu:K,u:q,parent:J,vnode:X}=d,ce=W,oe;yt(d,!1),W?(W.el=X.el,te(d,W,F)):W=X,K&&Ut(K),(oe=W.props&&W.props.onVnodeBeforeUpdate)&&xe(oe,J,W,X),yt(d,!0);const he=Qn(d),Ne=d.subTree;d.subTree=he,A(Ne,he,a(Ne.el),C(Ne),d,w,T),W.el=he.el,ce===null&&Tr(d,he.el),q&&_e(q,w),(oe=W.props&&W.props.onVnodeUpdated)&&_e(()=>xe(oe,J,W,X),w)}else{let W;const{el:K,props:q}=h,{bm:J,m:X,parent:ce}=d,oe=Tt(h);if(yt(d,!1),J&&Ut(J),!oe&&(W=q&&q.onVnodeBeforeMount)&&xe(W,ce,h),yt(d,!0),K&&se){const he=()=>{d.subTree=Qn(d),se(K,d.subTree,d,w,null)};oe?h.type.__asyncLoader().then(()=>!d.isUnmounted&&he()):he()}else{const he=d.subTree=Qn(d);A(null,he,m,v,d,w,T),h.el=he.el}if(X&&_e(X,w),!oe&&(W=q&&q.onVnodeMounted)){const he=h;_e(()=>xe(W,ce,he),w)}(h.shapeFlag&256||ce&&Tt(ce.vnode)&&ce.vnode.shapeFlag&256)&&d.a&&_e(d.a,w),d.isMounted=!0,h=m=v=null}},M=d.effect=new In(O,()=>ms(P),d.scope),P=d.update=()=>M.run();P.id=d.uid,yt(d,!0),P()},te=(d,h,m)=>{h.component=d;const v=d.vnode.props;d.vnode=h,d.next=null,sf(d,h.props,v,m),lf(d,h.children,m),Zt(),no(),Gt()},ne=(d,h,m,v,w,T,F,O,M=!1)=>{const P=d&&d.children,W=d?d.shapeFlag:0,K=h.children,{patchFlag:q,shapeFlag:J}=h;if(q>0){if(q&128){tt(P,K,m,v,w,T,F,O,M);return}else if(q&256){qe(P,K,m,v,w,T,F,O,M);return}}J&8?(W&16&&be(P,w,T),K!==P&&f(m,K)):W&16?J&16?tt(P,K,m,v,w,T,F,O,M):be(P,w,T,!0):(W&8&&f(m,""),J&16&&x(K,m,v,w,T,F,O,M))},qe=(d,h,m,v,w,T,F,O,M)=>{d=d||$t,h=h||$t;const P=d.length,W=h.length,K=Math.min(P,W);let q;for(q=0;qW?be(d,w,T,!0,!1,K):x(h,m,v,w,T,F,O,M,K)},tt=(d,h,m,v,w,T,F,O,M)=>{let P=0;const W=h.length;let K=d.length-1,q=W-1;for(;P<=K&&P<=q;){const J=d[P],X=h[P]=M?lt(h[P]):Oe(h[P]);if(Be(J,X))A(J,X,m,null,w,T,F,O,M);else break;P++}for(;P<=K&&P<=q;){const J=d[K],X=h[q]=M?lt(h[q]):Oe(h[q]);if(Be(J,X))A(J,X,m,null,w,T,F,O,M);else break;K--,q--}if(P>K){if(P<=q){const J=q+1,X=Jq)for(;P<=K;)we(d[P],w,T,!0),P++;else{const J=P,X=P,ce=new Map;for(P=X;P<=q;P++){const 
Ae=h[P]=M?lt(h[P]):Oe(h[P]);Ae.key!=null&&ce.set(Ae.key,P)}let oe,he=0;const Ne=q-X+1;let Lt=!1,Vr=0;const tn=new Array(Ne);for(P=0;P=Ne){we(Ae,w,T,!0);continue}let Ue;if(Ae.key!=null)Ue=ce.get(Ae.key);else for(oe=X;oe<=q;oe++)if(tn[oe-X]===0&&Be(Ae,h[oe])){Ue=oe;break}Ue===void 0?we(Ae,w,T,!0):(tn[Ue-X]=P+1,Ue>=Vr?Vr=Ue:Lt=!0,A(Ae,h[Ue],m,null,w,T,F,O,M),he++)}const Wr=Lt?uf(tn):$t;for(oe=Wr.length-1,P=Ne-1;P>=0;P--){const Ae=X+P,Ue=h[Ae],qr=Ae+1{const{el:T,type:F,transition:O,children:M,shapeFlag:P}=d;if(P&6){je(d.component.subTree,h,m,v);return}if(P&128){d.suspense.move(h,m,v);return}if(P&64){F.move(d,h,m,I);return}if(F===ye){s(T,h,m);for(let K=0;KO.enter(T),w);else{const{leave:K,delayLeave:q,afterLeave:J}=O,X=()=>s(T,h,m),ce=()=>{K(T,()=>{X(),J&&J()})};q?q(T,X,ce):ce()}else s(T,h,m)},we=(d,h,m,v=!1,w=!1)=>{const{type:T,props:F,ref:O,children:M,dynamicChildren:P,shapeFlag:W,patchFlag:K,dirs:q}=d;if(O!=null&&ss(O,null,m,d,!0),W&256){h.ctx.deactivate(d);return}const J=W&1&&q,X=!Tt(d);let ce;if(X&&(ce=F&&F.onVnodeBeforeUnmount)&&xe(ce,h,d),W&6)Bn(d.component,m,v);else{if(W&128){d.suspense.unmount(m,v);return}J&&Ke(d,null,h,"beforeUnmount"),W&64?d.type.remove(d,h,m,w,I,v):P&&(T!==ye||K>0&&K&64)?be(P,h,m,!1,!0):(T===ye&&K&384||!w&&W&16)&&be(M,h,m),v&&Ft(d)}(X&&(ce=F&&F.onVnodeUnmounted)||J)&&_e(()=>{ce&&xe(ce,h,d),J&&Ke(d,null,h,"unmounted")},m)},Ft=d=>{const{type:h,el:m,anchor:v,transition:w}=d;if(h===ye){Nt(m,v);return}if(h===Pt){_(d);return}const T=()=>{r(m),w&&!w.persisted&&w.afterLeave&&w.afterLeave()};if(d.shapeFlag&1&&w&&!w.persisted){const{leave:F,delayLeave:O}=w,M=()=>F(m,T);O?O(d.el,T,M):M()}else T()},Nt=(d,h)=>{let m;for(;d!==h;)m=p(d),r(d),d=m;r(h)},Bn=(d,h,m)=>{const{bum:v,scope:w,update:T,subTree:F,um:O}=d;v&&Ut(v),w.stop(),T&&(T.active=!1,we(F,d,h,m)),O&&_e(O,h),_e(()=>{d.isUnmounted=!0},h),h&&h.pendingBranch&&!h.isUnmounted&&d.asyncDep&&!d.asyncResolved&&d.suspenseId===h.pendingId&&(h.deps--,h.deps===0&&h.resolve())},be=(d,h,m,v=!1,w=!1,T=0)=>{for(let F=T;Fd.shapeFlag&6?C(d.component.subTree):d.shapeFlag&128?d.suspense.next():p(d.anchor||d.el),H=(d,h,m)=>{d==null?h._vnode&&we(h._vnode,null,null,!0):A(h._vnode||null,d,h,null,null,null,m),no(),ts(),h._vnode=d},I={p:A,um:we,m:je,r:Ft,mt:$,mc:x,pc:ne,pbc:U,n:C,o:e};let V,se;return t&&([V,se]=t(I)),{render:H,hydrate:V,createApp:ef(H,V)}}function yt({effect:e,update:t},n){e.allowRecurse=t.allowRecurse=n}function Fr(e,t,n=!1){const s=e.children,r=t.children;if(D(s)&&D(r))for(let o=0;o>1,e[n[l]]0&&(t[s]=n[o-1]),n[o]=s)}}for(o=n.length,i=n[o-1];o-- >0;)n[o]=i,i=t[i];return n}const ff=e=>e.__isTeleport,hn=e=>e&&(e.disabled||e.disabled===""),go=e=>typeof SVGElement<"u"&&e instanceof SVGElement,Zs=(e,t)=>{const n=e&&e.to;return ae(n)?t?t(n):null:n},af={__isTeleport:!0,process(e,t,n,s,r,o,i,l,c,u){const{mc:f,pc:a,pbc:p,o:{insert:y,querySelector:E,createText:A,createComment:k}}=u,b=hn(t.props);let{shapeFlag:g,children:R,dynamicChildren:_}=t;if(e==null){const S=t.el=A(""),B=t.anchor=A("");y(S,n,s),y(B,n,s);const N=t.target=Zs(t.props,E),x=t.targetAnchor=A("");N&&(y(x,N),i=i||go(N));const j=(U,z)=>{g&16&&f(R,U,z,r,o,i,l,c)};b?j(n,B):N&&j(N,x)}else{t.el=e.el;const S=t.anchor=e.anchor,B=t.target=e.target,N=t.targetAnchor=e.targetAnchor,x=hn(e.props),j=x?n:B,U=x?S:N;if(i=i||go(B),_?(p(e.dynamicChildren,_,j,r,o,i,l),Fr(e,t,!0)):c||a(e,t,j,U,r,o,i,l,!1),b)x||zn(t,n,S,u,1);else if((t.props&&t.props.to)!==(e.props&&e.props.to)){const z=t.target=Zs(t.props,E);z&&zn(t,z,null,u,0)}else 
x&&zn(t,B,N,u,1)}Ji(t)},remove(e,t,n,s,{um:r,o:{remove:o}},i){const{shapeFlag:l,children:c,anchor:u,targetAnchor:f,target:a,props:p}=e;if(a&&o(f),(i||!hn(p))&&(o(u),l&16))for(let y=0;y0?Re||$t:null,Xi(),Mt>0&&Re&&Re.push(e),e}function pf(e,t,n,s,r,o){return Zi(Lr(e,t,n,s,r,o,!0))}function Nr(e,t,n,s,r){return Zi(fe(e,t,n,s,r,!0))}function gt(e){return e?e.__v_isVNode===!0:!1}function Be(e,t){return e.type===t.type&&e.key===t.key}function gf(e){}const Rs="__vInternal",Gi=({key:e})=>e??null,Jn=({ref:e,ref_key:t,ref_for:n})=>(typeof e=="number"&&(e=""+e),e!=null?ae(e)||ge(e)||Q(e)?{i:me,r:e,k:t,f:!!n}:e:null);function Lr(e,t=null,n=null,s=0,r=null,o=e===ye?0:1,i=!1,l=!1){const c={__v_isVNode:!0,__v_skip:!0,type:e,props:t,key:t&&Gi(t),ref:t&&Jn(t),scopeId:ys,slotScopeIds:null,children:n,component:null,suspense:null,ssContent:null,ssFallback:null,dirs:null,transition:null,el:null,anchor:null,target:null,targetAnchor:null,staticCount:0,shapeFlag:o,patchFlag:s,dynamicProps:r,dynamicChildren:null,appContext:null,ctx:me};return l?(Hr(c,n),o&128&&e.normalize(c)):n&&(c.shapeFlag|=ae(n)?8:16),Mt>0&&!i&&Re&&(c.patchFlag>0||o&6)&&c.patchFlag!==32&&Re.push(c),c}const fe=mf;function mf(e,t=null,n=null,s=0,r=null,o=!1){if((!e||e===Bi)&&(e=Ee),gt(e)){const l=We(e,t,!0);return n&&Hr(l,n),Mt>0&&!o&&Re&&(l.shapeFlag&6?Re[Re.indexOf(e)]=l:Re.push(l)),l.patchFlag|=-2,l}if(Rf(e)&&(e=e.__vccOpts),t){t=el(t);let{class:l,style:c}=t;l&&!ae(l)&&(t.class=Mn(l)),le(c)&&(yr(c)&&!D(c)&&(c=ue({},c)),t.style=On(c))}const i=ae(e)?1:Ci(e)?128:ff(e)?64:le(e)?4:Q(e)?2:0;return Lr(e,t,n,s,r,i,o,!0)}function el(e){return e?yr(e)||Rs in e?ue({},e):e:null}function We(e,t,n=!1){const{props:s,ref:r,patchFlag:o,children:i}=e,l=t?tl(s||{},t):s;return{__v_isVNode:!0,__v_skip:!0,type:e.type,props:l,key:l&&Gi(l),ref:t&&t.ref?n&&r?D(r)?r.concat(Jn(t)):[r,Jn(t)]:Jn(t):r,scopeId:e.scopeId,slotScopeIds:e.slotScopeIds,children:i,target:e.target,targetAnchor:e.targetAnchor,staticCount:e.staticCount,shapeFlag:e.shapeFlag,patchFlag:t&&e.type!==ye?o===-1?16:o|16:o,dynamicProps:e.dynamicProps,dynamicChildren:e.dynamicChildren,appContext:e.appContext,dirs:e.dirs,transition:e.transition,component:e.component,suspense:e.suspense,ssContent:e.ssContent&&We(e.ssContent),ssFallback:e.ssFallback&&We(e.ssFallback),el:e.el,anchor:e.anchor,ctx:e.ctx,ce:e.ce}}function Br(e=" ",t=0){return fe(Ot,null,e,t)}function _f(e,t){const n=fe(Pt,null,e);return n.staticCount=t,n}function yf(e="",t=!1){return t?(xs(),Nr(Ee,null,e)):fe(Ee,null,e)}function Oe(e){return e==null||typeof e=="boolean"?fe(Ee):D(e)?fe(ye,null,e.slice()):typeof e=="object"?lt(e):fe(Ot,null,String(e))}function lt(e){return e.el===null&&e.patchFlag!==-1||e.memo?e:We(e)}function Hr(e,t){let n=0;const{shapeFlag:s}=e;if(t==null)t=null;else if(D(t))n=16;else if(typeof t=="object")if(s&65){const r=t.default;r&&(r._c&&(r._d=!1),Hr(e,r()),r._c&&(r._d=!0));return}else{n=32;const r=t._;!r&&!(Rs in t)?t._ctx=me:r===3&&me&&(me.slots._===1?t._=1:(t._=2,e.patchFlag|=1024))}else Q(t)?(t={default:t,_ctx:me},n=32):(t=String(t),s&64?(n=16,t=[Br(t)]):n=8);e.children=t,e.shapeFlag|=n}function tl(...e){const t={};for(let n=0;npe||me;let Dr,Bt,mo="__VUE_INSTANCE_SETTERS__";(Bt=Us()[mo])||(Bt=Us()[mo]=[]),Bt.push(e=>pe=e),Dr=e=>{Bt.length>1?Bt.forEach(t=>t(e)):Bt[0](e)};const mt=e=>{Dr(e),e.scope.on()},dt=()=>{pe&&pe.scope.off(),Dr(null)};function sl(e){return e.vnode.shapeFlag&4}let zt=!1;function rl(e,t=!1){zt=t;const{props:n,children:s}=e.vnode,r=sl(e);nf(e,n,r,t),of(e,s);const o=r?Ef(e,t):void 0;return 
zt=!1,o}function Ef(e,t){const n=e.type;e.accessCache=Object.create(null),e.proxy=br(new Proxy(e.ctx,Ys));const{setup:s}=n;if(s){const r=e.setupContext=s.length>1?il(e):null;mt(e),Zt();const o=Xe(s,e,0,[e.props,r]);if(Gt(),dt(),dr(o)){if(o.then(dt,dt),t)return o.then(i=>{er(e,i,t)}).catch(i=>{kt(i,e,0)});e.asyncDep=o}else er(e,o,t)}else ol(e,t)}function er(e,t,n){Q(t)?e.type.__ssrInlineRender?e.ssrRender=t:e.render=t:le(t)&&(e.setupState=Cr(t)),ol(e,n)}let rs,tr;function Cf(e){rs=e,tr=t=>{t.render._rc&&(t.withProxy=new Proxy(t.ctx,ku))}}const wf=()=>!rs;function ol(e,t,n){const s=e.type;if(!e.render){if(!t&&rs&&!s.render){const r=s.template||Ir(e).template;if(r){const{isCustomElement:o,compilerOptions:i}=e.appContext.config,{delimiters:l,compilerOptions:c}=s,u=ue(ue({isCustomElement:o,delimiters:l},i),c);s.render=rs(r,u)}}e.render=s.render||He,tr&&tr(e)}mt(e),Zt(),Yu(e),Gt(),dt()}function xf(e){return e.attrsProxy||(e.attrsProxy=new Proxy(e.attrs,{get(t,n){return Pe(e,"get","$attrs"),t[n]}}))}function il(e){const t=n=>{e.exposed=n||{}};return{get attrs(){return xf(e)},slots:e.slots,emit:e.emit,expose:t}}function Ts(e){if(e.exposed)return e.exposeProxy||(e.exposeProxy=new Proxy(Cr(br(e.exposed)),{get(t,n){if(n in t)return t[n];if(n in an)return an[n](e)},has(t,n){return n in t||n in an}}))}function nr(e,t=!0){return Q(e)?e.displayName||e.name:e.name||t&&e.__name}function Rf(e){return Q(e)&&"__vccOpts"in e}const Me=(e,t)=>Yc(e,t,zt);function Ps(e,t,n){const s=arguments.length;return s===2?le(t)&&!D(t)?gt(t)?fe(e,null,[t]):fe(e,t):fe(e,null,t):(s>3?n=Array.prototype.slice.call(arguments,2):s===3&>(n)&&(n=[n]),fe(e,t,n))}const ll=Symbol.for("v-scx"),cl=()=>De(ll);function Tf(){}function Pf(e,t,n,s){const r=n[s];if(r&&ul(r,e))return r;const o=t();return o.memo=e.slice(),n[s]=o}function ul(e,t){const n=e.memo;if(n.length!=t.length)return!1;for(let s=0;s0&&Re&&Re.push(e),!0}const fl="3.3.4",Af={createComponentInstance:nl,setupComponent:rl,renderComponentRoot:Qn,setCurrentRenderingInstance:wn,isVNode:gt,normalizeVNode:Oe},Sf=Af,Of=null,Mf=null,If="http://www.w3.org/2000/svg",Et=typeof document<"u"?document:null,_o=Et&&Et.createElement("template"),kf={insert:(e,t,n)=>{t.insertBefore(e,n||null)},remove:e=>{const t=e.parentNode;t&&t.removeChild(e)},createElement:(e,t,n,s)=>{const r=t?Et.createElementNS(If,e):Et.createElement(e,n?{is:n}:void 0);return e==="select"&&s&&s.multiple!=null&&r.setAttribute("multiple",s.multiple),r},createText:e=>Et.createTextNode(e),createComment:e=>Et.createComment(e),setText:(e,t)=>{e.nodeValue=t},setElementText:(e,t)=>{e.textContent=t},parentNode:e=>e.parentNode,nextSibling:e=>e.nextSibling,querySelector:e=>Et.querySelector(e),setScopeId(e,t){e.setAttribute(t,"")},insertStaticContent(e,t,n,s,r,o){const i=n?n.previousSibling:t.lastChild;if(r&&(r===o||r.nextSibling))for(;t.insertBefore(r.cloneNode(!0),n),!(r===o||!(r=r.nextSibling)););else{_o.innerHTML=s?`${e}`:e;const l=_o.content;if(s){const c=l.firstChild;for(;c.firstChild;)l.appendChild(c.firstChild);l.removeChild(c)}t.insertBefore(l,n)}return[i?i.nextSibling:t.firstChild,n?n.previousSibling:t.lastChild]}};function Ff(e,t,n){const s=e._vtc;s&&(t=(t?[t,...s]:[...s]).join(" ")),t==null?e.removeAttribute("class"):n?e.setAttribute("class",t):e.className=t}function Nf(e,t,n){const s=e.style,r=ae(n);if(n&&!r){if(t&&!ae(t))for(const o in t)n[o]==null&&sr(s,o,"");for(const o in n)sr(s,o,n[o])}else{const o=s.display;r?t!==n&&(s.cssText=n):t&&e.removeAttribute("style"),"_vod"in e&&(s.display=o)}}const 
yo=/\s*!important$/;function sr(e,t,n){if(D(n))n.forEach(s=>sr(e,t,s));else if(n==null&&(n=""),t.startsWith("--"))e.setProperty(t,n);else{const s=Lf(e,t);yo.test(n)?e.setProperty(Fe(s),n.replace(yo,""),"important"):e[s]=n}}const bo=["Webkit","Moz","ms"],Ls={};function Lf(e,t){const n=Ls[t];if(n)return n;let s=Te(t);if(s!=="filter"&&s in e)return Ls[t]=s;s=Sn(s);for(let r=0;rBs||(Uf.then(()=>Bs=0),Bs=Date.now());function Vf(e,t){const n=s=>{if(!s._vts)s._vts=Date.now();else if(s._vts<=n.attached)return;Ie(Wf(s,n.value),t,5,[s])};return n.value=e,n.attached=Kf(),n}function Wf(e,t){if(D(t)){const n=e.stopImmediatePropagation;return e.stopImmediatePropagation=()=>{n.call(e),e._stopped=!0},t.map(s=>r=>!r._stopped&&s&&s(r))}else return t}const Co=/^on[a-z]/,qf=(e,t,n,s,r=!1,o,i,l,c)=>{t==="class"?Ff(e,s,r):t==="style"?Nf(e,n,s):An(t)?fr(t)||$f(e,t,n,s,i):(t[0]==="."?(t=t.slice(1),!0):t[0]==="^"?(t=t.slice(1),!1):zf(e,t,s,r))?Hf(e,t,s,o,i,l,c):(t==="true-value"?e._trueValue=s:t==="false-value"&&(e._falseValue=s),Bf(e,t,s,r))};function zf(e,t,n,s){return s?!!(t==="innerHTML"||t==="textContent"||t in e&&Co.test(t)&&Q(n)):t==="spellcheck"||t==="draggable"||t==="translate"||t==="form"||t==="list"&&e.tagName==="INPUT"||t==="type"&&e.tagName==="TEXTAREA"||Co.test(t)&&ae(n)?!1:t in e}function al(e,t){const n=Fn(e);class s extends As{constructor(o){super(n,o,t)}}return s.def=n,s}const Yf=e=>al(e,Pl),Qf=typeof HTMLElement<"u"?HTMLElement:class{};class As extends Qf{constructor(t,n={},s){super(),this._def=t,this._props=n,this._instance=null,this._connected=!1,this._resolved=!1,this._numberProps=null,this.shadowRoot&&s?s(this._createVNode(),this.shadowRoot):(this.attachShadow({mode:"open"}),this._def.__asyncLoader||this._resolveProps(this._def))}connectedCallback(){this._connected=!0,this._instance||(this._resolved?this._update():this._resolveDef())}disconnectedCallback(){this._connected=!1,gs(()=>{this._connected||(ir(null,this.shadowRoot),this._instance=null)})}_resolveDef(){this._resolved=!0;for(let s=0;s{for(const r of s)this._setAttr(r.attributeName)}).observe(this,{attributes:!0});const t=(s,r=!1)=>{const{props:o,styles:i}=s;let l;if(o&&!D(o))for(const c in o){const u=o[c];(u===Number||u&&u.type===Number)&&(c in this._props&&(this._props[c]=Gn(this._props[c])),(l||(l=Object.create(null)))[Te(c)]=!0)}this._numberProps=l,r&&this._resolveProps(s),this._applyStyles(i),this._update()},n=this._def.__asyncLoader;n?n().then(s=>t(s,!0)):t(this._def)}_resolveProps(t){const{props:n}=t,s=D(n)?n:Object.keys(n||{});for(const r of Object.keys(this))r[0]!=="_"&&s.includes(r)&&this._setProp(r,this[r],!0,!1);for(const r of s.map(Te))Object.defineProperty(this,r,{get(){return this._getProp(r)},set(o){this._setProp(r,o)}})}_setAttr(t){let n=this.getAttribute(t);const s=Te(t);this._numberProps&&this._numberProps[s]&&(n=Gn(n)),this._setProp(s,n,!1)}_getProp(t){return this._props[t]}_setProp(t,n,s=!0,r=!0){n!==this._props[t]&&(this._props[t]=n,r&&this._instance&&this._update(),s&&(n===!0?this.setAttribute(Fe(t),""):typeof n=="string"||typeof n=="number"?this.setAttribute(Fe(t),n+""):n||this.removeAttribute(Fe(t))))}_update(){ir(this._createVNode(),this.shadowRoot)}_createVNode(){const t=fe(this._def,ue({},this._props));return this._instance||(t.ce=n=>{this._instance=n,n.isCE=!0;const s=(o,i)=>{this.dispatchEvent(new CustomEvent(o,{detail:i}))};n.emit=(o,...i)=>{s(o,i),Fe(o)!==o&&s(Fe(o),i)};let r=this;for(;r=r&&(r.parentNode||r.host);)if(r instanceof 
As){n.parent=r._instance,n.provides=r._instance.provides;break}}),t}_applyStyles(t){t&&t.forEach(n=>{const s=document.createElement("style");s.textContent=n,this.shadowRoot.appendChild(s)})}}function Jf(e="$style"){{const t=et();if(!t)return ie;const n=t.type.__cssModules;if(!n)return ie;const s=n[e];return s||ie}}function Xf(e){const t=et();if(!t)return;const n=t.ut=(r=e(t.proxy))=>{Array.from(document.querySelectorAll(`[data-v-owner="${t.uid}"]`)).forEach(o=>or(o,r))},s=()=>{const r=e(t.proxy);rr(t.subTree,r),n(r)};xi(s),Ln(()=>{const r=new MutationObserver(s);r.observe(t.subTree.el.parentNode,{childList:!0}),ws(()=>r.disconnect())})}function rr(e,t){if(e.shapeFlag&128){const n=e.suspense;e=n.activeBranch,n.pendingBranch&&!n.isHydrating&&n.effects.push(()=>{rr(n.activeBranch,t)})}for(;e.component;)e=e.component.subTree;if(e.shapeFlag&1&&e.el)or(e.el,t);else if(e.type===ye)e.children.forEach(n=>rr(n,t));else if(e.type===Pt){let{el:n,anchor:s}=e;for(;n&&(or(n,t),n!==s);)n=n.nextSibling}}function or(e,t){if(e.nodeType===1){const n=e.style;for(const s in t)n.setProperty(`--${s}`,t[s])}}const rt="transition",nn="animation",$r=(e,{slots:t})=>Ps(Ti,hl(e),t);$r.displayName="Transition";const dl={name:String,type:String,css:{type:Boolean,default:!0},duration:[String,Number,Object],enterFromClass:String,enterActiveClass:String,enterToClass:String,appearFromClass:String,appearActiveClass:String,appearToClass:String,leaveFromClass:String,leaveActiveClass:String,leaveToClass:String},Zf=$r.props=ue({},Sr,dl),bt=(e,t=[])=>{D(e)?e.forEach(n=>n(...t)):e&&e(...t)},wo=e=>e?D(e)?e.some(t=>t.length>1):e.length>1:!1;function hl(e){const t={};for(const L in e)L in dl||(t[L]=e[L]);if(e.css===!1)return t;const{name:n="v",type:s,duration:r,enterFromClass:o=`${n}-enter-from`,enterActiveClass:i=`${n}-enter-active`,enterToClass:l=`${n}-enter-to`,appearFromClass:c=o,appearActiveClass:u=i,appearToClass:f=l,leaveFromClass:a=`${n}-leave-from`,leaveActiveClass:p=`${n}-leave-active`,leaveToClass:y=`${n}-leave-to`}=e,E=Gf(r),A=E&&E[0],k=E&&E[1],{onBeforeEnter:b,onEnter:g,onEnterCancelled:R,onLeave:_,onLeaveCancelled:S,onBeforeAppear:B=b,onAppear:N=g,onAppearCancelled:x=R}=t,j=(L,Y,$)=>{it(L,Y?f:l),it(L,Y?u:i),$&&$()},U=(L,Y)=>{L._isLeaving=!1,it(L,a),it(L,y),it(L,p),Y&&Y()},z=L=>(Y,$)=>{const de=L?N:g,G=()=>j(Y,L,$);bt(de,[Y,G]),xo(()=>{it(Y,L?c:o),Ye(Y,L?f:l),wo(de)||Ro(Y,s,A,G)})};return ue(t,{onBeforeEnter(L){bt(b,[L]),Ye(L,o),Ye(L,i)},onBeforeAppear(L){bt(B,[L]),Ye(L,c),Ye(L,u)},onEnter:z(!1),onAppear:z(!0),onLeave(L,Y){L._isLeaving=!0;const $=()=>U(L,Y);Ye(L,a),gl(),Ye(L,p),xo(()=>{L._isLeaving&&(it(L,a),Ye(L,y),wo(_)||Ro(L,s,k,$))}),bt(_,[L,$])},onEnterCancelled(L){j(L,!1),bt(R,[L])},onAppearCancelled(L){j(L,!0),bt(x,[L])},onLeaveCancelled(L){U(L),bt(S,[L])}})}function Gf(e){if(e==null)return null;if(le(e))return[Hs(e.enter),Hs(e.leave)];{const t=Hs(e);return[t,t]}}function Hs(e){return Gn(e)}function Ye(e,t){t.split(/\s+/).forEach(n=>n&&e.classList.add(n)),(e._vtc||(e._vtc=new Set)).add(t)}function it(e,t){t.split(/\s+/).forEach(s=>s&&e.classList.remove(s));const{_vtc:n}=e;n&&(n.delete(t),n.size||(e._vtc=void 0))}function xo(e){requestAnimationFrame(()=>{requestAnimationFrame(e)})}let ea=0;function Ro(e,t,n,s){const r=e._endId=++ea,o=()=>{r===e._endId&&s()};if(n)return setTimeout(o,n);const{type:i,timeout:l,propCount:c}=pl(e,t);if(!i)return s();const u=i+"end";let f=0;const a=()=>{e.removeEventListener(u,p),o()},p=y=>{y.target===e&&++f>=c&&a()};setTimeout(()=>{f(n[E]||"").split(", 
"),r=s(`${rt}Delay`),o=s(`${rt}Duration`),i=To(r,o),l=s(`${nn}Delay`),c=s(`${nn}Duration`),u=To(l,c);let f=null,a=0,p=0;t===rt?i>0&&(f=rt,a=i,p=o.length):t===nn?u>0&&(f=nn,a=u,p=c.length):(a=Math.max(i,u),f=a>0?i>u?rt:nn:null,p=f?f===rt?o.length:c.length:0);const y=f===rt&&/\b(transform|all)(,|$)/.test(s(`${rt}Property`).toString());return{type:f,timeout:a,propCount:p,hasTransform:y}}function To(e,t){for(;e.lengthPo(n)+Po(e[s])))}function Po(e){return Number(e.slice(0,-1).replace(",","."))*1e3}function gl(){return document.body.offsetHeight}const ml=new WeakMap,_l=new WeakMap,yl={name:"TransitionGroup",props:ue({},Zf,{tag:String,moveClass:String}),setup(e,{slots:t}){const n=et(),s=Ar();let r,o;return Es(()=>{if(!r.length)return;const i=e.moveClass||`${e.name||"v"}-move`;if(!ia(r[0].el,n.vnode.el,i))return;r.forEach(sa),r.forEach(ra);const l=r.filter(oa);gl(),l.forEach(c=>{const u=c.el,f=u.style;Ye(u,i),f.transform=f.webkitTransform=f.transitionDuration="";const a=u._moveCb=p=>{p&&p.target!==u||(!p||/transform$/.test(p.propertyName))&&(u.removeEventListener("transitionend",a),u._moveCb=null,it(u,i))};u.addEventListener("transitionend",a)})}),()=>{const i=Z(e),l=hl(i);let c=i.tag||ye;r=o,o=t.default?bs(t.default()):[];for(let u=0;udelete e.mode;yl.props;const na=yl;function sa(e){const t=e.el;t._moveCb&&t._moveCb(),t._enterCb&&t._enterCb()}function ra(e){_l.set(e,e.el.getBoundingClientRect())}function oa(e){const t=ml.get(e),n=_l.get(e),s=t.left-n.left,r=t.top-n.top;if(s||r){const o=e.el.style;return o.transform=o.webkitTransform=`translate(${s}px,${r}px)`,o.transitionDuration="0s",e}}function ia(e,t,n){const s=e.cloneNode();e._vtc&&e._vtc.forEach(i=>{i.split(/\s+/).forEach(l=>l&&s.classList.remove(l))}),n.split(/\s+/).forEach(i=>i&&s.classList.add(i)),s.style.display="none";const r=t.nodeType===1?t:t.parentNode;r.appendChild(s);const{hasTransform:o}=pl(s);return r.removeChild(s),o}const _t=e=>{const t=e.props["onUpdate:modelValue"]||!1;return D(t)?n=>Ut(t,n):t};function la(e){e.target.composing=!0}function Ao(e){const t=e.target;t.composing&&(t.composing=!1,t.dispatchEvent(new Event("input")))}const os={created(e,{modifiers:{lazy:t,trim:n,number:s}},r){e._assign=_t(r);const o=s||r.props&&r.props.type==="number";Je(e,t?"change":"input",i=>{if(i.target.composing)return;let l=e.value;n&&(l=l.trim()),o&&(l=Zn(l)),e._assign(l)}),n&&Je(e,"change",()=>{e.value=e.value.trim()}),t||(Je(e,"compositionstart",la),Je(e,"compositionend",Ao),Je(e,"change",Ao))},mounted(e,{value:t}){e.value=t??""},beforeUpdate(e,{value:t,modifiers:{lazy:n,trim:s,number:r}},o){if(e._assign=_t(o),e.composing||document.activeElement===e&&e.type!=="range"&&(n||s&&e.value.trim()===t||(r||e.type==="number")&&Zn(e.value)===t))return;const i=t??"";e.value!==i&&(e.value=i)}},jr={deep:!0,created(e,t,n){e._assign=_t(n),Je(e,"change",()=>{const s=e._modelValue,r=Yt(e),o=e.checked,i=e._assign;if(D(s)){const l=us(s,r),c=l!==-1;if(o&&!c)i(s.concat(r));else if(!o&&c){const u=[...s];u.splice(l,1),i(u)}}else if(It(s)){const l=new Set(s);o?l.add(r):l.delete(r),i(l)}else i(vl(e,o))})},mounted:So,beforeUpdate(e,t,n){e._assign=_t(n),So(e,t,n)}};function So(e,{value:t,oldValue:n},s){e._modelValue=t,D(t)?e.checked=us(t,s.props.value)>-1:It(t)?e.checked=t.has(s.props.value):t!==n&&(e.checked=ht(t,vl(e,!0)))}const 
Ur={created(e,{value:t},n){e.checked=ht(t,n.props.value),e._assign=_t(n),Je(e,"change",()=>{e._assign(Yt(e))})},beforeUpdate(e,{value:t,oldValue:n},s){e._assign=_t(s),t!==n&&(e.checked=ht(t,s.props.value))}},bl={deep:!0,created(e,{value:t,modifiers:{number:n}},s){const r=It(t);Je(e,"change",()=>{const o=Array.prototype.filter.call(e.options,i=>i.selected).map(i=>n?Zn(Yt(i)):Yt(i));e._assign(e.multiple?r?new Set(o):o:o[0])}),e._assign=_t(s)},mounted(e,{value:t}){Oo(e,t)},beforeUpdate(e,t,n){e._assign=_t(n)},updated(e,{value:t}){Oo(e,t)}};function Oo(e,t){const n=e.multiple;if(!(n&&!D(t)&&!It(t))){for(let s=0,r=e.options.length;s-1:o.selected=t.has(i);else if(ht(Yt(o),t)){e.selectedIndex!==s&&(e.selectedIndex=s);return}}!n&&e.selectedIndex!==-1&&(e.selectedIndex=-1)}}function Yt(e){return"_value"in e?e._value:e.value}function vl(e,t){const n=t?"_trueValue":"_falseValue";return n in e?e[n]:t}const El={created(e,t,n){Yn(e,t,n,null,"created")},mounted(e,t,n){Yn(e,t,n,null,"mounted")},beforeUpdate(e,t,n,s){Yn(e,t,n,s,"beforeUpdate")},updated(e,t,n,s){Yn(e,t,n,s,"updated")}};function Cl(e,t){switch(e){case"SELECT":return bl;case"TEXTAREA":return os;default:switch(t){case"checkbox":return jr;case"radio":return Ur;default:return os}}}function Yn(e,t,n,s,r){const i=Cl(e.tagName,n.props&&n.props.type)[r];i&&i(e,t,n,s)}function ca(){os.getSSRProps=({value:e})=>({value:e}),Ur.getSSRProps=({value:e},t)=>{if(t.props&&ht(t.props.value,e))return{checked:!0}},jr.getSSRProps=({value:e},t)=>{if(D(e)){if(t.props&&us(e,t.props.value)>-1)return{checked:!0}}else if(It(e)){if(t.props&&e.has(t.props.value))return{checked:!0}}else if(e)return{checked:!0}},El.getSSRProps=(e,t)=>{if(typeof t.type!="string")return;const n=Cl(t.type.toUpperCase(),t.props&&t.props.type);if(n.getSSRProps)return n.getSSRProps(e,t)}}const ua=["ctrl","shift","alt","meta"],fa={stop:e=>e.stopPropagation(),prevent:e=>e.preventDefault(),self:e=>e.target!==e.currentTarget,ctrl:e=>!e.ctrlKey,shift:e=>!e.shiftKey,alt:e=>!e.altKey,meta:e=>!e.metaKey,left:e=>"button"in e&&e.button!==0,middle:e=>"button"in e&&e.button!==1,right:e=>"button"in e&&e.button!==2,exact:(e,t)=>ua.some(n=>e[`${n}Key`]&&!t.includes(n))},aa=(e,t)=>(n,...s)=>{for(let r=0;rn=>{if(!("key"in n))return;const s=Fe(n.key);if(t.some(r=>r===s||da[r]===s))return e(n)},wl={beforeMount(e,{value:t},{transition:n}){e._vod=e.style.display==="none"?"":e.style.display,n&&t?n.beforeEnter(e):sn(e,t)},mounted(e,{value:t},{transition:n}){n&&t&&n.enter(e)},updated(e,{value:t,oldValue:n},{transition:s}){!t!=!n&&(s?t?(s.beforeEnter(e),sn(e,!0),s.enter(e)):s.leave(e,()=>{sn(e,!1)}):sn(e,t))},beforeUnmount(e,{value:t}){sn(e,t)}};function sn(e,t){e.style.display=t?e._vod:"none"}function pa(){wl.getSSRProps=({value:e})=>{if(!e)return{style:{display:"none"}}}}const xl=ue({patchProp:qf},kf);let gn,Mo=!1;function Rl(){return gn||(gn=zi(xl))}function Tl(){return gn=Mo?gn:Yi(xl),Mo=!0,gn}const ir=(...e)=>{Rl().render(...e)},Pl=(...e)=>{Tl().hydrate(...e)},ga=(...e)=>{const t=Rl().createApp(...e),{mount:n}=t;return t.mount=s=>{const r=Al(s);if(!r)return;const o=t._component;!Q(o)&&!o.render&&!o.template&&(o.template=r.innerHTML),r.innerHTML="";const i=n(r,!1,r instanceof SVGElement);return r instanceof Element&&(r.removeAttribute("v-cloak"),r.setAttribute("data-v-app","")),i},t},ma=(...e)=>{const t=Tl().createApp(...e),{mount:n}=t;return t.mount=s=>{const r=Al(s);if(r)return n(r,!0,r instanceof SVGElement)},t};function Al(e){return ae(e)?document.querySelector(e):e}let Io=!1;const 
_a=()=>{Io||(Io=!0,ca(),pa())},ya=()=>{},vd=Object.freeze(Object.defineProperty({__proto__:null,BaseTransition:Ti,BaseTransitionPropsValidators:Sr,Comment:Ee,EffectScope:pr,Fragment:ye,KeepAlive:wu,ReactiveEffect:In,Static:Pt,Suspense:fu,Teleport:hf,Text:Ot,Transition:$r,TransitionGroup:na,VueElement:As,assertNumber:Jc,callWithAsyncErrorHandling:Ie,callWithErrorHandling:Xe,camelize:Te,capitalize:Sn,cloneVNode:We,compatUtils:Mf,compile:ya,computed:Me,createApp:ga,createBlock:Nr,createCommentVNode:yf,createElementBlock:pf,createElementVNode:Lr,createHydrationRenderer:Yi,createPropsRestProxy:qu,createRenderer:zi,createSSRApp:ma,createSlots:Ou,createStaticVNode:_f,createTextVNode:Br,createVNode:fe,customRef:Uc,defineAsyncComponent:Eu,defineComponent:Fn,defineCustomElement:al,defineEmits:Nu,defineExpose:Lu,defineModel:Du,defineOptions:Bu,defineProps:Fu,defineSSRCustomElement:Yf,defineSlots:Hu,get devtools(){return Ht},effect:cc,effectScope:rc,getCurrentInstance:et,getCurrentScope:ei,getTransitionRawChildren:bs,guardReactiveProps:el,h:Ps,handleError:kt,hasInjectionContext:tf,hydrate:Pl,initCustomFormatter:Tf,initDirectivesForSSR:_a,inject:De,isMemoSame:ul,isProxy:yr,isReactive:xt,isReadonly:At,isRef:ge,isRuntimeOnly:wf,isShallow:bn,isVNode:gt,markRaw:br,mergeDefaults:Vu,mergeModels:Wu,mergeProps:tl,nextTick:gs,normalizeClass:Mn,normalizeProps:Gl,normalizeStyle:On,onActivated:Ai,onBeforeMount:Mi,onBeforeUnmount:Cs,onBeforeUpdate:Ii,onDeactivated:Si,onErrorCaptured:Li,onMounted:Ln,onRenderTracked:Ni,onRenderTriggered:Fi,onScopeDispose:oc,onServerPrefetch:ki,onUnmounted:ws,onUpdated:Es,openBlock:xs,popScopeId:su,provide:dn,proxyRefs:Cr,pushScopeId:nu,queuePostFlushCb:xr,reactive:en,readonly:_r,ref:Rt,registerRuntimeCompiler:Cf,render:ir,renderList:Su,renderSlot:Mu,resolveComponent:Tu,resolveDirective:Au,resolveDynamicComponent:Pu,resolveFilter:Of,resolveTransitionHooks:qt,setBlockTracking:Gs,setDevtoolsHook:vi,setTransitionHooks:St,shallowReactive:hi,shallowReadonly:Lc,shallowRef:pi,ssrContextKey:ll,ssrUtils:Sf,stop:uc,toDisplayString:sc,toHandlerKey:fn,toHandlers:Iu,toRaw:Z,toRef:qc,toRefs:Kc,toValue:Dc,transformVNodeArgs:gf,triggerRef:Hc,unref:ft,useAttrs:Uu,useCssModule:Jf,useCssVars:Xf,useModel:Ku,useSSRContext:cl,useSlots:ju,useTransitionState:Ar,vModelCheckbox:jr,vModelDynamic:El,vModelRadio:Ur,vModelSelect:bl,vModelText:os,vShow:wl,version:fl,warn:Qc,watch:at,watchEffect:mu,watchPostEffect:xi,watchSyncEffect:_u,withAsyncContext:zu,withCtx:Rr,withDefaults:$u,withDirectives:bu,withKeys:ha,withMemo:Pf,withModifiers:aa,withScopeId:ru},Symbol.toStringTag,{value:"Module"}));/*! 
- * vue-router v4.2.2 - * (c) 2023 Eduardo San Martin Morote - * @license MIT - */const Dt=typeof window<"u";function ba(e){return e.__esModule||e[Symbol.toStringTag]==="Module"}const re=Object.assign;function Ds(e,t){const n={};for(const s in t){const r=t[s];n[s]=$e(r)?r.map(e):e(r)}return n}const mn=()=>{},$e=Array.isArray,va=/\/$/,Ea=e=>e.replace(va,"");function $s(e,t,n="/"){let s,r={},o="",i="";const l=t.indexOf("#");let c=t.indexOf("?");return l=0&&(c=-1),c>-1&&(s=t.slice(0,c),o=t.slice(c+1,l>-1?l:t.length),r=e(o)),l>-1&&(s=s||t.slice(0,l),i=t.slice(l,t.length)),s=Ra(s??t,n),{fullPath:s+(o&&"?")+o+i,path:s,query:r,hash:i}}function Ca(e,t){const n=t.query?e(t.query):"";return t.path+(n&&"?")+n+(t.hash||"")}function ko(e,t){return!t||!e.toLowerCase().startsWith(t.toLowerCase())?e:e.slice(t.length)||"/"}function wa(e,t,n){const s=t.matched.length-1,r=n.matched.length-1;return s>-1&&s===r&&Qt(t.matched[s],n.matched[r])&&Sl(t.params,n.params)&&e(t.query)===e(n.query)&&t.hash===n.hash}function Qt(e,t){return(e.aliasOf||e)===(t.aliasOf||t)}function Sl(e,t){if(Object.keys(e).length!==Object.keys(t).length)return!1;for(const n in e)if(!xa(e[n],t[n]))return!1;return!0}function xa(e,t){return $e(e)?Fo(e,t):$e(t)?Fo(t,e):e===t}function Fo(e,t){return $e(t)?e.length===t.length&&e.every((n,s)=>n===t[s]):e.length===1&&e[0]===t}function Ra(e,t){if(e.startsWith("/"))return e;if(!e)return t;const n=t.split("/"),s=e.split("/"),r=s[s.length-1];(r===".."||r===".")&&s.push("");let o=n.length-1,i,l;for(i=0;i1&&o--;else break;return n.slice(0,o).join("/")+"/"+s.slice(i-(i===s.length?1:0)).join("/")}var Pn;(function(e){e.pop="pop",e.push="push"})(Pn||(Pn={}));var _n;(function(e){e.back="back",e.forward="forward",e.unknown=""})(_n||(_n={}));function Ta(e){if(!e)if(Dt){const t=document.querySelector("base");e=t&&t.getAttribute("href")||"/",e=e.replace(/^\w+:\/\/[^\/]+/,"")}else e="/";return e[0]!=="/"&&e[0]!=="#"&&(e="/"+e),Ea(e)}const Pa=/^[^#]+#/;function Aa(e,t){return e.replace(Pa,"#")+t}function Sa(e,t){const n=document.documentElement.getBoundingClientRect(),s=e.getBoundingClientRect();return{behavior:t.behavior,left:s.left-n.left-(t.left||0),top:s.top-n.top-(t.top||0)}}const Ss=()=>({left:window.pageXOffset,top:window.pageYOffset});function Oa(e){let t;if("el"in e){const n=e.el,s=typeof n=="string"&&n.startsWith("#"),r=typeof n=="string"?s?document.getElementById(n.slice(1)):document.querySelector(n):n;if(!r)return;t=Sa(r,e)}else t=e;"scrollBehavior"in document.documentElement.style?window.scrollTo(t):window.scrollTo(t.left!=null?t.left:window.pageXOffset,t.top!=null?t.top:window.pageYOffset)}function No(e,t){return(history.state?history.state.position-t:-1)+e}const lr=new Map;function Ma(e,t){lr.set(e,t)}function Ia(e){const t=lr.get(e);return lr.delete(e),t}let ka=()=>location.protocol+"//"+location.host;function Ol(e,t){const{pathname:n,search:s,hash:r}=t,o=e.indexOf("#");if(o>-1){let l=r.includes(e.slice(o))?e.slice(o).length:1,c=r.slice(l);return c[0]!=="/"&&(c="/"+c),ko(c,"")}return ko(n,e)+s+r}function Fa(e,t,n,s){let r=[],o=[],i=null;const l=({state:p})=>{const y=Ol(e,location),E=n.value,A=t.value;let k=0;if(p){if(n.value=y,t.value=p,i&&i===E){i=null;return}k=A?p.position-A.position:0}else s(y);r.forEach(b=>{b(n.value,E,{delta:k,type:Pn.pop,direction:k?k>0?_n.forward:_n.back:_n.unknown})})};function c(){i=n.value}function u(p){r.push(p);const y=()=>{const E=r.indexOf(p);E>-1&&r.splice(E,1)};return o.push(y),y}function 
f(){const{history:p}=window;p.state&&p.replaceState(re({},p.state,{scroll:Ss()}),"")}function a(){for(const p of o)p();o=[],window.removeEventListener("popstate",l),window.removeEventListener("beforeunload",f)}return window.addEventListener("popstate",l),window.addEventListener("beforeunload",f,{passive:!0}),{pauseListeners:c,listen:u,destroy:a}}function Lo(e,t,n,s=!1,r=!1){return{back:e,current:t,forward:n,replaced:s,position:window.history.length,scroll:r?Ss():null}}function Na(e){const{history:t,location:n}=window,s={value:Ol(e,n)},r={value:t.state};r.value||o(s.value,{back:null,current:s.value,forward:null,position:t.length-1,replaced:!0,scroll:null},!0);function o(c,u,f){const a=e.indexOf("#"),p=a>-1?(n.host&&document.querySelector("base")?e:e.slice(a))+c:ka()+e+c;try{t[f?"replaceState":"pushState"](u,"",p),r.value=u}catch(y){console.error(y),n[f?"replace":"assign"](p)}}function i(c,u){const f=re({},t.state,Lo(r.value.back,c,r.value.forward,!0),u,{position:r.value.position});o(c,f,!0),s.value=c}function l(c,u){const f=re({},r.value,t.state,{forward:c,scroll:Ss()});o(f.current,f,!0);const a=re({},Lo(s.value,c,null),{position:f.position+1},u);o(c,a,!1),s.value=c}return{location:s,state:r,push:l,replace:i}}function Ed(e){e=Ta(e);const t=Na(e),n=Fa(e,t.state,t.location,t.replace);function s(o,i=!0){i||n.pauseListeners(),history.go(o)}const r=re({location:"",base:e,go:s,createHref:Aa.bind(null,e)},t,n);return Object.defineProperty(r,"location",{enumerable:!0,get:()=>t.location.value}),Object.defineProperty(r,"state",{enumerable:!0,get:()=>t.state.value}),r}function La(e){return typeof e=="string"||e&&typeof e=="object"}function Ml(e){return typeof e=="string"||typeof e=="symbol"}const ot={path:"/",name:void 0,params:{},query:{},hash:"",fullPath:"/",matched:[],meta:{},redirectedFrom:void 0},Il=Symbol("");var Bo;(function(e){e[e.aborted=4]="aborted",e[e.cancelled=8]="cancelled",e[e.duplicated=16]="duplicated"})(Bo||(Bo={}));function Jt(e,t){return re(new Error,{type:e,[Il]:!0},t)}function ze(e,t){return e instanceof Error&&Il in e&&(t==null||!!(e.type&t))}const Ho="[^/]+?",Ba={sensitive:!1,strict:!1,start:!0,end:!0},Ha=/[.+*?^${}()[\]/\\]/g;function Da(e,t){const n=re({},Ba,t),s=[];let r=n.start?"^":"";const o=[];for(const u of e){const f=u.length?[]:[90];n.strict&&!u.length&&(r+="/");for(let a=0;at.length?t.length===1&&t[0]===40+40?1:-1:0}function ja(e,t){let n=0;const s=e.score,r=t.score;for(;n0&&t[t.length-1]<0}const Ua={type:0,value:""},Ka=/[a-zA-Z0-9_]/;function Va(e){if(!e)return[[]];if(e==="/")return[[Ua]];if(!e.startsWith("/"))throw new Error(`Invalid path "${e}"`);function t(y){throw new Error(`ERR (${n})/"${u}": ${y}`)}let n=0,s=n;const r=[];let o;function i(){o&&r.push(o),o=[]}let l=0,c,u="",f="";function a(){u&&(n===0?o.push({type:0,value:u}):n===1||n===2||n===3?(o.length>1&&(c==="*"||c==="+")&&t(`A repeatable param (${u}) must be alone in its segment. 
eg: '/:ids+.`),o.push({type:1,value:u,regexp:f,repeatable:c==="*"||c==="+",optional:c==="*"||c==="?"})):t("Invalid state to consume buffer"),u="")}function p(){u+=c}for(;l{i(g)}:mn}function i(f){if(Ml(f)){const a=s.get(f);a&&(s.delete(f),n.splice(n.indexOf(a),1),a.children.forEach(i),a.alias.forEach(i))}else{const a=n.indexOf(f);a>-1&&(n.splice(a,1),f.record.name&&s.delete(f.record.name),f.children.forEach(i),f.alias.forEach(i))}}function l(){return n}function c(f){let a=0;for(;a=0&&(f.record.path!==n[a].record.path||!kl(f,n[a]));)a++;n.splice(a,0,f),f.record.name&&!jo(f)&&s.set(f.record.name,f)}function u(f,a){let p,y={},E,A;if("name"in f&&f.name){if(p=s.get(f.name),!p)throw Jt(1,{location:f});A=p.record.name,y=re($o(a.params,p.keys.filter(g=>!g.optional).map(g=>g.name)),f.params&&$o(f.params,p.keys.map(g=>g.name))),E=p.stringify(y)}else if("path"in f)E=f.path,p=n.find(g=>g.re.test(E)),p&&(y=p.parse(E),A=p.record.name);else{if(p=a.name?s.get(a.name):n.find(g=>g.re.test(a.path)),!p)throw Jt(1,{location:f,currentLocation:a});A=p.record.name,y=re({},a.params,f.params),E=p.stringify(y)}const k=[];let b=p;for(;b;)k.unshift(b.record),b=b.parent;return{name:A,path:E,params:y,matched:k,meta:Qa(k)}}return e.forEach(f=>o(f)),{addRoute:o,resolve:u,removeRoute:i,getRoutes:l,getRecordMatcher:r}}function $o(e,t){const n={};for(const s of t)s in e&&(n[s]=e[s]);return n}function za(e){return{path:e.path,redirect:e.redirect,name:e.name,meta:e.meta||{},aliasOf:void 0,beforeEnter:e.beforeEnter,props:Ya(e),children:e.children||[],instances:{},leaveGuards:new Set,updateGuards:new Set,enterCallbacks:{},components:"components"in e?e.components||null:e.component&&{default:e.component}}}function Ya(e){const t={},n=e.props||!1;if("component"in e)t.default=n;else for(const s in e.components)t[s]=typeof n=="boolean"?n:n[s];return t}function jo(e){for(;e;){if(e.record.aliasOf)return!0;e=e.parent}return!1}function Qa(e){return e.reduce((t,n)=>re(t,n.meta),{})}function Uo(e,t){const n={};for(const s in e)n[s]=s in t?t[s]:e[s];return n}function kl(e,t){return t.children.some(n=>n===e||kl(e,n))}const Fl=/#/g,Ja=/&/g,Xa=/\//g,Za=/=/g,Ga=/\?/g,Nl=/\+/g,ed=/%5B/g,td=/%5D/g,Ll=/%5E/g,nd=/%60/g,Bl=/%7B/g,sd=/%7C/g,Hl=/%7D/g,rd=/%20/g;function Kr(e){return encodeURI(""+e).replace(sd,"|").replace(ed,"[").replace(td,"]")}function od(e){return Kr(e).replace(Bl,"{").replace(Hl,"}").replace(Ll,"^")}function cr(e){return Kr(e).replace(Nl,"%2B").replace(rd,"+").replace(Fl,"%23").replace(Ja,"%26").replace(nd,"`").replace(Bl,"{").replace(Hl,"}").replace(Ll,"^")}function id(e){return cr(e).replace(Za,"%3D")}function ld(e){return Kr(e).replace(Fl,"%23").replace(Ga,"%3F")}function cd(e){return e==null?"":ld(e).replace(Xa,"%2F")}function is(e){try{return decodeURIComponent(""+e)}catch{}return""+e}function ud(e){const t={};if(e===""||e==="?")return t;const s=(e[0]==="?"?e.slice(1):e).split("&");for(let r=0;ro&&cr(o)):[s&&cr(s)]).forEach(o=>{o!==void 0&&(t+=(t.length?"&":"")+n,o!=null&&(t+="="+o))})}return t}function fd(e){const t={};for(const n in e){const s=e[n];s!==void 0&&(t[n]=$e(s)?s.map(r=>r==null?null:""+r):s==null?s:""+s)}return t}const ad=Symbol(""),Vo=Symbol(""),Os=Symbol(""),Dl=Symbol(""),ur=Symbol("");function rn(){let e=[];function t(s){return e.push(s),()=>{const r=e.indexOf(s);r>-1&&e.splice(r,1)}}function n(){e=[]}return{add:t,list:()=>e,reset:n}}function ct(e,t,n,s,r){const o=s&&(s.enterCallbacks[r]=s.enterCallbacks[r]||[]);return()=>new Promise((i,l)=>{const c=a=>{a===!1?l(Jt(4,{from:n,to:t})):a instanceof 
Error?l(a):La(a)?l(Jt(2,{from:t,to:a})):(o&&s.enterCallbacks[r]===o&&typeof a=="function"&&o.push(a),i())},u=e.call(s&&s.instances[r],t,n,c);let f=Promise.resolve(u);e.length<3&&(f=f.then(c)),f.catch(a=>l(a))})}function js(e,t,n,s){const r=[];for(const o of e)for(const i in o.components){let l=o.components[i];if(!(t!=="beforeRouteEnter"&&!o.instances[i]))if(dd(l)){const u=(l.__vccOpts||l)[t];u&&r.push(ct(u,n,s,o,i))}else{let c=l();r.push(()=>c.then(u=>{if(!u)return Promise.reject(new Error(`Couldn't resolve component "${i}" at "${o.path}"`));const f=ba(u)?u.default:u;o.components[i]=f;const p=(f.__vccOpts||f)[t];return p&&ct(p,n,s,o,i)()}))}}return r}function dd(e){return typeof e=="object"||"displayName"in e||"props"in e||"__vccOpts"in e}function Wo(e){const t=De(Os),n=De(Dl),s=Me(()=>t.resolve(ft(e.to))),r=Me(()=>{const{matched:c}=s.value,{length:u}=c,f=c[u-1],a=n.matched;if(!f||!a.length)return-1;const p=a.findIndex(Qt.bind(null,f));if(p>-1)return p;const y=qo(c[u-2]);return u>1&&qo(f)===y&&a[a.length-1].path!==y?a.findIndex(Qt.bind(null,c[u-2])):p}),o=Me(()=>r.value>-1&&md(n.params,s.value.params)),i=Me(()=>r.value>-1&&r.value===n.matched.length-1&&Sl(n.params,s.value.params));function l(c={}){return gd(c)?t[ft(e.replace)?"replace":"push"](ft(e.to)).catch(mn):Promise.resolve()}return{route:s,href:Me(()=>s.value.href),isActive:o,isExactActive:i,navigate:l}}const hd=Fn({name:"RouterLink",compatConfig:{MODE:3},props:{to:{type:[String,Object],required:!0},replace:Boolean,activeClass:String,exactActiveClass:String,custom:Boolean,ariaCurrentValue:{type:String,default:"page"}},useLink:Wo,setup(e,{slots:t}){const n=en(Wo(e)),{options:s}=De(Os),r=Me(()=>({[zo(e.activeClass,s.linkActiveClass,"router-link-active")]:n.isActive,[zo(e.exactActiveClass,s.linkExactActiveClass,"router-link-exact-active")]:n.isExactActive}));return()=>{const o=t.default&&t.default(n);return e.custom?o:Ps("a",{"aria-current":n.isExactActive?e.ariaCurrentValue:null,href:n.href,onClick:n.navigate,class:r.value},o)}}}),pd=hd;function gd(e){if(!(e.metaKey||e.altKey||e.ctrlKey||e.shiftKey)&&!e.defaultPrevented&&!(e.button!==void 0&&e.button!==0)){if(e.currentTarget&&e.currentTarget.getAttribute){const t=e.currentTarget.getAttribute("target");if(/\b_blank\b/i.test(t))return}return e.preventDefault&&e.preventDefault(),!0}}function md(e,t){for(const n in t){const s=t[n],r=e[n];if(typeof s=="string"){if(s!==r)return!1}else if(!$e(r)||r.length!==s.length||s.some((o,i)=>o!==r[i]))return!1}return!0}function qo(e){return e?e.aliasOf?e.aliasOf.path:e.path:""}const zo=(e,t,n)=>e??t??n,_d=Fn({name:"RouterView",inheritAttrs:!1,props:{name:{type:String,default:"default"},route:Object},compatConfig:{MODE:3},setup(e,{attrs:t,slots:n}){const s=De(ur),r=Me(()=>e.route||s.value),o=De(Vo,0),i=Me(()=>{let u=ft(o);const{matched:f}=r.value;let a;for(;(a=f[u])&&!a.components;)u++;return u}),l=Me(()=>r.value.matched[i.value]);dn(Vo,Me(()=>i.value+1)),dn(ad,l),dn(ur,r);const c=Rt();return at(()=>[c.value,l.value,e.name],([u,f,a],[p,y,E])=>{f&&(f.instances[a]=u,y&&y!==f&&u&&u===p&&(f.leaveGuards.size||(f.leaveGuards=y.leaveGuards),f.updateGuards.size||(f.updateGuards=y.updateGuards))),u&&f&&(!y||!Qt(f,y)||!p)&&(f.enterCallbacks[a]||[]).forEach(A=>A(u))},{flush:"post"}),()=>{const u=r.value,f=e.name,a=l.value,p=a&&a.components[f];if(!p)return Yo(n.default,{Component:p,route:u});const y=a.props[f],E=y?y===!0?u.params:typeof y=="function"?y(u):y:null,k=Ps(p,re({},E,t,{onVnodeUnmounted:b=>{b.component.isUnmounted&&(a.instances[f]=null)},ref:c}));return 
Yo(n.default,{Component:k,route:u})||k}}});function Yo(e,t){if(!e)return null;const n=e(t);return n.length===1?n[0]:n}const yd=_d;function Cd(e){const t=qa(e.routes,e),n=e.parseQuery||ud,s=e.stringifyQuery||Ko,r=e.history,o=rn(),i=rn(),l=rn(),c=pi(ot);let u=ot;Dt&&e.scrollBehavior&&"scrollRestoration"in history&&(history.scrollRestoration="manual");const f=Ds.bind(null,C=>""+C),a=Ds.bind(null,cd),p=Ds.bind(null,is);function y(C,H){let I,V;return Ml(C)?(I=t.getRecordMatcher(C),V=H):V=C,t.addRoute(V,I)}function E(C){const H=t.getRecordMatcher(C);H&&t.removeRoute(H)}function A(){return t.getRoutes().map(C=>C.record)}function k(C){return!!t.getRecordMatcher(C)}function b(C,H){if(H=re({},H||c.value),typeof C=="string"){const m=$s(n,C,H.path),v=t.resolve({path:m.path},H),w=r.createHref(m.fullPath);return re(m,v,{params:p(v.params),hash:is(m.hash),redirectedFrom:void 0,href:w})}let I;if("path"in C)I=re({},C,{path:$s(n,C.path,H.path).path});else{const m=re({},C.params);for(const v in m)m[v]==null&&delete m[v];I=re({},C,{params:a(m)}),H.params=a(H.params)}const V=t.resolve(I,H),se=C.hash||"";V.params=f(p(V.params));const d=Ca(s,re({},C,{hash:od(se),path:V.path})),h=r.createHref(d);return re({fullPath:d,hash:se,query:s===Ko?fd(C.query):C.query||{}},V,{redirectedFrom:void 0,href:h})}function g(C){return typeof C=="string"?$s(n,C,c.value.path):re({},C)}function R(C,H){if(u!==C)return Jt(8,{from:H,to:C})}function _(C){return N(C)}function S(C){return _(re(g(C),{replace:!0}))}function B(C){const H=C.matched[C.matched.length-1];if(H&&H.redirect){const{redirect:I}=H;let V=typeof I=="function"?I(C):I;return typeof V=="string"&&(V=V.includes("?")||V.includes("#")?V=g(V):{path:V},V.params={}),re({query:C.query,hash:C.hash,params:"path"in V?{}:C.params},V)}}function N(C,H){const I=u=b(C),V=c.value,se=C.state,d=C.force,h=C.replace===!0,m=B(I);if(m)return N(re(g(m),{state:typeof m=="object"?re({},se,m.state):se,force:d,replace:h}),H||I);const v=I;v.redirectedFrom=H;let w;return!d&&wa(s,V,I)&&(w=Jt(16,{to:v,from:V}),je(V,V,!0,!1)),(w?Promise.resolve(w):U(v,V)).catch(T=>ze(T)?ze(T,2)?T:tt(T):ne(T,v,V)).then(T=>{if(T){if(ze(T,2))return N(re({replace:h},g(T.to),{state:typeof T.to=="object"?re({},se,T.to.state):se,force:d}),H||v)}else T=L(v,V,!0,h,se);return z(v,V,T),T})}function x(C,H){const I=R(C,H);return I?Promise.reject(I):Promise.resolve()}function j(C){const H=Nt.values().next().value;return H&&typeof H.runWithContext=="function"?H.runWithContext(C):C()}function U(C,H){let I;const[V,se,d]=bd(C,H);I=js(V.reverse(),"beforeRouteLeave",C,H);for(const m of V)m.leaveGuards.forEach(v=>{I.push(ct(v,C,H))});const h=x.bind(null,C,H);return I.push(h),be(I).then(()=>{I=[];for(const m of o.list())I.push(ct(m,C,H));return I.push(h),be(I)}).then(()=>{I=js(se,"beforeRouteUpdate",C,H);for(const m of se)m.updateGuards.forEach(v=>{I.push(ct(v,C,H))});return I.push(h),be(I)}).then(()=>{I=[];for(const m of C.matched)if(m.beforeEnter&&!H.matched.includes(m))if($e(m.beforeEnter))for(const v of m.beforeEnter)I.push(ct(v,C,H));else I.push(ct(m.beforeEnter,C,H));return I.push(h),be(I)}).then(()=>(C.matched.forEach(m=>m.enterCallbacks={}),I=js(d,"beforeRouteEnter",C,H),I.push(h),be(I))).then(()=>{I=[];for(const m of i.list())I.push(ct(m,C,H));return I.push(h),be(I)}).catch(m=>ze(m,8)?m:Promise.reject(m))}function z(C,H,I){for(const V of l.list())j(()=>V(C,H,I))}function L(C,H,I,V,se){const d=R(C,H);if(d)return d;const 
h=H===ot,m=Dt?history.state:{};I&&(V||h?r.replace(C.fullPath,re({scroll:h&&m&&m.scroll},se)):r.push(C.fullPath,se)),c.value=C,je(C,H,I,h),tt()}let Y;function $(){Y||(Y=r.listen((C,H,I)=>{if(!Bn.listening)return;const V=b(C),se=B(V);if(se){N(re(se,{replace:!0}),V).catch(mn);return}u=V;const d=c.value;Dt&&Ma(No(d.fullPath,I.delta),Ss()),U(V,d).catch(h=>ze(h,12)?h:ze(h,2)?(N(h.to,V).then(m=>{ze(m,20)&&!I.delta&&I.type===Pn.pop&&r.go(-1,!1)}).catch(mn),Promise.reject()):(I.delta&&r.go(-I.delta,!1),ne(h,V,d))).then(h=>{h=h||L(V,d,!1),h&&(I.delta&&!ze(h,8)?r.go(-I.delta,!1):I.type===Pn.pop&&ze(h,20)&&r.go(-1,!1)),z(V,d,h)}).catch(mn)}))}let de=rn(),G=rn(),te;function ne(C,H,I){tt(C);const V=G.list();return V.length?V.forEach(se=>se(C,H,I)):console.error(C),Promise.reject(C)}function qe(){return te&&c.value!==ot?Promise.resolve():new Promise((C,H)=>{de.add([C,H])})}function tt(C){return te||(te=!C,$(),de.list().forEach(([H,I])=>C?I(C):H()),de.reset()),C}function je(C,H,I,V){const{scrollBehavior:se}=e;if(!Dt||!se)return Promise.resolve();const d=!I&&Ia(No(C.fullPath,0))||(V||!I)&&history.state&&history.state.scroll||null;return gs().then(()=>se(C,H,d)).then(h=>h&&Oa(h)).catch(h=>ne(h,C,H))}const we=C=>r.go(C);let Ft;const Nt=new Set,Bn={currentRoute:c,listening:!0,addRoute:y,removeRoute:E,hasRoute:k,getRoutes:A,resolve:b,options:e,push:_,replace:S,go:we,back:()=>we(-1),forward:()=>we(1),beforeEach:o.add,beforeResolve:i.add,afterEach:l.add,onError:G.add,isReady:qe,install(C){const H=this;C.component("RouterLink",pd),C.component("RouterView",yd),C.config.globalProperties.$router=H,Object.defineProperty(C.config.globalProperties,"$route",{enumerable:!0,get:()=>ft(c)}),Dt&&!Ft&&c.value===ot&&(Ft=!0,_(r.location).catch(se=>{}));const I={};for(const se in ot)I[se]=Me(()=>c.value[se]);C.provide(Os,H),C.provide(Dl,en(I)),C.provide(ur,c);const V=C.unmount;Nt.add(C),C.unmount=function(){Nt.delete(C),Nt.size<1&&(u=ot,Y&&Y(),Y=null,c.value=ot,Ft=!1,te=!1),V()}}};function be(C){return C.reduce((H,I)=>H.then(()=>j(I)),Promise.resolve())}return Bn}function bd(e,t){const n=[],s=[],r=[],o=Math.max(t.matched.length,e.matched.length);for(let i=0;iQt(u,l))?s.push(l):n.push(l));const c=e.matched[i];c&&(t.matched.find(u=>Qt(u,c))||r.push(c))}return[n,s,r]}function wd(){return De(Os)}export{ft as $,gs as A,dn as B,Ou as C,aa as D,Es as E,ye as F,Cs as G,_r as H,Si as I,hf as J,bu as K,wl as L,Pu as M,_f as N,na as O,Su as P,mu as Q,gt as R,ir as S,$r as T,Ee as U,vd as V,rc as W,br as X,Cd as Y,Ed as Z,pi as _,en as a,ga as a0,wd as a1,ge as a2,nu as a3,su as a4,Ps as a5,Ai as a6,Mi as a7,Ot as a8,El as a9,hi as aa,Au as ab,os as ac,We as b,Me as c,Fn as d,ws as e,xs as f,et as g,pf as h,De as i,Mu as j,On as k,Lr as l,Tu as m,Mn as n,Ln as o,Nr as p,Rr as q,Rt as r,yf as s,Br as t,sc as u,fe as v,at as w,qc as x,Kc as y,tl as z}; diff --git a/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/__init__.py b/spaces/williamcfrancis/Deep-Blind-Motion-Deblurring/sidekick/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/utils/files.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/utils/files.py deleted file mode 100644 index 721da5b415dafb5baa2ddfafde8762c7790e2932..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/utils/files.py +++ /dev/null @@ -1,17 +0,0 @@ -import os - - -def 
get_checkpoint_path_from_dir(checkpoint_dir): - checkpoint_path = None - for file in os.listdir(checkpoint_dir): - if "ckpt" in file: - checkpoint_path = os.path.join(checkpoint_dir, file) - assert checkpoint_path is not None - return checkpoint_path - - -def replace_config_for_testing_data(cfg, testing_data_cfg): - cfg.DATASET.data_roots = testing_data_cfg.DATASET.data_roots - cfg.DATASET.index_roots = testing_data_cfg.DATASET.index_roots - cfg.DATASET.vocab_dir = testing_data_cfg.DATASET.vocab_dir - diff --git a/spaces/xfys/yolov5_tracking/reid_export.py b/spaces/xfys/yolov5_tracking/reid_export.py deleted file mode 100644 index cde0a70ad822b3c93b217df2fea6231fc6366b7c..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/reid_export.py +++ /dev/null @@ -1,314 +0,0 @@ -import argparse - -import os -# limit the number of cpus used by high performance libraries -os.environ["OMP_NUM_THREADS"] = "1" -os.environ["OPENBLAS_NUM_THREADS"] = "1" -os.environ["MKL_NUM_THREADS"] = "1" -os.environ["VECLIB_MAXIMUM_THREADS"] = "1" -os.environ["NUMEXPR_NUM_THREADS"] = "1" - -import sys -import numpy as np -from pathlib import Path -import torch -import time -import platform -import pandas as pd -import subprocess -import torch.backends.cudnn as cudnn -from torch.utils.mobile_optimizer import optimize_for_mobile - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[0] # yolov5 strongsort root directory -WEIGHTS = ROOT / 'weights' - -print(ROOT) - -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -if str(ROOT / 'yolov5') not in sys.path: - sys.path.append(str(ROOT / 'yolov5')) # add yolov5 ROOT to PATH - -ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative - -import logging -from yolov5.utils.torch_utils import select_device -from yolov5.models.common import DetectMultiBackend -from yolov5.utils.general import LOGGER, colorstr, check_requirements, check_version -from trackers.strong_sort.deep.models import build_model -from trackers.strong_sort.deep.reid_model_factory import get_model_name, load_pretrained_weights - - -def file_size(path): - # Return file/dir size (MB) - path = Path(path) - if path.is_file(): - return path.stat().st_size / 1E6 - elif path.is_dir(): - return sum(f.stat().st_size for f in path.glob('**/*') if f.is_file()) / 1E6 - else: - return 0.0 - - -def export_formats(): - # YOLOv5 export formats - x = [ - ['PyTorch', '-', '.pt', True, True], - ['TorchScript', 'torchscript', '.torchscript', True, True], - ['ONNX', 'onnx', '.onnx', True, True], - ['OpenVINO', 'openvino', '_openvino_model', True, False], - ['TensorRT', 'engine', '.engine', False, True], - ['TensorFlow Lite', 'tflite', '.tflite', True, False], - ] - return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU']) - - -def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')): - # YOLOv5 TorchScript model export - try: - LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...') - f = file.with_suffix('.torchscript') - - ts = torch.jit.trace(model, im, strict=False) - if optimize: # https://pytorch.org/tutorials/recipes/mobile_interpreter.html - optimize_for_mobile(ts)._save_for_lite_interpreter(str(f)) - else: - ts.save(str(f)) - - LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') - return f - except Exception as e: - LOGGER.info(f'{prefix} export failure: {e}') - - -def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')): - # ONNX export - try: - 
check_requirements(('onnx',)) - import onnx - - f = file.with_suffix('.onnx') - LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...') - - if dynamic: - dynamic = {'images': {0: 'batch'}} # shape(1,3,640,640) - dynamic['output'] = {0: 'batch'} # shape(1,25200,85) - - torch.onnx.export( - model.cpu() if dynamic else model, # --dynamic only compatible with cpu - im.cpu() if dynamic else im, - f, - verbose=False, - opset_version=opset, - do_constant_folding=True, - input_names=['images'], - output_names=['output'], - dynamic_axes=dynamic or None - ) - # Checks - model_onnx = onnx.load(f) # load onnx model - onnx.checker.check_model(model_onnx) # check onnx model - onnx.save(model_onnx, f) - - # Simplify - if simplify: - try: - cuda = torch.cuda.is_available() - check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1')) - import onnxsim - - LOGGER.info(f'simplifying with onnx-simplifier {onnxsim.__version__}...') - model_onnx, check = onnxsim.simplify(model_onnx) - assert check, 'assert check failed' - onnx.save(model_onnx, f) - except Exception as e: - LOGGER.info(f'simplifier failure: {e}') - LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') - return f - except Exception as e: - LOGGER.info(f'export failure: {e}') - - - -def export_openvino(file, half, prefix=colorstr('OpenVINO:')): - # YOLOv5 OpenVINO export - check_requirements(('openvino-dev',)) # requires openvino-dev: https://pypi.org/project/openvino-dev/ - import openvino.inference_engine as ie - try: - LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...') - f = str(file).replace('.pt', f'_openvino_model{os.sep}') - - cmd = f"mo --input_model {file.with_suffix('.onnx')} --output_dir {f} --data_type {'FP16' if half else 'FP32'}" - subprocess.check_output(cmd.split()) # export - except Exception as e: - LOGGER.info(f'export failure: {e}') - LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') - return f - - -def export_tflite(file, half, prefix=colorstr('TFLite:')): - # YOLOv5 OpenVINO export - try: - check_requirements(('openvino2tensorflow', 'tensorflow', 'tensorflow_datasets')) # requires openvino-dev: https://pypi.org/project/openvino-dev/ - import openvino.inference_engine as ie - LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...') - output = Path(str(file).replace(f'_openvino_model{os.sep}', f'_tflite_model{os.sep}')) - modelxml = list(Path(file).glob('*.xml'))[0] - cmd = f"openvino2tensorflow \ - --model_path {modelxml} \ - --model_output_path {output} \ - --output_pb \ - --output_saved_model \ - --output_no_quant_float32_tflite \ - --output_dynamic_range_quant_tflite" - subprocess.check_output(cmd.split()) # export - - LOGGER.info(f'{prefix} export success, results saved in {output} ({file_size(f):.1f} MB)') - return f - except Exception as e: - LOGGER.info(f'\n{prefix} export failure: {e}') - - -def export_engine(model, im, file, half, dynamic, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')): - # YOLOv5 TensorRT export https://developer.nvidia.com/tensorrt - try: - assert im.device.type != 'cpu', 'export running on CPU but must be on GPU, i.e. 
`python export.py --device 0`' - try: - import tensorrt as trt - except Exception: - if platform.system() == 'Linux': - check_requirements(('nvidia-tensorrt',), cmds=('-U --index-url https://pypi.ngc.nvidia.com',)) - import tensorrt as trt - - if trt.__version__[0] == '7': # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012 - grid = model.model[-1].anchor_grid - model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid] - export_onnx(model, im, file, 12, dynamic, simplify) # opset 12 - model.model[-1].anchor_grid = grid - else: # TensorRT >= 8 - check_version(trt.__version__, '8.0.0', hard=True) # require tensorrt>=8.0.0 - export_onnx(model, im, file, 12, dynamic, simplify) # opset 13 - onnx = file.with_suffix('.onnx') - - LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...') - assert onnx.exists(), f'failed to export ONNX file: {onnx}' - f = file.with_suffix('.engine') # TensorRT engine file - logger = trt.Logger(trt.Logger.INFO) - if verbose: - logger.min_severity = trt.Logger.Severity.VERBOSE - - builder = trt.Builder(logger) - config = builder.create_builder_config() - config.max_workspace_size = workspace * 1 << 30 - # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace << 30) # fix TRT 8.4 deprecation notice - - flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)) - network = builder.create_network(flag) - parser = trt.OnnxParser(network, logger) - if not parser.parse_from_file(str(onnx)): - raise RuntimeError(f'failed to load ONNX file: {onnx}') - - inputs = [network.get_input(i) for i in range(network.num_inputs)] - outputs = [network.get_output(i) for i in range(network.num_outputs)] - LOGGER.info(f'{prefix} Network Description:') - for inp in inputs: - LOGGER.info(f'{prefix}\tinput "{inp.name}" with shape {inp.shape} and dtype {inp.dtype}') - for out in outputs: - LOGGER.info(f'{prefix}\toutput "{out.name}" with shape {out.shape} and dtype {out.dtype}') - - if dynamic: - if im.shape[0] <= 1: - LOGGER.warning(f"{prefix}WARNING: --dynamic model requires maximum --batch-size argument") - profile = builder.create_optimization_profile() - for inp in inputs: - profile.set_shape(inp.name, (1, *im.shape[1:]), (max(1, im.shape[0] // 2), *im.shape[1:]), im.shape) - config.add_optimization_profile(profile) - - LOGGER.info(f'{prefix} building FP{16 if builder.platform_has_fast_fp16 and half else 32} engine in {f}') - if builder.platform_has_fast_fp16 and half: - config.set_flag(trt.BuilderFlag.FP16) - with builder.build_engine(network, config) as engine, open(f, 'wb') as t: - t.write(engine.serialize()) - LOGGER.info(f'{prefix} export success, saved as {f} ({file_size(f):.1f} MB)') - return f - except Exception as e: - LOGGER.info(f'\n{prefix} export failure: {e}') - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser(description="ReID export") - parser.add_argument('--batch-size', type=int, default=1, help='batch size') - parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[256, 128], help='image (h, w)') - parser.add_argument('--device', default='cpu', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile') - parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes') - parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model') - parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version') - parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)') - parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log') - parser.add_argument('--weights', nargs='+', type=str, default=WEIGHTS / 'osnet_x0_25_msmt17.pt', help='model.pt path(s)') - parser.add_argument('--half', action='store_true', help='FP16 half-precision export') - parser.add_argument('--include', - nargs='+', - default=['torchscript'], - help='torchscript, onnx, openvino, engine') - args = parser.parse_args() - - t = time.time() - - include = [x.lower() for x in args.include] # to lowercase - fmts = tuple(export_formats()['Argument'][1:]) # --include arguments - flags = [x in include for x in fmts] - assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}' - jit, onnx, openvino, engine, tflite = flags # export booleans - - args.device = select_device(args.device) - if args.half: - assert args.device.type != 'cpu', '--half only compatible with GPU export, i.e. use --device 0' - assert not args.dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both' - - if type(args.weights) is list: - args.weights = Path(args.weights[0]) - - model = build_model( - get_model_name(args.weights), - num_classes=1, - pretrained=not (args.weights and args.weights.is_file() and args.weights.suffix == '.pt'), - use_gpu=args.device - ).to(args.device) - load_pretrained_weights(model, args.weights) - model.eval() - - if args.optimize: - assert device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. 
use --device cpu' - - im = torch.zeros(args.batch_size, 3, args.imgsz[0], args.imgsz[1]).to(args.device) # image size(1,3,640,480) BCHW iDetection - for _ in range(2): - y = model(im) # dry runs - if args.half: - im, model = im.half(), model.half() # to FP16 - shape = tuple((y[0] if isinstance(y, tuple) else y).shape) # model output shape - LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {args.weights} with output shape {shape} ({file_size(args.weights):.1f} MB)") - - # Exports - f = [''] * len(fmts) # exported filenames - if jit: - f[0] = export_torchscript(model, im, args.weights, args.optimize) # opset 12 - if engine: # TensorRT required before ONNX - f[1] = export_engine(model, im, args.weights, args.half, args.dynamic, args.simplify, args.workspace, args.verbose) - if onnx: # OpenVINO requires ONNX - f[2] = export_onnx(model, im, args.weights, args.opset, args.dynamic, args.simplify) # opset 12 - if openvino: - f[3] = export_openvino(args.weights, args.half) - if tflite: - export_tflite(f, False) - - # Finish - f = [str(x) for x in f if x] # filter out '' and None - if any(f): - LOGGER.info(f'\nExport complete ({time.time() - t:.1f}s)' - f"\nResults saved to {colorstr('bold', args.weights.parent.resolve())}" - f"\nVisualize: https://netron.app") - diff --git a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/util/inference.py b/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/util/inference.py deleted file mode 100644 index 7c9b8a0b382f615bcda0ef8220f79afc0892e641..0000000000000000000000000000000000000000 --- a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/util/inference.py +++ /dev/null @@ -1,257 +0,0 @@ -from typing import Tuple, List - -import re -import cv2 -import numpy as np -import supervision as sv -import torch -from PIL import Image -from torchvision.ops import box_convert - -import groundingdino.datasets.transforms as T -from groundingdino.models import build_model -from groundingdino.util.misc import clean_state_dict -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import get_phrases_from_posmap - -# ---------------------------------------------------------------------------------------------------------------------- -# OLD API -# ---------------------------------------------------------------------------------------------------------------------- - - -def preprocess_caption(caption: str) -> str: - result = caption.lower().strip() - if result.endswith("."): - return result - return result + "." 
- - -def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"): - args = SLConfig.fromfile(model_config_path) - args.device = device - model = build_model(args) - checkpoint = torch.load(model_checkpoint_path, map_location="cpu") - model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) - model.eval() - return model - - -def load_image(image_path: str) -> Tuple[np.array, torch.Tensor]: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_source = Image.open(image_path).convert("RGB") - image = np.asarray(image_source) - image_transformed, _ = transform(image_source, None) - return image, image_transformed - - -def predict( - model, - image: torch.Tensor, - caption: str, - box_threshold: float, - text_threshold: float, - device: str = "cuda" -) -> Tuple[torch.Tensor, torch.Tensor, List[str]]: - caption = preprocess_caption(caption=caption) - - model = model.to(device) - image = image.to(device) - - with torch.no_grad(): - outputs = model(image[None], captions=[caption]) - - prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256) - prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4) - - mask = prediction_logits.max(dim=1)[0] > box_threshold - logits = prediction_logits[mask] # logits.shape = (n, 256) - boxes = prediction_boxes[mask] # boxes.shape = (n, 4) - - tokenizer = model.tokenizer - tokenized = tokenizer(caption) - - phrases = [ - get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '') - for logit - in logits - ] - - return boxes, logits.max(dim=1)[0], phrases - - -def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray: - h, w, _ = image_source.shape - boxes = boxes * torch.Tensor([w, h, w, h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - detections = sv.Detections(xyxy=xyxy) - - labels = [ - f"{phrase} {logit:.2f}" - for phrase, logit - in zip(phrases, logits) - ] - - box_annotator = sv.BoxAnnotator() - annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR) - annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels) - return annotated_frame - - -# ---------------------------------------------------------------------------------------------------------------------- -# NEW API -# ---------------------------------------------------------------------------------------------------------------------- - - -class Model: - - def __init__( - self, - model_config_path: str, - model_checkpoint_path: str, - device: str = "cuda" - ): - self.model = load_model( - model_config_path=model_config_path, - model_checkpoint_path=model_checkpoint_path, - device=device - ).to(device) - self.device = device - - def predict_with_caption( - self, - image: np.ndarray, - caption: str, - box_threshold: float = 0.35, - text_threshold: float = 0.25 - ) -> Tuple[sv.Detections, List[str]]: - """ - import cv2 - - image = cv2.imread(IMAGE_PATH) - - model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH) - detections, labels = model.predict_with_caption( - image=image, - caption=caption, - box_threshold=BOX_THRESHOLD, - text_threshold=TEXT_THRESHOLD - ) - - import supervision as sv - - box_annotator = sv.BoxAnnotator() - annotated_image = box_annotator.annotate(scene=image, 
detections=detections, labels=labels) - """ - processed_image = Model.preprocess_image(image_bgr=image).to(self.device) - boxes, logits, phrases = predict( - model=self.model, - image=processed_image, - caption=caption, - box_threshold=box_threshold, - text_threshold=text_threshold, - device=self.device) - source_h, source_w, _ = image.shape - detections = Model.post_process_result( - source_h=source_h, - source_w=source_w, - boxes=boxes, - logits=logits) - return detections, phrases - - def predict_with_classes( - self, - image: np.ndarray, - classes: List[str], - box_threshold: float, - text_threshold: float - ) -> sv.Detections: - """ - import cv2 - - image = cv2.imread(IMAGE_PATH) - - model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH) - detections = model.predict_with_classes( - image=image, - classes=CLASSES, - box_threshold=BOX_THRESHOLD, - text_threshold=TEXT_THRESHOLD - ) - - - import supervision as sv - - box_annotator = sv.BoxAnnotator() - annotated_image = box_annotator.annotate(scene=image, detections=detections) - """ - caption = ". ".join(classes) - processed_image = Model.preprocess_image(image_bgr=image).to(self.device) - boxes, logits, phrases = predict( - model=self.model, - image=processed_image, - caption=caption, - box_threshold=box_threshold, - text_threshold=text_threshold, - device=self.device) - source_h, source_w, _ = image.shape - detections = Model.post_process_result( - source_h=source_h, - source_w=source_w, - boxes=boxes, - logits=logits) - class_id = Model.phrases2classes(phrases=phrases, classes=classes) - detections.class_id = class_id - return detections - - @staticmethod - def preprocess_image(image_bgr: np.ndarray) -> torch.Tensor: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_pillow = Image.fromarray(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)) - image_transformed, _ = transform(image_pillow, None) - return image_transformed - - @staticmethod - def post_process_result( - source_h: int, - source_w: int, - boxes: torch.Tensor, - logits: torch.Tensor - ) -> sv.Detections: - boxes = boxes * torch.Tensor([source_w, source_h, source_w, source_h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - confidence = logits.numpy() - return sv.Detections(xyxy=xyxy, confidence=confidence) - - @staticmethod - def phrases2classes(phrases: List[str], classes: List[str]) -> np.ndarray: - class_ids = [] - for phrase in phrases: - try: - # class_ids.append(classes.index(phrase)) - class_ids.append(Model.find_index(phrase, classes)) - except ValueError: - class_ids.append(None) - return np.array(class_ids) - - @staticmethod - def find_index(string, lst): - # if meet string like "lake river" will only keep "lake" - # this is an hack implementation for visualization which will be updated in the future - string = string.lower().split()[0] - for i, s in enumerate(lst): - if string in s.lower(): - return i - print("There's a wrong phrase happen, this is because of our post-process merged wrong tokens, which will be modified in the future. 
We will assign it with a random label at this time.") - return 0 \ No newline at end of file diff --git a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/unittest.py b/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/unittest.py deleted file mode 100644 index 998223a0e0242dc4a5b2fcd74af79dc7232794da..0000000000000000000000000000000000000000 --- a/spaces/xp3857/Image_Restoration_Colorization/Face_Enhancement/models/networks/Synchronized-BatchNorm-PyTorch/sync_batchnorm/unittest.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- -# File : unittest.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import unittest -import torch - - -class TorchTestCase(unittest.TestCase): - def assertTensorClose(self, x, y): - adiff = float((x - y).abs().max()) - if (y == 0).all(): - rdiff = 'NaN' - else: - rdiff = float((adiff / y).abs().max()) - - message = ( - 'Tensor close check failed\n' - 'adiff={}\n' - 'rdiff={}\n' - ).format(adiff, rdiff) - self.assertTrue(torch.allclose(x, y, atol=1e-5, rtol=1e-3), message) - diff --git a/spaces/xswu/HPSv2/src/open_clip/push_to_hf_hub.py b/spaces/xswu/HPSv2/src/open_clip/push_to_hf_hub.py deleted file mode 100644 index 23c0631c81dcb43829b7374fac09406ecefcb436..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/src/open_clip/push_to_hf_hub.py +++ /dev/null @@ -1,243 +0,0 @@ -import argparse -import json -from pathlib import Path -from tempfile import TemporaryDirectory -from typing import Optional, Tuple - -import torch - -try: - from huggingface_hub import ( - create_repo, - get_hf_file_metadata, - hf_hub_download, - hf_hub_url, - repo_type_and_id_from_hf_id, - upload_folder, - ) - from huggingface_hub.utils import EntryNotFoundError - _has_hf_hub = True -except ImportError: - _has_hf_hub = False - -from .factory import create_model_from_pretrained, get_model_config, get_tokenizer -from .tokenizer import HFTokenizer - - -def save_config_for_hf( - model, - config_path: str, - model_config: Optional[dict] -): - preprocess_cfg = { - 'mean': model.visual.image_mean, - 'std': model.visual.image_std, - } - hf_config = { - 'model_cfg': model_config, - 'preprocess_cfg': preprocess_cfg, - } - - with config_path.open('w') as f: - json.dump(hf_config, f, indent=2) - - -def save_for_hf( - model, - tokenizer: HFTokenizer, - model_config: dict, - save_directory: str, - weights_filename='open_clip_pytorch_model.bin', - config_filename='open_clip_config.json', -): - save_directory = Path(save_directory) - save_directory.mkdir(exist_ok=True, parents=True) - - weights_path = save_directory / weights_filename - torch.save(model.state_dict(), weights_path) - - tokenizer.save_pretrained(save_directory) - - config_path = save_directory / config_filename - save_config_for_hf(model, config_path, model_config=model_config) - - -def push_to_hf_hub( - model, - tokenizer, - model_config: Optional[dict], - repo_id: str, - commit_message: str = 'Add model', - token: Optional[str] = None, - revision: Optional[str] = None, - private: bool = False, - create_pr: bool = False, - model_card: Optional[dict] = None, -): - if not isinstance(tokenizer, HFTokenizer): - # default CLIP tokenizers use https://huggingface.co/openai/clip-vit-large-patch14 - 
tokenizer = HFTokenizer('openai/clip-vit-large-patch14') - - # Create repo if it doesn't exist yet - repo_url = create_repo(repo_id, token=token, private=private, exist_ok=True) - - # Infer complete repo_id from repo_url - # Can be different from the input `repo_id` if repo_owner was implicit - _, repo_owner, repo_name = repo_type_and_id_from_hf_id(repo_url) - repo_id = f"{repo_owner}/{repo_name}" - - # Check if README file already exist in repo - try: - get_hf_file_metadata(hf_hub_url(repo_id=repo_id, filename="README.md", revision=revision)) - has_readme = True - except EntryNotFoundError: - has_readme = False - - # Dump model and push to Hub - with TemporaryDirectory() as tmpdir: - # Save model weights and config. - save_for_hf( - model, - tokenizer=tokenizer, - model_config=model_config, - save_directory=tmpdir, - ) - - # Add readme if it does not exist - if not has_readme: - model_card = model_card or {} - model_name = repo_id.split('/')[-1] - readme_path = Path(tmpdir) / "README.md" - readme_text = generate_readme(model_card, model_name) - readme_path.write_text(readme_text) - - # Upload model and return - return upload_folder( - repo_id=repo_id, - folder_path=tmpdir, - revision=revision, - create_pr=create_pr, - commit_message=commit_message, - ) - - -def push_pretrained_to_hf_hub( - model_name, - pretrained: str, - repo_id: str, - image_mean: Optional[Tuple[float, ...]] = None, - image_std: Optional[Tuple[float, ...]] = None, - commit_message: str = 'Add model', - token: Optional[str] = None, - revision: Optional[str] = None, - private: bool = False, - create_pr: bool = False, - model_card: Optional[dict] = None, -): - model, preprocess_eval = create_model_from_pretrained( - model_name, - pretrained=pretrained, - image_mean=image_mean, - image_std=image_std, - ) - - model_config = get_model_config(model_name) - assert model_config - - tokenizer = get_tokenizer(model_name) - - push_to_hf_hub( - model=model, - tokenizer=tokenizer, - model_config=model_config, - repo_id=repo_id, - commit_message=commit_message, - token=token, - revision=revision, - private=private, - create_pr=create_pr, - model_card=model_card, - ) - - -def generate_readme(model_card: dict, model_name: str): - readme_text = "---\n" - readme_text += "tags:\n- zero-shot-image-classification\n- clip\n" - readme_text += "library_tag: open_clip\n" - readme_text += f"license: {model_card.get('license', 'mit')}\n" - if 'details' in model_card and 'Dataset' in model_card['details']: - readme_text += 'datasets:\n' - readme_text += f"- {model_card['details']['Dataset'].lower()}\n" - readme_text += "---\n" - readme_text += f"# Model card for {model_name}\n" - if 'description' in model_card: - readme_text += f"\n{model_card['description']}\n" - if 'details' in model_card: - readme_text += f"\n## Model Details\n" - for k, v in model_card['details'].items(): - if isinstance(v, (list, tuple)): - readme_text += f"- **{k}:**\n" - for vi in v: - readme_text += f" - {vi}\n" - elif isinstance(v, dict): - readme_text += f"- **{k}:**\n" - for ki, vi in v.items(): - readme_text += f" - {ki}: {vi}\n" - else: - readme_text += f"- **{k}:** {v}\n" - if 'usage' in model_card: - readme_text += f"\n## Model Usage\n" - readme_text += model_card['usage'] - readme_text += '\n' - - if 'comparison' in model_card: - readme_text += f"\n## Model Comparison\n" - readme_text += model_card['comparison'] - readme_text += '\n' - - if 'citation' in model_card: - readme_text += f"\n## Citation\n" - if not isinstance(model_card['citation'], (list, tuple)): - 
citations = [model_card['citation']] - else: - citations = model_card['citation'] - for c in citations: - readme_text += f"```bibtex\n{c}\n```\n" - - return readme_text - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Push to Hugging Face Hub") - parser.add_argument( - "--model", type=str, help="Name of the model to use.", - ) - parser.add_argument( - "--pretrained", type=str, - help="Use a pretrained CLIP model weights with the specified tag or file path.", - ) - parser.add_argument( - "--repo-id", type=str, - help="Destination HF Hub repo-id ie 'organization/model_id'.", - ) - parser.add_argument( - '--image-mean', type=float, nargs='+', default=None, metavar='MEAN', - help='Override default image mean value of dataset') - parser.add_argument( - '--image-std', type=float, nargs='+', default=None, metavar='STD', - help='Override default image std deviation of of dataset') - args = parser.parse_args() - - print(f'Saving model {args.model} with pretrained weights {args.pretrained} to Hugging Face Hub at {args.repo_id}') - - # FIXME add support to pass model_card json / template from file via cmd line - - push_pretrained_to_hf_hub( - args.model, - args.pretrained, - args.repo_id, - image_mean=args.image_mean, # override image mean/std if trained w/ non defaults - image_std=args.image_std, - ) - - print(f'{args.model} saved.') diff --git a/spaces/xswu/HPSv2/tests/test_num_shards.py b/spaces/xswu/HPSv2/tests/test_num_shards.py deleted file mode 100644 index 70ca8feccd6ff5be4b04a5d9da7b47ab99e36fa3..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/tests/test_num_shards.py +++ /dev/null @@ -1,20 +0,0 @@ -import pytest - -from training.data import get_dataset_size - -@pytest.mark.parametrize( - "shards,expected_size", - [ - ('/path/to/shard.tar', 1), - ('/path/to/shard_{000..000}.tar', 1), - ('/path/to/shard_{000..009}.tar', 10), - ('/path/to/shard_{000..009}_{000..009}.tar', 100), - ('/path/to/shard.tar::/path/to/other_shard_{000..009}.tar', 11), - ('/path/to/shard_{000..009}.tar::/path/to/other_shard_{000..009}.tar', 20), - (['/path/to/shard.tar'], 1), - (['/path/to/shard.tar', '/path/to/other_shard.tar'], 2), - ] -) -def test_num_shards(shards, expected_size): - _, size = get_dataset_size(shards) - assert size == expected_size, f'Expected {expected_size} for {shards} but found {size} instead.' 
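The parametrized cases above also serve as a compact reference for the shard-spec syntax that `get_dataset_size` accepts: `{000..009}` brace ranges expand into multiple shards, `::` concatenates independent specs, and a Python list is counted element by element. A rough sketch of that counting logic, assuming the third-party `braceexpand` package (this is only an illustration, not the project's actual `training.data` implementation):

```python
# Illustrative shard counter for webdataset-style specs.
# Assumes the `braceexpand` package; not the project's own get_dataset_size.
from braceexpand import braceexpand

def count_shards(shards):
    if isinstance(shards, (list, tuple)):
        return sum(count_shards(s) for s in shards)
    # '::' joins independent shard lists; each piece may contain {a..b} ranges
    return sum(len(list(braceexpand(piece))) for piece in shards.split("::"))

assert count_shards('/path/to/shard_{000..009}.tar') == 10
assert count_shards('/path/to/shard.tar::/path/to/other_shard_{000..009}.tar') == 11
assert count_shards(['/path/to/shard.tar', '/path/to/other_shard.tar']) == 2
```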
diff --git a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/hubert_model.py b/spaces/xxbb/VITS-Umamusume-voice-synthesizer/hubert_model.py deleted file mode 100644 index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000 --- a/spaces/xxbb/VITS-Umamusume-voice-synthesizer/hubert_model.py +++ /dev/null @@ -1,221 +0,0 @@ -import copy -from typing import Optional, Tuple -import random - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = F.gelu(self.norm0(self.conv0(x))) - x = F.gelu(self.conv1(x)) - x = F.gelu(self.conv2(x)) - x = F.gelu(self.conv3(x)) - x = F.gelu(self.conv4(x)) - x = F.gelu(self.conv5(x)) - x = F.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return 
x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = F.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. 
- Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/yaoshining/text-generation-webui/docs/Training-LoRAs.md b/spaces/yaoshining/text-generation-webui/docs/Training-LoRAs.md deleted file mode 100644 index 83e6d5a7251eea080cd7dfe8d19a2e42d6d3a822..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/docs/Training-LoRAs.md +++ /dev/null @@ -1,174 +0,0 @@ -## Training Your Own LoRAs - -The WebUI seeks to make training your own LoRAs as easy as possible. It comes down to just a few simple steps: - -### **Step 1**: Make a plan. -- What base model do you want to use? The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferrable, but even then it's best to train exactly on what you plan to use. -- What model format do you want? At time of writing, 8-bit models are most stable, and 4-bit are supported but experimental. In the near future it is likely that 4-bit will be the best option for most users. -- What are you training it on? Do you want it to learn real information, a simple format, ...? - -### **Step 2**: Gather a dataset. -- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options. -- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files). -- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option. - - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it. -- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support. - -### **Step 3**: Do the training. -- **3.1**: Load the WebUI, and your model. - - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage). -- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab. -- **3.3**: Fill in the name of the LoRA, select your dataset in the dataset options. -- **3.4**: Select other parameters to your preference. See [parameters below](#parameters). -- **3.5**: click `Start LoRA Training`, and wait. - - It can take a few hours for a large dataset, or just a few minute if doing a small run. - - You may want to monitor your [loss value](#loss) while it goes. - -### **Step 4**: Evaluate your results. -- Load the LoRA under the Models Tab. -- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab. -- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead. - -### **Step 5**: Re-run if you're unhappy. -- Make sure to unload the LoRA before training it. 
-- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA. - - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder. - - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content). - - This will start Learning Rate and Steps back to the start. If you want to resume as if you were midway through, you can adjust your Learning Rate to the last reported LR in logs and reduce your epochs. -- Or, you can start over entirely if you prefer. -- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate. -- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank. -- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far. - -## Format Files - -If using JSON formatted datasets, they are presumed to be in the following approximate format: - -```json -[ - { - "somekey": "somevalue", - "key2": "value2" - }, - { - // etc - } -] -``` - -Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained. - -For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank. - -A simple format file for Alpaca to be used as a chat bot is: - -```json -{ - "instruction,output": "User: %instruction%\nAssistant: %output%", - "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%" -} -``` - -Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are a simple string that use those keys with `%%`. - -So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`. - -If you have different sets of key inputs, you can make your own format file to match it. This format-file is designed to be as simple as possible to enable easy editing to match your needs. - -## Raw Text File Settings - -When using raw text files as your dataset, the text is automatically split into chunks based on your `Cutoff Length` you get a few basic options to configure them. -- `Overlap Length` is how much to overlap chunks by. Overlapping chunks helps prevent the model from learning strange mid-sentence cuts, and instead learn continual sentences that flow from earlier text. -- `Prefer Newline Cut Length` sets a maximum distance in characters to shift the chunk cut towards newlines. Doing this helps prevent lines from starting or ending mid-sentence, preventing the model from learning to cut off sentences randomly. -- `Hard Cut String` sets a string that indicates there must be a hard cut without overlap. This defaults to `\n\n\n`, meaning 3 newlines. No trained chunk will ever contain this string. This allows you to insert unrelated sections of text in the same text file, but still ensure the model won't be taught to randomly change the subject. - -## Parameters - -The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options. 
- -That said, here's a guide to the most important parameter choices you should consider: - -### VRAM - -- First, you must consider your VRAM availability. - - Generally, under default settings, VRAM usage for training with default parameters is very close to when generating text (with 1000+ tokens of context) (ie, if you can generate text, you can train LoRAs). - - Note: worse by default in the 4-bit monkeypatch currently. Reduce `Micro Batch Size` to `1` to restore this to expectations. - - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange. - - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM. If you can spare some, set your batch size to `1` and see how high you can push your cutoff length. - - If you're low on VRAM, reducing batch size or cutoff length will of course improve that. - - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again. - -### Rank - -- Second, you want to consider the amount of learning you want. - - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great. - - Or, you might be training on project documentation you want the bot to understand and be able to understand questions about, in which case the higher the rank, the better. - - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training. - -### Learning Rate and Epochs - -- Third, how carefully you want it to be learned. - - In other words, how okay or not you are with the model losing unrelated understandings. - - You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs. - - The learning rate controls how much change is made to the model by each token it sees. - - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number. - - Higher values let training run faster, but also are more likely to corrupt prior data in the model. - - You essentially have two variables to balance: the LR, and Epochs. - - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low quality training. - - If you make LR low, set epochs high. Low LR + high epochs = slow but high-quality training. - - The scheduler controls change-over-time as you train - it starts high, and then goes low. This helps balance getting data in, and having decent quality, at the same time. - - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType) - -## Loss - -When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes. - -"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs. 
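To make that calculation concrete, here is a minimal sketch of the usual token-level cross-entropy computation. It is illustrative only (not the WebUI's training code) and assumes PyTorch is available:

```python
# Minimal sketch of how a causal-LM training loss is typically computed
# (illustrative only; assumes PyTorch, not the WebUI's own training loop).
import torch
import torch.nn.functional as F

vocab_size, seq_len = 50, 6
logits = torch.randn(1, seq_len, vocab_size)          # what the model predicts: [batch, seq, vocab]
labels = torch.randint(0, vocab_size, (1, seq_len))   # the text you are training it to output

# Shift so that each position is scored against the token that actually follows it.
pred = logits[:, :-1, :].reshape(-1, vocab_size)
target = labels[:, 1:].reshape(-1)

loss = F.cross_entropy(pred, target)   # this is the "Loss" value reported in the console
print(float(loss))                     # random predictions over 50 tokens land near ln(50), about 3.9
```

The lower this value, the more probability the model assigns to the exact training text, which is why the number falls as training progresses.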
- -In practice, a good LLM should have a very complex variable range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten to how think about anything other than what you trained it. - -So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you. - -Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption). - -## Note: 4-Bit Monkeypatch - -The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects: -- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate. -- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, or etc. It can be helpful to reload base model or restart the WebUI between training/usage to minimize chances of anything going haywire. -- Loading or working with multiple LoRAs at the same time doesn't currently work. -- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support. - -## Legacy notes - -LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570). - -### Using the original alpaca-lora code - -Kept here for reference. The Training tab has much more features than this method. - -``` -conda activate textgen -git clone https://github.com/tloen/alpaca-lora -``` - -Edit those two lines in `alpaca-lora/finetune.py` to use your existing model folder instead of downloading everything from decapoda: - -``` -model = LlamaForCausalLM.from_pretrained( - "models/llama-7b", - load_in_8bit=True, - device_map="auto", -) -tokenizer = LlamaTokenizer.from_pretrained( - "models/llama-7b", add_eos_token=True -) -``` - -Run the script with: - -``` -python finetune.py -``` - -It just works. It runs at 22.32s/it, with 1170 iterations in total, so about 7 hours and a half for training a LoRA. RTX 3090, 18153MiB VRAM used, drawing maximum power (350W, room heater mode). 
diff --git a/spaces/yaoshining/text-generation-webui/extensions/multimodal/multimodal_embedder.py b/spaces/yaoshining/text-generation-webui/extensions/multimodal/multimodal_embedder.py deleted file mode 100644 index 626077cb80987d66af90f390e31aa2f2def76fec..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/extensions/multimodal/multimodal_embedder.py +++ /dev/null @@ -1,178 +0,0 @@ -import base64 -import re -from dataclasses import dataclass -from io import BytesIO -from typing import Any, List, Optional - -import torch -from PIL import Image - -from extensions.multimodal.pipeline_loader import load_pipeline -from modules import shared -from modules.logging_colors import logger -from modules.text_generation import encode, get_max_prompt_length - - -@dataclass -class PromptPart: - text: str - image: Optional[Image.Image] = None - is_image: bool = False - input_ids: Optional[torch.Tensor] = None - embedding: Optional[torch.Tensor] = None - - -class MultimodalEmbedder: - def __init__(self, params: dict): - pipeline, source = load_pipeline(params) - self.pipeline = pipeline - logger.info(f'Multimodal: loaded pipeline {self.pipeline.name()} from pipelines/{source} ({self.pipeline.__class__.__name__})') - - def _split_prompt(self, prompt: str, load_images: bool = False) -> List[PromptPart]: - """Splits a prompt into a list of `PromptParts` to separate image data from text. - It will also append `image_start` and `image_end` before and after the image, and optionally parse and load the images, - if `load_images` is `True`. - """ - parts: List[PromptPart] = [] - curr = 0 - while True: - match = re.search(r'', prompt[curr:]) - if match is None: - # no more image tokens, append the rest of the prompt - if curr > 0: - # add image end token after last image - parts.append(PromptPart(text=self.pipeline.image_end() + prompt[curr:])) - else: - parts.append(PromptPart(text=prompt)) - break - # found an image, append image start token to the text - if match.start() > 0: - parts.append(PromptPart(text=prompt[curr:curr + match.start()] + self.pipeline.image_start())) - else: - parts.append(PromptPart(text=self.pipeline.image_start())) - # append the image - parts.append(PromptPart( - text=match.group(0), - image=Image.open(BytesIO(base64.b64decode(match.group(1)))) if load_images else None, - is_image=True - )) - curr += match.end() - return parts - - def _len_in_tokens_prompt_parts(self, parts: List[PromptPart]) -> int: - """Total length in tokens of all `parts`""" - tokens = 0 - for part in parts: - if part.is_image: - tokens += self.pipeline.num_image_embeds() - elif part.input_ids is not None: - tokens += len(part.input_ids) - else: - tokens += len(encode(part.text)[0]) - return tokens - - def len_in_tokens(self, prompt: str) -> int: - """Total length in tokens for a given text `prompt`""" - parts = self._split_prompt(prompt, False) - return self._len_in_tokens_prompt_parts(parts) - - def _encode_single_text(self, part: PromptPart, add_bos_token: bool) -> PromptPart: - """Encode a single prompt `part` to `input_ids`. 
Returns a `PromptPart`""" - if part.is_image: - placeholders = torch.ones((self.pipeline.num_image_embeds())) * self.pipeline.placeholder_token_id() - part.input_ids = placeholders.to(shared.model.device, dtype=torch.int64) - else: - part.input_ids = encode(part.text, add_bos_token=add_bos_token)[0].to(shared.model.device, dtype=torch.int64) - return part - - @staticmethod - def _num_images(parts: List[PromptPart]) -> int: - count = 0 - for part in parts: - if part.is_image: - count += 1 - return count - - def _encode_text(self, state, parts: List[PromptPart]) -> List[PromptPart]: - """Encode text to token_ids, also truncate the prompt, if necessary. - - The chat/instruct mode should make prompts that fit in get_max_prompt_length, but if max_new_tokens are set - such that the context + min_rows don't fit, we can get a prompt which is too long. - We can't truncate image embeddings, as it leads to broken generation, so remove the images instead and warn the user - """ - encoded: List[PromptPart] = [] - for i, part in enumerate(parts): - encoded.append(self._encode_single_text(part, i == 0 and state['add_bos_token'])) - - # truncation: - max_len = get_max_prompt_length(state) - removed_images = 0 - - # 1. remove entire text/image blocks - while self._len_in_tokens_prompt_parts(encoded[1:]) > max_len: - if encoded[0].is_image: - removed_images += 1 - encoded = encoded[1:] - - # 2. check if the last prompt part doesn't need to get truncated - if self._len_in_tokens_prompt_parts(encoded) > max_len: - if encoded[0].is_image: - # don't truncate image embeddings, just remove the image, otherwise generation will be broken - removed_images += 1 - encoded = encoded[1:] - elif len(encoded) > 1 and encoded[0].text.endswith(self.pipeline.image_start()): - # see if we can keep image_start token - len_image_start = len(encode(self.pipeline.image_start(), add_bos_token=state['add_bos_token'])[0]) - if self._len_in_tokens_prompt_parts(encoded[1:]) + len_image_start > max_len: - # we can't -> remove this text, and the image - encoded = encoded[2:] - removed_images += 1 - else: - # we can -> just truncate the text - trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len - encoded[0].input_ids = encoded[0].input_ids[trunc_len:] - elif len(encoded) > 0: - # only one text left, truncate it normally - trunc_len = self._len_in_tokens_prompt_parts(encoded) - max_len - encoded[0].input_ids = encoded[0].input_ids[trunc_len:] - - # notify user if we truncated an image - if removed_images > 0: - logger.warning(f"Multimodal: removed {removed_images} image(s) from prompt. 
Try decreasing max_new_tokens if generation is broken") - - return encoded - - def _embed(self, parts: List[PromptPart]) -> List[PromptPart]: - # batch images - image_indicies = [i for i, part in enumerate(parts) if part.is_image] - embedded = self.pipeline.embed_images([parts[i].image for i in image_indicies]) - for i, embeds in zip(image_indicies, embedded): - parts[i].embedding = embeds - # embed text - for (i, part) in enumerate(parts): - if not part.is_image: - parts[i].embedding = self.pipeline.embed_tokens(part.input_ids) - return parts - - def _remove_old_images(self, parts: List[PromptPart], params: dict) -> List[PromptPart]: - if params['add_all_images_to_prompt']: - return parts - already_added = False - for i, part in reversed(list(enumerate(parts))): - if part.is_image: - if already_added: - parts[i].embedding = self.pipeline.placeholder_embeddings() - else: - already_added = True - return parts - - def forward(self, prompt: str, state: Any, params: dict): - prompt_parts = self._split_prompt(prompt, True) - prompt_parts = self._encode_text(state, prompt_parts) - prompt_parts = self._embed(prompt_parts) - prompt_parts = self._remove_old_images(prompt_parts, params) - embeds = tuple(part.embedding for part in prompt_parts) - ids = tuple(part.input_ids for part in prompt_parts) - input_embeds = torch.cat(embeds, dim=0) - input_ids = torch.cat(ids, dim=0) - return prompt, input_ids, input_embeds, self._num_images(prompt_parts) diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/diffusion/logger/__init__.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/diffusion/logger/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/diffusion/wavenet.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/diffusion/wavenet.py deleted file mode 100644 index 3d48c7eaaa0e8191b27a5d1890eb657cbcc0d143..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/diffusion/wavenet.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -from math import sqrt - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Mish - - -class Conv1d(torch.nn.Conv1d): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - nn.init.kaiming_normal_(self.weight) - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.residual_channels = residual_channels - self.dilated_conv = nn.Conv1d( - residual_channels, - 2 * residual_channels, - kernel_size=3, - padding=dilation, - dilation=dilation - ) - self.diffusion_projection = nn.Linear(residual_channels, residual_channels) - self.conditioner_projection = nn.Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = nn.Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - - y = self.dilated_conv(y) + conditioner - - 
# Using torch.split instead of torch.chunk to avoid using onnx::Slice - gate, filter = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - y = torch.sigmoid(gate) * torch.tanh(filter) - - y = self.output_projection(y) - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - return (x + residual) / math.sqrt(2.0), skip - - -class WaveNet(nn.Module): - def __init__(self, in_dims=128, n_layers=20, n_chans=384, n_hidden=256): - super().__init__() - self.input_projection = Conv1d(in_dims, n_chans, 1) - self.diffusion_embedding = SinusoidalPosEmb(n_chans) - self.mlp = nn.Sequential( - nn.Linear(n_chans, n_chans * 4), - Mish(), - nn.Linear(n_chans * 4, n_chans) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock( - encoder_hidden=n_hidden, - residual_channels=n_chans, - dilation=1 - ) - for i in range(n_layers) - ]) - self.skip_projection = Conv1d(n_chans, n_chans, 1) - self.output_projection = Conv1d(n_chans, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - """ - :param spec: [B, 1, M, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec.squeeze(1) - x = self.input_projection(x) # [B, residual_channel, T] - - x = F.relu(x) - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) - skip = [] - for layer in self.residual_layers: - x, skip_connection = layer(x, cond, diffusion_step) - skip.append(skip_connection) - - x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, mel_bins, T] - return x[:, None, :, :] diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/regnet.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/regnet.py deleted file mode 100644 index 3533d63385d1324cfc1559eae9576b3fa52585af..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/regnet.py +++ /dev/null @@ -1,452 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Implementation of RegNet models from :paper:`dds` and :paper:`scaling`. - -This code is adapted from https://github.com/facebookresearch/pycls with minimal modifications. -Some code duplication exists between RegNet and ResNets (e.g., ResStem) in order to simplify -model loading. -""" - -import numpy as np -from torch import nn - -from detectron2.layers import CNNBlockBase, ShapeSpec, get_norm - -from .backbone import Backbone - -__all__ = [ - "AnyNet", - "RegNet", - "ResStem", - "SimpleStem", - "VanillaBlock", - "ResBasicBlock", - "ResBottleneckBlock", -] - - -def conv2d(w_in, w_out, k, *, stride=1, groups=1, bias=False): - """Helper for building a conv2d layer.""" - assert k % 2 == 1, "Only odd size kernels supported to avoid padding issues." - s, p, g, b = stride, (k - 1) // 2, groups, bias - return nn.Conv2d(w_in, w_out, k, stride=s, padding=p, groups=g, bias=b) - - -def gap2d(): - """Helper for building a global average pooling layer.""" - return nn.AdaptiveAvgPool2d((1, 1)) - - -def pool2d(k, *, stride=1): - """Helper for building a pool2d layer.""" - assert k % 2 == 1, "Only odd size kernels supported to avoid padding issues." 
- return nn.MaxPool2d(k, stride=stride, padding=(k - 1) // 2) - - -def init_weights(m): - """Performs ResNet-style weight initialization.""" - if isinstance(m, nn.Conv2d): - # Note that there is no bias due to BN - fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels - m.weight.data.normal_(mean=0.0, std=np.sqrt(2.0 / fan_out)) - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1.0) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - m.weight.data.normal_(mean=0.0, std=0.01) - m.bias.data.zero_() - - -class ResStem(CNNBlockBase): - """ResNet stem for ImageNet: 7x7, BN, AF, MaxPool.""" - - def __init__(self, w_in, w_out, norm, activation_class): - super().__init__(w_in, w_out, 4) - self.conv = conv2d(w_in, w_out, 7, stride=2) - self.bn = get_norm(norm, w_out) - self.af = activation_class() - self.pool = pool2d(3, stride=2) - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class SimpleStem(CNNBlockBase): - """Simple stem for ImageNet: 3x3, BN, AF.""" - - def __init__(self, w_in, w_out, norm, activation_class): - super().__init__(w_in, w_out, 2) - self.conv = conv2d(w_in, w_out, 3, stride=2) - self.bn = get_norm(norm, w_out) - self.af = activation_class() - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class SE(nn.Module): - """Squeeze-and-Excitation (SE) block: AvgPool, FC, Act, FC, Sigmoid.""" - - def __init__(self, w_in, w_se, activation_class): - super().__init__() - self.avg_pool = gap2d() - self.f_ex = nn.Sequential( - conv2d(w_in, w_se, 1, bias=True), - activation_class(), - conv2d(w_se, w_in, 1, bias=True), - nn.Sigmoid(), - ) - - def forward(self, x): - return x * self.f_ex(self.avg_pool(x)) - - -class VanillaBlock(CNNBlockBase): - """Vanilla block: [3x3 conv, BN, Relu] x2.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, _params): - super().__init__(w_in, w_out, stride) - self.a = conv2d(w_in, w_out, 3, stride=stride) - self.a_bn = get_norm(norm, w_out) - self.a_af = activation_class() - self.b = conv2d(w_out, w_out, 3) - self.b_bn = get_norm(norm, w_out) - self.b_af = activation_class() - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class BasicTransform(nn.Module): - """Basic transformation: [3x3 conv, BN, Relu] x2.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, _params): - super().__init__() - self.a = conv2d(w_in, w_out, 3, stride=stride) - self.a_bn = get_norm(norm, w_out) - self.a_af = activation_class() - self.b = conv2d(w_out, w_out, 3) - self.b_bn = get_norm(norm, w_out) - self.b_bn.final_bn = True - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class ResBasicBlock(CNNBlockBase): - """Residual basic block: x + f(x), f = basic transform.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, params): - super().__init__(w_in, w_out, stride) - self.proj, self.bn = None, None - if (w_in != w_out) or (stride != 1): - self.proj = conv2d(w_in, w_out, 1, stride=stride) - self.bn = get_norm(norm, w_out) - self.f = BasicTransform(w_in, w_out, stride, norm, activation_class, params) - self.af = activation_class() - - def forward(self, x): - x_p = self.bn(self.proj(x)) if self.proj else x - return self.af(x_p + self.f(x)) - - -class BottleneckTransform(nn.Module): - """Bottleneck transformation: 1x1, 3x3 [+SE], 1x1.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, params): - super().__init__() - w_b = int(round(w_out * 
params["bot_mul"])) - w_se = int(round(w_in * params["se_r"])) - groups = w_b // params["group_w"] - self.a = conv2d(w_in, w_b, 1) - self.a_bn = get_norm(norm, w_b) - self.a_af = activation_class() - self.b = conv2d(w_b, w_b, 3, stride=stride, groups=groups) - self.b_bn = get_norm(norm, w_b) - self.b_af = activation_class() - self.se = SE(w_b, w_se, activation_class) if w_se else None - self.c = conv2d(w_b, w_out, 1) - self.c_bn = get_norm(norm, w_out) - self.c_bn.final_bn = True - - def forward(self, x): - for layer in self.children(): - x = layer(x) - return x - - -class ResBottleneckBlock(CNNBlockBase): - """Residual bottleneck block: x + f(x), f = bottleneck transform.""" - - def __init__(self, w_in, w_out, stride, norm, activation_class, params): - super().__init__(w_in, w_out, stride) - self.proj, self.bn = None, None - if (w_in != w_out) or (stride != 1): - self.proj = conv2d(w_in, w_out, 1, stride=stride) - self.bn = get_norm(norm, w_out) - self.f = BottleneckTransform(w_in, w_out, stride, norm, activation_class, params) - self.af = activation_class() - - def forward(self, x): - x_p = self.bn(self.proj(x)) if self.proj else x - return self.af(x_p + self.f(x)) - - -class AnyStage(nn.Module): - """AnyNet stage (sequence of blocks w/ the same output shape).""" - - def __init__(self, w_in, w_out, stride, d, block_class, norm, activation_class, params): - super().__init__() - for i in range(d): - block = block_class(w_in, w_out, stride, norm, activation_class, params) - self.add_module("b{}".format(i + 1), block) - stride, w_in = 1, w_out - - def forward(self, x): - for block in self.children(): - x = block(x) - return x - - -class AnyNet(Backbone): - """AnyNet model. See :paper:`dds`.""" - - def __init__( - self, - *, - stem_class, - stem_width, - block_class, - depths, - widths, - group_widths, - strides, - bottleneck_ratios, - se_ratio, - activation_class, - freeze_at=0, - norm="BN", - out_features=None, - ): - """ - Args: - stem_class (callable): A callable taking 4 arguments (channels in, channels out, - normalization, callable returning an activation function) that returns another - callable implementing the stem module. - stem_width (int): The number of output channels that the stem produces. - block_class (callable): A callable taking 6 arguments (channels in, channels out, - stride, normalization, callable returning an activation function, a dict of - block-specific parameters) that returns another callable implementing the repeated - block module. - depths (list[int]): Number of blocks in each stage. - widths (list[int]): For each stage, the number of output channels of each block. - group_widths (list[int]): For each stage, the number of channels per group in group - convolution, if the block uses group convolution. - strides (list[int]): The stride that each network stage applies to its input. - bottleneck_ratios (list[float]): For each stage, the ratio of the number of bottleneck - channels to the number of block input channels (or, equivalently, output channels), - if the block uses a bottleneck. - se_ratio (float): The ratio of the number of channels used inside the squeeze-excitation - (SE) module to it number of input channels, if SE the block uses SE. - activation_class (callable): A callable taking no arguments that returns another - callable implementing an activation function. - freeze_at (int): The number of stages at the beginning to freeze. - see :meth:`freeze` for detailed explanation. - norm (str or callable): normalization for all conv layers. 
- See :func:`layers.get_norm` for supported format. - out_features (list[str]): name of the layers whose outputs should - be returned in forward. RegNet's use "stem" and "s1", "s2", etc for the stages after - the stem. If None, will return the output of the last layer. - """ - super().__init__() - self.stem = stem_class(3, stem_width, norm, activation_class) - - current_stride = self.stem.stride - self._out_feature_strides = {"stem": current_stride} - self._out_feature_channels = {"stem": self.stem.out_channels} - self.stages_and_names = [] - prev_w = stem_width - - for i, (d, w, s, b, g) in enumerate( - zip(depths, widths, strides, bottleneck_ratios, group_widths) - ): - params = {"bot_mul": b, "group_w": g, "se_r": se_ratio} - stage = AnyStage(prev_w, w, s, d, block_class, norm, activation_class, params) - name = "s{}".format(i + 1) - self.add_module(name, stage) - self.stages_and_names.append((stage, name)) - self._out_feature_strides[name] = current_stride = int( - current_stride * np.prod([k.stride for k in stage.children()]) - ) - self._out_feature_channels[name] = list(stage.children())[-1].out_channels - prev_w = w - - self.apply(init_weights) - - if out_features is None: - out_features = [name] - self._out_features = out_features - assert len(self._out_features) - children = [x[0] for x in self.named_children()] - for out_feature in self._out_features: - assert out_feature in children, "Available children: {} does not include {}".format( - ", ".join(children), out_feature - ) - self.freeze(freeze_at) - - def forward(self, x): - """ - Args: - x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``. - - Returns: - dict[str->Tensor]: names and the corresponding features - """ - assert x.dim() == 4, f"Model takes an input of shape (N, C, H, W). Got {x.shape} instead!" - outputs = {} - x = self.stem(x) - if "stem" in self._out_features: - outputs["stem"] = x - for stage, name in self.stages_and_names: - x = stage(x) - if name in self._out_features: - outputs[name] = x - return outputs - - def output_shape(self): - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } - - def freeze(self, freeze_at=0): - """ - Freeze the first several stages of the model. Commonly used in fine-tuning. - - Layers that produce the same feature map spatial size are defined as one - "stage" by :paper:`FPN`. - - Args: - freeze_at (int): number of stages to freeze. - `1` means freezing the stem. `2` means freezing the stem and - one residual stage, etc. 
- - Returns: - nn.Module: this model itself - """ - if freeze_at >= 1: - self.stem.freeze() - for idx, (stage, _) in enumerate(self.stages_and_names, start=2): - if freeze_at >= idx: - for block in stage.children(): - block.freeze() - return self - - -def adjust_block_compatibility(ws, bs, gs): - """Adjusts the compatibility of widths, bottlenecks, and groups.""" - assert len(ws) == len(bs) == len(gs) - assert all(w > 0 and b > 0 and g > 0 for w, b, g in zip(ws, bs, gs)) - vs = [int(max(1, w * b)) for w, b in zip(ws, bs)] - gs = [int(min(g, v)) for g, v in zip(gs, vs)] - ms = [np.lcm(g, b) if b > 1 else g for g, b in zip(gs, bs)] - vs = [max(m, int(round(v / m) * m)) for v, m in zip(vs, ms)] - ws = [int(v / b) for v, b in zip(vs, bs)] - assert all(w * b % g == 0 for w, b, g in zip(ws, bs, gs)) - return ws, bs, gs - - -def generate_regnet_parameters(w_a, w_0, w_m, d, q=8): - """Generates per stage widths and depths from RegNet parameters.""" - assert w_a >= 0 and w_0 > 0 and w_m > 1 and w_0 % q == 0 - # Generate continuous per-block ws - ws_cont = np.arange(d) * w_a + w_0 - # Generate quantized per-block ws - ks = np.round(np.log(ws_cont / w_0) / np.log(w_m)) - ws_all = w_0 * np.power(w_m, ks) - ws_all = np.round(np.divide(ws_all, q)).astype(int) * q - # Generate per stage ws and ds (assumes ws_all are sorted) - ws, ds = np.unique(ws_all, return_counts=True) - # Compute number of actual stages and total possible stages - num_stages, total_stages = len(ws), ks.max() + 1 - # Convert numpy arrays to lists and return - ws, ds, ws_all, ws_cont = (x.tolist() for x in (ws, ds, ws_all, ws_cont)) - return ws, ds, num_stages, total_stages, ws_all, ws_cont - - -class RegNet(AnyNet): - """RegNet model. See :paper:`dds`.""" - - def __init__( - self, - *, - stem_class, - stem_width, - block_class, - depth, - w_a, - w_0, - w_m, - group_width, - stride=2, - bottleneck_ratio=1.0, - se_ratio=0.0, - activation_class=None, - freeze_at=0, - norm="BN", - out_features=None, - ): - """ - Build a RegNet from the parameterization described in :paper:`dds` Section 3.3. - - Args: - See :class:`AnyNet` for arguments that are not listed here. - depth (int): Total number of blocks in the RegNet. - w_a (float): Factor by which block width would increase prior to quantizing block widths - by stage. See :paper:`dds` Section 3.3. - w_0 (int): Initial block width. See :paper:`dds` Section 3.3. - w_m (float): Parameter controlling block width quantization. - See :paper:`dds` Section 3.3. - group_width (int): Number of channels per group in group convolution, if the block uses - group convolution. - bottleneck_ratio (float): The ratio of the number of bottleneck channels to the number - of block input channels (or, equivalently, output channels), if the block uses a - bottleneck. - stride (int): The stride that each network stage applies to its input. 
- """ - ws, ds = generate_regnet_parameters(w_a, w_0, w_m, depth)[0:2] - ss = [stride for _ in ws] - bs = [bottleneck_ratio for _ in ws] - gs = [group_width for _ in ws] - ws, bs, gs = adjust_block_compatibility(ws, bs, gs) - - def default_activation_class(): - return nn.ReLU(inplace=True) - - super().__init__( - stem_class=stem_class, - stem_width=stem_width, - block_class=block_class, - depths=ds, - widths=ws, - strides=ss, - group_widths=gs, - bottleneck_ratios=bs, - se_ratio=se_ratio, - activation_class=default_activation_class - if activation_class is None - else activation_class, - freeze_at=freeze_at, - norm=norm, - out_features=out_features, - ) diff --git a/spaces/yotamsapi/face-swap/app.py b/spaces/yotamsapi/face-swap/app.py deleted file mode 100644 index 97414a0de252b08e14b99b108a73c67083254fac..0000000000000000000000000000000000000000 --- a/spaces/yotamsapi/face-swap/app.py +++ /dev/null @@ -1,215 +0,0 @@ -import gradio -from huggingface_hub import Repository -import os - -from utils.utils import norm_crop, estimate_norm, inverse_estimate_norm, transform_landmark_points, get_lm -from networks.layers import AdaIN, AdaptiveAttention -from tensorflow_addons.layers import InstanceNormalization -import numpy as np -import cv2 -from scipy.ndimage import gaussian_filter - -from tensorflow.keras.models import load_model -from options.swap_options import SwapOptions - -# . -# token = os.environ['model_fetch'] - -opt = SwapOptions().parse() -token = os.environ['token'] - -retina_repo = Repository(local_dir="retina_models", clone_from="felixrosberg/RetinaFace") - -from retinaface.models import * - -RetinaFace = load_model("retina_models/RetinaFace-Res50.h5", - custom_objects={"FPN": FPN, - "SSH": SSH, - "BboxHead": BboxHead, - "LandmarkHead": LandmarkHead, - "ClassHead": ClassHead} - ) - -arc_repo = Repository(local_dir="arcface_model", clone_from="felixrosberg/ArcFace") -ArcFace = load_model("arcface_model/ArcFace-Res50.h5") -ArcFaceE = load_model("arcface_model/ArcFacePerceptual-Res50.h5") - -g_repo = Repository(local_dir="g_model_c_hq", clone_from="felixrosberg/FaceDancer",use_auth_token=token) -G = load_model("g_model_c_hq/FaceDancer_config_c_HQ.h5", custom_objects={"AdaIN": AdaIN, - "AdaptiveAttention": AdaptiveAttention, - "InstanceNormalization": InstanceNormalization}) - -# r_repo = Repository(local_dir="reconstruction_attack", clone_from="felixrosberg/reconstruction_attack", -# private=True, use_auth_token=token) -# R = load_model("reconstruction_attack/reconstructor_42.h5", custom_objects={"AdaIN": AdaIN, -# "AdaptiveAttention": AdaptiveAttention, -# "InstanceNormalization": InstanceNormalization}) - -# permuter_repo = Repository(local_dir="identity_permuter", clone_from="felixrosberg/identitypermuter", -# private=True, use_auth_token=token, git_user="felixrosberg") - -# from identity_permuter.id_permuter import identity_permuter - -# IDP = identity_permuter(emb_size=32, min_arg=False) -# IDP.load_weights("identity_permuter/id_permuter.h5") - -blend_mask_base = np.zeros(shape=(256, 256, 1)) -blend_mask_base[80:244, 32:224] = 1 -blend_mask_base = gaussian_filter(blend_mask_base, sigma=7) - - -def run_inference(target, source, slider, adv_slider, settings): - try: - source = np.array(source) - target = np.array(target) - - # Prepare to load video - if "anonymize" not in settings: - source_a = RetinaFace(np.expand_dims(source, axis=0)).numpy()[0] - source_h, source_w, _ = source.shape - source_lm = get_lm(source_a, source_w, source_h) - source_aligned = norm_crop(source, source_lm, 
image_size=256) - source_z = ArcFace.predict(np.expand_dims(tf.image.resize(source_aligned, [112, 112]) / 255.0, axis=0)) - else: - source_z = None - - # read frame - im = target - im_h, im_w, _ = im.shape - im_shape = (im_w, im_h) - - detection_scale = im_w // 640 if im_w > 640 else 1 - - faces = RetinaFace(np.expand_dims(cv2.resize(im, - (im_w // detection_scale, - im_h // detection_scale)), axis=0)).numpy() - - total_img = im / 255.0 - for annotation in faces: - lm_align = np.array([[annotation[4] * im_w, annotation[5] * im_h], - [annotation[6] * im_w, annotation[7] * im_h], - [annotation[8] * im_w, annotation[9] * im_h], - [annotation[10] * im_w, annotation[11] * im_h], - [annotation[12] * im_w, annotation[13] * im_h]], - dtype=np.float32) - - # align the detected face - M, pose_index = estimate_norm(lm_align, 256, "arcface", shrink_factor=1.0) - im_aligned = (cv2.warpAffine(im, M, (256, 256), borderValue=0.0) - 127.5) / 127.5 - - if "adversarial defense" in settings: - eps = adv_slider / 200 - X = tf.convert_to_tensor(np.expand_dims(im_aligned, axis=0)) - with tf.GradientTape() as tape: - tape.watch(X) - - X_z = ArcFaceE(tf.image.resize(X * 0.5 + 0.5, [112, 112])) - output = R([X, X_z]) - - loss = tf.reduce_mean(tf.abs(0 - output)) - - gradient = tf.sign(tape.gradient(loss, X)) - - adv_x = X + eps * gradient - im_aligned = tf.clip_by_value(adv_x, -1, 1)[0] - - if "anonymize" in settings and "reconstruction attack" not in settings: - """source_z = ArcFace.predict(np.expand_dims(tf.image.resize(im_aligned, [112, 112]) / 255.0, axis=0)) - anon_ratio = int(512 * (slider / 100)) - anon_vector = np.ones(shape=(1, 512)) - anon_vector[:, :anon_ratio] = -1 - np.random.shuffle(anon_vector) - source_z *= anon_vector""" - - slider_weight = slider / 100 - - target_z = ArcFace.predict(np.expand_dims(tf.image.resize(im_aligned, [112, 112]) * 0.5 + 0.5, axis=0)) - # source_z = IDP.predict(target_z) - - source_z = slider_weight * source_z + (1 - slider_weight) * target_z - - if "reconstruction attack" in settings: - source_z = ArcFaceE.predict(np.expand_dims(tf.image.resize(im_aligned, [112, 112]) * 0.5 + 0.5, axis=0)) - - # face swap - if "reconstruction attack" not in settings: - changed_face_cage = G.predict([np.expand_dims(im_aligned, axis=0), - source_z]) - changed_face = changed_face_cage[0] * 0.5 + 0.5 - - # get inverse transformation landmarks - transformed_lmk = transform_landmark_points(M, lm_align) - - # warp image back - iM, _ = inverse_estimate_norm(lm_align, transformed_lmk, 256, "arcface", shrink_factor=1.0) - iim_aligned = cv2.warpAffine(changed_face, iM, im_shape, borderValue=0.0) - - # blend swapped face with target image - blend_mask = cv2.warpAffine(blend_mask_base, iM, im_shape, borderValue=0.0) - blend_mask = np.expand_dims(blend_mask, axis=-1) - total_img = (iim_aligned * blend_mask + total_img * (1 - blend_mask)) - else: - changed_face_cage = R.predict([np.expand_dims(im_aligned, axis=0), - source_z]) - changed_face = changed_face_cage[0] * 0.5 + 0.5 - - # get inverse transformation landmarks - transformed_lmk = transform_landmark_points(M, lm_align) - - # warp image back - iM, _ = inverse_estimate_norm(lm_align, transformed_lmk, 256, "arcface", shrink_factor=1.0) - iim_aligned = cv2.warpAffine(changed_face, iM, im_shape, borderValue=0.0) - - # blend swapped face with target image - blend_mask = cv2.warpAffine(blend_mask_base, iM, im_shape, borderValue=0.0) - blend_mask = np.expand_dims(blend_mask, axis=-1) - total_img = (iim_aligned * blend_mask + total_img * (1 - blend_mask)) 
- - if "compare" in settings:
- total_img = np.concatenate((im / 255.0, total_img), axis=1)
- 
- total_img = np.clip(total_img, 0, 1)
- total_img *= 255.0
- total_img = total_img.astype('uint8')
- 
- return total_img
- except Exception as e:
- print(e)
- return None
- 
- 
-description = "Performs subject-agnostic identity transfer from a source face to all target faces. \n\n" \
- "Implementation and demo of FaceDancer, accepted to WACV 2023. \n\n" \
- "Pre-print: https://arxiv.org/abs/2210.10473 \n\n" \
- "Code: https://github.com/felixrosberg/FaceDancer \n\n" \
- "\n\n" \
- "Options:\n\n" \
- "-Compare returns the target image concatenated with the results.\n\n" \
- "-Anonymize will ignore the source image and perform an identity permutation of target faces.\n\n" \
- "-Reconstruction attack will attempt to invert the face swap or the anonymization.\n\n" \
- "-Adversarial defense will add a permutation noise that disrupts the reconstruction attack.\n\n" \
- "NOTE: There are no guarantees with the anonymization process currently.\n\n" \
- "NOTE: source images with very high resolution may not work properly!"
-examples = [["assets/rick.jpg", "assets/musk.jpg", 100, 10, ["compare"]],
- ["assets/musk.jpg", "assets/musk.jpg", 100, 10, ["anonymize"]]]
-article = """
-This demo is based on recent research from my Ph.D. work. Results are expected to be published in the coming months.
-"""
- 
-iface = gradio.Interface(run_inference,
- [gradio.Image(shape=None, type="pil", label='Target'),
- gradio.Image(shape=None, type="pil", label='Source'),
- gradio.Slider(0, 100, default=100, label="Anonymization ratio (%)"),
- gradio.Slider(0, 100, default=100, label="Adversarial defense ratio (%)"),
- gradio.CheckboxGroup(["compare",
- "anonymize",
- "reconstruction attack",
- "adversarial defense"],
- label='Options')],
- "image",
- title="Face Swap",
- description=description,
- examples=examples,
- article=article,
- layout="vertical")
-iface.launch() diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/placeholder.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/placeholder.js deleted file mode 100644 index 45730a5106e12f9dd5f95fdc40c38689a7069d09..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/placeholder.js +++ /dev/null @@ -1,33 +0,0 @@ -let Selector = require('../selector')
- 
-class Placeholder extends Selector {
-  /**
-   * Add old mozilla to possible prefixes
-   */
-  possible() {
-    return super.possible().concat(['-moz- old', '-ms- old'])
-  }
- 
-  /**
-   * Return different selectors depending on prefix
-   */
-  prefixed(prefix) {
-    if (prefix === '-webkit-') {
-      return '::-webkit-input-placeholder'
-    }
-    if (prefix === '-ms-') {
-      return '::-ms-input-placeholder'
-    }
-    if (prefix === '-ms- old') {
-      return ':-ms-input-placeholder'
-    }
-    if (prefix === '-moz- old') {
-      return ':-moz-placeholder'
-    }
-    return `::${prefix}placeholder`
-  }
-}
- 
-Placeholder.names = ['::placeholder']
- 
-module.exports = Placeholder diff --git a/spaces/ysharma/Gradio_Client_Chains/README.md b/spaces/ysharma/Gradio_Client_Chains/README.md deleted file mode 100644 index 91b92ef06a6eef3bde593ec30aea35890875e627..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Gradio_Client_Chains/README.md +++ /dev/null @@ -1,13 +0,0 @@ ----
-title: Gradio Client Chains
-emoji: 📚
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: mit
----
- 
-Check out the 
configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysharma/ernie_vilg_english/README.md b/spaces/ysharma/ernie_vilg_english/README.md deleted file mode 100644 index f07f2db021da1865d11add7623780761a94f7470..0000000000000000000000000000000000000000 --- a/spaces/ysharma/ernie_vilg_english/README.md +++ /dev/null @@ -1,13 +0,0 @@ ----
-title: Ernie Vilg English
-emoji: 🐢
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
-license: gpl
----
- 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yunfei0710/gpt-academic/colorful.py b/spaces/yunfei0710/gpt-academic/colorful.py deleted file mode 100644 index 9749861f7e59151cda40ec7b7cbc4ea814b88d71..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/colorful.py +++ /dev/null @@ -1,61 +0,0 @@ -import platform
-from sys import stdout
- 
-if platform.system()=="Linux":
-    pass
-else:
-    from colorama import init
-    init()
- 
-# Do you like the elegance of Chinese characters?
-def print红(*kw,**kargs):
-    print("\033[0;31m",*kw,"\033[0m",**kargs)
-def print绿(*kw,**kargs):
-    print("\033[0;32m",*kw,"\033[0m",**kargs)
-def print黄(*kw,**kargs):
-    print("\033[0;33m",*kw,"\033[0m",**kargs)
-def print蓝(*kw,**kargs):
-    print("\033[0;34m",*kw,"\033[0m",**kargs)
-def print紫(*kw,**kargs):
-    print("\033[0;35m",*kw,"\033[0m",**kargs)
-def print靛(*kw,**kargs):
-    print("\033[0;36m",*kw,"\033[0m",**kargs)
- 
-def print亮红(*kw,**kargs):
-    print("\033[1;31m",*kw,"\033[0m",**kargs)
-def print亮绿(*kw,**kargs):
-    print("\033[1;32m",*kw,"\033[0m",**kargs)
-def print亮黄(*kw,**kargs):
-    print("\033[1;33m",*kw,"\033[0m",**kargs)
-def print亮蓝(*kw,**kargs):
-    print("\033[1;34m",*kw,"\033[0m",**kargs)
-def print亮紫(*kw,**kargs):
-    print("\033[1;35m",*kw,"\033[0m",**kargs)
-def print亮靛(*kw,**kargs):
-    print("\033[1;36m",*kw,"\033[0m",**kargs)
- 
-# Do you like the elegance of Chinese characters?
-def sprint红(*kw):
-    return "\033[0;31m"+' '.join(kw)+"\033[0m"
-def sprint绿(*kw):
-    return "\033[0;32m"+' '.join(kw)+"\033[0m"
-def sprint黄(*kw):
-    return "\033[0;33m"+' '.join(kw)+"\033[0m"
-def sprint蓝(*kw):
-    return "\033[0;34m"+' '.join(kw)+"\033[0m"
-def sprint紫(*kw):
-    return "\033[0;35m"+' '.join(kw)+"\033[0m"
-def sprint靛(*kw):
-    return "\033[0;36m"+' '.join(kw)+"\033[0m"
-def sprint亮红(*kw):
-    return "\033[1;31m"+' '.join(kw)+"\033[0m"
-def sprint亮绿(*kw):
-    return "\033[1;32m"+' '.join(kw)+"\033[0m"
-def sprint亮黄(*kw):
-    return "\033[1;33m"+' '.join(kw)+"\033[0m"
-def sprint亮蓝(*kw):
-    return "\033[1;34m"+' '.join(kw)+"\033[0m"
-def sprint亮紫(*kw):
-    return "\033[1;35m"+' '.join(kw)+"\033[0m"
-def sprint亮靛(*kw):
-    return "\033[1;36m"+' '.join(kw)+"\033[0m" diff --git a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/README.md b/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/README.md deleted file mode 100644 index 05e04eec52ca3b55404cdf79663a6000bbd01e07..0000000000000000000000000000000000000000 --- a/spaces/zdxiaoda/sovits-4.0-V1-anime-character-model/README.md +++ /dev/null @@ -1,14 +0,0 @@ ----
-title: sovits-4.0-V1-anime-character-model
-emoji: 😏
-colorFrom: pink
-colorTo: red
-sdk: docker
-pinned: false
-license: mit
----
- 
-## Disclaimer
-The content of this project is for learning and exchange only. It must not be used for commercial purposes or for any other illegal activity or activity that violates public order and good morals; please delete it within 24 hours. If this project infringes on any rights, it will be removed immediately once we are contacted. 
- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/zekewilliams/ControlNet/app.py b/spaces/zekewilliams/ControlNet/app.py deleted file mode 100644 index 3a4b9ff475eebdca2122f71fb82c10287fe8bdcb..0000000000000000000000000000000000000000 --- a/spaces/zekewilliams/ControlNet/app.py +++ /dev/null @@ -1,150 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os -import pathlib -import shlex -import subprocess - -import gradio as gr - -if os.getenv('SYSTEM') == 'spaces': - with open('patch') as f: - subprocess.run(shlex.split('patch -p1'), stdin=f, cwd='ControlNet') - -base_url = 'https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/' -names = [ - 'body_pose_model.pth', - 'dpt_hybrid-midas-501f0c75.pt', - 'hand_pose_model.pth', - 'mlsd_large_512_fp32.pth', - 'mlsd_tiny_512_fp32.pth', - 'network-bsds500.pth', - 'upernet_global_small.pth', -] -for name in names: - command = f'wget https://huggingface.co/lllyasviel/ControlNet/resolve/main/annotator/ckpts/{name} -O {name}' - out_path = pathlib.Path(f'ControlNet/annotator/ckpts/{name}') - if out_path.exists(): - continue - subprocess.run(shlex.split(command), cwd='ControlNet/annotator/ckpts/') - -from app_canny import create_demo as create_demo_canny -from app_depth import create_demo as create_demo_depth -from app_fake_scribble import create_demo as create_demo_fake_scribble -from app_hed import create_demo as create_demo_hed -from app_hough import create_demo as create_demo_hough -from app_normal import create_demo as create_demo_normal -from app_pose import create_demo as create_demo_pose -from app_scribble import create_demo as create_demo_scribble -from app_scribble_interactive import \ - create_demo as create_demo_scribble_interactive -from app_seg import create_demo as create_demo_seg -from model import Model, download_all_controlnet_weights - -DESCRIPTION = '# [ControlNet](https://github.com/lllyasviel/ControlNet)' - -SPACE_ID = os.getenv('SPACE_ID') -ALLOW_CHANGING_BASE_MODEL = SPACE_ID != 'hysts/ControlNet' - -if SPACE_ID is not None: - DESCRIPTION += f'\n

    For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. Duplicate Space

    ' - -MAX_IMAGES = int(os.getenv('MAX_IMAGES', '3')) -DEFAULT_NUM_IMAGES = min(MAX_IMAGES, int(os.getenv('DEFAULT_NUM_IMAGES', '1'))) - -if os.getenv('SYSTEM') == 'spaces': - download_all_controlnet_weights() - -DEFAULT_MODEL_ID = os.getenv('DEFAULT_MODEL_ID', - 'runwayml/stable-diffusion-v1-5') -model = Model(base_model_id=DEFAULT_MODEL_ID, task_name='canny') - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - with gr.Tabs(): - with gr.TabItem('Canny'): - create_demo_canny(model.process_canny, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Hough'): - create_demo_hough(model.process_hough, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('HED'): - create_demo_hed(model.process_hed, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Scribble'): - create_demo_scribble(model.process_scribble, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Scribble Interactive'): - create_demo_scribble_interactive( - model.process_scribble_interactive, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Fake Scribble'): - create_demo_fake_scribble(model.process_fake_scribble, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Pose'): - create_demo_pose(model.process_pose, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Segmentation'): - create_demo_seg(model.process_seg, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Depth'): - create_demo_depth(model.process_depth, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - with gr.TabItem('Normal map'): - create_demo_normal(model.process_normal, - max_images=MAX_IMAGES, - default_num_images=DEFAULT_NUM_IMAGES) - - with gr.Accordion(label='Base model', open=False): - with gr.Row(): - with gr.Column(): - current_base_model = gr.Text(label='Current base model') - with gr.Column(scale=0.3): - check_base_model_button = gr.Button('Check current base model') - with gr.Row(): - with gr.Column(): - new_base_model_id = gr.Text( - label='New base model', - max_lines=1, - placeholder='runwayml/stable-diffusion-v1-5', - info= - 'The base model must be compatible with Stable Diffusion v1.5.', - interactive=ALLOW_CHANGING_BASE_MODEL) - with gr.Column(scale=0.3): - change_base_model_button = gr.Button( - 'Change base model', interactive=ALLOW_CHANGING_BASE_MODEL) - if not ALLOW_CHANGING_BASE_MODEL: - gr.Markdown( - '''The base model is not allowed to be changed in this Space so as not to slow down the demo, but it can be changed if you duplicate the Space. 
Duplicate Space''' - ) - - gr.Markdown('''### Related Spaces - -- [Space using Anything-v4.0 as base model](https://huggingface.co/spaces/hysts/ControlNet-with-Anything-v4) -- https://huggingface.co/spaces/jonigata/PoseMaker2 -- https://huggingface.co/spaces/diffusers/controlnet-openpose -- https://huggingface.co/spaces/diffusers/controlnet-canny -''') - - check_base_model_button.click(fn=lambda: model.base_model_id, - outputs=current_base_model, - queue=False) - new_base_model_id.submit(fn=model.set_base_model, - inputs=new_base_model_id, - outputs=current_base_model) - change_base_model_button.click(fn=model.set_base_model, - inputs=new_base_model_id, - outputs=current_base_model) - -demo.queue(api_open=False).launch(file_directories=['/tmp']) diff --git a/spaces/zhang-wei-jian/docker/node_modules/parseurl/index.js b/spaces/zhang-wei-jian/docker/node_modules/parseurl/index.js deleted file mode 100644 index ece722327959f3bd9721488a035947387f1c1db1..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/parseurl/index.js +++ /dev/null @@ -1,158 +0,0 @@ -/*! - * parseurl - * Copyright(c) 2014 Jonathan Ong - * Copyright(c) 2014-2017 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * Module dependencies. - * @private - */ - -var url = require('url') -var parse = url.parse -var Url = url.Url - -/** - * Module exports. - * @public - */ - -module.exports = parseurl -module.exports.original = originalurl - -/** - * Parse the `req` url with memoization. - * - * @param {ServerRequest} req - * @return {Object} - * @public - */ - -function parseurl (req) { - var url = req.url - - if (url === undefined) { - // URL is undefined - return undefined - } - - var parsed = req._parsedUrl - - if (fresh(url, parsed)) { - // Return cached URL parse - return parsed - } - - // Parse the URL - parsed = fastparse(url) - parsed._raw = url - - return (req._parsedUrl = parsed) -}; - -/** - * Parse the `req` original url with fallback and memoization. - * - * @param {ServerRequest} req - * @return {Object} - * @public - */ - -function originalurl (req) { - var url = req.originalUrl - - if (typeof url !== 'string') { - // Fallback - return parseurl(req) - } - - var parsed = req._parsedOriginalUrl - - if (fresh(url, parsed)) { - // Return cached URL parse - return parsed - } - - // Parse the URL - parsed = fastparse(url) - parsed._raw = url - - return (req._parsedOriginalUrl = parsed) -}; - -/** - * Parse the `str` url with fast-path short-cut. - * - * @param {string} str - * @return {Object} - * @private - */ - -function fastparse (str) { - if (typeof str !== 'string' || str.charCodeAt(0) !== 0x2f /* / */) { - return parse(str) - } - - var pathname = str - var query = null - var search = null - - // This takes the regexp from https://github.com/joyent/node/pull/7878 - // Which is /^(\/[^?#\s]*)(\?[^#\s]*)?$/ - // And unrolls it into a for loop - for (var i = 1; i < str.length; i++) { - switch (str.charCodeAt(i)) { - case 0x3f: /* ? */ - if (search === null) { - pathname = str.substring(0, i) - query = str.substring(i + 1) - search = str.substring(i) - } - break - case 0x09: /* \t */ - case 0x0a: /* \n */ - case 0x0c: /* \f */ - case 0x0d: /* \r */ - case 0x20: /* */ - case 0x23: /* # */ - case 0xa0: - case 0xfeff: - return parse(str) - } - } - - var url = Url !== undefined - ? 
new Url() - : {} - - url.path = str - url.href = str - url.pathname = pathname - - if (search !== null) { - url.query = query - url.search = search - } - - return url -} - -/** - * Determine if parsed is still fresh for url. - * - * @param {string} url - * @param {object} parsedUrl - * @return {boolean} - * @private - */ - -function fresh (url, parsedUrl) { - return typeof parsedUrl === 'object' && - parsedUrl !== null && - (Url === undefined || parsedUrl instanceof Url) && - parsedUrl._raw === url -} diff --git a/spaces/ziguo/Real-ESRGAN/README.md b/spaces/ziguo/Real-ESRGAN/README.md deleted file mode 100644 index 36b007f172e4075a0c07957364e710f8cbd0e1b5..0000000000000000000000000000000000000000 --- a/spaces/ziguo/Real-ESRGAN/README.md +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Real ESRGAN -emoji: 🏃 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -duplicated_from: akhaliq/Real-ESRGAN ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/zomehwh/vits-models-pcr/mel_processing.py b/spaces/zomehwh/vits-models-pcr/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/vits-models-pcr/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def 
spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec
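
For readers skimming the deleted `mel_processing.py` above, the sketch below shows how `spectrogram_torch` and `mel_spectrogram_torch` are typically driven. It is an illustration only: the hyperparameters (22.05 kHz audio, 1024-point FFT, 80 mel bins, hop 256) are assumed VITS-style defaults rather than values taken from this Space, and it presumes the library versions the file was written against (a librosa release older than 0.10 for the positional `librosa.filters.mel` call, and a PyTorch where `torch.stft` still accepts calls without `return_complex`).

```python
# Hypothetical usage of the deleted mel_processing.py; hyperparameters are assumptions.
import torch

from mel_processing import mel_spectrogram_torch, spectrogram_torch

# One mono waveform of roughly one second at 22.05 kHz, scaled to [-1, 1]
# as both functions expect.
y = torch.rand(1, 22050) * 2.0 - 1.0

# Linear-magnitude spectrogram: shape [1, n_fft // 2 + 1, num_frames].
spec = spectrogram_torch(y, n_fft=1024, sampling_rate=22050,
                         hop_size=256, win_size=1024)

# Mel spectrogram computed straight from the waveform: shape [1, 80, num_frames].
mel = mel_spectrogram_torch(y, n_fft=1024, num_mels=80, sampling_rate=22050,
                            hop_size=256, win_size=1024, fmin=0.0, fmax=None)

print(spec.shape, mel.shape)
```

In VITS-style training loops, `spec_to_mel_torch` can then turn cached linear spectrograms into mel targets without re-running the STFT, which is the reason the file keeps the two steps separable.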