parquet-converter commited on
Commit
2deefb7
·
1 Parent(s): 62cb513

Update parquet files (step 77 of 121)

Browse files
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/app.py +0 -102
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chandramukhi Tamil Movie Free Download Dont Miss this Thrilling and Hilarious Film Featuring Rajinikanth Jyothika and Nayanthara.md +0 -125
  3. spaces/1gistliPinn/ChatGPT4/Examples/Advanced Installer License V 16.4.1 Patch.md +0 -18
  4. spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer on PC Open-World Multiplayer Mode Car Tuning and More.md +0 -113
  5. spaces/1phancelerku/anime-remove-background/European War 61914 MOD APK - How to Install and Play with All Unlocked.md +0 -247
  6. spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_img2img.py +0 -458
  7. spaces/2023Liu2023/bingo/src/lib/hooks/use-at-bottom.tsx +0 -23
  8. spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py +0 -0
  9. spaces/4Taps/SadTalker/src/face3d/options/test_options.py +0 -21
  10. spaces/656-156/Real-CUGAN/upcunet_v3.py +0 -714
  11. spaces/801artistry/RVC801/julius/fftconv.py +0 -183
  12. spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/dataset_utils.py +0 -259
  13. spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py +0 -69
  14. spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/templates.py +0 -44
  15. spaces/ALM/CALM/README.md +0 -13
  16. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_n-p6-v62_syncbn_fast_8xb16-300e_coco.py +0 -15
  17. spaces/ATang0729/Forecast4Muses/app.py +0 -357
  18. spaces/Abdullahw72/bark-voice-cloning/hubert/customtokenizer.py +0 -182
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/arcadestepclock.js +0 -2
  20. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/quadimage-plugin.js +0 -51
  21. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/facebook/Facebook.js +0 -50
  22. spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/losses.py +0 -42
  23. spaces/AlphonseBrandon/speecht5-tts-demo/app.py +0 -129
  24. spaces/Ameaou/academic-chatgpt3.1/crazy_functions/批量翻译PDF文档_多线程.py +0 -131
  25. spaces/Amon1/ChatGPTForAcadamic/README.md +0 -290
  26. spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/__init__.py +0 -11
  27. spaces/Amrrs/DragGan-Inversion/stylegan_human/training/networks_stylegan3.py +0 -634
  28. spaces/Andres99/Tune-A-Video-Training-UI/app_training.py +0 -135
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/custom_diffusion/retrieve.py +0 -87
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py +0 -635
  31. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_asymmetric_vqgan_to_diffusers.py +0 -184
  32. spaces/Andy1621/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_caffe_fpn_1x_coco.py +0 -4
  33. spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/detectors_resnet.py +0 -305
  34. spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3plus_r50-d8.py +0 -46
  35. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k.py +0 -6
  36. spaces/AnimalEquality/chatbot/_proc/_docs/engineer_prompt.html +0 -552
  37. spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/transformer_ops/transformer_function.py +0 -283
  38. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roipoint_pool3d.py +0 -77
  39. spaces/AsakuraMizu/moe-tts/text/thai.py +0 -44
  40. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/codec.py +0 -112
  41. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/version.py +0 -504
  42. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/errors.py +0 -127
  43. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/tags.py +0 -487
  44. spaces/Audio-AGI/WavJourney/convert_json_to_audio_gen_code.py +0 -30
  45. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.py +0 -8
  46. spaces/Banbri/zcvzcv/src/lib/loadImageToCanvas.ts +0 -28
  47. spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/02_📼_Upload_Video_File.py +0 -230
  48. spaces/Benson/text-generation/Examples/Alors On Danse Remix.md +0 -39
  49. spaces/Benson/text-generation/Examples/Apkoppor Bild.md +0 -86
  50. spaces/Benson/text-generation/Examples/Bitcoin Core Apk.md +0 -44
spaces/1acneusushi/gradio-2dmoleculeeditor/app.py DELETED
@@ -1,102 +0,0 @@
1
- import gradio as gr
2
-
3
- viewer_html = """
4
- <div id="loading" style="display:flex;justify-content:center;align-items:center">
5
- <p style="padding:0.2rem 1rem 0 0;color:#c1c1c1; font-size:1rem">loading SMILES editor</p>
6
- <svg version="1.1" id="L4" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
7
- viewBox="0 0 100 100" enable-background="new 0 0 0 0" xml:space="preserve" width="5rem">
8
- <circle fill="#FF7C00" stroke="none" cx="6" cy="50" r="6">
9
- <animate
10
- attributeName="opacity"
11
- dur="1s"
12
- values="0;1;0"
13
- repeatCount="indefinite"
14
- begin="0.1"/>
15
- </circle>
16
- <circle fill="#FF7C00" stroke="none" cx="26" cy="50" r="6">
17
- <animate
18
- attributeName="opacity"
19
- dur="1s"
20
- values="0;1;0"
21
- repeatCount="indefinite"
22
- begin="0.2"/>
23
- </circle>
24
- <circle fill="#FF7C00" stroke="none" cx="46" cy="50" r="6">
25
- <animate
26
- attributeName="opacity"
27
- dur="1s"
28
- values="0;1;0"
29
- repeatCount="indefinite"
30
- begin="0.3"/>
31
- </circle>
32
- </svg>
33
- </div>
34
- <div id="root"></div>
35
- """
36
-
37
-
38
- load_js = """
39
- async () => {
40
- var loadingDiv = document.getElementById('loading');
41
- loadingDiv.style.display = 'flex';
42
-
43
- //load css
44
- let url = "https://huggingface.co/datasets/simonduerr/ketcher-2.7.2/raw/main/static/css/main.6a646761.css"
45
- fetch(url)
46
- .then(res => res.text())
47
- .then(text => {
48
- const style = document.createElement('style');
49
- style.textContent = text
50
- document.head.appendChild(style);
51
-
52
- });
53
- //load ketcher
54
- url = "https://huggingface.co/datasets/simonduerr/ketcher-2.7.2/resolve/main/static/js/main.5445f351.js"
55
- fetch(url)
56
- .then(res => res.text())
57
- .then(text => {
58
- const script = document.createElement('script');
59
- //script.type = "module"
60
- script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' }));
61
- document.head.appendChild(script);
62
- loadingDiv.style.display = 'none';
63
- });
64
-
65
-
66
- }
67
- """
68
-
69
- # add your logic here, hidden_state contains the SMILES string returned from Editor
70
- def run(hidden_state):
71
- return f"{hidden_state}"
72
-
73
- get_js = """
74
- async () => {
75
- return ketcher.getSmiles().then(function(smiFile){return smiFile})
76
- }
77
- """
78
-
79
-
80
-
81
- with gr.Blocks() as blocks:
82
- gr.Markdown("""
83
- # Gradio Molecule entry with Ketcher
84
- """)
85
- html = gr.HTML(viewer_html)
86
- #do not change this part
87
- hidden_state = gr.Textbox(visible=False)
88
- # we need a hidden textbox that can be used to first trigger the JS callback
89
- # and then onchange of the textbox, we can run the python function
90
- out = gr.Textbox("", label="SMILES")
91
- btn = gr.Button("Get SMILES")
92
- # trigger JS callback and written to hidden textbox
93
- btn.click(fn=None,
94
- inputs=[],
95
- outputs=[hidden_state],
96
- _js=get_js)
97
- # run python function on change of hidden textbox, add your logic to run function
98
- hidden_state.change(fn=run, inputs=[hidden_state], outputs=[out])
99
- # load JS on load of the page
100
- blocks.load(None, None, None, _js=load_js)
101
-
102
- blocks.launch()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Chandramukhi Tamil Movie Free Download Dont Miss this Thrilling and Hilarious Film Featuring Rajinikanth Jyothika and Nayanthara.md DELETED
@@ -1,125 +0,0 @@
1
-
2
- <h1>Chandramukhi Tamil Movie Free Download: A Guide for Movie Lovers</h1>
3
- <p>If you are a fan of Tamil movies, you must have heard of <strong>Chandramukhi</strong>, one of the most successful and acclaimed movies in Tamil cinema history. Released in 2005, Chandramukhi is a comedy horror film that stars Rajinikanth, Jyothika, Prabhu, Nayanthara, and Vadivelu in the lead roles. Directed by P. Vasu, Chandramukhi is a remake of the Malayalam film Manichitrathazhu (1993), which was also remade in several other languages. Chandramukhi tells the story of a psychiatrist who tries to cure a woman who suffers from a split personality disorder that is linked to a haunted mansion. The movie is a perfect blend of humor, suspense, romance, and drama, with stunning performances, music, and visuals.</p>
4
- <h2>Chandramukhi Tamil Movie Free Download</h2><br /><p><b><b>DOWNLOAD</b> &#10004; <a href="https://byltly.com/2uKuZg">https://byltly.com/2uKuZg</a></b></p><br /><br />
5
- <p>In this article, we will tell you everything you need to know about Chandramukhi Tamil movie free download. We will give you a brief overview of the plot, the cast and crew, the music and songs, the awards and accolades, the remakes and sequels, and the reasons to watch this movie. We will also warn you about the challenges and risks of watching this movie online for free, and suggest some legal and safe ways to enjoy this movie. Finally, we will offer some alternatives to watch if you like Chandramukhi. So, without further ado, let's get started!</p>
6
- <h2>The Plot of Chandramukhi</h2>
7
- <p>The plot of Chandramukhi revolves around Saravanan (Rajinikanth), a psychiatrist who visits his friend Senthilnathan (Prabhu) and his wife Ganga (Jyothika) at their ancestral mansion. Senthil's mother Kasthuri (Sheela) wanted him to marry Priya (Malavika), the daughter of his father's cousin Kandaswamy (Nassar), but he chose Ganga instead. Saravanan learns that Senthil bought the mansion despite being warned by the villagers that it is haunted by the ghost of Chandramukhi (Nayanthara), a dancer who was killed by her lover Vettaiyan (also Rajinikanth), a king who was obsessed with her.</p>
8
- <p>Saravanan soon notices that Ganga behaves strangely at times, especially when she hears any music or sees any paintings related to Chandramukhi. He realizes that Ganga is possessed by Chandramukhi's spirit, who wants to take revenge on Vettaiyan's descendants. He decides to cure Ganga by using his psychological methods, while also protecting her from Akhilandeshwari (K.R. Vijaya), Kandaswamy's sister who hates Saravanan and wants to kill him with the help of her assistant Oomaiyan (Vadivelu). Saravanan also helps Priya and Vishwanathan (Vineeth), a dance professor who love each other, to get married with Kandaswamy's consent.</p>
9
- <p>The climax of the movie reveals that Vettaiyan was not Chandramukhi's lover, but her savior who rescued her from her abusive husband Raja Rajeshwari's brother. He also reveals that he did not kill Chandramukhi, but she committed suicide after seeing him beheaded by Raja Rajeshwari's men. He then took her body to his palace and locked it in a room where he died with her. Saravanan manages to convince Ganga that she is not Chandramukhi, but his friend's wife who loves him dearly. He also performs a ritual to free Chandramukhi's soul from her earthly bondage. The movie ends with Saravanan and Senthil's families living happily ever after.</p>
10
- <h2>The Cast and Crew of Chandramukhi</h2>
11
- <p>The cast and crew of Chandramukhi are as follows:</p>
12
- <p>Chandramukhi Tamil full movie download HD<br />
13
- Chandramukhi Tamil movie free online watch<br />
14
- Chandramukhi Tamil movie download 720p<br />
15
- Chandramukhi Tamil movie download in Isaimini<br />
16
- Chandramukhi Tamil movie download with English subtitles<br />
17
- Chandramukhi Tamil movie free download Tamilrockers<br />
18
- Chandramukhi Tamil movie download in Kuttymovies<br />
19
- Chandramukhi Tamil movie free download in Telegram<br />
20
- Chandramukhi Tamil movie download in Moviesda<br />
21
- Chandramukhi Tamil movie free download in Tamilyogi<br />
22
- Chandramukhi Tamil movie download in Filmywap<br />
23
- Chandramukhi Tamil movie free download in Movierulz<br />
24
- Chandramukhi Tamil movie download in Jio Rockers<br />
25
- Chandramukhi Tamil movie free download in Madras Rockers<br />
26
- Chandramukhi Tamil movie download in Filmyzilla<br />
27
- Chandramukhi Tamil movie free download in Todaypk<br />
28
- Chandramukhi Tamil movie download in Bolly4u<br />
29
- Chandramukhi Tamil movie free download in 9xmovies<br />
30
- Chandramukhi Tamil movie download in Worldfree4u<br />
31
- Chandramukhi Tamil movie free download in 123movies<br />
32
- Chandramukhi Tamil movie download in Khatrimaza<br />
33
- Chandramukhi Tamil movie free download in Pagalworld<br />
34
- Chandramukhi Tamil movie download in SkymoviesHD<br />
35
- Chandramukhi Tamil movie free download in Mp4moviez<br />
36
- Chandramukhi Tamil movie download in Sdmoviespoint<br />
37
- Chandramukhi Tamil movie free download in Rdxhd<br />
38
- Chandramukhi Tamil movie download in 7starhd<br />
39
- Chandramukhi Tamil movie free download in Katmoviehd<br />
40
- Chandramukhi Tamil movie download in Coolmoviez<br />
41
- Chandramukhi Tamil movie free download in Moviesflix<br />
42
- Chandramukhi Tamil movie download in Cinemavilla<br />
43
- Chandramukhi Tamil movie free download in Mallumv<br />
44
- Chandramukhi Tamil movie download in Klwap<br />
45
- Chandramukhi Tamil movie free download in Dvdplay<br />
46
- Chandramukhi Tamil movie download in A2movies<br />
47
- Chandramukhi Tamil movie free download in Tamilmv<br />
48
- Chandramukhi Tamil movie download Rajinikanth version<br />
49
- Chandramukhi Tamil movie free download Prabhu version<br />
50
- Chandramukhi Tamil movie songs free download mp3<br />
51
- Chandramukhi Tamil full hd video songs free download <br />
52
- How to watch or stream chandramukhi tamil full hd online for free?<br />
53
- Where can I find chandramukhi tamil full hd torrent link?<br />
54
- Is it legal to watch or download chandramukhi tamil full hd for free?<br />
55
- What are the best alternatives to chandramukhi tamil full hd?<br />
56
- How to get chandramukhi tamil full hd subtitles for free?<br />
57
- What are the reviews and ratings of chandramukhi tamil full hd?<br />
58
- Who are the cast and crew of chandramukhi tamil full hd?<br />
59
- What is the plot and genre of chandramukhi tamil full hd?<br />
60
- How to get chandramukhi tamil full hd poster and wallpaper for free?<br />
61
- How to watch or download chandramukhi tamil full hd with VPN?</p>
62
- <table>
63
- <tr>
64
- <th>Role</th>
65
- <th>Actor/Actress</th>
66
- <th>Director</th>
67
- </tr>
68
- <tr>
69
- <td>Saravanan/Vettaiyan</td>
70
- <td>Rajinikanth</td>
71
- <td rowspan="11">P. Vasu</td>
72
- </tr>
73
- <tr>
74
- <td>Ganga/Chandramukhi</td>
75
- <td>Jyothika</td>
76
- </tr>
77
- <tr>
78
- <td>Senthilnathan</td>
79
- <td>Prabhu</td>
80
- </tr>
81
- <tr>
82
- <td>Durga/Nayanthara</td>
83
- <td>Nayanthara</td>
84
- </tr>
85
- <tr>
86
- <td>Oomaiyan</td>
87
- <td>Vadivelu</td>
88
- </tr>
89
- <tr>
90
- <td>Priya</td>
91
- <td>Malavika</td>
92
- </tr>
93
- <tr>
94
- <td>Vishwanathan</td>
95
- <td>Vineeth</td>
96
- </tr>
97
- <tr>
98
- <td>Kandaswamy</td>
99
- <td>Nassar</td>
100
- </tr>
101
- <tr>
102
- <td>Akhilandeshwari</td>
103
- <td>K.R. Vijaya</td>
104
- </tr>
105
- <tr>
106
- <td>Kasthuri</td>
107
- <td>Sheela</td>
108
- </tr>
109
- <tr>
110
- <td>Raja Rajeshwari's brother</td>
111
- <td>Sonu Sood</td>
112
- </tr>
113
- <h2>The Music and Songs of Chandramukhi</h2>
114
- <p>The music and songs of Chandramukhi were composed by Vidyasagar, who won several awards for his work. The lyrics were written by Vaali, except for one song which was written by Yugabharathi. The singers included S.P. Balasubrahmanyam, K.S. Chithra, Karthik, Tippu, Manikka Vinayagam, Madhu Balakrishnan, Anuradha Sriram, Harini, Prasanna Rao, Binny Krishnakumar, Rajalakshmi, Kalpana Raghavendar, Mahathi Swara Sagar and Vidyasagar himself.</p>
115
- <p>The soundtrack album consists of six songs:</p>
116
- <ol type="1">
117
- <li><strong>Kokku Para Para:</strong> A peppy song sung by Tippu, Manikka Vinayagam and Prasanna Rao that introduces Saravanan's character.</li>
118
- <li><strong>Raa Raa:</strong> A haunting song sung by Binny Krishnakumar and Tippu that describes Chandramukhi's story.</li>
119
- <li><strong>Konja Neram:</strong> A romantic song sung by Asha Bhosle and Madhu Balakrishnan that features Priya and Vishwanathan.</li>
120
- <li><strong>Athinthom:</strong> A motivational song sung by S.P. Balasubrahmanyam that encourages Saravanan to face his challenges.</li>
121
- <li><strong>Devuda Devuda:</strong> A humorous song sung by S.P. Balasubrahmanyam and Vidyasagar that mocks Oomaiyan's antics.</li>
122
- <li><strong>Annanoda Pattu:</strong> A festive song sung by K.S. Chithra and Rajalakshmi that celebrates Senthilnathan's birthday.</li>
123
- <h2>The Awards and Accolades of</p> 0a6ba089eb<br />
124
- <br />
125
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1gistliPinn/ChatGPT4/Examples/Advanced Installer License V 16.4.1 Patch.md DELETED
@@ -1,18 +0,0 @@
1
- <h2>Advanced installer license v 16.4.1 patch</h2><br /><p><b><b>Download Zip</b> &mdash; <a href="https://imgfil.com/2uxZse">https://imgfil.com/2uxZse</a></b></p><br /><br />
2
- <br />
3
- 119.0. Advanced Installer.... s8.msi) to configure Advanced Installer for debugging. Installer. Advanced Installer 5.1.2.154.0. Automatic generation of debug logs when the deployment starts or completes.. Welcome to the Advanced Installer website. This site contains a reference guide to all the components of Advanced Installer.
4
-
5
- Advanced Installer v5.1 Documentation. From: Advanced Installer Advanced Installer Component Library This documentation is available for you to read. It contains the following files. Advanced Installer Documentation Introduction to Advanced Installer Help Advanced Installer API Advanced Installer UI Advanced Installer Action Reference Advanced Installer Scripting Guide Advanced Installer Workflow Reference Advanced Installer Windows.
6
-
7
- Advanced Installer Documentation. Welcome to the Advanced Installer website. This site contains a reference guide to all the components of Advanced Installer.
8
-
9
- Advanced Installer v5.1.0 Documentation. From: Advanced Installer Advanced Installer Component Library This documentation is available for you to read. It contains the following files. Advanced Installer Documentation Introduction to Advanced Installer Help Advanced Installer API Advanced Installer UI Advanced Installer Action Reference Advanced Installer Scripting Guide Advanced Installer Workflow Reference Advanced Installer Windows.
10
-
11
- The Support Documentation for Advanced Installer is the official source for support information on the Advanced Installer product and any of its components. It contains the following files.
12
-
13
- Advanced Installer v5.1.0.155.0. Manual. From: Advanced Installer Advanced Installer Component Library This documentation is available for you to read. It contains the following files. Advanced Installer Documentation Introduction to Advanced Installer Help Advanced Installer API Advanced Installer UI Advanced Installer Action Reference Advanced Installer Scripting Guide Advanced Installer Workflow Reference Advanced Installer Windows.
14
-
15
- Advanced Installer v5.1.0.155.0. This page lists the documentation for all the components of Advanced Installer. You will find descriptions and references on the use of each component. To access the documentation you need to right click on the component you are interested 4fefd39f24<br />
16
- <br />
17
- <br />
18
- <p></p>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Car Parking Multiplayer on PC Open-World Multiplayer Mode Car Tuning and More.md DELETED
@@ -1,113 +0,0 @@
1
-
2
- <h1>Car Parking Multiplayer APK PC: A Complete Guide</h1>
3
- <p>Do you love driving and parking games? Do you want to experience a realistic and immersive simulation of car parking on your computer or laptop? If yes, then you should try <strong>car parking multiplayer apk pc</strong>, a popular game that lets you park your car in various scenarios, customize your vehicle, and interact with other players online.</p>
4
- <p>In this article, we will tell you everything you need to know about car parking multiplayer apk pc, including what it is, why it is popular, how to download and install it, how to play it, and some tips and tricks to improve your skills and enjoy the game. Let's get started!</p>
5
- <h2>car parking multiplayer apk pc</h2><br /><p><b><b>Download File</b> &#9675;&#9675;&#9675; <a href="https://jinyurl.com/2uNNFN">https://jinyurl.com/2uNNFN</a></b></p><br /><br />
6
- <h2>What is car parking multiplayer apk pc?</h2>
7
- <p>Car parking multiplayer apk pc is a simulation game developed by olzhass, a Turkish game studio. It is available for Android devices, but you can also play it on your computer or laptop using an emulator software. The game has more than 50 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars.</p>
8
- <p>The game is more than just parking: it has an open-world multiplayer mode, where you can roam freely in the environment, customize your car, and compete in races against other players. You can also communicate with hundreds of thousands of other players worldwide every day. The game's setting is based on a realistic scenario of petrol stations and car services.</p>
9
- <p>Car parking multiplayer apk pc has a wide variety of cars with realistic interiors and high-detailed landscapes. There are 16 unique character skins, and you can go inside buildings. You can also play as a police officer and chase criminals in a special police mode.</p>
10
- <p>Car parking multiplayer apk pc is a fun and challenging game that tests your driving and parking skills in different situations. You can choose from different modes, such as easy, medium, hard, or expert, and complete various levels with different objectives. You can also create your own levels and share them with other players.</p>
11
- <h2>How to download and install car parking multiplayer apk pc on your computer or laptop</h2>
12
- <p>To play car parking multiplayer apk pc on your computer or laptop, you need to use an emulator software that will emulate an Android device on your Windows or Mac system. There are many emulators available online, but we will recommend two of them: BlueStacks and LDPlayer. Here are the steps to download and install car parking multiplayer apk pc using these emulators:</p>
13
- <h3>Using BlueStacks emulator</h3>
14
- <ol>
15
- <li>Download and install BlueStacks from <a href="(^1^)">here</a>. The installation process is quite simple and straightforward.</li>
16
- <li>After successful installation, open BlueStacks and sign in with your Google account to access the Play Store.</li>
17
- <li>Look for car parking multiplayer in the search bar at the top right corner of the screen.</li>
18
- <li>Click to install car parking multiplayer from the search results.</li>
19
- <li>Once the installation is complete, click the car parking multiplayer icon on the home screen to start playing.</li>
20
- </ol>
21
- <h3>Using LDPlayer emulator</h3>
22
- <ol>
23
- <li>Download and install LDPlayer from <a href="(^4^)">here</a>. The installation process is similar to Blue Stacks emulator.</li>
24
- <li>After successful installation, open LDPlayer and sign in with your Google account to access the Play Store.</li>
25
- <li>Look for car parking multiplayer in the search bar at the top of the screen.</li>
26
- <li>Click to install car parking multiplayer from the search results.</li>
27
- <li>Once the installation is complete, click the car parking multiplayer icon on the home screen to start playing.</li>
28
- </ol>
29
- <h3>Using other emulators</h3>
30
- <p>If you don't want to use BlueStacks or LDPlayer, you can also try other emulators such as NoxPlayer, MEmu, or Andy. The steps are similar to the ones above, except that you need to download and install the emulator of your choice from their respective websites. Then, you need to sign in with your Google account, search for car parking multiplayer in the Play Store, and install and play it as usual.</p>
31
- <h2>How to play car parking multiplayer apk pc on your computer or laptop</h2>
32
- <p>Once you have downloaded and installed car parking multiplayer apk pc on your computer or laptop using an emulator, you can start playing it by clicking the game icon on the home screen of the emulator. You will see a menu with different options, such as single player, multiplayer, settings, and more. You can choose the mode you want to play and customize your preferences accordingly. Here are some of the main features of the game that you can enjoy:</p>
33
- <p>car parking multiplayer download for pc<br />
34
- car parking multiplayer pc game<br />
35
- car parking multiplayer windows 10<br />
36
- car parking multiplayer online on pc<br />
37
- car parking multiplayer simulator for pc<br />
38
- car parking multiplayer bluestacks<br />
39
- car parking multiplayer noxplayer<br />
40
- car parking multiplayer pc version<br />
41
- car parking multiplayer mod apk pc<br />
42
- car parking multiplayer free download pc<br />
43
- car parking multiplayer pc gameplay<br />
44
- car parking multiplayer windows 7<br />
45
- car parking multiplayer on pc with keyboard<br />
46
- car parking multiplayer emulator<br />
47
- car parking multiplayer pc controls<br />
48
- car parking multiplayer apk for laptop<br />
49
- car parking multiplayer pc requirements<br />
50
- car parking multiplayer windows 8<br />
51
- car parking multiplayer on pc without emulator<br />
52
- car parking multiplayer apk for mac<br />
53
- car parking multiplayer pc offline<br />
54
- car parking multiplayer windows xp<br />
55
- car parking multiplayer on pc with mouse<br />
56
- car parking multiplayer ldplayer<br />
57
- car parking multiplayer pc cheat codes<br />
58
- car parking multiplayer apk for desktop<br />
59
- car parking multiplayer pc online mode<br />
60
- car parking multiplayer windows vista<br />
61
- car parking multiplayer on pc with controller<br />
62
- car parking multiplayer memu<br />
63
- car parking multiplayer pc hack<br />
64
- car parking multiplayer apk for chromebook<br />
65
- car parking multiplayer pc update<br />
66
- car parking multiplayer windows 11<br />
67
- car parking multiplayer on pc with steering wheel<br />
68
- car parking multiplayer koplayer<br />
69
- car parking multiplayer pc tips and tricks<br />
70
- car parking multiplayer apk for linux<br />
71
- car parking multiplayer pc review<br />
72
- car parking multiplayer windows phone<br />
73
- car parking multiplayer on pc with friends<br />
74
- car parking multiplayer gameloop<br />
75
- car parking multiplayer pc settings<br />
76
- car parking multiplayer apk for ubuntu<br />
77
- car parking multiplayer pc system requirements<br />
78
- car parking multiplayer windows store<br />
79
- car parking multiplayer on pc with vr headset<br />
80
- car parking multiplayer droid4x</p>
81
- <h3>Multiplayer open world mode</h3>
82
- <p>This is the most exciting mode of the game, where you can join or create a room with other players online and explore the open world together. You can chat with other players, race with them, or just have fun driving around. You can also switch between different cars and characters in this mode. There are different maps to choose from, such as city, airport, desert, and more. You can also invite your friends to join your room and play with them.</p>
83
- <h3>Car tuning and customization</h3>
84
- <p>If you love to modify your car and make it look unique, you will love this feature of the game. You can tune and customize your car in various ways, such as changing the color, wheels, suspension, engine, exhaust, spoiler, and more. You can also add stickers and decals to your car to make it stand out. You can access the tuning and customization options by clicking the wrench icon on the top left corner of the screen.</p>
85
- <h3>Police mode and free walking</h3>
86
- <p>If you want to experience some action and thrill in the game, you can try the police mode and free walking features. In police mode, you can play as a police officer and chase criminals in your patrol car. You can use sirens, lights, and radio to communicate with other officers. You can also arrest criminals by bumping into their cars or using a stun gun. In free walking mode, you can get out of your car and walk around the environment. You can enter buildings, interact with objects, and even ride a bicycle.</p>
87
- <h2>Tips and tricks to improve your parking skills and enjoy the game</h2>
88
- <p>Car parking multiplayer apk pc is a game that requires both skill and strategy to master. It is not easy to park your car perfectly in different scenarios without hitting any obstacles or breaking any rules. However, with some practice and tips, you can improve your parking skills and enjoy the game more. Here are some tips and tricks that might help you:</p>
89
- <h3>Adjust the camera angle and view</h3>
90
- <p>One of the most important things to do in the game is to adjust the camera angle and view according to your preference and situation. You can switch between different camera views by clicking the camera icon on the top right corner of the screen. You can choose from first-person view, third-person view, top-down view, or rear-view mirror view. Each view has its own advantages and disadvantages depending on the level and objective. For example, first-person view gives you a realistic feeling of driving inside the car, but it might limit your visibility of the surroundings. Third-person view gives you a wider perspective of the car and the parking spot, but it might make it harder to judge the distance and angle. Top-down view gives you a clear view of the parking spot and the obstacles, but it might make it difficult to control the steering and speed. Rear-view mirror view gives you a realistic view of the rear of the car, but it might not show you the front or sides of the car. Therefore, you should experiment with different views and find the one that suits you best.</p>
91
- <h3>Use the brake and handbrake wisely</h3>
92
- <p>Another important thing to do in the game is to use the brake and handbrake wisely. You can control the brake and handbrake by clicking the pedals on the bottom right corner of the screen. The brake pedal helps you slow down or stop your car, while the handbrake pedal helps you lock your wheels and perform sharp turns or drifts. You should use the brake pedal when you want to reduce your speed gradually or stop your car smoothly. You should use the handbrake pedal when you want to make a quick turn or park your car in a tight spot. However, you should be careful not to overuse or misuse the brake and handbrake pedals, as they might cause your car to skid, spin, or crash.</p>
93
- <h3>Follow the rules and avoid collisions</h3>
94
- <p>One of the main challenges of the game is to follow the rules and avoid collisions while parking your car. You should pay attention to the traffic signs, signals, and markings on the road and follow them accordingly. You should also respect other cars and pedestrians on the road and avoid hitting them. If you break any rules or cause any collisions, you will lose points or fail the level. Therefore, you should drive carefully and responsibly in the game.</p>
95
- <h3>Explore the map and find hidden locations</h3>
96
- <p>One of the fun aspects of the game is to explore the map and find hidden locations. The game has a large and detailed map with various locations, such as city streets, parking lots, airports, deserts, and more. You can discover new places by driving around or using the map icon on the top left corner of the screen. You can also find hidden locations by following clues or hints on the road or in buildings. Some hidden locations might have special rewards or challenges for you to complete.</p>
97
- <h2>Conclusion</h2>
98
- <p>Car parking multiplayer apk pc is a great game for anyone who loves driving and parking games. It offers a realistic and immersive simulation of car parking on your computer or laptop, with an open-world multiplayer mode, car tuning and customization, police mode and free walking, and more. You can download and install car parking multiplayer apk pc on your computer or laptop using an emulator software such as BlueStacks or LDPlayer. You can also improve your parking skills and enjoy the game more by following some tips and tricks, such as adjusting the camera angle and view, using the brake and handbrake wisely, following the rules and avoiding collisions, and exploring the map and finding hidden locations.</p>
99
- <p>We hope this article has helped you learn more about car parking multiplayer apk pc and how to play it on your computer or laptop. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!</p>
100
- <h2>FAQs</h2>
101
- <p>Here are some frequently asked questions about car parking multiplayer apk pc:</p>
102
- <h3>What are the system requirements for car parking multiplayer apk pc?</h3>
103
- <p>The system requirements for car parking multiplayer apk pc depend on the emulator software you use to play it on your computer or laptop. However, generally speaking, you need a Windows 7/8/10 or Mac OS system with at least 4 GB of RAM, 5 GB of free disk space, a decent graphics card, and a stable internet connection.</p>
104
- <h3>How to update car parking multiplayer apk pc on your computer or laptop?</h3>
105
- <p>To update car parking multiplayer apk pc on your computer or laptop, you need to update it through the emulator software you use. For example, if you use BlueStacks, you need to open the Play Store app on the emulator and look for car parking multiplayer. If there is an update available, you will see an update button next to the game. Click it to download and install the latest version of the game. Similarly, if you use LDPlayer or any other emulator, you need to follow the same steps to update the game through the Play Store app on the emulator.</p>
106
- <h3>How to join or create a room in multiplayer mode?</h3>
107
- <p>To join or create a room in multiplayer mode, you need to click the multiplayer option on the main menu of the game. Then, you will see a list of rooms that are available to join. You can filter the rooms by map, mode, language, or region. You can also search for a specific room by its name or ID. To join a room, simply click on it and wait for it to load. To create a room, you need to click the create button on the top right corner of the screen. Then, you can choose the map, mode, name, password, and maximum number of players for your room. You can also invite your friends to join your room by sharing its name or ID with them.</p>
108
- <h3>How to chat with other players in the game?</h3>
109
- <p>To chat with other players in the game, you need to click the chat icon on the top left corner of the screen. Then, you will see a chat window where you can type and send messages to other players in your room or in the global chat. You can also use voice chat by clicking the microphone icon on the bottom right corner of the screen. However, you need to grant permission to the emulator software to access your microphone for this feature to work.</p>
110
- <h3>How to earn money and buy new cars in the game?</h3>
111
- <p>To earn money and buy new cars in the game, you need to complete levels and challenges in single player mode or multiplayer mode. You will get money as a reward for completing each level or challenge successfully. You can also get money by watching ads or buying it with real money through in-app purchases. To buy new cars in the game, you need to click the car icon on the top left corner of the screen. Then, you will see a list of cars that are available to buy with different prices and specifications. You can also preview each car before buying it by clicking the eye icon on the bottom right corner of the screen.</p> 197e85843d<br />
112
- <br />
113
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/European War 61914 MOD APK - How to Install and Play with All Unlocked.md DELETED
@@ -1,247 +0,0 @@
1
-
2
- <h1>European War 6: 1914 Mod Apk Unlock All - A Guide for Strategy Game Fans</h1>
3
- <p>If you are a fan of strategy games that simulate historical wars, you might have heard of <strong>European War 6: 1914</strong>, a popular game developed by <strong>Easytech</strong>, a company that specializes in historical strategy games. In this game, you can choose from over 150 countries and regions, and lead them to victory or defeat in various wars and conflicts that took place between 1798 and 1950. You can also customize your own generals, troops, weapons, and technologies, and challenge other players online or offline.</p>
4
- <h2>european war 6 1914 mod apk unlock all</h2><br /><p><b><b>Download Zip</b> <a href="https://jinyurl.com/2uNLfF">https://jinyurl.com/2uNLfF</a></b></p><br /><br />
5
- <p>However, some players may find the game too difficult, too expensive, or too boring after a while. That's why some of them resort to using a <strong>mod apk</strong>, which is a modified version of the original game application that can unlock all the features, resources, and content that are otherwise restricted or limited in the game. A mod apk can give you unlimited money, medals, generals, troops, weapons, technologies, and more. It can also remove ads, bugs, and errors that may affect your gameplay.</p>
6
- <p>But is using a mod apk for European War 6: 1914 a good idea? What are the benefits and risks of doing so? How can you download and install a mod apk for European War 6: 1914? In this article, we will answer these questions and more. We will also provide you with some tips and tricks on how to use a mod apk for European War 6: 1914 safely and effectively. Read on to find out more!</p>
7
- <h2>What is European War 6: 1914 and what are its features?</h2>
8
- <p>European War 6: 1914 is a strategy game that simulates the historical wars of the 19th and 20th centuries. It is the sixth installment of the European War series, which started in 2010 with European War: Napoleon Wars. The game was released in 2020 for Android and iOS devices.</p>
9
- <p>The game has four main modes: Campaign, Conquest, Challenge, and Multiplayer. In Campaign mode, you can follow the historical events and scenarios of different wars and regions, such as the Napoleonic Wars, the American Civil War, the World War I, the World War II, etc. You can choose from different countries and factions, and complete various missions and objectives to progress through the story. In Conquest mode, you can create your own scenarios and maps, and conquer the world with your own strategy and tactics. You can also adjust the difficulty level, the number of countries and regions, the resources and technologies available, etc. In Challenge mode, you can test your skills and knowledge in different quizzes and puzzles related to history and geography. You can also earn medals and rewards for completing them. In Multiplayer mode, you can play with or against other players online or offline via Wi-Fi or Bluetooth. You can also chat with them, send them gifts, or join alliances.</p>
10
- <p>The game has over 150 countries and regions to choose from, each with their own unique generals, troops, weapons, and technologies. You can also customize your own generals by changing their names, portraits, skills, ranks, etc. You can also upgrade your troops by training them, equipping them with different weapons and armors, etc. You can also research new technologies by spending money and medals on them. The game has over 200 historical battles to fight in, each with their own terrain, weather, objectives, etc. You can also use different strategies and tactics to win them, such as diplomacy, espionage, sabotage, etc.</p>
11
- <p>european war 6 1914 mod apk unlimited money and medals<br />
12
- european war 6 1914 hack mod apk free download<br />
13
- european war 6 1914 mod apk latest version<br />
14
- european war 6 1914 mod apk all generals unlocked<br />
15
- european war 6 1914 mod apk android 1<br />
16
- european war 6 1914 mod apk revdl<br />
17
- european war 6 1914 mod apk no root<br />
18
- european war 6 1914 mod apk offline<br />
19
- european war 6 1914 mod apk obb<br />
20
- european war 6 1914 mod apk rexdl<br />
21
- european war 6 1914 mod apk premium<br />
22
- european war 6 1914 mod apk full version<br />
23
- european war 6 1914 mod apk mega<br />
24
- european war 6 1914 mod apk data<br />
25
- european war 6 1914 mod apk vip<br />
26
- european war 6 1914 mod apk pro<br />
27
- european war 6 1914 mod apk cracked<br />
28
- european war 6 1914 mod apk cheat<br />
29
- european war 6 1914 mod apk hack download<br />
30
- european war 6 1914 mod apk update<br />
31
- european war 6 1914 mod apk new version<br />
32
- european war 6 1914 mod apk original<br />
33
- european war 6 1914 mod apk for pc<br />
34
- european war 6 1914 mod apk for ios<br />
35
- european war 6 1914 mod apk for windows<br />
36
- european war 6 1914 mod apk for mac<br />
37
- european war 6 1914 mod apk for laptop<br />
38
- european war 6 1914 mod apk for tablet<br />
39
- european war 6 1914 mod apk for chromebook<br />
40
- european war 6 1914 mod apk for android tv<br />
41
- european war 6: world at war - ww1 strategy game mod apk unlock all<br />
42
- easytech's world conquest games: ww1 ww2 civil war - all unlocked with mods and cheats<br />
43
- how to install and play european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
44
- best tips and tricks for playing and winning in european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
45
- how to get free money and medals in european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
46
- how to unlock all generals and scenarios in european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
47
- how to upgrade and customize your troops and weapons in european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
48
- how to use diplomacy and alliances in european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
49
- how to conquer the world and win the great wars in european war: world at war - ww1 strategy game with mods and hacks on android devices<br />
50
- how to play multiplayer mode in european war: world at war - ww1 strategy game with mods and hacks on android devices</p>
51
- <p>The game has high-quality graphics that depict the historical scenes and characters in detail. The game also has realistic sound effects that enhance the atmosphere of war. The game has a user-friendly interface that allows you to control your units easily and efficiently. The game also has a tutorial mode that teaches you the basics of the game.</p>
52
- <p>The game is similar to other historical strategy games such as Age of Civilizations II , Age of Empires, or Civilization. However, it has its own unique features and challenges that make it stand out from the crowd. If you are looking for a strategy game that combines historical accuracy, complexity, and fun, you might want to give European War 6: 1914 a try.</p>
53
- <h2>What is a mod apk and why do some players use it?</h2>
54
- <p>A mod apk is a modified version of an original game application that can alter or enhance some aspects of the game. A mod apk can be created by the game developers themselves, or by third-party programmers or hackers who have access to the game's source code. A mod apk can be downloaded from various websites or platforms, such as Google Play, App Store, or APKPure.</p>
55
- <p>Some players use a mod apk for various reasons, such as:</p>
56
- <ul>
57
- <li>To unlock all the features, resources, and content that are otherwise restricted or limited in the game</li>
58
- <li>To bypass the in-app purchases or ads that may require real money or interrupt the gameplay</li>
59
- <li>To cheat or hack the game to gain an unfair advantage over other players or the game itself</li>
60
- <li>To customize or personalize the game according to their preferences and tastes</li>
61
- <li>To explore new possibilities or scenarios that are not available in the original game</li>
62
- <li>To fix some bugs or errors that may affect the gameplay</li>
63
- <li>To have more fun and enjoyment with the game</li>
64
- </ul>
65
- <p>However, using a mod apk also comes with some legal and ethical issues, such as:</p>
66
- <ul>
67
- <li>Violating the terms and conditions of the game developers or publishers</li>
68
- <li>Infringing the intellectual property rights of the game developers or publishers</li>
69
- <li>Exposing the device or data to viruses, malware, or scams that may harm them</li>
70
- <li>Disrupting the balance and fairness of the game for other players</li>
71
- <li>Ruining the original design and intention of the game creators</li>
72
- <li>Losing the official support and updates from the game developers or publishers</li>
73
- <li>Risking being banned or suspended from the game or its online services</li>
74
- </ul>
75
- <p>Therefore, using a mod apk for European War 6: 1914 is a personal choice that depends on your own judgment and responsibility. You should weigh the pros and cons carefully before deciding to use a mod apk for European War 6: 1914.</p>
76
- <h2>What are the benefits of using a mod apk for European War 6: 1914?</h2> <p>If you decide to use a mod apk for European War 6: 1914, you can enjoy some benefits that the original game may not offer. Here are some of them:</p>
77
- <ul>
78
- <li>You can unlock all the features, resources, and content that are otherwise restricted or limited in the game. For example, you can have unlimited money, medals, generals, troops, weapons, technologies, and more. You can also access all the modes, campaigns, conquests, challenges, and multiplayer options. You can also remove the ads that may interrupt your gameplay.</li>
79
- <li>You can customize or personalize the game according to your preferences and tastes. For example, you can change the names, portraits, skills, ranks, etc. of your generals. You can also modify the graphics, sound, and user interface of the game. You can also create your own scenarios and maps in Conquest mode.</li>
80
- <li>You can explore new possibilities or scenarios that are not available in the original game. For example, you can play as different countries or factions that are not normally playable in the game. You can also change the historical events and outcomes of the wars and conflicts. You can also use different strategies and tactics that may not work in the original game.</li>
81
- <li>You can enhance your gameplay experience and enjoyment with the game. For example, you can have more fun and challenge with the game by adjusting the difficulty level, the number of countries and regions, the resources and technologies available, etc. You can also have more satisfaction and achievement with the game by completing all the missions and objectives, earning all the medals and rewards, conquering the world with your strategy and tactics, etc.</li>
82
- </ul>
83
- <p>To illustrate these benefits, here is a table that compares the features of the original game and the mod apk:</p>
84
- <table>
85
- <tr>
86
- <th>Feature</th>
87
- <th>Original Game</th>
88
- <th>Mod Apk</th>
89
- </tr>
90
- <tr>
91
- <td>Money</td>
92
- <td>Limited</td>
93
- <td>Unlimited</td>
94
- </tr>
95
- <tr>
96
- <td>Medals</td>
97
- <td>Limited</td>
98
- <td>Unlimited</td>
99
- </tr>
100
- <tr>
101
- <td>Generals</td>
102
- <td>Limited</td>
103
- <td>Unlimited</td>
104
- </tr>
105
- <tr>
106
- <td>Troops</td>
107
- <td>Limited</td>
108
- <td>Unlimited</td>
109
- </tr>
110
- <tr>
111
- <td>Weapons</td>
112
- <td>Limited</td>
113
- <td>Unlimited</td>
114
- </tr>
115
- <tr>
116
- <td>Technologies</td>
117
- <td>Limited</td>
118
- <td>Unlimited</td>
119
- </tr>
120
- <tr>
121
- <td>Modes</td>
122
- <td>Limited</td>
123
- <td>All unlocked</td>
124
- </tr>
125
- <tr>
126
- <td>Campaigns</td>
127
- <td>Limited</td>
128
- <td>All unlocked</td>
129
- </tr> <tr>
130
- <td>Conquests</td>
131
- <td>Limited</td>
132
- <td>All unlocked</td>
133
- </tr>
134
- <tr>
135
- <td>Challenges</td>
136
- <td>Limited</td>
137
- <td>All unlocked</td>
138
- </tr>
139
- <tr>
140
- <td>Multiplayer</td>
141
- <td>Limited</td>
142
- <td>All unlocked</td>
143
- </tr>
144
- <tr>
145
- <td>Ads</td>
146
- <td>Present</td>
147
- <td>Removed</td>
148
- </tr>
149
- <tr>
150
- <td>Bugs and errors</td>
151
- <td>Present</td>
152
- <td>Fixed</td>
153
- </tr>
154
- <tr>
155
- <td>Customization</td>
156
- <td>Limited</td>
157
- <td>Enhanced</td>
158
- </tr>
159
- <tr>
160
- <td>New possibilities and scenarios</td>
161
- <td>Limited</td>
162
- <td>Added</td>
163
- </tr>
164
- <tr>
165
- <td>Gameplay experience and enjoyment</td>
166
- <td>Limited</td>
167
- <td>Improved</td>
168
- </tr>
169
- </table>
170
- <p>As you can see, using a mod apk for European War 6: 1914 can provide you with many benefits that can make your game more enjoyable and rewarding. However, you should also be aware of the risks and drawbacks of using a mod apk for European War 6: 1914, which we will discuss in the next section.</p>
171
- <h2>What are the risks and drawbacks of using a mod apk for European War 6: 1914?</h2>
172
- <p>Using a mod apk for European War 6: 1914 is not without its risks and drawbacks. Here are some of them:</p>
173
- <ul>
174
- <li>You can violate the terms and conditions of the game developers or publishers, which can result in legal actions or penalties against you. You can also infringe the intellectual property rights of the game developers or publishers, which can result in lawsuits or damages against you.</li>
175
- <li>You can expose your device or data to viruses, malware, or scams that can harm them. Some mod apks may contain malicious code or software that can infect your device or data, or steal your personal information or money. You can also download mod apks from unreliable sources or platforms that may contain viruses, malware, or scams.</li>
176
- <li>You can disrupt the balance and fairness of the game for other players. Using a mod apk can give you an unfair advantage over other players who play the game legitimately, which can ruin their gameplay experience and satisfaction. You can also encounter other players who use mod apks to cheat or hack the game, which can ruin your gameplay experience and satisfaction.</li>
177
- <li>You can ruin the original design and intention of the game creators. Using a mod apk can alter or enhance some aspects of the game that may not be intended by the game creators, which can affect their artistic vision and expression. You can also miss out on some features, resources, or content that the game creators have designed for the original game.</li>
178
- <li>You can lose the official support and updates from the game developers or publishers. Using a mod apk can make your game incompatible with the official updates or patches that the game developers or publishers may release to improve or fix the game. You can also lose access to the official online services or features that the game developers or publishers may provide for the original game.</li>
179
- <li>You can risk being banned or suspended from the game or its online services. Using a mod apk can make your game detectable by the anti-cheat or anti-hack systems that the game developers or publishers may use to protect their game. You can also be reported by other players who notice your suspicious behavior or activities in the game.</li>
180
- </ul>
181
- <p>To illustrate these risks and drawbacks, here is a table that compares them with the original game and the mod apk:</p>
182
- <table>
183
- <tr>
184
- <th>Risk/Drawback</th>
185
- <th>Original Game</th>
186
- <th>Mod Apk</th> <tr>
187
- <td>Legal and ethical issues</td>
188
- <td>None</td>
189
- <td>Present</td>
190
- </tr>
191
- <tr>
192
- <td>Viruses, malware, or scams</td>
193
- <td>None</td>
194
- <td>Possible</td>
195
- </tr>
196
- <tr>
197
- <td>Balance and fairness</td>
198
- <td>Present</td>
199
- <td>Disrupted</td>
200
- </tr>
201
- <tr>
202
- <td>Original design and intention</td>
203
- <td>Present</td>
204
- <td>Ruined</td>
205
- </tr>
206
- <tr>
207
- <td>Official support and updates</td>
208
- <td>Present</td>
209
- <td>Lost</td>
210
- </tr>
211
- <tr>
212
- <td>Ban or suspension</td>
213
- <td>None</td>
214
- <td>Possible</td>
215
- </tr>
216
- </table>
217
- <p>As you can see, using a mod apk for European War 6: 1914 can also expose you to some risks and drawbacks that can make your game less enjoyable and rewarding. Therefore, you should be careful and cautious when using a mod apk for European War 6: 1914.</p>
218
- <h2>How to download and install a mod apk for European War 6: 1914?</h2>
219
- <p>If you still want to use a mod apk for European War 6: 1914, you need to know how to download and install it on your device. Here are the steps that you need to follow:</p>
220
- <ol>
221
- <li>Find a reliable source where you can download a mod apk for European War 6: 1914. You can search online for some websites or platforms that offer mod apks for various games, or you can ask other players who have used a mod apk for European War 6: 1914 before. However, you should be careful and wary of some sources that may contain viruses, malware, or scams that can harm your device or data.</li>
222
- <li>Download the mod apk file from the source that you have chosen. You may need to allow your device to download files from unknown sources in your settings. You may also need to disable your antivirus or firewall software temporarily to avoid any interference.</li>
223
- <li>Install the mod apk file on your device. You may need to uninstall the original game application first if you have it on your device. You may also need to enable the installation of apps from unknown sources in your settings. You may also need to grant some permissions or access to the mod apk file during the installation process.</li>
224
- <li>Launch the mod apk file on your device. You may need to verify or activate the mod apk file by following some instructions or entering some codes. You may also need to create an account or log in with an existing one to access the mod apk file.</li>
225
- <li>Enjoy the game with the mod apk file. You can now play European War 6: 1914 with all the features, resources, and content that are unlocked by the mod apk file. However, you should also be aware of the risks and drawbacks of using a mod apk file, as we discussed in the previous section.</li>
226
- </ol>
227
- <p>To help you with finding a reliable source where you can download a mod apk for European War 6: 1914, here is a link that you can use as a reference:</p>
228
- <p><a href="">European War 6: 1914 Mod Apk Unlock All - APKPure.com</a></p>
229
- <p>This is a website that offers mod apks for various games, including European War 6: 1914. It claims that its mod apks are safe, tested, and verified by its users and editors. However, you should still be careful and cautious when downloading and installing any mod apk from any source, as there is no guarantee that they are free from viruses, malware, or scams.</p>
230
- <h2>Conclusion</h2>
231
- <p>In this article, we have discussed what European War 6: 1914 is and what its features are, what a mod apk is and why some players use one, the benefits and risks of using a mod apk for European War 6: 1914, and how to download and install one. We have also provided some tips and tricks on how to use a mod apk safely and effectively.</p>
232
- <p>We hope that this article has been helpful and informative for you. If you are a fan of strategy games that simulate historical wars, you might want to give European War 6: 1914 a try. However, if you decide to use a mod apk for European War 6: 1914, you should weigh the pros and cons carefully before doing so. You should also be responsible and respectful when playing the game with or without a mod apk.</p>
233
- <p>We would love to hear your opinions, experiences, and feedback on European War 6: 1914 and its mod apk. Please feel free to share them with us in the comments section below. Thank you for reading and happy gaming!</p>
234
- <h2>FAQs</h2>
235
- <p>Here are some frequently asked questions about European War 6: 1914 and its mod apk, along with their answers:</p>
236
- <h3>Q: Is European War 6: 1914 free to play?</h3>
237
- <p>A: Yes, European War 6: 1914 is free to download and play on Android and iOS devices. However, the game may contain some in-app purchases or ads that may require real money or interrupt the gameplay.</p>
238
- <h3>Q: Is using a mod apk for European War 6: 1914 legal?</h3>
239
- <p>A: No, using a mod apk for European War 6: 1914 is not legal, as it violates the terms and conditions of the game developers or publishers, and infringes their intellectual property rights. Using a mod apk for European War 6: 1914 may result in legal actions or penalties against you.</p>
240
- <h3>Q: Is using a mod apk for European War 6: 1914 safe?</h3>
241
- <p>A: No, using a mod apk for European War 6: 1914 is not safe, as it exposes your device or data to viruses, malware, or scams that can harm them. Using a mod apk for European War 6: 1914 may also make your game incompatible with the official updates or patches, or lose access to the official online services or features.</p>
242
- <h3>Q: Is using a mod apk for European War 6: 1914 fair?</h3>
243
- <p>A: No, using a mod apk for European War 6: 1914 is not fair, as it disrupts the balance of the game for players who play legitimately. If you use a mod apk, you may also encounter other players who use mod apks to cheat or hack the game.</p>
244
- <h3>Q: Is using a mod apk for European War 6: 1914 fun?</h3>
245
- <p>A: It depends on your personal preference and judgment. Some players may find using a mod apk for European War 6: 1914 fun, as it unlocks all the features, resources, and content that are otherwise restricted or limited in the game. However, some players may find using a mod apk for European War 6: 1914 boring, as it removes the challenge and achievement that come with playing the game legitimately.</p>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1toTree/lora_test/ppdiffusers/pipelines/stable_diffusion/pipeline_fastdeploy_stable_diffusion_img2img.py DELETED
@@ -1,458 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- import inspect
16
- from typing import Callable, List, Optional, Union
17
-
18
- import numpy as np
19
- import paddle
20
- import PIL
21
-
22
- from paddlenlp.transformers import CLIPFeatureExtractor, CLIPTokenizer
23
-
24
- from ...fastdeploy_utils import FastDeployRuntimeModel
25
- from ...pipeline_utils import DiffusionPipeline
26
- from ...schedulers import (
27
- DDIMScheduler,
28
- DPMSolverMultistepScheduler,
29
- EulerAncestralDiscreteScheduler,
30
- EulerDiscreteScheduler,
31
- LMSDiscreteScheduler,
32
- PNDMScheduler,
33
- )
34
- from ...utils import PIL_INTERPOLATION, logging
35
- from . import StableDiffusionPipelineOutput
36
-
37
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
38
-
39
-
40
- # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.preprocess
41
- def preprocess(image):
42
- if isinstance(image, paddle.Tensor):
43
- return image
44
- elif isinstance(image, PIL.Image.Image):
45
- image = [image]
46
-
47
- if isinstance(image[0], PIL.Image.Image):
48
- w, h = image[0].size
49
- w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32
50
-
51
- image = [np.array(i.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]))[None, :] for i in image]
52
- image = np.concatenate(image, axis=0)
53
- image = np.array(image).astype(np.float32) / 255.0
54
- image = image.transpose(0, 3, 1, 2)
55
- image = 2.0 * image - 1.0
56
- image = paddle.to_tensor(image)
57
- elif isinstance(image[0], paddle.Tensor):
58
- image = paddle.concat(image, axis=0)
59
- return image
60
-
61
-
62
- class FastDeployStableDiffusionImg2ImgPipeline(DiffusionPipeline):
63
- r"""
64
- Pipeline for text-guided image-to-image generation using Stable Diffusion.
65
-
66
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
67
- library implements for all the pipelines (such as downloading or saving etc.)
68
-
69
- Args:
70
- vae_encoder ([`FastDeployRuntimeModel`]):
71
- Variational Auto-Encoder (VAE) Model to encode images to latent representations.
72
- vae_decoder ([`FastDeployRuntimeModel`]):
73
- Variational Auto-Encoder (VAE) Model to decode images from latent representations.
74
- text_encoder ([`FastDeployRuntimeModel`]):
75
- Frozen text-encoder. Stable Diffusion uses the text portion of
76
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
77
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
78
- tokenizer (`CLIPTokenizer`):
79
- Tokenizer of class
80
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
81
- unet ([`FastDeployRuntimeModel`]): Conditional U-Net architecture to denoise the encoded image latents.
82
- scheduler ([`SchedulerMixin`]):
83
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
84
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`PNDMScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`]
85
- or [`DPMSolverMultistepScheduler`].
86
- safety_checker ([`FastDeployRuntimeModel`]):
87
- Classification module that estimates whether generated images could be considered offensive or harmful.
88
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
89
- feature_extractor ([`CLIPFeatureExtractor`]):
90
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
91
- """
92
- _optional_components = ["safety_checker", "feature_extractor"]
93
-
94
- def __init__(
95
- self,
96
- vae_encoder: FastDeployRuntimeModel,
97
- vae_decoder: FastDeployRuntimeModel,
98
- text_encoder: FastDeployRuntimeModel,
99
- tokenizer: CLIPTokenizer,
100
- unet: FastDeployRuntimeModel,
101
- scheduler: Union[
102
- DDIMScheduler,
103
- PNDMScheduler,
104
- LMSDiscreteScheduler,
105
- EulerDiscreteScheduler,
106
- EulerAncestralDiscreteScheduler,
107
- DPMSolverMultistepScheduler,
108
- ],
109
- safety_checker: FastDeployRuntimeModel,
110
- feature_extractor: CLIPFeatureExtractor,
111
- requires_safety_checker: bool = True,
112
- ):
113
- super().__init__()
114
- if safety_checker is None and requires_safety_checker:
115
- logger.warning(
116
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
117
-                 " that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered"
118
- " results in services or applications open to the public. PaddleNLP team, diffusers team and Hugging Face"
119
-                 " strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling"
120
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
121
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
122
- )
123
- if safety_checker is not None and feature_extractor is None:
124
- raise ValueError(
125
-                 f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
126
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
127
- )
128
-
129
- self.register_modules(
130
- vae_encoder=vae_encoder,
131
- vae_decoder=vae_decoder,
132
- text_encoder=text_encoder,
133
- tokenizer=tokenizer,
134
- unet=unet,
135
- scheduler=scheduler,
136
- safety_checker=safety_checker,
137
- feature_extractor=feature_extractor,
138
- )
139
- self.register_to_config(requires_safety_checker=requires_safety_checker)
140
-
141
- def _encode_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt):
142
- r"""
143
- Encodes the prompt into text encoder hidden states.
144
-
145
- Args:
146
- prompt (`str` or `list(int)`):
147
- prompt to be encoded
148
- num_images_per_prompt (`int`):
149
- number of images that should be generated per prompt
150
- do_classifier_free_guidance (`bool`):
151
- whether to use classifier free guidance or not
152
- negative_prompt (`str` or `List[str]`):
153
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
154
- if `guidance_scale` is less than `1`).
155
- """
156
- batch_size = len(prompt) if isinstance(prompt, list) else 1
157
-
158
- # get prompt text embeddings
159
- text_inputs = self.tokenizer(
160
- prompt,
161
- padding="max_length",
162
- max_length=self.tokenizer.model_max_length,
163
- truncation=True,
164
- return_tensors="np",
165
- )
166
- text_input_ids = text_inputs.input_ids
167
- untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="np").input_ids
168
-
169
- if not np.array_equal(text_input_ids, untruncated_ids):
170
- removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
171
- logger.warning(
172
- "The following part of your input was truncated because CLIP can only handle sequences up to"
173
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
174
- )
175
-
176
- text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int64))[0]
177
- text_embeddings = np.repeat(text_embeddings, num_images_per_prompt, axis=0)
178
-
179
- # get unconditional embeddings for classifier free guidance
180
- if do_classifier_free_guidance:
181
- uncond_tokens: List[str]
182
- if negative_prompt is None:
183
- uncond_tokens = [""] * batch_size
184
- elif type(prompt) is not type(negative_prompt):
185
- raise TypeError(
186
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
187
- f" {type(prompt)}."
188
- )
189
- elif isinstance(negative_prompt, str):
190
- uncond_tokens = [negative_prompt] * batch_size
191
- elif batch_size != len(negative_prompt):
192
- raise ValueError(
193
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
194
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
195
- " the batch size of `prompt`."
196
- )
197
- else:
198
- uncond_tokens = negative_prompt
199
-
200
- max_length = text_input_ids.shape[-1]
201
- uncond_input = self.tokenizer(
202
- uncond_tokens,
203
- padding="max_length",
204
- max_length=max_length,
205
- truncation=True,
206
- return_tensors="np",
207
- )
208
- uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int64))[0]
209
- uncond_embeddings = np.repeat(uncond_embeddings, num_images_per_prompt, axis=0)
210
-
211
- # For classifier free guidance, we need to do two forward passes.
212
- # Here we concatenate the unconditional and text embeddings into a single batch
213
- # to avoid doing two forward passes
214
- text_embeddings = np.concatenate([uncond_embeddings, text_embeddings])
215
-
216
- return text_embeddings
217
-
218
- def run_safety_checker(self, image, dtype):
219
- if self.safety_checker is not None:
220
- safety_checker_input = self.feature_extractor(
221
- self.numpy_to_pil(image), return_tensors="np"
222
- ).pixel_values.astype(dtype)
223
-             # The safety checker raises an error when run with batch size > 1, so run it one image at a time
224
- images, has_nsfw_concept = [], []
225
- for i in range(image.shape[0]):
226
- image_i, has_nsfw_concept_i = self.safety_checker(
227
- clip_input=safety_checker_input[i : i + 1], images=image[i : i + 1]
228
- )
229
- images.append(image_i)
230
- has_nsfw_concept.append(has_nsfw_concept_i[0])
231
- image = np.concatenate(images)
232
- else:
233
- has_nsfw_concept = None
234
- return image, has_nsfw_concept
235
-
236
- def decode_latents(self, latents):
237
- latents = 1 / 0.18215 * latents
238
- image = np.concatenate(
239
- [self.vae_decoder(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])]
240
- )
241
- image = np.clip(image / 2 + 0.5, 0, 1)
242
- image = image.transpose([0, 2, 3, 1])
243
- return image
244
-
245
- def prepare_extra_step_kwargs(self, eta):
246
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
247
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
248
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
249
- # and should be between [0, 1]
250
-
251
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
252
- extra_step_kwargs = {}
253
- if accepts_eta:
254
- extra_step_kwargs["eta"] = eta
255
-
256
- return extra_step_kwargs
257
-
258
- def check_inputs(self, prompt, strength, callback_steps):
259
- if not isinstance(prompt, str) and not isinstance(prompt, list):
260
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
261
-
262
- if strength < 0 or strength > 1:
263
-             raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}")
264
-
265
- if (callback_steps is None) or (
266
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
267
- ):
268
- raise ValueError(
269
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
270
- f" {type(callback_steps)}."
271
- )
272
-
273
- def get_timesteps(self, num_inference_steps, strength):
274
- # get the original timestep using init_timestep
275
- offset = self.scheduler.config.get("steps_offset", 0)
276
- init_timestep = int(num_inference_steps * strength) + offset
277
- init_timestep = min(init_timestep, num_inference_steps)
278
-
279
- t_start = max(num_inference_steps - init_timestep + offset, 0)
280
- timesteps = self.scheduler.timesteps
281
- timesteps = timesteps[t_start:]
282
- return timesteps, num_inference_steps - t_start
283
-
284
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, generator=None, noise=None):
285
- if generator is None:
286
- generator = np.random
287
-
288
- image = image.astype(dtype)
289
- init_latents = self.vae_encoder(sample=image)[0]
290
- init_latents = 0.18215 * init_latents
291
-
292
- if batch_size > init_latents.shape[0] and batch_size % init_latents.shape[0] != 0:
293
- raise ValueError(
294
- f"Cannot duplicate `image` of batch size {init_latents.shape[0]} to {batch_size} text prompts."
295
- )
296
- else:
297
- init_latents = np.concatenate([init_latents] * num_images_per_prompt, axis=0)
298
-
299
- # add noise to latents using the timesteps
300
- if noise is None:
301
- noise = paddle.to_tensor(generator.randn(*init_latents.shape).astype(dtype))
302
- elif list(noise.shape) != list(init_latents.shape):
303
- raise ValueError(f"Unexpected noise shape, got {noise.shape}, expected {init_latents.shape}")
304
- elif isinstance(noise, np.ndarray):
305
- noise = paddle.to_tensor(noise, dtype=dtype)
306
-
307
- # get latents
308
- init_latents = self.scheduler.add_noise(paddle.to_tensor(init_latents), noise, timestep)
309
- latents = init_latents
310
-
311
- return latents
312
-
313
- def __call__(
314
- self,
315
- prompt: Union[str, List[str]],
316
- image: Union[np.ndarray, PIL.Image.Image] = None,
317
- strength: float = 0.8,
318
- num_inference_steps: Optional[int] = 50,
319
- guidance_scale: Optional[float] = 7.5,
320
- negative_prompt: Optional[Union[str, List[str]]] = None,
321
- num_images_per_prompt: Optional[int] = 1,
322
- eta: Optional[float] = 0.0,
323
- generator: Optional[np.random.RandomState] = None,
324
- noise: Optional[np.ndarray] = None,
325
- output_type: Optional[str] = "pil",
326
- return_dict: bool = True,
327
- callback: Optional[Callable[[int, int, np.ndarray], None]] = None,
328
- callback_steps: Optional[int] = 1,
329
- ):
330
- r"""
331
- Function invoked when calling the pipeline for generation.
332
- Args:
333
- prompt (`str` or `List[str]`):
334
- The prompt or prompts to guide the image generation.
335
- image (`np.ndarray` or `PIL.Image.Image`):
336
- `Image`, or tensor representing an image batch, that will be used as the starting point for the
337
- process.
338
- strength (`float`, *optional*, defaults to 0.8):
339
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1.
340
- `image` will be used as a starting point, adding more noise to it the larger the `strength`. The
341
- number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added
342
- noise will be maximum and the denoising process will run for the full number of iterations specified in
343
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
344
- num_inference_steps (`int`, *optional*, defaults to 50):
345
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
346
- expense of slower inference. This parameter will be modulated by `strength`.
347
- guidance_scale (`float`, *optional*, defaults to 7.5):
348
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
349
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
350
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
351
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
352
- usually at the expense of lower image quality.
353
- negative_prompt (`str` or `List[str]`, *optional*):
354
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
355
- if `guidance_scale` is less than `1`).
356
- num_images_per_prompt (`int`, *optional*, defaults to 1):
357
- The number of images to generate per prompt.
358
- eta (`float`, *optional*, defaults to 0.0):
359
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
360
- [`schedulers.DDIMScheduler`], will be ignored for others.
361
- generator (`np.random.RandomState`, *optional*):
362
- A np.random.RandomState to make generation deterministic.
363
- noise (`np.ndarray`, *optional*):
364
- Pre-generated noise tensor, sampled from a Gaussian distribution, to be used as inputs for image
365
-                 generation. If not provided, a noise tensor will be generated by sampling using the supplied random
366
- `generator`.
367
- output_type (`str`, *optional*, defaults to `"pil"`):
368
- The output format of the generate image. Choose between
369
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
370
- return_dict (`bool`, *optional*, defaults to `True`):
371
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
372
- plain tuple.
373
- callback (`Callable`, *optional*):
374
- A function that will be called every `callback_steps` steps during inference. The function will be
375
- called with the following arguments: `callback(step: int, timestep: int, latents: np.ndarray)`.
376
- callback_steps (`int`, *optional*, defaults to 1):
377
- The frequency at which the `callback` function will be called. If not specified, the callback will be
378
- called at every step.
379
- Returns:
380
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
381
-             [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple`.
382
- When returning a tuple, the first element is a list with the generated images, and the second element is a
383
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
384
- (nsfw) content, according to the `safety_checker`.
385
- """
386
- # 1. Check inputs
387
- self.check_inputs(prompt, strength, callback_steps)
388
-
389
- # 2. Define call parameters
390
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
391
-         # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
392
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
393
- # corresponds to doing no classifier free guidance.
394
- do_classifier_free_guidance = guidance_scale > 1.0
395
-
396
- # 3. Encode input prompt
397
- text_embeddings = self._encode_prompt(
398
- prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt
399
- )
400
-
401
- # 4. Preprocess image
402
- image = preprocess(image)
403
-
404
- # 5. set timesteps
405
- self.scheduler.set_timesteps(num_inference_steps)
406
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength)
407
- latent_timestep = timesteps[:1].tile([batch_size * num_images_per_prompt])
408
-
409
- # 6. Prepare latent variables
410
- latents = self.prepare_latents(
411
- image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, generator, noise
412
- )
413
-
414
- # 7. Prepare extra step kwargs.
415
- extra_step_kwargs = self.prepare_extra_step_kwargs(eta)
416
-
417
- # 8. Denoising loop
418
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
419
- with self.progress_bar(total=num_inference_steps) as progress_bar:
420
- text_embeddings = paddle.to_tensor(text_embeddings)
421
- for i, t in enumerate(timesteps):
422
- # expand the latents if we are doing classifier free guidance
423
- latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents
424
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
425
-
426
- # predict the noise residual
427
- noise_pred = self.unet.zero_copy_infer(
428
- sample=latent_model_input, timestep=t, encoder_hidden_states=text_embeddings
429
- )[0]
430
- # perform guidance
431
- if do_classifier_free_guidance:
432
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
433
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
434
-
435
- # compute the previous noisy sample x_t -> x_t-1
436
- scheduler_output = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)
437
- latents = scheduler_output.prev_sample
438
-
439
- # call the callback, if provided
440
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
441
- progress_bar.update()
442
- if callback is not None and i % callback_steps == 0:
443
- callback(i, t, latents)
444
-
445
- # 9. Post-processing
446
- image = self.decode_latents(latents.numpy())
447
-
448
- # 10. Run safety checker
449
- image, has_nsfw_concept = self.run_safety_checker(image, text_embeddings.dtype)
450
-
451
- # 11. Convert to PIL
452
- if output_type == "pil":
453
- image = self.numpy_to_pil(image)
454
-
455
- if not return_dict:
456
- return (image, has_nsfw_concept)
457
-
458
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
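For orientation, here is a minimal usage sketch of the pipeline removed above. It is only a sketch: the import path, the checkpoint identifier, and the plain `from_pretrained` call are illustrative assumptions (FastDeploy pipelines generally expect exported inference models and runtime options), while the keyword arguments mirror the `__call__` signature documented in the file.

from ppdiffusers import FastDeployStableDiffusionImg2ImgPipeline  # import path assumed
import numpy as np
import PIL.Image

# The loading step is hypothetical; a real setup usually points at a directory of
# FastDeploy-exported models rather than a plain checkpoint name.
pipe = FastDeployStableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

init_image = PIL.Image.open("sketch.png").convert("RGB").resize((512, 512))
result = pipe(
    prompt="a watercolor landscape, soft morning light",
    image=init_image,
    strength=0.8,                         # how strongly to transform the input image (0..1)
    num_inference_steps=50,
    guidance_scale=7.5,
    generator=np.random.RandomState(42),  # NumPy RNG, as the signature above expects
)
result.images[0].save("img2img_out.png")  # StableDiffusionPipelineOutput.images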
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/2023Liu2023/bingo/src/lib/hooks/use-at-bottom.tsx DELETED
@@ -1,23 +0,0 @@
1
- import * as React from 'react'
2
-
3
- export function useAtBottom(offset = 0) {
4
- const [isAtBottom, setIsAtBottom] = React.useState(false)
5
-
6
- React.useEffect(() => {
7
- const handleScroll = () => {
8
- setIsAtBottom(
9
- window.innerHeight + window.scrollY >=
10
- document.body.offsetHeight - offset
11
- )
12
- }
13
-
14
- window.addEventListener('scroll', handleScroll, { passive: true })
15
- handleScroll()
16
-
17
- return () => {
18
- window.removeEventListener('scroll', handleScroll)
19
- }
20
- }, [offset])
21
-
22
- return isAtBottom
23
- }
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/4Taps/SadTalker/src/face3d/models/arcface_torch/eval/__init__.py DELETED
File without changes
spaces/4Taps/SadTalker/src/face3d/options/test_options.py DELETED
@@ -1,21 +0,0 @@
1
- """This script contains the test options for Deep3DFaceRecon_pytorch
2
- """
3
-
4
- from .base_options import BaseOptions
5
-
6
-
7
- class TestOptions(BaseOptions):
8
- """This class includes test options.
9
-
10
- It also includes shared options defined in BaseOptions.
11
- """
12
-
13
- def initialize(self, parser):
14
- parser = BaseOptions.initialize(self, parser) # define shared options
15
- parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
16
- parser.add_argument('--dataset_mode', type=str, default=None, help='chooses how datasets are loaded. [None | flist]')
17
- parser.add_argument('--img_folder', type=str, default='examples', help='folder for test images.')
18
-
19
-         # Dropout and Batchnorm have different behaviors during training and testing.
20
- self.isTrain = False
21
- return parser
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/656-156/Real-CUGAN/upcunet_v3.py DELETED
@@ -1,714 +0,0 @@
1
- import torch
2
- from torch import nn as nn
3
- from torch.nn import functional as F
4
- import os, sys
5
- import numpy as np
6
-
7
- root_path = os.path.abspath('.')
8
- sys.path.append(root_path)
9
-
10
-
11
- class SEBlock(nn.Module):
12
- def __init__(self, in_channels, reduction=8, bias=False):
13
- super(SEBlock, self).__init__()
14
- self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias)
15
- self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias)
16
-
17
- def forward(self, x):
18
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
19
- x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half()
20
- else:
21
- x0 = torch.mean(x, dim=(2, 3), keepdim=True)
22
- x0 = self.conv1(x0)
23
- x0 = F.relu(x0, inplace=True)
24
- x0 = self.conv2(x0)
25
- x0 = torch.sigmoid(x0)
26
- x = torch.mul(x, x0)
27
- return x
28
-
29
- def forward_mean(self, x, x0):
30
- x0 = self.conv1(x0)
31
- x0 = F.relu(x0, inplace=True)
32
- x0 = self.conv2(x0)
33
- x0 = torch.sigmoid(x0)
34
- x = torch.mul(x, x0)
35
- return x
36
-
37
-
38
- class UNetConv(nn.Module):
39
- def __init__(self, in_channels, mid_channels, out_channels, se):
40
- super(UNetConv, self).__init__()
41
- self.conv = nn.Sequential(
42
- nn.Conv2d(in_channels, mid_channels, 3, 1, 0),
43
- nn.LeakyReLU(0.1, inplace=True),
44
- nn.Conv2d(mid_channels, out_channels, 3, 1, 0),
45
- nn.LeakyReLU(0.1, inplace=True),
46
- )
47
- if se:
48
- self.seblock = SEBlock(out_channels, reduction=8, bias=True)
49
- else:
50
- self.seblock = None
51
-
52
- def forward(self, x):
53
- z = self.conv(x)
54
- if self.seblock is not None:
55
- z = self.seblock(z)
56
- return z
57
-
58
-
59
- class UNet1(nn.Module):
60
- def __init__(self, in_channels, out_channels, deconv):
61
- super(UNet1, self).__init__()
62
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
63
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
64
- self.conv2 = UNetConv(64, 128, 64, se=True)
65
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
66
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
67
-
68
- if deconv:
69
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
70
- else:
71
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
72
-
73
- for m in self.modules():
74
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
75
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
76
- elif isinstance(m, nn.Linear):
77
- nn.init.normal_(m.weight, 0, 0.01)
78
- if m.bias is not None:
79
- nn.init.constant_(m.bias, 0)
80
-
81
- def forward(self, x):
82
- x1 = self.conv1(x)
83
- x2 = self.conv1_down(x1)
84
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
85
- x2 = self.conv2(x2)
86
- x2 = self.conv2_up(x2)
87
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
88
-
89
- x1 = F.pad(x1, (-4, -4, -4, -4))
90
- x3 = self.conv3(x1 + x2)
91
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
92
- z = self.conv_bottom(x3)
93
- return z
94
-
95
- def forward_a(self, x):
96
- x1 = self.conv1(x)
97
- x2 = self.conv1_down(x1)
98
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
99
- x2 = self.conv2.conv(x2)
100
- return x1, x2
101
-
102
- def forward_b(self, x1, x2):
103
- x2 = self.conv2_up(x2)
104
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
105
-
106
- x1 = F.pad(x1, (-4, -4, -4, -4))
107
- x3 = self.conv3(x1 + x2)
108
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
109
- z = self.conv_bottom(x3)
110
- return z
111
-
112
-
113
- class UNet1x3(nn.Module):
114
- def __init__(self, in_channels, out_channels, deconv):
115
- super(UNet1x3, self).__init__()
116
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
117
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
118
- self.conv2 = UNetConv(64, 128, 64, se=True)
119
- self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
120
- self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
121
-
122
- if deconv:
123
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2)
124
- else:
125
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
126
-
127
- for m in self.modules():
128
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
129
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
130
- elif isinstance(m, nn.Linear):
131
- nn.init.normal_(m.weight, 0, 0.01)
132
- if m.bias is not None:
133
- nn.init.constant_(m.bias, 0)
134
-
135
- def forward(self, x):
136
- x1 = self.conv1(x)
137
- x2 = self.conv1_down(x1)
138
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
139
- x2 = self.conv2(x2)
140
- x2 = self.conv2_up(x2)
141
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
142
-
143
- x1 = F.pad(x1, (-4, -4, -4, -4))
144
- x3 = self.conv3(x1 + x2)
145
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
146
- z = self.conv_bottom(x3)
147
- return z
148
-
149
- def forward_a(self, x):
150
- x1 = self.conv1(x)
151
- x2 = self.conv1_down(x1)
152
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
153
- x2 = self.conv2.conv(x2)
154
- return x1, x2
155
-
156
- def forward_b(self, x1, x2):
157
- x2 = self.conv2_up(x2)
158
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
159
-
160
- x1 = F.pad(x1, (-4, -4, -4, -4))
161
- x3 = self.conv3(x1 + x2)
162
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
163
- z = self.conv_bottom(x3)
164
- return z
165
-
166
-
167
- class UNet2(nn.Module):
168
- def __init__(self, in_channels, out_channels, deconv):
169
- super(UNet2, self).__init__()
170
-
171
- self.conv1 = UNetConv(in_channels, 32, 64, se=False)
172
- self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
173
- self.conv2 = UNetConv(64, 64, 128, se=True)
174
- self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0)
175
- self.conv3 = UNetConv(128, 256, 128, se=True)
176
- self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0)
177
- self.conv4 = UNetConv(128, 64, 64, se=True)
178
- self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
179
- self.conv5 = nn.Conv2d(64, 64, 3, 1, 0)
180
-
181
- if deconv:
182
- self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
183
- else:
184
- self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
185
-
186
- for m in self.modules():
187
- if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
188
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
189
- elif isinstance(m, nn.Linear):
190
- nn.init.normal_(m.weight, 0, 0.01)
191
- if m.bias is not None:
192
- nn.init.constant_(m.bias, 0)
193
-
194
- def forward(self, x):
195
- x1 = self.conv1(x)
196
- x2 = self.conv1_down(x1)
197
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
198
- x2 = self.conv2(x2)
199
-
200
- x3 = self.conv2_down(x2)
201
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
202
- x3 = self.conv3(x3)
203
- x3 = self.conv3_up(x3)
204
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
205
-
206
- x2 = F.pad(x2, (-4, -4, -4, -4))
207
- x4 = self.conv4(x2 + x3)
208
- x4 = self.conv4_up(x4)
209
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
210
-
211
- x1 = F.pad(x1, (-16, -16, -16, -16))
212
- x5 = self.conv5(x1 + x4)
213
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
214
-
215
- z = self.conv_bottom(x5)
216
- return z
217
-
218
-     def forward_a(self, x): # conv2/3/4 end with an SE block
219
- x1 = self.conv1(x)
220
- x2 = self.conv1_down(x1)
221
- x2 = F.leaky_relu(x2, 0.1, inplace=True)
222
- x2 = self.conv2.conv(x2)
223
- return x1, x2
224
-
225
-     def forward_b(self, x2): # conv2/3/4 end with an SE block
226
- x3 = self.conv2_down(x2)
227
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
228
- x3 = self.conv3.conv(x3)
229
- return x3
230
-
231
-     def forward_c(self, x2, x3): # conv2/3/4 end with an SE block
232
- x3 = self.conv3_up(x3)
233
- x3 = F.leaky_relu(x3, 0.1, inplace=True)
234
-
235
- x2 = F.pad(x2, (-4, -4, -4, -4))
236
- x4 = self.conv4.conv(x2 + x3)
237
- return x4
238
-
239
-     def forward_d(self, x1, x4): # conv2/3/4 end with an SE block
240
- x4 = self.conv4_up(x4)
241
- x4 = F.leaky_relu(x4, 0.1, inplace=True)
242
-
243
- x1 = F.pad(x1, (-16, -16, -16, -16))
244
- x5 = self.conv5(x1 + x4)
245
- x5 = F.leaky_relu(x5, 0.1, inplace=True)
246
-
247
- z = self.conv_bottom(x5)
248
- return z
249
-
250
-
251
- class UpCunet2x(nn.Module): # seamless tiling, lossless end to end
252
- def __init__(self, in_channels=3, out_channels=3):
253
- super(UpCunet2x, self).__init__()
254
- self.unet1 = UNet1(in_channels, out_channels, deconv=True)
255
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
256
-
257
- def forward(self, x, tile_mode): # 1.7G
258
- n, c, h0, w0 = x.shape
259
-         if (tile_mode == 0): # no tiling
260
- ph = ((h0 - 1) // 2 + 1) * 2
261
- pw = ((w0 - 1) // 2 + 1) * 2
262
-             x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # size must be divisible by 2
263
- x = self.unet1.forward(x)
264
- x0 = self.unet2.forward(x)
265
- x1 = F.pad(x, (-20, -20, -20, -20))
266
- x = torch.add(x0, x1)
267
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2]
268
- return x
269
-         elif (tile_mode == 1): # halve the longer side
270
- if (w0 >= h0):
271
-                 crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # divisible by 2 after halving, so it must first be divisible by 4
272
-                 crop_size_h = (h0 - 1) // 2 * 2 + 2 # divisible by 2
273
- else:
274
-                 crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # divisible by 2 after halving, so it must first be divisible by 4
275
-                 crop_size_w = (w0 - 1) // 2 * 2 + 2 # divisible by 2
276
- crop_size = (crop_size_h, crop_size_w) # 6.6G
277
-         elif (tile_mode == 2): # halve both h and w
278
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G
279
-         elif (tile_mode == 3): # h and w both to one third
280
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G
281
-         elif (tile_mode == 4): # h and w both to one quarter
282
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
283
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
284
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
285
- x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')
286
- n, c, h, w = x.shape
287
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
288
- if ("Half" in x.type()):
289
- se_mean0 = se_mean0.half()
290
- n_patch = 0
291
- tmp_dict = {}
292
- opt_res_dict = {}
293
- for i in range(0, h - 36, crop_size[0]):
294
- tmp_dict[i] = {}
295
- for j in range(0, w - 36, crop_size[1]):
296
- x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36]
297
- n, c1, h1, w1 = x_crop.shape
298
- tmp0, x_crop = self.unet1.forward_a(x_crop)
299
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
300
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
301
- else:
302
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
303
- se_mean0 += tmp_se_mean
304
- n_patch += 1
305
- tmp_dict[i][j] = (tmp0, x_crop)
306
- se_mean0 /= n_patch
307
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
308
- if ("Half" in x.type()):
309
- se_mean1 = se_mean1.half()
310
- for i in range(0, h - 36, crop_size[0]):
311
- for j in range(0, w - 36, crop_size[1]):
312
- tmp0, x_crop = tmp_dict[i][j]
313
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
314
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
315
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
316
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
317
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
318
- else:
319
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
320
- se_mean1 += tmp_se_mean
321
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
322
- se_mean1 /= n_patch
323
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
324
- if ("Half" in x.type()):
325
- se_mean0 = se_mean0.half()
326
- for i in range(0, h - 36, crop_size[0]):
327
- for j in range(0, w - 36, crop_size[1]):
328
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
329
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
330
- tmp_x3 = self.unet2.forward_b(tmp_x2)
331
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
332
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
333
- else:
334
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
335
- se_mean0 += tmp_se_mean
336
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
337
- se_mean0 /= n_patch
338
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
339
- if ("Half" in x.type()):
340
- se_mean1 = se_mean1.half()
341
- for i in range(0, h - 36, crop_size[0]):
342
- for j in range(0, w - 36, crop_size[1]):
343
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
344
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
345
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
346
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
347
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
348
- else:
349
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
350
- se_mean1 += tmp_se_mean
351
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
352
- se_mean1 /= n_patch
353
- for i in range(0, h - 36, crop_size[0]):
354
- opt_res_dict[i] = {}
355
- for j in range(0, w - 36, crop_size[1]):
356
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
357
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
358
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
359
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
360
-                 x_crop = torch.add(x0, x1) # x0 is the final output of unet2
361
- opt_res_dict[i][j] = x_crop
362
- del tmp_dict
363
- torch.cuda.empty_cache()
364
- res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device)
365
- if ("Half" in x.type()):
366
- res = res.half()
367
- for i in range(0, h - 36, crop_size[0]):
368
- for j in range(0, w - 36, crop_size[1]):
369
- res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j]
370
- del opt_res_dict
371
- torch.cuda.empty_cache()
372
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 2, :w0 * 2]
373
- return res #
374
-
375
-
376
- class UpCunet3x(nn.Module): # seamless tiling, lossless end to end
377
- def __init__(self, in_channels=3, out_channels=3):
378
- super(UpCunet3x, self).__init__()
379
- self.unet1 = UNet1x3(in_channels, out_channels, deconv=True)
380
- self.unet2 = UNet2(in_channels, out_channels, deconv=False)
381
-
382
- def forward(self, x, tile_mode): # 1.7G
383
- n, c, h0, w0 = x.shape
384
-         if (tile_mode == 0): # no tiling
385
- ph = ((h0 - 1) // 4 + 1) * 4
386
- pw = ((w0 - 1) // 4 + 1) * 4
387
-             x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect') # size must be divisible by 2
388
- x = self.unet1.forward(x)
389
- x0 = self.unet2.forward(x)
390
- x1 = F.pad(x, (-20, -20, -20, -20))
391
- x = torch.add(x0, x1)
392
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 3, :w0 * 3]
393
- return x
394
-         elif (tile_mode == 1): # halve the longer side
395
- if (w0 >= h0):
396
-                 crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2 # divisible by 4 after halving, so it must first be divisible by 8
397
-                 crop_size_h = (h0 - 1) // 4 * 4 + 4 # divisible by 4
398
- else:
399
-                 crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2 # divisible by 4 after halving, so it must first be divisible by 8
400
-                 crop_size_w = (w0 - 1) // 4 * 4 + 4 # divisible by 4
401
- crop_size = (crop_size_h, crop_size_w) # 6.6G
402
-         elif (tile_mode == 2): # halve both h and w
403
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2) # 5.6G
404
-         elif (tile_mode == 3): # h and w both to one third
405
- crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3) # 4.2G
406
-         elif (tile_mode == 4): # h and w both to one quarter
407
- crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4) # 3.7G
408
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
409
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
410
- x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')
411
- n, c, h, w = x.shape
412
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
413
- if ("Half" in x.type()):
414
- se_mean0 = se_mean0.half()
415
- n_patch = 0
416
- tmp_dict = {}
417
- opt_res_dict = {}
418
- for i in range(0, h - 28, crop_size[0]):
419
- tmp_dict[i] = {}
420
- for j in range(0, w - 28, crop_size[1]):
421
- x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28]
422
- n, c1, h1, w1 = x_crop.shape
423
- tmp0, x_crop = self.unet1.forward_a(x_crop)
424
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
425
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
426
- else:
427
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
428
- se_mean0 += tmp_se_mean
429
- n_patch += 1
430
- tmp_dict[i][j] = (tmp0, x_crop)
431
- se_mean0 /= n_patch
432
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
433
- if ("Half" in x.type()):
434
- se_mean1 = se_mean1.half()
435
- for i in range(0, h - 28, crop_size[0]):
436
- for j in range(0, w - 28, crop_size[1]):
437
- tmp0, x_crop = tmp_dict[i][j]
438
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
439
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
440
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
441
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
442
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
443
- else:
444
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
445
- se_mean1 += tmp_se_mean
446
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
447
- se_mean1 /= n_patch
448
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
449
- if ("Half" in x.type()):
450
- se_mean0 = se_mean0.half()
451
- for i in range(0, h - 28, crop_size[0]):
452
- for j in range(0, w - 28, crop_size[1]):
453
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
454
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
455
- tmp_x3 = self.unet2.forward_b(tmp_x2)
456
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
457
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
458
- else:
459
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
460
- se_mean0 += tmp_se_mean
461
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
462
- se_mean0 /= n_patch
463
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
464
- if ("Half" in x.type()):
465
- se_mean1 = se_mean1.half()
466
- for i in range(0, h - 28, crop_size[0]):
467
- for j in range(0, w - 28, crop_size[1]):
468
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
469
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
470
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
471
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
472
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
473
- else:
474
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
475
- se_mean1 += tmp_se_mean
476
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
477
- se_mean1 /= n_patch
478
- for i in range(0, h - 28, crop_size[0]):
479
- opt_res_dict[i] = {}
480
- for j in range(0, w - 28, crop_size[1]):
481
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
482
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
483
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
484
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
485
-                 x_crop = torch.add(x0, x1) # x0 is the final output of unet2
486
- opt_res_dict[i][j] = x_crop #
487
- del tmp_dict
488
- torch.cuda.empty_cache()
489
- res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device)
490
- if ("Half" in x.type()):
491
- res = res.half()
492
- for i in range(0, h - 28, crop_size[0]):
493
- for j in range(0, w - 28, crop_size[1]):
494
- res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j]
495
- del opt_res_dict
496
- torch.cuda.empty_cache()
497
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 3, :w0 * 3]
498
- return res
499
-
500
-
501
- class UpCunet4x(nn.Module): # seamless tiling, lossless end to end
502
- def __init__(self, in_channels=3, out_channels=3):
503
- super(UpCunet4x, self).__init__()
504
- self.unet1 = UNet1(in_channels, 64, deconv=True)
505
- self.unet2 = UNet2(64, 64, deconv=False)
506
- self.ps = nn.PixelShuffle(2)
507
- self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True)
508
-
509
- def forward(self, x, tile_mode):
510
- n, c, h0, w0 = x.shape
511
- x00 = x
512
-         if (tile_mode == 0): # no tiling
513
- ph = ((h0 - 1) // 2 + 1) * 2
514
- pw = ((w0 - 1) // 2 + 1) * 2
515
-             x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect') # size must be divisible by 2
516
- x = self.unet1.forward(x)
517
- x0 = self.unet2.forward(x)
518
- x1 = F.pad(x, (-20, -20, -20, -20))
519
- x = torch.add(x0, x1)
520
- x = self.conv_final(x)
521
- x = F.pad(x, (-1, -1, -1, -1))
522
- x = self.ps(x)
523
- if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 4, :w0 * 4]
524
- x += F.interpolate(x00, scale_factor=4, mode='nearest')
525
- return x
526
-         elif (tile_mode == 1): # halve the longer side
527
- if (w0 >= h0):
528
-                 crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # divisible by 2 after halving, so it must first be divisible by 4
529
-                 crop_size_h = (h0 - 1) // 2 * 2 + 2 # divisible by 2
530
- else:
531
-                 crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # divisible by 2 after halving, so it must first be divisible by 4
532
-                 crop_size_w = (w0 - 1) // 2 * 2 + 2 # divisible by 2
533
- crop_size = (crop_size_h, crop_size_w) # 6.6G
534
-         elif (tile_mode == 2): # halve both h and w
535
- crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G
536
-         elif (tile_mode == 3): # h and w both to one third
537
- crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.1G
538
-         elif (tile_mode == 4): # h and w both to one quarter
539
- crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
540
- ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
541
- pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
542
- x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')
543
- n, c, h, w = x.shape
544
- se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
545
- if ("Half" in x.type()):
546
- se_mean0 = se_mean0.half()
547
- n_patch = 0
548
- tmp_dict = {}
549
- opt_res_dict = {}
550
- for i in range(0, h - 38, crop_size[0]):
551
- tmp_dict[i] = {}
552
- for j in range(0, w - 38, crop_size[1]):
553
- x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38]
554
- n, c1, h1, w1 = x_crop.shape
555
- tmp0, x_crop = self.unet1.forward_a(x_crop)
556
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
557
- tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
558
- else:
559
- tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
560
- se_mean0 += tmp_se_mean
561
- n_patch += 1
562
- tmp_dict[i][j] = (tmp0, x_crop)
563
- se_mean0 /= n_patch
564
- se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
565
- if ("Half" in x.type()):
566
- se_mean1 = se_mean1.half()
567
- for i in range(0, h - 38, crop_size[0]):
568
- for j in range(0, w - 38, crop_size[1]):
569
- tmp0, x_crop = tmp_dict[i][j]
570
- x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
571
- opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
572
- tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
573
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
574
- tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
575
- else:
576
- tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
577
- se_mean1 += tmp_se_mean
578
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
579
- se_mean1 /= n_patch
580
- se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device) # 64#128#128#64
581
- if ("Half" in x.type()):
582
- se_mean0 = se_mean0.half()
583
- for i in range(0, h - 38, crop_size[0]):
584
- for j in range(0, w - 38, crop_size[1]):
585
- opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
586
- tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
587
- tmp_x3 = self.unet2.forward_b(tmp_x2)
588
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
589
- tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
590
- else:
591
- tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
592
- se_mean0 += tmp_se_mean
593
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
594
- se_mean0 /= n_patch
595
- se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device) # 64#128#128#64
596
- if ("Half" in x.type()):
597
- se_mean1 = se_mean1.half()
598
- for i in range(0, h - 38, crop_size[0]):
599
- for j in range(0, w - 38, crop_size[1]):
600
- opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
601
- tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
602
- tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
603
- if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
604
- tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
605
- else:
606
- tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
607
- se_mean1 += tmp_se_mean
608
- tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
609
- se_mean1 /= n_patch
610
- for i in range(0, h - 38, crop_size[0]):
611
- opt_res_dict[i] = {}
612
- for j in range(0, w - 38, crop_size[1]):
613
- opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
614
- tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
615
- x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
616
- x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
617
-                 x_crop = torch.add(x0, x1) # x0 is the final output of unet2
618
- x_crop = self.conv_final(x_crop)
619
- x_crop = F.pad(x_crop, (-1, -1, -1, -1))
620
- x_crop = self.ps(x_crop)
621
- opt_res_dict[i][j] = x_crop
622
- del tmp_dict
623
- torch.cuda.empty_cache()
624
- res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device)
625
- if ("Half" in x.type()):
626
- res = res.half()
627
- for i in range(0, h - 38, crop_size[0]):
628
- for j in range(0, w - 38, crop_size[1]):
629
- # print(opt_res_dict[i][j].shape,res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape)
630
- res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j]
631
- del opt_res_dict
632
- torch.cuda.empty_cache()
633
- if (w0 != pw or h0 != ph): res = res[:, :, :h0 * 4, :w0 * 4]
634
- res += F.interpolate(x00, scale_factor=4, mode='nearest')
635
- return res #
636
-
637
-
638
- class RealWaifuUpScaler(object):
639
- def __init__(self, scale, weight_path, half, device):
640
- weight = torch.load(weight_path, map_location="cpu")
641
- self.model = eval("UpCunet%sx" % scale)()
642
- if (half == True):
643
- self.model = self.model.half().to(device)
644
- else:
645
- self.model = self.model.to(device)
646
- self.model.load_state_dict(weight, strict=True)
647
- self.model.eval()
648
- self.half = half
649
- self.device = device
650
-
651
- def np2tensor(self, np_frame):
652
- if (self.half == False):
653
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
654
- else:
655
- return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255
656
-
657
- def tensor2np(self, tensor):
658
- if (self.half == False):
659
- return (
660
- np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
661
- else:
662
- return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
663
- (1, 2, 0)))
664
-
665
- def __call__(self, frame, tile_mode):
666
- with torch.no_grad():
667
- tensor = self.np2tensor(frame)
668
- result = self.tensor2np(self.model(tensor, tile_mode))
669
- return result
670
-
671
-
672
- if __name__ == "__main__":
673
- ###########inference_img
674
- import time, cv2, sys
675
- from time import time as ttime
676
-
677
- for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
678
- ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
679
- for tile_mode in [0, 1, 2, 3, 4]:
680
- upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0")
681
- input_dir = "%s/input_dir1" % root_path
682
- output_dir = "%s/opt-dir-all-test" % root_path
683
- os.makedirs(output_dir, exist_ok=True)
684
- for name in os.listdir(input_dir):
685
- print(name)
686
- tmp = name.split(".")
687
- inp_path = os.path.join(input_dir, name)
688
- suffix = tmp[-1]
689
- prefix = ".".join(tmp[:-1])
690
- tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
691
- print(inp_path, tmp_path)
692
-                 # supports paths containing Chinese characters
693
-                 # os.link(inp_path, tmp_path) # use a hard link on Windows
694
-                 os.symlink(inp_path, tmp_path) # use a symlink on Linux
695
- frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
696
- t0 = ttime()
697
- result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1]
698
- t1 = ttime()
699
- print(prefix, "done", t1 - t0)
700
- tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
701
- cv2.imwrite(tmp_opt_path, result)
702
- n = 0
703
- while (1):
704
- if (n == 0):
705
- suffix = "_%sx_tile%s.png" % (scale, tile_mode)
706
- else:
707
- suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n) #
708
- if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False):
709
- break
710
- else:
711
- n += 1
712
- final_opt_path = os.path.join(output_dir, prefix + suffix)
713
- os.rename(tmp_opt_path, final_opt_path)
714
- os.remove(tmp_path)
 
 
spaces/801artistry/RVC801/julius/fftconv.py DELETED
@@ -1,183 +0,0 @@
1
- # File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
2
- # Author: adefossez, 2020
3
-
4
- """
5
- Implementation of a FFT based 1D convolution in PyTorch.
6
- While FFT is used in CUDNN for small kernel sizes, it is not the case for long ones, e.g. 512.
7
- This module implements efficient FFT based convolutions for such convolutions. A typical
8
- application is for evaluating FIR filters with a long receptive field, typically
9
- evaluated with a stride of 1.
10
- """
11
- from typing import Optional
12
-
13
- import torch
14
- try:
15
- import torch.fft as new_fft
16
- except ImportError:
17
- new_fft = None # type: ignore
18
- from torch.nn import functional as F
19
-
20
- from .core import pad_to, unfold
21
- from .utils import simple_repr
22
-
23
-
24
- # This is quite verbose, but sadly needed to make TorchScript happy.
25
- def _new_rfft(x: torch.Tensor):
26
- z = new_fft.rfft(x, dim=-1)
27
- return torch.view_as_real(z)
28
-
29
-
30
- def _old_rfft(x: torch.Tensor):
31
- return torch.rfft(x, 1) # type: ignore
32
-
33
-
34
- def _old_irfft(x: torch.Tensor, length: int):
35
- result = torch.irfft(x, 1, signal_sizes=(length,)) # type: ignore
36
- return result
37
-
38
-
39
- def _new_irfft(x: torch.Tensor, length: int):
40
- x = torch.view_as_complex(x)
41
- return new_fft.irfft(x, length, dim=-1)
42
-
43
-
44
- if new_fft is None:
45
- _rfft = _old_rfft
46
- _irfft = _old_irfft
47
- else:
48
- _rfft = _new_rfft
49
- _irfft = _new_irfft
50
-
51
-
52
- def _compl_mul_conjugate(a: torch.Tensor, b: torch.Tensor):
53
- """
54
- Given a and b two tensors of dimension 4
55
- with the last dimension being the real and imaginary part,
56
- returns a multiplied by the conjugate of b, the multiplication
57
- being with respect to the second dimension.
58
-
59
- """
60
- # PyTorch 1.7 supports complex number, but not for all operations.
61
- # Once the support is widespread, this can likely go away.
62
-
63
- op = "bcft,dct->bdft"
64
- return torch.stack([
65
- torch.einsum(op, a[..., 0], b[..., 0]) + torch.einsum(op, a[..., 1], b[..., 1]),
66
- torch.einsum(op, a[..., 1], b[..., 0]) - torch.einsum(op, a[..., 0], b[..., 1])
67
- ],
68
- dim=-1)
69
-
70
-
71
- def fft_conv1d(
72
- input: torch.Tensor, weight: torch.Tensor,
73
- bias: Optional[torch.Tensor] = None, stride: int = 1, padding: int = 0,
74
- block_ratio: float = 5):
75
- """
76
- Same as `torch.nn.functional.conv1d` but using FFT for the convolution.
77
- Please check PyTorch documentation for more information.
78
-
79
- Args:
80
- input (Tensor): input signal of shape `[B, C, T]`.
81
- weight (Tensor): weight of the convolution `[D, C, K]` with `D` the number
82
- of output channels.
83
- bias (Tensor or None): if not None, bias term for the convolution.
84
- stride (int): stride of convolution.
85
- padding (int): padding to apply to the input.
86
- block_ratio (float): can be tuned for speed. The input is split into chunks
87
- with a size of `int(block_ratio * kernel_size)`.
88
-
89
- Shape:
90
-
91
- - Inputs: `input` is `[B, C, T]`, `weight` is `[D, C, K]` and bias is `[D]`.
92
- - Output: `(*, T)`
93
-
94
-
95
- ..note::
96
- This function is faster than `torch.nn.functional.conv1d` only in specific cases.
97
- Typically, the kernel size should be of the order of 256 to see any real gain,
98
- for a stride of 1.
99
-
100
- ..Warning::
101
- Dilation and groups are not supported at the moment. This function might use
102
- more memory than the default Conv1d implementation.
103
- """
104
- input = F.pad(input, (padding, padding))
105
- batch, channels, length = input.shape
106
- out_channels, _, kernel_size = weight.shape
107
-
108
- if length < kernel_size:
109
- raise RuntimeError(f"Input should be at least as large as the kernel size {kernel_size}, "
110
- f"but it is only {length} samples long.")
111
- if block_ratio < 1:
112
- raise RuntimeError("Block ratio must be at least 1.")
113
-
114
- # We are going to process the input blocks by blocks, as for some reason it is faster
115
- # and less memory intensive (I think the culprit is `torch.einsum`).
116
- block_size: int = min(int(kernel_size * block_ratio), length)
117
- fold_stride = block_size - kernel_size + 1
118
- weight = pad_to(weight, block_size)
119
- weight_z = _rfft(weight)
120
-
121
- # We pad the input and get the different frames, on which the FFT convolution is applied block by block.
122
- frames = unfold(input, block_size, fold_stride)
123
-
124
- frames_z = _rfft(frames)
125
- out_z = _compl_mul_conjugate(frames_z, weight_z)
126
- out = _irfft(out_z, block_size)
127
- # The last bit is invalid, because FFT will do a circular convolution.
128
- out = out[..., :-kernel_size + 1]
129
- out = out.reshape(batch, out_channels, -1)
130
- out = out[..., ::stride]
131
- target_length = (length - kernel_size) // stride + 1
132
- out = out[..., :target_length]
133
- if bias is not None:
134
- out += bias[:, None]
135
- return out
136
-
137
-
138
- class FFTConv1d(torch.nn.Module):
139
- """
140
- Same as `torch.nn.Conv1d` but based on `fft_conv1d`.
141
- Please check PyTorch documentation for more information.
142
-
143
- Args:
144
- in_channels (int): number of input channels.
145
- out_channels (int): number of output channels.
146
- kernel_size (int): kernel size of convolution.
147
- stride (int): stride of convolution.
148
- padding (int): padding to apply to the input.
149
- bias (bool): if True, use a bias term.
150
-
151
- ..note::
152
- This module is faster than `torch.nn.Conv1d` only in specific cases.
153
- Typically, `kernel_size` should be of the order of 256 to see any real gain,
154
- for a stride of 1.
155
-
156
- ..warning::
157
- Dilation and groups are not supported at the moment. This module might use
158
- more memory than the default Conv1d implementation.
159
-
160
- >>> fftconv = FFTConv1d(12, 24, 128, 4)
161
- >>> x = torch.randn(4, 12, 1024)
162
- >>> print(list(fftconv(x).shape))
163
- [4, 24, 225]
164
- """
165
- def __init__(self, in_channels: int, out_channels: int, kernel_size: int,
166
- stride: int = 1, padding: int = 0, bias: bool = True):
167
- super().__init__()
168
- self.in_channels = in_channels
169
- self.out_channels = out_channels
170
- self.kernel_size = kernel_size
171
- self.stride = stride
172
- self.padding = padding
173
-
174
- conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, bias=bias)
175
- self.weight = conv.weight
176
- self.bias = conv.bias
177
-
178
- def forward(self, input: torch.Tensor):
179
- return fft_conv1d(
180
- input, self.weight, self.bias, self.stride, self.padding)
181
-
182
- def __repr__(self):
183
- return simple_repr(self, overrides={"bias": self.bias is not None})
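A quick equivalence check for the fft_conv1d function above, offered as a sketch that assumes the file is importable as julius.fftconv. With a long kernel, the FFT path should agree with torch.nn.functional.conv1d up to floating-point error.

```python
import torch
from torch.nn import functional as F
from julius.fftconv import fft_conv1d   # assumes the module path shown in the file header above

x = torch.randn(2, 4, 4096)   # [B, C, T]
w = torch.randn(8, 4, 512)    # [D, C, K]; kernels this long are where the FFT path pays off
b = torch.randn(8)

ref = F.conv1d(x, w, b, stride=1)
out = fft_conv1d(x, w, b, stride=1)
print((ref - out).abs().max())   # expected to be small (roughly 1e-4 in float32)
```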
 
 
spaces/AIGC-Audio/AudioGPT/NeuralSeq/tasks/tts/dataset_utils.py DELETED
@@ -1,259 +0,0 @@
1
- from utils.cwt import get_lf0_cwt
2
- import torch.optim
3
- import torch.utils.data
4
- import importlib
5
- from utils.indexed_datasets import IndexedDataset
6
- from utils.pitch_utils import norm_interp_f0, denorm_f0, f0_to_coarse
7
- import numpy as np
8
- from tasks.base_task import BaseDataset
9
- import torch
10
- import torch.optim
11
- import torch.utils.data
12
- import utils
13
- import torch.distributions
14
- from utils.hparams import hparams
15
- from resemblyzer import VoiceEncoder
16
- import json
17
- from data_gen.tts.data_gen_utils import build_phone_encoder
18
-
19
- class BaseTTSDataset(BaseDataset):
20
- def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None):
21
- super().__init__(shuffle)
22
- self.data_dir = hparams['binary_data_dir'] if data_dir is None else data_dir
23
- self.prefix = prefix
24
- self.hparams = hparams
25
- self.indexed_ds = None
26
- self.ext_mel2ph = None
27
-
28
- def load_size():
29
- self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
30
-
31
- if prefix == 'test':
32
- if test_items is not None:
33
- self.indexed_ds, self.sizes = test_items, test_sizes
34
- else:
35
- load_size()
36
- if hparams['num_test_samples'] > 0:
37
- self.avail_idxs = [x for x in range(hparams['num_test_samples']) \
38
- if x < len(self.sizes)]
39
- if len(hparams['test_ids']) > 0:
40
- self.avail_idxs = hparams['test_ids'] + self.avail_idxs
41
- else:
42
- self.avail_idxs = list(range(len(self.sizes)))
43
- else:
44
- load_size()
45
- self.avail_idxs = list(range(len(self.sizes)))
46
-
47
- if hparams['min_frames'] > 0:
48
- self.avail_idxs = [
49
- x for x in self.avail_idxs if self.sizes[x] >= hparams['min_frames']]
50
- self.sizes = [self.sizes[i] for i in self.avail_idxs]
51
-
52
- def _get_item(self, index):
53
- if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
54
- index = self.avail_idxs[index]
55
- if self.indexed_ds is None:
56
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
57
- return self.indexed_ds[index]
58
-
59
- def __getitem__(self, index):
60
- hparams = self.hparams
61
- item = self._get_item(index)
62
- assert len(item['mel']) == self.sizes[index], (len(item['mel']), self.sizes[index])
63
- max_frames = hparams['max_frames']
64
- spec = torch.Tensor(item['mel'])[:max_frames]
65
- max_frames = spec.shape[0] // hparams['frames_multiple'] * hparams['frames_multiple']
66
- spec = spec[:max_frames]
67
- phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']])
68
- sample = {
69
- "id": index,
70
- "item_name": item['item_name'],
71
- "text": item['txt'],
72
- "txt_token": phone,
73
- "mel": spec,
74
- "mel_nonpadding": spec.abs().sum(-1) > 0,
75
- }
76
- if hparams['use_spk_embed']:
77
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
78
- if hparams['use_spk_id']:
79
- sample["spk_id"] = int(item['spk_id'])
80
- return sample
81
-
82
- def collater(self, samples):
83
- if len(samples) == 0:
84
- return {}
85
- hparams = self.hparams
86
- id = torch.LongTensor([s['id'] for s in samples])
87
- item_names = [s['item_name'] for s in samples]
88
- text = [s['text'] for s in samples]
89
- txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0)
90
- mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
91
- txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples])
92
- mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
93
-
94
- batch = {
95
- 'id': id,
96
- 'item_name': item_names,
97
- 'nsamples': len(samples),
98
- 'text': text,
99
- 'txt_tokens': txt_tokens,
100
- 'txt_lengths': txt_lengths,
101
- 'mels': mels,
102
- 'mel_lengths': mel_lengths,
103
- }
104
-
105
- if hparams['use_spk_embed']:
106
- spk_embed = torch.stack([s['spk_embed'] for s in samples])
107
- batch['spk_embed'] = spk_embed
108
- if hparams['use_spk_id']:
109
- spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
110
- batch['spk_ids'] = spk_ids
111
- return batch
112
-
113
-
114
- class FastSpeechDataset(BaseTTSDataset):
115
- def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None):
116
- super().__init__(prefix, shuffle, test_items, test_sizes, data_dir)
117
- self.f0_mean, self.f0_std = hparams.get('f0_mean', None), hparams.get('f0_std', None)
118
- if prefix == 'test' and hparams['test_input_dir'] != '':
119
- self.data_dir = hparams['test_input_dir']
120
- self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
121
- self.indexed_ds = sorted(self.indexed_ds, key=lambda item: item['item_name'])
122
- items = {}
123
- for i in range(len(self.indexed_ds)):
124
- speaker = self.indexed_ds[i]['item_name'].split('_')[0]
125
- if speaker not in items.keys():
126
- items[speaker] = [i]
127
- else:
128
- items[speaker].append(i)
129
- sort_item = sorted(items.values(), key=lambda item_pre_speaker: len(item_pre_speaker), reverse=True)
130
- self.avail_idxs = [n for a in sort_item for n in a][:hparams['num_test_samples']]
131
- self.indexed_ds, self.sizes = self.load_test_inputs()
132
- self.avail_idxs = [i for i in range(hparams['num_test_samples'])]
133
-
134
- if hparams['pitch_type'] == 'cwt':
135
- _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10))
136
-
137
- def __getitem__(self, index):
138
- sample = super(FastSpeechDataset, self).__getitem__(index)
139
- item = self._get_item(index)
140
- hparams = self.hparams
141
- max_frames = hparams['max_frames']
142
- spec = sample['mel']
143
- T = spec.shape[0]
144
- phone = sample['txt_token']
145
- sample['energy'] = (spec.exp() ** 2).sum(-1).sqrt()
146
- sample['mel2ph'] = mel2ph = torch.LongTensor(item['mel2ph'])[:T] if 'mel2ph' in item else None
147
- if hparams['use_pitch_embed']:
148
- assert 'f0' in item
149
- if hparams.get('normalize_pitch', False):
150
- f0 = item["f0"]
151
- if (f0 > 0).any() and f0[f0 > 0].std() > 0:
152
- f0[f0 > 0] = (f0[f0 > 0] - f0[f0 > 0].mean()) / f0[f0 > 0].std() * hparams['f0_std'] + \
153
- hparams['f0_mean']
154
- f0[f0 > 0] = f0[f0 > 0].clip(min=60, max=500)
155
- pitch = f0_to_coarse(f0)
156
- pitch = torch.LongTensor(pitch[:max_frames])
157
- else:
158
- pitch = torch.LongTensor(item.get("pitch"))[:max_frames] if "pitch" in item else None
159
- f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
160
- uv = torch.FloatTensor(uv)
161
- f0 = torch.FloatTensor(f0)
162
- if hparams['pitch_type'] == 'cwt':
163
- cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames]
164
- f0_mean = item.get('f0_mean', item.get('cwt_mean'))
165
- f0_std = item.get('f0_std', item.get('cwt_std'))
166
- sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std})
167
- elif hparams['pitch_type'] == 'ph':
168
- if "f0_ph" in item:
169
- f0 = torch.FloatTensor(item['f0_ph'])
170
- else:
171
- f0 = denorm_f0(f0, None, hparams)
172
- f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0)
173
- f0_phlevel_num = torch.zeros_like(phone).float().scatter_add(
174
- 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1)
175
- f0_ph = f0_phlevel_sum / f0_phlevel_num
176
- f0, uv = norm_interp_f0(f0_ph, hparams)
177
- else:
178
- f0 = uv = torch.zeros_like(mel2ph)
179
- pitch = None
180
- sample["f0"], sample["uv"], sample["pitch"] = f0, uv, pitch
181
- if hparams['use_spk_embed']:
182
- sample["spk_embed"] = torch.Tensor(item['spk_embed'])
183
- if hparams['use_spk_id']:
184
- sample["spk_id"] = item['spk_id']
185
- return sample
186
-
187
- def collater(self, samples):
188
- if len(samples) == 0:
189
- return {}
190
- hparams = self.hparams
191
- batch = super(FastSpeechDataset, self).collater(samples)
192
- f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
193
- pitch = utils.collate_1d([s['pitch'] for s in samples]) if samples[0]['pitch'] is not None else None
194
- uv = utils.collate_1d([s['uv'] for s in samples])
195
- energy = utils.collate_1d([s['energy'] for s in samples], 0.0)
196
- mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
197
- if samples[0]['mel2ph'] is not None else None
198
- batch.update({
199
- 'mel2ph': mel2ph,
200
- 'energy': energy,
201
- 'pitch': pitch,
202
- 'f0': f0,
203
- 'uv': uv,
204
- })
205
- if hparams['pitch_type'] == 'cwt':
206
- cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples])
207
- f0_mean = torch.Tensor([s['f0_mean'] for s in samples])
208
- f0_std = torch.Tensor([s['f0_std'] for s in samples])
209
- batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std})
210
- return batch
211
-
212
- def load_test_inputs(self):
213
- binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizer.BaseBinarizer')
214
- pkg = ".".join(binarizer_cls.split(".")[:-1])
215
- cls_name = binarizer_cls.split(".")[-1]
216
- binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
217
- ph_set_fn = f"{hparams['binary_data_dir']}/phone_set.json"
218
- ph_set = json.load(open(ph_set_fn, 'r'))
219
- print("| phone set: ", ph_set)
220
- phone_encoder = build_phone_encoder(hparams['binary_data_dir'])
221
- word_encoder = None
222
- voice_encoder = VoiceEncoder().cuda()
223
- encoder = [phone_encoder, word_encoder]
224
- sizes = []
225
- items = []
226
- for i in range(len(self.avail_idxs)):
227
- item = self._get_item(i)
228
-
229
- item2tgfn = f"{hparams['test_input_dir'].replace('binary', 'processed')}/mfa_outputs/{item['item_name']}.TextGrid"
230
- item = binarizer_cls.process_item(item['item_name'], item['ph'], item['txt'], item2tgfn,
231
- item['wav_fn'], item['spk_id'], encoder, hparams['binarization_args'])
232
- item['spk_embed'] = voice_encoder.embed_utterance(item['wav']) \
233
- if hparams['binarization_args']['with_spk_embed'] else None  # only compute the speaker embedding when requested
234
- items.append(item)
235
- sizes.append(item['len'])
236
- return items, sizes
237
-
238
- class FastSpeechWordDataset(FastSpeechDataset):
239
- def __getitem__(self, index):
240
- sample = super(FastSpeechWordDataset, self).__getitem__(index)
241
- item = self._get_item(index)
242
- max_frames = hparams['max_frames']
243
- sample["ph_words"] = item["ph_words"]
244
- sample["word_tokens"] = torch.LongTensor(item["word_tokens"])
245
- sample["mel2word"] = torch.LongTensor(item.get("mel2word"))[:max_frames]
246
- sample["ph2word"] = torch.LongTensor(item['ph2word'][:hparams['max_input_tokens']])
247
- return sample
248
-
249
- def collater(self, samples):
250
- batch = super(FastSpeechWordDataset, self).collater(samples)
251
- ph_words = [s['ph_words'] for s in samples]
252
- batch['ph_words'] = ph_words
253
- word_tokens = utils.collate_1d([s['word_tokens'] for s in samples], 0)
254
- batch['word_tokens'] = word_tokens
255
- mel2word = utils.collate_1d([s['mel2word'] for s in samples], 0)
256
- batch['mel2word'] = mel2word
257
- ph2word = utils.collate_1d([s['ph2word'] for s in samples], 0)
258
- batch['ph2word'] = ph2word
259
- return batch
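The collaters above rely on utils.collate_1d / collate_2d from the same repo to pad variable-length items into a single batch. A minimal sketch of the equivalent padding with stock PyTorch, for readers who do not have that utility module.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# Two phoneme sequences of different lengths, padded with 0 exactly as collate_1d would.
phones = [torch.tensor([5, 9, 2]), torch.tensor([7, 1])]
txt_tokens = pad_sequence(phones, batch_first=True, padding_value=0)
print(txt_tokens)                 # tensor([[5, 9, 2], [7, 1, 0]])
print((txt_tokens != 0).sum(-1))  # original lengths are recoverable from the pad value
```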
 
 
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py DELETED
@@ -1,69 +0,0 @@
1
- import logging
2
-
3
- import numpy as np
4
- import scipy
5
- import torch
6
- from sklearn.metrics import average_precision_score, roc_auc_score
7
-
8
- logger = logging.getLogger(f'main.{__name__}')
9
-
10
- def metrics(targets, outputs, topk=(1, 5)):
11
- """
12
- Adapted from https://github.com/hche11/VGGSound/blob/master/utils.py
13
-
14
- Calculate statistics including mAP, AUC, and d-prime.
15
- Args:
16
- output: 2d tensors, (dataset_size, classes_num) - before softmax
17
- target: 1d tensors, (dataset_size, )
18
- topk: tuple
19
- Returns:
20
- metric_dict: a dict of metrics
21
- """
22
- metrics_dict = dict()
23
-
24
- num_cls = outputs.shape[-1]
25
-
26
- # accuracy@k
27
- _, preds = torch.topk(outputs, k=max(topk), dim=1)
28
- correct_for_maxtopk = preds == targets.view(-1, 1).expand_as(preds)
29
- for k in topk:
30
- metrics_dict[f'accuracy_{k}'] = float(correct_for_maxtopk[:, :k].sum() / correct_for_maxtopk.shape[0])
31
-
32
- # avg precision, average roc_auc, and dprime
33
- targets = torch.nn.functional.one_hot(targets, num_classes=num_cls)
34
-
35
- # ids of the predicted classes (same as softmax)
36
- targets_pred = torch.softmax(outputs, dim=1)
37
-
38
- targets = targets.numpy()
39
- targets_pred = targets_pred.numpy()
40
-
41
- # one-vs-rest
42
- avg_p = [average_precision_score(targets[:, c], targets_pred[:, c], average=None) for c in range(num_cls)]
43
- try:
44
- roc_aucs = [roc_auc_score(targets[:, c], targets_pred[:, c], average=None) for c in range(num_cls)]
45
- except ValueError:
46
- logger.warning('Weird... Some classes never occured in targets. Do not trust the metrics.')
47
- roc_aucs = np.array([0.5])
48
- avg_p = np.array([0])
49
-
50
- metrics_dict['mAP'] = np.mean(avg_p)
51
- metrics_dict['mROCAUC'] = np.mean(roc_aucs)
52
- # Percent point function (ppf) (inverse of cdf — percentiles).
53
- metrics_dict['dprime'] = scipy.stats.norm().ppf(metrics_dict['mROCAUC']) * np.sqrt(2)
54
-
55
- return metrics_dict
56
-
57
-
58
- if __name__ == '__main__':
59
- targets = torch.tensor([3, 3, 1, 2, 1, 0])
60
- outputs = torch.tensor([
61
- [1.2, 1.3, 1.1, 1.5],
62
- [1.3, 1.4, 1.0, 1.1],
63
- [1.5, 1.1, 1.4, 1.3],
64
- [1.0, 1.2, 1.4, 1.5],
65
- [1.2, 1.3, 1.1, 1.1],
66
- [1.2, 1.1, 1.1, 1.1],
67
- ]).float()
68
- metrics_dict = metrics(targets, outputs, topk=(1, 3))
69
- print(metrics_dict)
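A tiny hand-check of the accuracy@k computation above: a sample counts as correct when its target class appears among the top-k logits. The numbers are illustrative.

```python
import torch

targets = torch.tensor([3, 0])
outputs = torch.tensor([[0.1, 0.2, 0.3, 0.4],
                        [0.4, 0.3, 0.2, 0.1]])
_, preds = torch.topk(outputs, k=2, dim=1)         # top-2 class ids per sample
hits = (preds == targets.view(-1, 1)).any(dim=1)   # does the target appear in the top-2?
print(hits.float().mean().item())                  # 1.0 for this toy batch
```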
 
 
spaces/AIZero2HeroBootcamp/ExperimentalChatGPTv1/templates.py DELETED
@@ -1,44 +0,0 @@
1
- css = '''
2
- <style>
3
- .chat-message {
4
- padding: 1.5rem; border-radius: 0.5rem; margin-bottom: 1rem; display: flex
5
- }
6
- .chat-message.user {
7
- background-color: #2b313e
8
- }
9
- .chat-message.bot {
10
- background-color: #475063
11
- }
12
- .chat-message .avatar {
13
- width: 20%;
14
- }
15
- .chat-message .avatar img {
16
- max-width: 78px;
17
- max-height: 78px;
18
- border-radius: 50%;
19
- object-fit: cover;
20
- }
21
- .chat-message .message {
22
- width: 80%;
23
- padding: 0 1.5rem;
24
- color: #fff;
25
- }
26
- '''
27
-
28
- bot_template = '''
29
- <div class="chat-message bot">
30
- <div class="avatar">
31
- <img src="https://cdna.artstation.com/p/assets/images/images/054/910/878/large/aaron-wacker-cyberpunk-computer-devices-iot.jpg?1665656564" style="max-height: 78px; max-width: 78px; border-radius: 50%; object-fit: cover;">
32
- </div>
33
- <div class="message">{{MSG}}</div>
34
- </div>
35
- '''
36
-
37
- user_template = '''
38
- <div class="chat-message user">
39
- <div class="avatar">
40
- <img src="https://cdnb.artstation.com/p/assets/images/images/054/910/875/large/aaron-wacker-cyberpunk-computer-brain-design.jpg?1665656558">
41
- </div>
42
- <div class="message">{{MSG}}</div>
43
- </div>
44
- '''
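These CSS/HTML strings are meant to be rendered inside a Streamlit chat app, with {{MSG}} substituted per message. A hedged sketch of that consumption, assuming the file is importable as templates.py next to the app; the rendering calls are an assumption, not part of this file.

```python
import streamlit as st
from templates import css, bot_template, user_template   # assumes this file sits beside the app

st.write(css, unsafe_allow_html=True)
st.write(user_template.replace("{{MSG}}", "Hello!"), unsafe_allow_html=True)
st.write(bot_template.replace("{{MSG}}", "Hi, how can I help?"), unsafe_allow_html=True)
```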
 
 
spaces/ALM/CALM/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: CALM
- emoji: 🎵
- colorFrom: blue
- colorTo: gray
- sdk: streamlit
- sdk_version: 1.10.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov5/yolov5_n-p6-v62_syncbn_fast_8xb16-300e_coco.py DELETED
@@ -1,15 +0,0 @@
- _base_ = 'yolov5_s-p6-v62_syncbn_fast_8xb16-300e_coco.py'
-
- deepen_factor = 0.33
- widen_factor = 0.25
-
- model = dict(
-     backbone=dict(
-         deepen_factor=deepen_factor,
-         widen_factor=widen_factor,
-     ),
-     neck=dict(
-         deepen_factor=deepen_factor,
-         widen_factor=widen_factor,
-     ),
-     bbox_head=dict(head_module=dict(widen_factor=widen_factor)))
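deepen_factor and widen_factor shrink the base YOLOv5-s model into this nano variant by scaling block repeats and channel widths. A rough sketch of the arithmetic; the base numbers below are illustrative, not values read from the MMYOLO config.

```python
# Illustrative base widths and repeats for a YOLOv5-P6 backbone (assumed values).
base_channels = [64, 128, 256, 512, 768, 1024]
base_repeats = [3, 6, 9, 3]

widen_factor, deepen_factor = 0.25, 0.33
channels = [max(int(c * widen_factor), 1) for c in base_channels]
repeats = [max(round(r * deepen_factor), 1) for r in base_repeats]
print(channels)   # the nano config keeps roughly a quarter of the channel width
print(repeats)    # and roughly a third of the block depth
```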
 
 
spaces/ATang0729/Forecast4Muses/app.py DELETED
@@ -1,357 +0,0 @@
1
- # import subprocess
2
- # subprocess.call("mim install 'mmengine>=0.6.0'", shell=True)
3
- # subprocess.call("mim install 'mmcv>=2.0.0rc4,<2.1.0'", shell=True)
4
- # subprocess.call("min install 'mmdet>=3.0.0,<4.0.0'", shell=True)
5
- # subprocess.call("mim install 'mmyolo'", shell=True)
6
- # subprocess.call("mim install 'mmpose'", shell=True)
7
- # subprocess.call("mim install 'mmpretrain'", shell=True)
8
-
9
- import numpy as np
10
- import gradio as gr
11
- import requests
12
- import base64
13
- import pandas as pd
14
- import cv2
15
- from typing import Tuple
16
- from PIL import Image
17
- from io import BytesIO
18
- from skimage import color
19
- import os
20
-
21
- from Model.Model6.model6_inference import main as model6_inferencer
22
- from mmyolo.utils import register_all_modules
23
-
24
- register_all_modules()
25
-
26
-
27
- def get_access_token(refatch=False) -> str:
- """Fetch a Baidu AI access_token.
- :param refatch: whether to request a fresh access_token instead of reusing the cached one
- :return: the access_token string"""
31
- if refatch:
32
- # client_id 为官网获取的AK, client_secret 为官网获取的SK
33
- client_id = '7OtH60uo01ZNYN4yPyahlRSx'
34
- client_secret = 'D5AxcUpyQyIA7KgPplp7dnz5tM0UIljy'
35
- host = 'https://aip.baidubce.com/oauth/2.0/token?' \
36
- 'grant_type=client_credentials&client_id=%s&client_secret=%s' % (client_id, client_secret)
37
- response = requests.get(host)
38
- # print(response)
39
- if response:
40
- return response.json()['access_token']
41
- else:
42
- r"""
43
- {"refresh_token":"25.24b9368ce91f9bd62c8dad38b3436800.315360000.2007815067.282335-30479502",
44
- "expires_in":2592000,
45
- "session_key":
46
- "9mzdWT\/YmQ7oEi9WCRWbXd0YCcrSYQY6kKZjObKunlcKcZt95j9\/q1aJqbVXihpQOXK84o5WLJ8e7d4cXOi0VUJJcz5YEQ==",
47
- "access_token":"24.becefee37aba38ea43c546fc154d3016.2592000.1695047067.282335-30479502",
48
- "scope":"public brain_all_scope brain_body_analysis brain_body_attr brain_body_number brain_driver_behavior
49
- brain_body_seg brain_gesture_detect brain_body_tracking brain_hand_analysis wise_adapt
50
- lebo_resource_base lightservice_public hetu_basic lightcms_map_poi kaidian_kaidian
51
- ApsMisTest_Test\u6743\u9650 vis-classify_flower lpq_\u5f00\u653e cop_helloScope
52
- ApsMis_fangdi_permission smartapp_snsapi_base smartapp_mapp_dev_manage iop_autocar oauth_tp_app
53
- smartapp_smart_game_openapi oauth_sessionkey smartapp_swanid_verify smartapp_opensource_openapi
54
- smartapp_opensource_recapi fake_face_detect_\u5f00\u653eScope
55
- vis-ocr_\u865a\u62df\u4eba\u7269\u52a9\u7406 idl-video_\u865a\u62df\u4eba\u7269\u52a9\u7406
56
- smartapp_component smartapp_search_plugin avatar_video_test b2b_tp_openapi b2b_tp_openapi_online
57
- smartapp_gov_aladin_to_xcx","session_secret":"5c8c3dbb80b04f58bb33aa8077758679"
58
- }
59
- """
60
- access_token = "24.becefee37aba38ea43c546fc154d3016.2592000.1695047067.282335-30479502"
61
- return access_token
62
-
63
-
64
- def resize_image(img, max_length=2048, min_length=50) -> Tuple[np.ndarray, bool]:
- """Ensure the longest side is at most `max_length` px and the shortest side is at least `min_length` px.
- :param img: image passed in from the front end
- :param max_length: maximum allowed length of the longest side, in pixels
- :param min_length: minimum allowed length of the shortest side, in pixels
- :return: the processed image and a flag indicating whether it was resized
- """
71
- flag = False
72
- max_side = max(img.shape[0], img.shape[1])
73
- min_side = min(img.shape[0], img.shape[1])
74
- if max_side > max_length:
75
- scale = max_length / max_side
76
- img = cv2.resize(img, (int(img.shape[1] * scale), int(img.shape[0] * scale)))
77
- flag = True
78
- if min_side < min_length:
79
- scale = min_length / min_side
80
- img = cv2.resize(img, (int(img.shape[1] * scale), int(img.shape[0] * scale)))
81
- flag = True
82
- return img, flag
83
-
84
-
85
- def model1_det(x):
86
- """人体检测与属性识别
87
- :param x:前端传入的图片
88
- :return:返回检测结果
89
- """
90
-
91
- def _Baidu_det(img):
92
- """调用百度AI接口进行人体检测与属性识别
93
- :param img:前端传入的图片,格式为numpy.ndarray
94
- :return:返回检测结果
95
- """
96
- request_url = "https://aip.baidubce.com/rest/2.0/image-classify/v1/body_attr"
97
- # 保存图片到本地
98
- cv2.imwrite('test.jpg', img)
99
- # 二进制方式打开图片文件
100
- f = open('test.jpg', 'rb')
101
- hex_image = base64.b64encode(f.read())
102
- # 选择二进制图片和需要输出的属性(12个)
103
- params = {
104
- "image": hex_image,
105
- "type": "gender,age,upper_wear,lower_wear,upper_color,lower_color,"
106
- "orientation,upper_cut,lower_cut,side_cut,occlusion,is_human"
107
- }
108
- access_token = get_access_token()
109
- request_url = request_url + "?access_token=" + access_token
110
- headers = {'content-type': 'application/x-www-form-urlencoded'}
111
- response = requests.post(request_url, data=params, headers=headers)
112
- if response:
113
- return response.json()
114
-
115
- def _get_attributes_list(r) -> dict:
116
- """获取人体属性列表
117
- :param r:百度AI接口返回的json数据
118
- :return:返回人体属性列表
119
- """
120
- all_humans_attributes_list = {}
121
- person_num = r['person_num']
122
- print('person_num:', person_num)
123
- for human_idx in range(person_num):
124
- attributes_dict = r['person_info'][human_idx]['attributes']
125
- attributes_list = []
126
- for key, value in attributes_dict.items():
127
- attribute = [key, value['name'], value['score']]
128
- attributes_list.append(attribute)
129
- new_value = ['attribute', 'attribute_value', 'accuracy']
130
- attributes_list.insert(0, new_value)
131
- df = pd.DataFrame(attributes_list[1:], columns=attributes_list[0])
132
- all_humans_attributes_list[human_idx] = df
133
- return all_humans_attributes_list
134
-
135
- def _show_img(img, bboxes):
136
- """显示图片
137
- :param img:前端传入的图片
138
- :param bboxes:检测框坐标
139
- :return:处理完成的图片 """
140
- line_width = int(max(img.shape[1], img.shape[0]) / 400)
141
- for bbox in bboxes:
142
- left, top, width, height = bbox['left'], bbox['top'], bbox['width'], bbox['height']
143
- right, bottom = left + width, top + height
144
- for i in range(left, right):
145
- img[top:top + line_width, i] = [255, 0, 0]
146
- img[bottom - line_width:bottom, i] = [255, 0, 0]
147
- for i in range(top, bottom):
148
- img[i, left:left + line_width] = [255, 0, 0]
149
- img[i, right - line_width:right] = [255, 0, 0]
150
- return img
151
-
152
- result = _Baidu_det(x)
153
- HAs_list = _get_attributes_list(result)
154
- locations = []
155
- for i in range(len(result['person_info'])):
156
- locations.append(result['person_info'][i]['location'])
157
-
158
- return _show_img(x, locations), f"模型检测到的人数为:{result['person_num']}人"
159
-
160
-
161
- def model2_rem(x):
162
- """背景消除
163
- :param x: 前端传入的图片
164
- :return: 返回处理后的图片
165
- """
166
-
167
- def _Baidu_rem(img):
168
- """调用百度AI接口进行背景消除
169
- :param img: 前端传入的图片,格式为numpy.ndarray
170
- :return: 返回处理后的图片
171
- """
172
- request_url = "https://aip.baidubce.com/rest/2.0/image-classify/v1/body_seg"
173
- bgr_image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
174
- cv2.imwrite('test.jpg', bgr_image)
175
- f = open('test.jpg', 'rb')
176
- hex_image = base64.b64encode(f.read())
177
- params = {"image": hex_image}
178
- access_token = get_access_token()
179
- request_url = request_url + "?access_token=" + access_token
180
- headers = {'content-type': 'application/x-www-form-urlencoded'}
181
- response = requests.post(request_url, data=params, headers=headers)
182
- if response:
183
- encoded_image = response.json()["foreground"]
184
- decoded_image = base64.b64decode(encoded_image)
185
- image = Image.open(BytesIO(decoded_image))
186
- image_array = np.array(image)
187
- return image_array
188
-
189
- resized_x, resized_f = resize_image(x)
190
- new_img = _Baidu_rem(resized_x)
191
- if resized_f:
192
- resized_f = "图片尺寸已被修改至合适大小"
193
- else:
194
- resized_f = "图片尺寸无需修改"
195
-
196
- return new_img, resized_f
197
-
198
-
199
- def model3_ext(x: np.ndarray, num_clusters=8):
200
- """主色调提取
201
- :param x: 前端传入的图片
202
- :param num_clusters: 聚类的数量
203
- :return: 返回处理后的图片,是否进行了resize的标志,颜色列表"""
204
-
205
- def _find_name(r, g, b):
206
- """根据颜色值查找颜色名称。
207
- :param r: 红色值
208
- :param g: 绿色值
209
- :param b: 蓝色值
210
- :return:返回颜色名称
211
- """
212
- # turn RGB to Lab
213
- lab = color.rgb2lab([[r / 255, g / 255, b / 255]])[0]
214
- for i in range(len(df)):
215
- # culcuate the minimum chromatic distance
216
- df['distance'] = np.sqrt((df['L'] - lab[0]) ** 2 + (df['a'] - lab[1]) ** 2 + (df['b'] - lab[2]) ** 2)
217
- # find the color name, whose chromatic distance is the minimum, and the corresponding distance
218
- name = df[df['distance'] == df['distance'].min()]['name'].values[0]
219
- distance = df[df['distance'] == df['distance'].min()]['distance'].values[0]
220
-
221
- return name, distance
222
-
223
- def _cluster(img, NUM_CLUSTERS):
224
- """K-means 聚类提取主色调
225
- :param img: 前端传入的图片
226
- :param NUM_CLUSTERS: 聚类的数量
227
- :return: 返回聚类结果
228
- """
229
- h, w, ch = img.shape
230
- reshaped_x = np.float32(img.reshape((-1, 4)))
231
- new_data_list = []
232
- for i in range(len(reshaped_x)):
233
- if reshaped_x[i][3] < 100:
234
- continue
235
- else:
236
- new_data_list.append(reshaped_x[i])
237
- reshaped_x = np.array(new_data_list)
238
- reshaped_x = np.delete(reshaped_x, 3, axis=1)
239
- criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
240
- NUM_CLUSTERS = NUM_CLUSTERS
241
- ret, label, center = cv2.kmeans(reshaped_x, NUM_CLUSTERS, None, criteria,
242
- NUM_CLUSTERS, cv2.KMEANS_RANDOM_CENTERS)
243
- clusters = np.zeros([NUM_CLUSTERS], dtype=np.int32)
244
- for i in range(len(label)):
245
- clusters[label[i][0]] += 1
246
- clusters = np.float32(clusters) / float(len(reshaped_x))
247
- center = np.int32(center)
248
- x_offset = 0
249
- card = np.zeros((50, w, 3), dtype=np.uint8)
250
- color_list = []
251
- for c in np.argsort(clusters)[::-1]:
252
- dx = int(clusters[c] * w)
253
- r = center[c][0]
254
- g = center[c][1]
255
- b = center[c][2]
256
- cv2.rectangle(card, (x_offset, 0), (x_offset + dx, 50),
257
- (int(r), int(g), int(b)), -1)
258
- color_list.append([r, g, b, str(round(clusters[c]*100, 2)) + '%'])
259
- x_offset += dx
260
-
261
- return card, resized_f, color_list
262
-
263
- file = '中国传统色_集合.xlsx'
264
- df = pd.read_excel(file, sheet_name='Sheet1')[['name', 'L', 'a', 'b']]
265
-
266
- resized_x, resized_f = resize_image(x)
267
- card, resized_f, c_list = _cluster(resized_x, num_clusters)
268
-
269
- for c in c_list:
270
- c_name, c_distance = _find_name(c[0], c[1], c[2])
271
- c.append(c_name)
272
- c.append(c_distance)
273
-
274
- if resized_f:
275
- resized_f = "图片尺寸已被修改至合适大小"
276
- else:
277
- resized_f = "图片尺寸无需修改"
278
-
279
- c_df = pd.DataFrame(c_list, columns=['R', 'G', 'B', '比例', '颜色名称', '色差ΔE'])
280
-
281
- return card, resized_f, c_df
282
-
283
-
284
- def model4_clo(x_path: str):
285
- def _get_result(input_path: str, cls_results: dict) -> pd.DataFrame:
286
- """convert the results of model6_2 to a dataframe
287
- :param input_path: the (absolute) path of the image
288
- :param cls_results: the results of model6_2
289
-
290
- :return: a dataframe to display on the web
291
- """
292
- result_pd = []
293
- img_name = os.path.basename(input_path)
294
- pred_profile = cls_results[img_name][0]['pred_class']
295
- pred_score = round(cls_results[img_name][0]['pred_score'], 2)
296
- result_pd.append([img_name, pred_profile, pred_score])
297
- df = pd.DataFrame(result_pd, columns=None)
298
- return df
299
-
300
- output_path_root = 'upload_to_web_tmp'
301
- if not os.path.exists(output_path_root):
302
- os.mkdir(output_path_root)
303
- cls_result = model6_inferencer(x_path, output_path_root)
304
-
305
- if cls_result:
306
- # read the image back with PIL and convert it to a numpy array
307
- x_name = os.path.basename(x_path)
308
- pred_x = np.array(Image.open(os.path.join(output_path_root, 'visualizations', x_name)))
309
-
310
- return pred_x, _get_result(x_path, cls_result), "识别成功!"
311
- # TODO: improve handling of the failure case (in model6_inference.py) [important]
312
- return x_path, pd.DataFrame(), "未检测到服装"
313
-
314
-
315
- with gr.Blocks() as demo:
316
- gr.Markdown("# 服装图像识别模块——功能演示")
317
- with gr.Tab("人体检测模型"):
318
- with gr.Row():
319
- model1_input = gr.Image(height=400)
320
- model1_output_img = gr.Image(height=400)
321
- # model1_output_df = gr.DataFrame()
322
- model1_button = gr.Button("开始检测")
323
- with gr.Tab("背景消除模型"):
324
- with gr.Row():
325
- model2_input = gr.Image(height=400)
326
- model2_output_img = gr.Image(height=400)
327
- model2_button = gr.Button("开始消除")
328
- with gr.Tab('主色调提取'):
329
- with gr.Row():
330
- with gr.Column():
331
- model3_input = gr.Image(height=400, image_mode='RGBA')
332
- model3_slider = gr.Slider(minimum=1, maximum=20, step=1, value=8,
333
- min_width=400, label="聚类数量")
334
- with gr.Column():
335
- model3_output_img = gr.Image(height=100)
336
- model3_output_df = gr.DataFrame(headers=['R', 'G', 'B', '比例', '颜色名称', '色差ΔE'],
337
- datatype=['number', 'number', 'number', 'str', 'str', 'number'])
338
-
339
- model3_button = gr.Button("开始提取")
340
- with gr.Tab("廓形识别"):
341
- with gr.Row():
342
- model4_input = gr.Image(height=400, type="filepath")
343
- model4_output_img = gr.Image(height=400)
344
- model4_output_df = gr.DataFrame(headers=['img_name', 'pred_profile', 'pred_score'],
345
- datatype=['str', 'str', 'number'])
346
- model4_button = gr.Button("开始识别")
347
- # 设置折叠内容
348
- with gr.Accordion("模型运行信息"):
349
- running_info = gr.Markdown("等待输入和运行...")
350
-
351
- model1_button.click(model1_det, inputs=model1_input, outputs=[model1_output_img, running_info])
352
- model2_button.click(model2_rem, inputs=model2_input, outputs=[model2_output_img, running_info])
353
- model3_button.click(model3_ext,
354
- inputs=[model3_input, model3_slider],
355
- outputs=[model3_output_img, running_info, model3_output_df])
356
- model4_button.click(model4_clo, inputs=model4_input, outputs=[model4_output_img, model4_output_df, running_info])
357
- demo.launch()
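The dominant-colour naming in model3_ext converts an RGB value to Lab and picks the palette entry with the smallest Euclidean distance (ΔE). A standalone sketch of that lookup; the two-entry palette here is made up for illustration, whereas the app loads its palette from an Excel sheet.

```python
import numpy as np
from skimage import color

palette = {"ivory": (96.5, 0.5, 8.0), "crimson": (45.0, 70.0, 35.0)}   # name -> (L, a, b), illustrative

def nearest_name(r, g, b):
    # Convert sRGB in 0..255 to Lab, then take the minimum-distance palette entry.
    lab = color.rgb2lab([[r / 255, g / 255, b / 255]])[0]
    dists = {name: float(np.linalg.norm(lab - np.array(ref))) for name, ref in palette.items()}
    return min(dists, key=dists.get)

print(nearest_name(220, 20, 60))   # expected: "crimson"
```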
 
 
spaces/Abdullahw72/bark-voice-cloning/hubert/customtokenizer.py DELETED
@@ -1,182 +0,0 @@
1
- import json
2
- import os.path
3
- from zipfile import ZipFile
4
-
5
- import numpy
6
- import torch
7
- from torch import nn, optim
8
- from torch.serialization import MAP_LOCATION
9
-
10
-
11
- class CustomTokenizer(nn.Module):
12
- def __init__(self, hidden_size=1024, input_size=768, output_size=10000, version=0):
13
- super(CustomTokenizer, self).__init__()
14
- next_size = input_size
15
- if version == 0:
16
- self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True)
17
- next_size = hidden_size
18
- if version == 1:
19
- self.lstm = nn.LSTM(input_size, hidden_size, 2, batch_first=True)
20
- self.intermediate = nn.Linear(hidden_size, 4096)
21
- next_size = 4096
22
-
23
- self.fc = nn.Linear(next_size, output_size)
24
- self.softmax = nn.LogSoftmax(dim=1)
25
- self.optimizer: optim.Optimizer = None
26
- self.lossfunc = nn.CrossEntropyLoss()
27
- self.input_size = input_size
28
- self.hidden_size = hidden_size
29
- self.output_size = output_size
30
- self.version = version
31
-
32
- def forward(self, x):
33
- x, _ = self.lstm(x)
34
- if self.version == 1:
35
- x = self.intermediate(x)
36
- x = self.fc(x)
37
- x = self.softmax(x)
38
- return x
39
-
40
- @torch.no_grad()
41
- def get_token(self, x):
42
- """
43
- Used to get the discrete tokens for a given sequence of feature vectors.
44
- :param x: An array with shape (N, input_size) where N is a whole number greater or equal to 1, and input_size is the input size used when creating the model.
45
- :return: An array with shape (N,) where N is the same as N from the input. Every number in the array is a whole number in range 0...output_size - 1 where output_size is the output size used when creating the model.
46
- """
47
- return torch.argmax(self(x), dim=1)
48
-
49
- def prepare_training(self):
50
- self.optimizer = optim.Adam(self.parameters(), 0.001)
51
-
52
- def train_step(self, x_train, y_train, log_loss=False):
53
- # y_train = y_train[:-1]
54
- # y_train = y_train[1:]
55
-
56
- optimizer = self.optimizer
57
- lossfunc = self.lossfunc
58
- # Zero the gradients
59
- self.zero_grad()
60
-
61
- # Forward pass
62
- y_pred = self(x_train)
63
-
64
- y_train_len = len(y_train)
65
- y_pred_len = y_pred.shape[0]
66
-
67
- if y_train_len > y_pred_len:
68
- diff = y_train_len - y_pred_len
69
- y_train = y_train[diff:]
70
- elif y_train_len < y_pred_len:
71
- diff = y_pred_len - y_train_len
72
- y_pred = y_pred[:-diff, :]
73
-
74
- y_train_hot = torch.zeros(len(y_train), self.output_size)
75
- y_train_hot[range(len(y_train)), y_train] = 1
76
- y_train_hot = y_train_hot.to('cuda')
77
-
78
- # Calculate the loss
79
- loss = lossfunc(y_pred, y_train_hot)
80
-
81
- # Print loss
82
- if log_loss:
83
- print('Loss', loss.item())
84
-
85
- # Backward pass
86
- loss.backward()
87
-
88
- # Update the weights
89
- optimizer.step()
90
-
91
- def save(self, path):
92
- info_path = os.path.basename(path) + '/.info'
93
- torch.save(self.state_dict(), path)
94
- data_from_model = Data(self.input_size, self.hidden_size, self.output_size, self.version)
95
- with ZipFile(path, 'a') as model_zip:
96
- model_zip.writestr(info_path, data_from_model.save())
97
- model_zip.close()
98
-
99
- @staticmethod
100
- def load_from_checkpoint(path, map_location: MAP_LOCATION = None):
101
- old = True
102
- with ZipFile(path) as model_zip:
103
- filesMatch = [file for file in model_zip.namelist() if file.endswith('/.info')]
104
- file = filesMatch[0] if filesMatch else None
105
- if file:
106
- old = False
107
- data_from_model = Data.load(model_zip.read(file).decode('utf-8'))
108
- model_zip.close()
109
- if old:
110
- model = CustomTokenizer()
111
- else:
112
- model = CustomTokenizer(data_from_model.hidden_size, data_from_model.input_size, data_from_model.output_size, data_from_model.version)
113
- model.load_state_dict(torch.load(path, map_location))
114
- return model
115
-
116
-
117
-
118
- class Data:
119
- input_size: int
120
- hidden_size: int
121
- output_size: int
122
- version: int
123
-
124
- def __init__(self, input_size=768, hidden_size=1024, output_size=10000, version=0):
125
- self.input_size = input_size
126
- self.hidden_size = hidden_size
127
- self.output_size = output_size
128
- self.version = version
129
-
130
- @staticmethod
131
- def load(string):
132
- data = json.loads(string)
133
- return Data(data['input_size'], data['hidden_size'], data['output_size'], data['version'])
134
-
135
- def save(self):
136
- data = {
137
- 'input_size': self.input_size,
138
- 'hidden_size': self.hidden_size,
139
- 'output_size': self.output_size,
140
- 'version': self.version,
141
- }
142
- return json.dumps(data)
143
-
144
-
145
- def auto_train(data_path, save_path='model.pth', load_model: str | None = None, save_epochs=1):
146
- data_x, data_y = [], []
147
-
148
- if load_model and os.path.isfile(load_model):
149
- print('Loading model from', load_model)
150
- model_training = CustomTokenizer.load_from_checkpoint(load_model, 'cuda')
151
- else:
152
- print('Creating new model.')
153
- model_training = CustomTokenizer(version=1).to('cuda') # Settings for the model to run without lstm
154
- save_path = os.path.join(data_path, save_path)
155
- base_save_path = '.'.join(save_path.split('.')[:-1])
156
-
157
- sem_string = '_semantic.npy'
158
- feat_string = '_semantic_features.npy'
159
-
160
- ready = os.path.join(data_path, 'ready')
161
- for input_file in os.listdir(ready):
162
- full_path = os.path.join(ready, input_file)
163
- if input_file.endswith(sem_string):
164
- data_y.append(numpy.load(full_path))
165
- elif input_file.endswith(feat_string):
166
- data_x.append(numpy.load(full_path))
167
- model_training.prepare_training()
168
-
169
- epoch = 1
170
-
171
- while 1:
172
- for i in range(save_epochs):
173
- j = 0
174
- for x, y in zip(data_x, data_y):
175
- model_training.train_step(torch.tensor(x).to('cuda'), torch.tensor(y).to('cuda'), j % 50 == 0) # Print loss every 50 steps
176
- j += 1
177
- save_p = save_path
178
- save_p_2 = f'{base_save_path}_epoch_{epoch}.pth'
179
- model_training.save(save_p)
180
- model_training.save(save_p_2)
181
- print(f'Epoch {epoch} completed')
182
- epoch += 1
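At inference time the CustomTokenizer above maps HuBERT-style feature frames to discrete semantic tokens. A hedged sketch; the checkpoint path and the random feature tensor are placeholders, not assets shipped with this repo.

```python
import torch

model = CustomTokenizer.load_from_checkpoint("quantifier.pth", map_location="cpu")  # placeholder path
model.eval()

features = torch.randn(200, model.input_size)   # [N, input_size] frames from a HuBERT encoder
tokens = model.get_token(features)              # [N] integers in 0..output_size-1
print(tokens.shape, int(tokens.max()))
```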
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/arcadestepclock.js DELETED
@@ -1,2 +0,0 @@
- import ArcadeStepClock from './time/clock/ArcadeStepClock.js';
- export default ArcadeStepClock;
 
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/quadimage-plugin.js DELETED
@@ -1,51 +0,0 @@
1
- import QuadImageFactory from './gameobjects/mesh/quad/image/Factory.js';
2
- import QuadImageCreator from './gameobjects/mesh/quad/image/Creator.js';
3
- import QuadImage from './gameobjects/mesh/quad/image/Image.js';
4
-
5
- import QuadRenderTextureFactory from './gameobjects/mesh/quad/rendertexture/Factory.js';
6
- import QuadRenderTextureCreator from './gameobjects/mesh/quad/rendertexture/Creator.js';
7
- import QuadRenderTexture from './gameobjects/mesh/quad/rendertexture/RenderTexture.js';
8
-
9
- import SkewImageFactory from './gameobjects/mesh/quad/skewimage/Factory';
10
- import SkewImageCreator from './gameobjects/mesh/quad/skewimage/Creator.js';
11
- import SkewImage from './gameobjects/mesh/quad/skewimage/SkewImage.js';
12
-
13
- import SkewRenderTextureFactory from './gameobjects/mesh/quad/skewrendertexture/Factory.js';
14
- import SkewRenderTextureCreator from './gameobjects/mesh/quad/skewrendertexture/Creator.js';
15
- import SkewRenderTexture from './gameobjects/mesh/quad/skewrendertexture/SkewRenderTexture.js';
16
-
17
- import ContainerSkew from './behaviors/containerskew/ContainerSkew.js';
18
-
19
- import SetValue from './utils/object/SetValue.js';
20
-
21
- class QuadImagePlugin extends Phaser.Plugins.BasePlugin {
22
-
23
- constructor(pluginManager) {
24
- super(pluginManager);
25
-
26
- // Register our new Game Object type
27
- pluginManager.registerGameObject('rexQuadImage', QuadImageFactory, QuadImageCreator);
28
- pluginManager.registerGameObject('rexQuadRenderTexture', QuadRenderTextureFactory, QuadRenderTextureCreator);
29
-
30
- pluginManager.registerGameObject('rexSkewImage', SkewImageFactory, SkewImageCreator);
31
- pluginManager.registerGameObject('rexSkewRenderTexture', SkewRenderTextureFactory, SkewRenderTextureCreator);
32
- }
33
-
34
- start() {
35
- var eventEmitter = this.game.events;
36
- eventEmitter.on('destroy', this.destroy, this);
37
- }
38
-
39
- addContainerSkew(parentContainer, config) {
40
- return new ContainerSkew(parentContainer, config);
41
- }
42
- }
43
-
44
- SetValue(window, 'RexPlugins.GameObjects.QuadImage', QuadImage);
45
- SetValue(window, 'RexPlugins.GameObjects.QuadRenderTexture', QuadRenderTexture);
46
- SetValue(window, 'RexPlugins.GameObjects.SkewImage', SkewImage);
47
- SetValue(window, 'RexPlugins.GameObjects.SkewRenderTexture', SkewRenderTexture);
48
-
49
- SetValue(window, 'RexPlugins.GameObjects.ContainerSkew', ContainerSkew);
50
-
51
- export default QuadImagePlugin;
 
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/facebook/Facebook.js DELETED
@@ -1,50 +0,0 @@
1
- import Base from '../base/Base.js';
2
- import { Line } from '../utils/Geoms.js';
3
- import Yoyo from '../utils/Yoyo.js';
4
-
5
- const Linear = Phaser.Math.Linear;
6
- const ExpoIn = Phaser.Math.Easing.Expo.In;
7
-
8
- class Facebook extends Base {
9
- constructor(scene, config) {
10
- super(scene, config);
11
- this.type = 'rexSpinnerFacebook';
12
- }
13
-
14
- buildShapes() {
15
- for (var i = 0; i < 3; i++) {
16
- var shape = new Line();
17
- this.addShape(shape);
18
- }
19
- }
20
-
21
- updateShapes() {
22
- var centerX = this.centerX;
23
- var centerY = this.centerY;
24
- var radius = this.radius;
25
- var leftBound = centerX - radius;
26
-
27
- var shapes = this.getShapes(),
28
- cnt = shapes.length;
29
- var cellWidth = (radius * 2) / cnt;
30
- var cellHeight = radius * 2;
31
-
32
- for (var i = 0; i < cnt; i++) {
33
- var line = shapes[i];
34
- var t = (this.value + ((cnt - i) * 0.1)) % 1;
35
- t = ExpoIn(Yoyo(t));
36
-
37
- var lineAlpha = (i + 1) / cnt;
38
- var lineHeight = Linear(0.7, 1, t) * cellHeight;
39
- var lineWidth = Linear(0.7, 1, t) * cellWidth;
40
- var x = leftBound + (cellWidth * (i + 0.5));
41
-
42
- line
43
- .lineStyle(lineWidth, this.color, lineAlpha)
44
- .setP0(x, centerY - (lineHeight / 2))
45
- .setP1(x, centerY + (lineHeight / 2));
46
- }
47
- }
48
- }
49
-
50
- export default Facebook;
 
 
spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/losses.py DELETED
@@ -1,42 +0,0 @@
- import torch
- from torch import nn
-
-
- def get_loss(name):
-     if name == "cosface":
-         return CosFace()
-     elif name == "arcface":
-         return ArcFace()
-     else:
-         raise ValueError()
-
-
- class CosFace(nn.Module):
-     def __init__(self, s=64.0, m=0.40):
-         super(CosFace, self).__init__()
-         self.s = s
-         self.m = m
-
-     def forward(self, cosine, label):
-         index = torch.where(label != -1)[0]
-         m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
-         m_hot.scatter_(1, label[index, None], self.m)
-         cosine[index] -= m_hot
-         ret = cosine * self.s
-         return ret
-
-
- class ArcFace(nn.Module):
-     def __init__(self, s=64.0, m=0.5):
-         super(ArcFace, self).__init__()
-         self.s = s
-         self.m = m
-
-     def forward(self, cosine: torch.Tensor, label):
-         index = torch.where(label != -1)[0]
-         m_hot = torch.zeros(index.size()[0], cosine.size()[1], device=cosine.device)
-         m_hot.scatter_(1, label[index, None], self.m)
-         cosine.acos_()
-         cosine[index] += m_hot
-         cosine.cos_().mul_(self.s)
-         return cosine
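A sketch of how the ArcFace head above is typically driven: cosine logits come from normalized embeddings and class prototypes, the margin is added on the target class angle, and cross-entropy is taken on the scaled result. The shapes and random embeddings are illustrative, not the training setup of this repo.

```python
import torch
import torch.nn.functional as F

emb = F.normalize(torch.randn(4, 512))           # normalized face embeddings [B, 512]
weight = F.normalize(torch.randn(1000, 512))     # one normalized prototype per identity
cosine = emb @ weight.t()                        # cosine similarities [B, 1000]
labels = torch.randint(0, 1000, (4,))

logits = ArcFace(s=64.0, m=0.5)(cosine, labels)  # margin applied to the target angles, then scaled by s
loss = F.cross_entropy(logits, labels)
print(loss.item())
```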
 
 
spaces/AlphonseBrandon/speecht5-tts-demo/app.py DELETED
@@ -1,129 +0,0 @@
1
- import gradio as gr
2
- import librosa
3
- import numpy as np
4
- import torch
5
-
6
- from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
7
-
8
-
9
- checkpoint = "microsoft/speecht5_tts"
10
- processor = SpeechT5Processor.from_pretrained(checkpoint)
11
- model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
12
- vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
13
-
14
-
15
- speaker_embeddings = {
16
- "BDL": "spkemb/cmu_us_bdl_arctic-wav-arctic_a0009.npy",
17
- "CLB": "spkemb/cmu_us_clb_arctic-wav-arctic_a0144.npy",
18
- "KSP": "spkemb/cmu_us_ksp_arctic-wav-arctic_b0087.npy",
19
- "RMS": "spkemb/cmu_us_rms_arctic-wav-arctic_b0353.npy",
20
- "SLT": "spkemb/cmu_us_slt_arctic-wav-arctic_a0508.npy",
21
- }
22
-
23
-
24
- def predict(text, speaker):
25
- if len(text.strip()) == 0:
26
- return (16000, np.zeros(0).astype(np.int16))
27
-
28
- inputs = processor(text=text, return_tensors="pt")
29
-
30
- # limit input length
31
- input_ids = inputs["input_ids"]
32
- input_ids = input_ids[..., :model.config.max_text_positions]
33
-
34
- if speaker == "Surprise Me!":
35
- # load one of the provided speaker embeddings at random
36
- idx = np.random.randint(len(speaker_embeddings))
37
- key = list(speaker_embeddings.keys())[idx]
38
- speaker_embedding = np.load(speaker_embeddings[key])
39
-
40
- # randomly shuffle the elements
41
- np.random.shuffle(speaker_embedding)
42
-
43
- # randomly flip half the values
44
- x = (np.random.rand(512) >= 0.5) * 1.0
45
- x[x == 0] = -1.0
46
- speaker_embedding *= x
47
-
48
- #speaker_embedding = np.random.rand(512).astype(np.float32) * 0.3 - 0.15
49
- else:
50
- speaker_embedding = np.load(speaker_embeddings[speaker[:3]])
51
-
52
- speaker_embedding = torch.tensor(speaker_embedding).unsqueeze(0)
53
-
54
- speech = model.generate_speech(input_ids, speaker_embedding, vocoder=vocoder)
55
-
56
- speech = (speech.numpy() * 32767).astype(np.int16)
57
- return (16000, speech)
58
-
59
-
60
- title = "SpeechT5: Speech Synthesis"
61
-
62
- description = """
63
- The <b>SpeechT5</b> model is pre-trained on text as well as speech inputs, with targets that are also a mix of text and speech.
64
- By pre-training on text and speech at the same time, it learns unified representations for both, resulting in improved modeling capabilities.
65
-
66
- SpeechT5 can be fine-tuned for different speech tasks. This space demonstrates the <b>text-to-speech</b> (TTS) checkpoint for the English language.
67
-
68
- See also the <a href="https://huggingface.co/spaces/Matthijs/speecht5-asr-demo">speech recognition (ASR) demo</a>
69
- and the <a href="https://huggingface.co/spaces/Matthijs/speecht5-vc-demo">voice conversion demo</a>.
70
-
71
- <b>How to use:</b> Enter some English text and choose a speaker. The output is a mel spectrogram, which is converted to a mono 16 kHz waveform by the
72
- HiFi-GAN vocoder. Because the model always applies random dropout, each attempt will give slightly different results.
73
- The <em>Surprise Me!</em> option creates a completely randomized speaker.
74
- """
75
-
76
- article = """
77
- <div style='margin:20px auto;'>
78
-
79
- <p>References: <a href="https://arxiv.org/abs/2110.07205">SpeechT5 paper</a> |
80
- <a href="https://github.com/microsoft/SpeechT5/">original GitHub</a> |
81
- <a href="https://huggingface.co/mechanicalsea/speecht5-tts">original weights</a></p>
82
-
83
- <pre>
84
- @article{Ao2021SpeechT5,
85
- title = {SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing},
86
- author = {Junyi Ao and Rui Wang and Long Zhou and Chengyi Wang and Shuo Ren and Yu Wu and Shujie Liu and Tom Ko and Qing Li and Yu Zhang and Zhihua Wei and Yao Qian and Jinyu Li and Furu Wei},
87
- eprint={2110.07205},
88
- archivePrefix={arXiv},
89
- primaryClass={eess.AS},
90
- year={2021}
91
- }
92
- </pre>
93
-
94
- <p>Speaker embeddings were generated from <a href="http://www.festvox.org/cmu_arctic/">CMU ARCTIC</a> using <a href="https://huggingface.co/mechanicalsea/speecht5-vc/blob/main/manifest/utils/prep_cmu_arctic_spkemb.py">this script</a>.</p>
95
-
96
- </div>
97
- """
98
-
99
- examples = [
100
- ["It is not in the stars to hold our destiny but in ourselves.", "BDL (male)"],
101
- ["The octopus and Oliver went to the opera in October.", "CLB (female)"],
102
- ["She sells seashells by the seashore. I saw a kitten eating chicken in the kitchen.", "RMS (male)"],
103
- ["Brisk brave brigadiers brandished broad bright blades, blunderbusses, and bludgeons—balancing them badly.", "SLT (female)"],
104
- ["A synonym for cinnamon is a cinnamon synonym.", "BDL (male)"],
105
- ["How much wood would a woodchuck chuck if a woodchuck could chuck wood? He would chuck, he would, as much as he could, and chuck as much wood as a woodchuck would if a woodchuck could chuck wood.", "CLB (female)"],
106
- ]
107
-
108
- gr.Interface(
109
- fn=predict,
110
- inputs=[
111
- gr.Text(label="Input Text"),
112
- gr.Radio(label="Speaker", choices=[
113
- "BDL (male)",
114
- "CLB (female)",
115
- "KSP (male)",
116
- "RMS (male)",
117
- "SLT (female)",
118
- "Surprise Me!"
119
- ],
120
- value="BDL (male)"),
121
- ],
122
- outputs=[
123
- gr.Audio(label="Generated Speech", type="numpy"),
124
- ],
125
- title=title,
126
- description=description,
127
- article=article,
128
- examples=examples,
129
- ).launch()
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/批量翻译PDF文档_多线程.py DELETED
@@ -1,131 +0,0 @@
1
- from toolbox import CatchException, report_execption, write_results_to_file
2
- from toolbox import update_ui
3
- from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive
4
- from .crazy_utils import request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency
5
- from .crazy_utils import read_and_clean_pdf_text
6
- from colorful import *
7
-
8
- @CatchException
9
- def 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
10
- import glob
11
- import os
12
-
13
- # 基本信息:功能、贡献者
14
- chatbot.append([
15
- "函数插件功能?",
16
- "批量翻译PDF文档。函数插件贡献者: Binary-Husky"])
17
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
18
-
19
- # 尝试导入依赖,如果缺少依赖,则给出安装建议
20
- try:
21
- import fitz
22
- import tiktoken
23
- except:
24
- report_execption(chatbot, history,
25
- a=f"解析项目: {txt}",
26
- b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pymupdf tiktoken```。")
27
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
28
- return
29
-
30
- # 清空历史,以免输入溢出
31
- history = []
32
-
33
- # 检测输入参数,如没有给定输入参数,直接退出
34
- if os.path.exists(txt):
35
- project_folder = txt
36
- else:
37
- if txt == "":
38
- txt = '空空如也的输入栏'
39
- report_execption(chatbot, history,
40
- a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}")
41
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
42
- return
43
-
44
- # 搜索需要处理的文件清单
45
- file_manifest = [f for f in glob.glob(
46
- f'{project_folder}/**/*.pdf', recursive=True)]
47
-
48
- # 如果没找到任何文件
49
- if len(file_manifest) == 0:
50
- report_execption(chatbot, history,
51
- a=f"解析项目: {txt}", b=f"找不到任何.tex或.pdf文件: {txt}")
52
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
53
- return
54
-
55
- # 开始正式执行任务
56
- yield from 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt)
57
-
58
-
59
- def 解析PDF(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt):
60
- import os
61
- import tiktoken
62
- TOKEN_LIMIT_PER_FRAGMENT = 1280
63
- generated_conclusion_files = []
64
- for index, fp in enumerate(file_manifest):
65
-
66
- # 读取PDF文件
67
- file_content, page_one = read_and_clean_pdf_text(fp)
68
-
69
- # 递归地切割PDF文件
70
- from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf
71
- from request_llm.bridge_all import model_info
72
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
73
- def get_token_num(txt): return len(enc.encode(txt, disallowed_special=()))
74
- paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
75
- txt=file_content, get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT)
76
- page_one_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf(
77
- txt=str(page_one), get_token_fn=get_token_num, limit=TOKEN_LIMIT_PER_FRAGMENT//4)
78
-
79
- # 为了更好的效果,我们剥离Introduction之后的部分(如果有)
80
- paper_meta = page_one_fragments[0].split('introduction')[0].split('Introduction')[0].split('INTRODUCTION')[0]
81
-
82
- # 单线,获取文章meta信息
83
- paper_meta_info = yield from request_gpt_model_in_new_thread_with_ui_alive(
84
- inputs=f"以下是一篇学术论文的基础信息,请从中提取出“标题”、“收录会议或期刊”、“作者”、“摘要”、“编号”、“作者邮箱”这六个部分。请用markdown格式输出,最后用中文翻译摘要部分。请提取:{paper_meta}",
85
- inputs_show_user=f"请从{fp}中提取出“标题”、“收录会议或期刊”等基本信息。",
86
- llm_kwargs=llm_kwargs,
87
- chatbot=chatbot, history=[],
88
- sys_prompt="Your job is to collect information from materials。",
89
- )
90
-
91
- # 多线,翻译
92
- gpt_response_collection = yield from request_gpt_model_multi_threads_with_very_awesome_ui_and_high_efficiency(
93
- inputs_array=[
94
- f"你需要翻译以下内容:\n{frag}" for frag in paper_fragments],
95
- inputs_show_user_array=[f"\n---\n 原文: \n\n {frag.replace('#', '')} \n---\n 翻译:\n " for frag in paper_fragments],
96
- llm_kwargs=llm_kwargs,
97
- chatbot=chatbot,
98
- history_array=[[paper_meta] for _ in paper_fragments],
99
- sys_prompt_array=[
100
- "请你作为一个学术翻译,负责把学术论文准确翻译成中文。注意文章中的每一句话都要翻译。" for _ in paper_fragments],
101
-             # max_workers=5  # OpenAI所允许的最大并行过载
102
- )
103
-
104
- # 整理报告的格式
105
- for i,k in enumerate(gpt_response_collection):
106
- if i%2==0:
107
- gpt_response_collection[i] = f"\n\n---\n\n ## 原文[{i//2}/{len(gpt_response_collection)//2}]: \n\n {paper_fragments[i//2].replace('#', '')} \n\n---\n\n ## 翻译[{i//2}/{len(gpt_response_collection)//2}]:\n "
108
- else:
109
- gpt_response_collection[i] = gpt_response_collection[i]
110
- final = ["一、论文概况\n\n---\n\n", paper_meta_info.replace('# ', '### ') + '\n\n---\n\n', "二、论文翻译", ""]
111
- final.extend(gpt_response_collection)
112
- create_report_file_name = f"{os.path.basename(fp)}.trans.md"
113
- res = write_results_to_file(final, file_name=create_report_file_name)
114
-
115
- # 更新UI
116
- generated_conclusion_files.append(f'./gpt_log/{create_report_file_name}')
117
- chatbot.append((f"{fp}完成了吗?", res))
118
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
119
-
120
- # 准备文件的下载
121
- import shutil
122
- for pdf_path in generated_conclusion_files:
123
- # 重命名文件
124
- rename_file = f'./gpt_log/总结论文-{os.path.basename(pdf_path)}'
125
- if os.path.exists(rename_file):
126
- os.remove(rename_file)
127
- shutil.copyfile(pdf_path, rename_file)
128
- if os.path.exists(pdf_path):
129
- os.remove(pdf_path)
130
- chatbot.append(("给出输出文件清单", str(generated_conclusion_files)))
131
- yield from update_ui(chatbot=chatbot, history=history) # 刷新界面
spaces/Amon1/ChatGPTForAcadamic/README.md DELETED
@@ -1,290 +0,0 @@
1
- ---
2
- title: ChatGPTForAcademic
3
- emoji: 📑
4
- colorFrom: gray
5
- colorTo: red
6
- sdk: gradio
7
- sdk_version: 3.23.0
8
- app_file: main.py
9
- pinned: false
10
- license: gpl-3.0
11
- duplicated_from: vixr1/ChatGPTForAcadamic
12
- ---
13
-
14
-
15
- # ChatGPT 学术优化
16
-
17
- This project is based on [binary-husky/chatgpt_academic](https://github.com/binary-husky/chatgpt_academic), which is
18
- **provided specifically for personally deployed Hugging Face Spaces**. My own space
19
- is deployed on [HansBug/ChatGPTForAcadamic](https://huggingface.co/spaces/HansBug/ChatGPTForAcadamic).
20
-
21
- You need to put these secrets into your github fork repo:
22
- * `HF_USERNAME`: Username of your huggingface account, `hansbug` for me.
23
- * `HF_PASSWORD`: Password of your huggingface account.
24
- * `HF_SPACE_REPO`: Your space repo on huggingface, `HansBug/ChatGPTForAcadamic` for me.
25
- * `HF_EMAIL`: Your email used when creating commits to Hugging Face.
26
-
27
- You need to put these secrets into your huggingface space repo:
28
- * `OPENAI_API_KEY`: Your api key of openai.
29
-
30
- **如果喜欢这个项目,请给它一个Star;如果你发明了更好用的快捷键或函数插件,欢迎发issue或者pull requests(dev分支)**
31
-
32
- If you like this project, please give it a Star. If you've come up with more useful academic shortcuts or functional plugins, feel free to open an issue or pull request (to `dev` branch).
33
-
34
- ```
35
- 代码中参考了很多其他优秀项目中的设计,主要包括:
36
-
37
- # 借鉴项目1:借鉴了ChuanhuChatGPT中读取OpenAI json的方法、记录历史问询记录的方法以及gradio queue的使用技巧
38
- https://github.com/GaiZhenbiao/ChuanhuChatGPT
39
-
40
- # 借鉴项目2:借鉴了mdtex2html中公式处理的方法
41
- https://github.com/polarwinkel/mdtex2html
42
-
43
- 项目使用OpenAI的gpt-3.5-turbo模型,期待gpt-4早点放宽门槛😂
44
- ```
45
-
46
- > **Note**
47
- >
48
- > 1.请注意只有“红颜色”标识的函数插件(按钮)才支持读取文件。目前对pdf/word格式文件的支持插件正在逐步完善中,需要更多developer的帮助。
49
- >
50
- > 2.本项目中每个文件的功能都在自译解[`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A)详细说明。随着版本的迭代,您也可以随时自行点击相关函数插件,调用GPT重新生成项目的自我解析报告。常见问题汇总在[`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98)当中。
51
- >
52
- > 3.如果您不太习惯部分中文命名的函数、注释或者界面,您可以随时点击相关函数插件,调用ChatGPT一键生成纯英文的项目源代码。
53
-
54
- <div align="center">
55
-
56
- 功能 | 描述
57
- --- | ---
58
- 一键润色 | 支持一键润色、一键查找论文语法错误
59
- 一键中英互译 | 一键中英互译
60
- 一键代码解释 | 可以正确显示代码、解释代码
61
- 自定义快捷键 | 支持自定义快捷键
62
- 配置代理服务器 | 支持配置代理服务器
63
- 模块化设计 | 支持自定义高阶的实验性功能与[函数插件],插件支持[热更新](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
64
- 自我程序剖析 | [函数插件] 一键读懂本项目的源代码
65
- 程序剖析 | [函数插件] 一键可以剖析其他Python/C/C++/Java项目树
66
- 读论文 | [函数插件] 一键解读latex论文全文并生成摘要
67
- 批量注释生成 | [函数插件] 一键批量生成函数注释
68
- chat分析报告生成 | [函数插件] 运行后自动生成总结汇报
69
- arxiv小助手 | [函数插件] 输入arxiv文章url即可一键翻译摘要+下载PDF
70
- 公式显示 | 可以同时显示公式的tex形式和渲染形式
71
- 图片显示 | 可以在markdown中显示图片
72
- 多线程函数插件支持 | 支持多线调用chatgpt,一键处理海量文本或程序
73
- 支持GPT输出的markdown表格 | 可以输出支持GPT的markdown表格
74
- 启动暗色gradio[主题](https://github.com/binary-husky/chatgpt_academic/issues/173) | 在浏览器url后面添加```/?__dark-theme=true```可以切换dark主题
75
- huggingface免科学上网[在线体验](https://huggingface.co/spaces/qingxu98/gpt-academic) | 登陆huggingface后复制[此空间](https://huggingface.co/spaces/qingxu98/gpt-academic)
76
- …… | ……
77
-
78
- </div>
79
-
80
- <!-- - 新界面(左:master主分支, 右:dev开发前沿) -->
81
- - 新界面
82
- <div align="center">
83
- <img src="https://user-images.githubusercontent.com/96192199/229222589-b30ff298-adb1-4e1e-8352-466085919bfb.png" width="700" >
84
- </div>
85
-
86
-
87
- - 所有按钮都通过读取functional.py动态生成,可随意加自定义功能,解放粘贴板
88
- <div align="center">
89
- <img src="img/公式.gif" width="700" >
90
- </div>
91
-
92
- - 润色/纠错
93
- <div align="center">
94
- <img src="img/润色.gif" width="700" >
95
- </div>
96
-
97
-
98
- - 支持GPT输出的markdown表格
99
- <div align="center">
100
- <img src="img/demo2.jpg" width="500" >
101
- </div>
102
-
103
- - 如果输出包含公式,会同时以tex形式和渲染形式显示,方便复制和阅读
104
- <div align="center">
105
- <img src="img/demo.jpg" width="500" >
106
- </div>
107
-
108
-
109
- - 懒得看项目代码?整个工程直接给chatgpt炫嘴里
110
- <div align="center">
111
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
112
- </div>
113
-
114
- ## 直接运行 (Windows, Linux or MacOS)
115
-
116
- ### 1. 下载项目
117
- ```sh
118
- git clone https://github.com/binary-husky/chatgpt_academic.git
119
- cd chatgpt_academic
120
- ```
121
-
122
- ### 2. 配置API_KEY和代理设置
123
-
124
- 在`config.py`中,配置 海外Proxy 和 OpenAI API KEY,说明如下
125
- ```
126
- 1. 如果你在国内,需要设置海外代理才能够顺利使用 OpenAI API,设置方法请仔细阅读config.py(1.修改其中的USE_PROXY为True; 2.按照说明修改其中的proxies)。
127
- 2. 配置 OpenAI API KEY。你需要在 OpenAI 官网上注册并获取 API KEY。一旦你拿到了 API KEY,在 config.py 文件里配置好即可。
128
- 3. 与代理网络有关的issue(网络超时、代理不起作用)汇总到 https://github.com/binary-husky/chatgpt_academic/issues/1
129
- ```
130
- (P.S. 程序运行时会优先检查是否存在名为`config_private.py`的私密配置文件,并用其中的配置覆盖`config.py`的同名配置。因此,如果您能理解我们的配置读取逻辑,我们强烈建议您在`config.py`旁边创建一个名为`config_private.py`的新配置文件,并把`config.py`中的配置转移(复制)到`config_private.py`中。`config_private.py`不受git管控,可以让您的隐私信息更加安全。)
131
-
132
-
133
- ### 3. 安装依赖
134
- ```sh
135
- # (选择一)推荐
136
- python -m pip install -r requirements.txt
137
-
138
- # (选择二)如果您使用anaconda,步骤也是类似的:
139
- # (选择二.1)conda create -n gptac_venv python=3.11
140
- # (选择二.2)conda activate gptac_venv
141
- # (选择二.3)python -m pip install -r requirements.txt
142
-
143
- # 备注:使用官方pip源或者阿里pip源,其他pip源(如一些大学的pip)有可能出问题,临时换源方法:
144
- # python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
145
- ```
146
-
147
- ### 4. 运行
148
- ```sh
149
- python main.py
150
- ```
151
-
152
- ### 5. 测试实验性功能
153
- ```
154
- - 测试C++项目头文件分析
155
- input区域 输入 `./crazy_functions/test_project/cpp/libJPG` , 然后点击 "[实验] 解析整个C++项目(input输入项目根路径)"
156
- - 测试给Latex项目写摘要
157
- input区域 输入 `./crazy_functions/test_project/latex/attention` , 然后点击 "[实验] 读tex论文写摘要(input输入项目根路径)"
158
- - 测试Python项目分析
159
- input区域 输入 `./crazy_functions/test_project/python/dqn` , 然后点击 "[实验] 解析整个py项目(input输入项目根路径)"
160
- - 测试自我代码解读
161
- 点击 "[实验] 请解析并解构此项目本身"
162
- - 测试实验功能模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能
163
- 点击 "[实验] 实验功能函数模板"
164
- ```
165
-
166
- ## 使用docker (Linux)
167
-
168
- ``` sh
169
- # 下载项目
170
- git clone https://github.com/binary-husky/chatgpt_academic.git
171
- cd chatgpt_academic
172
- # 配置 海外Proxy 和 OpenAI API KEY
173
- 用任意文本编辑器编辑 config.py
174
- # 安装
175
- docker build -t gpt-academic .
176
- # 运行
177
- docker run --rm -it --net=host gpt-academic
178
-
179
- # 测试实验性功能
180
- ## 测试自我代码解读
181
- 点击 "[实验] 请解析并解构此项目本身"
182
- ## 测试实验功能模板函数(要求gpt回答历史上的今天发生了什么),您可以根据此函数为模板,实现更复杂的功能
183
- 点击 "[实验] 实验功能函数模板"
184
- ##(请注意在docker中运行时,需要额外注意程序的文件访问权限问题)
185
- ## 测试C++项目头文件分析
186
- input区域 输入 ./crazy_functions/test_project/cpp/libJPG , 然后点击 "[实验] 解析整个C++项目(input输入项目根路径)"
187
- ## 测试给Latex项目写摘要
188
- input区域 输入 ./crazy_functions/test_project/latex/attention , 然后点击 "[实验] 读tex论文写摘要(input输入项目根路径)"
189
- ## 测试Python项目分析
190
- input区域 输入 ./crazy_functions/test_project/python/dqn , 然后点击 "[实验] 解析整个py项目(input输入项目根路径)"
191
-
192
- ```
193
-
194
- ## 其他部署方式
195
- - 使用WSL2(Windows Subsystem for Linux 子系统)
196
- 请访问[部署wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
197
-
198
- - nginx远程部署
199
- 请访问[部署wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E7%9A%84%E6%8C%87%E5%AF%BC)
200
-
201
-
202
- ## 自定义新的便捷按钮(学术快捷键自定义)
203
- 打开functional.py,添加条目如下,然后重启程序即可。(如果按钮已经添加成功并可见,那么前缀、后缀都支持热修改,无需重启程序即可生效。)
204
- 例如
205
- ```
206
- "超级英译中": {
207
-
208
-     # 前缀,会被加在你的输入之前。例如,用来描述你的要求,例如翻译、解释代码、润色等等
209
- "Prefix": "请翻译把下面一段内容成中文,然后用一个markdown表格逐一解释文中出现的专有名词:\n\n",
210
-
211
- # 后缀,会被加在你的输入之后。例如,配合前缀可以把你的输入内容用引号圈起来。
212
- "Suffix": "",
213
-
214
- },
215
- ```
216
- <div align="center">
217
- <img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
218
- </div>
219
-
220
-
221
- 如果你发明了更好用的学术快捷键,欢迎发issue或者pull requests!
222
-
223
- ## 配置代理
224
- ### 方法一:常规方法
225
- 在```config.py```中修改端口与代理软件对应
226
-
227
- <div align="center">
228
- <img src="https://user-images.githubusercontent.com/96192199/226571294-37a47cd9-4d40-4c16-97a2-d360845406f7.png" width="500" >
229
- <img src="https://user-images.githubusercontent.com/96192199/226838985-e5c95956-69c2-4c23-a4dd-cd7944eeb451.png" width="500" >
230
- </div>
231
-
232
- 配置完成后,你可以用以下命令测试代理是否工作,如果一切正常,下面的代码将输出你的代理服务器所在地:
233
- ```
234
- python check_proxy.py
235
- ```
236
- ### 方法二:纯新手教程
237
- [纯新手教程](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
238
-
239
- ## 兼容性测试
240
-
241
- ### 图片显示:
242
-
243
- <div align="center">
244
- <img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
245
- </div>
246
-
247
-
248
- ### 如果一个程序能够读懂并剖析自己:
249
-
250
- <div align="center">
251
- <img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
252
- </div>
253
-
254
- <div align="center">
255
- <img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
256
- </div>
257
-
258
- ### 其他任意Python/Cpp项目剖析:
259
- <div align="center">
260
- <img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
261
- </div>
262
-
263
- <div align="center">
264
- <img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
265
- </div>
266
-
267
- ### Latex论文一键阅读理解与摘要生成
268
- <div align="center">
269
- <img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
270
- </div>
271
-
272
- ### 自动报告生成
273
- <div align="center">
274
- <img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
275
- <img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
276
- <img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
277
- </div>
278
-
279
- ### 模块化功能设计
280
- <div align="center">
281
- <img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
282
- <img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
283
- </div>
284
-
285
- ## Todo:
286
-
287
- - (Top Priority) 调用另一个开源项目text-generation-webui的web接口,使用其他llm模型
288
- - 总结大工程源代码时,文本过长、token溢出的问题(目前的方法是直接二分丢弃处理溢出,过于粗暴,有效信息大量丢失)
289
-
290
-
spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/__init__.py DELETED
@@ -1,11 +0,0 @@
1
- # Copyright (c) SenseTime Research. All rights reserved.
2
-
3
- # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
4
- #
5
- # NVIDIA CORPORATION and its licensors retain all intellectual property
6
- # and proprietary rights in and to this software, related documentation
7
- # and any modifications thereto. Any use, reproduction, disclosure or
8
- # distribution of this software and related documentation without an express
9
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
10
-
11
- # empty
spaces/Amrrs/DragGan-Inversion/stylegan_human/training/networks_stylegan3.py DELETED
@@ -1,634 +0,0 @@
1
- # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2
- #
3
- # NVIDIA CORPORATION and its licensors retain all intellectual property
4
- # and proprietary rights in and to this software, related documentation
5
- # and any modifications thereto. Any use, reproduction, disclosure or
6
- # distribution of this software and related documentation without an express
7
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
8
-
9
- """Generator architecture from the paper
10
- "Alias-Free Generative Adversarial Networks"."""
11
-
12
- import numpy as np
13
- import scipy.signal
14
- import scipy.optimize
15
- import torch
16
- import torch.nn.functional as F
17
- from torch_utils import misc
18
- from torch_utils import persistence
19
- from torch_utils.ops import conv2d_gradfix
20
- from torch_utils.ops import filtered_lrelu
21
- from torch_utils.ops import bias_act
22
-
23
- # ----------------------------------------------------------------------------
24
-
25
-
26
- @misc.profiled_function
27
- def modulated_conv2d(
28
- # Input tensor: [batch_size, in_channels, in_height, in_width]
29
- x,
30
- # Weight tensor: [out_channels, in_channels, kernel_height, kernel_width]
31
- w,
32
- s, # Style tensor: [batch_size, in_channels]
33
- demodulate=True, # Apply weight demodulation?
34
- padding=0, # Padding: int or [padH, padW]
35
- input_gain=None, # Optional scale factors for the input channels: [], [in_channels], or [batch_size, in_channels]
36
- ):
37
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
38
- batch_size = int(x.shape[0])
39
- out_channels, in_channels, kh, kw = w.shape
40
- misc.assert_shape(w, [out_channels, in_channels, kh, kw]) # [OIkk]
41
- misc.assert_shape(x, [batch_size, in_channels, None, None]) # [NIHW]
42
- misc.assert_shape(s, [batch_size, in_channels]) # [NI]
43
-
44
- # Pre-normalize inputs.
45
- if demodulate:
46
- w = w * w.square().mean([1, 2, 3], keepdim=True).rsqrt()
47
- s = s * s.square().mean().rsqrt()
48
-
49
- # Modulate weights.
50
- w = w.unsqueeze(0) # [NOIkk]
51
- w = w * s.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
52
-
53
- # Demodulate weights.
54
- if demodulate:
55
- dcoefs = (w.square().sum(dim=[2, 3, 4]) + 1e-8).rsqrt() # [NO]
56
- w = w * dcoefs.unsqueeze(2).unsqueeze(3).unsqueeze(4) # [NOIkk]
57
-
58
- # Apply input scaling.
59
- if input_gain is not None:
60
- input_gain = input_gain.expand(batch_size, in_channels) # [NI]
61
- w = w * input_gain.unsqueeze(1).unsqueeze(3).unsqueeze(4) # [NOIkk]
62
-
63
- # Execute as one fused op using grouped convolution.
64
- x = x.reshape(1, -1, *x.shape[2:])
65
- w = w.reshape(-1, in_channels, kh, kw)
66
- x = conv2d_gradfix.conv2d(input=x, weight=w.to(
67
- x.dtype), padding=padding, groups=batch_size)
68
- x = x.reshape(batch_size, -1, *x.shape[2:])
69
- return x
70
-
71
- # ----------------------------------------------------------------------------
72
-
73
-
74
- @persistence.persistent_class
75
- class FullyConnectedLayer(torch.nn.Module):
76
- def __init__(self,
77
- in_features, # Number of input features.
78
- out_features, # Number of output features.
79
- # Activation function: 'relu', 'lrelu', etc.
80
- activation='linear',
81
- bias=True, # Apply additive bias before the activation function?
82
- lr_multiplier=1, # Learning rate multiplier.
83
- # Initial standard deviation of the weight tensor.
84
- weight_init=1,
85
- bias_init=0, # Initial value of the additive bias.
86
- ):
87
- super().__init__()
88
- self.in_features = in_features
89
- self.out_features = out_features
90
- self.activation = activation
91
- self.weight = torch.nn.Parameter(torch.randn(
92
- [out_features, in_features]) * (weight_init / lr_multiplier))
93
- bias_init = np.broadcast_to(np.asarray(
94
- bias_init, dtype=np.float32), [out_features])
95
- self.bias = torch.nn.Parameter(torch.from_numpy(
96
- bias_init / lr_multiplier)) if bias else None
97
- self.weight_gain = lr_multiplier / np.sqrt(in_features)
98
- self.bias_gain = lr_multiplier
99
-
100
- def forward(self, x):
101
- w = self.weight.to(x.dtype) * self.weight_gain
102
- b = self.bias
103
- if b is not None:
104
- b = b.to(x.dtype)
105
- if self.bias_gain != 1:
106
- b = b * self.bias_gain
107
- if self.activation == 'linear' and b is not None:
108
- x = torch.addmm(b.unsqueeze(0), x, w.t())
109
- else:
110
- x = x.matmul(w.t())
111
- x = bias_act.bias_act(x, b, act=self.activation)
112
- return x
113
-
114
- def extra_repr(self):
115
- return f'in_features={self.in_features:d}, out_features={self.out_features:d}, activation={self.activation:s}'
116
-
117
- # ----------------------------------------------------------------------------
118
-
119
-
120
- @persistence.persistent_class
121
- class MappingNetwork(torch.nn.Module):
122
- def __init__(self,
123
- z_dim, # Input latent (Z) dimensionality.
124
- # Conditioning label (C) dimensionality, 0 = no labels.
125
- c_dim,
126
- # Intermediate latent (W) dimensionality.
127
- w_dim,
128
- # Number of intermediate latents to output.
129
- num_ws,
130
- num_layers=2, # Number of mapping layers.
131
- # Learning rate multiplier for the mapping layers.
132
- lr_multiplier=0.01,
133
- # Decay for tracking the moving average of W during training.
134
- w_avg_beta=0.998,
135
- ):
136
- super().__init__()
137
- self.z_dim = z_dim
138
- self.c_dim = c_dim
139
- self.w_dim = w_dim
140
- self.num_ws = num_ws
141
- self.num_layers = num_layers
142
- self.w_avg_beta = w_avg_beta
143
-
144
- # Construct layers.
145
- self.embed = FullyConnectedLayer(
146
- self.c_dim, self.w_dim) if self.c_dim > 0 else None
147
- features = [self.z_dim + (self.w_dim if self.c_dim >
148
- 0 else 0)] + [self.w_dim] * self.num_layers
149
- for idx, in_features, out_features in zip(range(num_layers), features[:-1], features[1:]):
150
- layer = FullyConnectedLayer(
151
- in_features, out_features, activation='lrelu', lr_multiplier=lr_multiplier)
152
- setattr(self, f'fc{idx}', layer)
153
- self.register_buffer('w_avg', torch.zeros([w_dim]))
154
-
155
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False):
156
- misc.assert_shape(z, [None, self.z_dim])
157
- if truncation_cutoff is None:
158
- truncation_cutoff = self.num_ws
159
-
160
- # Embed, normalize, and concatenate inputs.
161
- x = z.to(torch.float32)
162
- x = x * (x.square().mean(1, keepdim=True) + 1e-8).rsqrt()
163
- if self.c_dim > 0:
164
- misc.assert_shape(c, [None, self.c_dim])
165
- y = self.embed(c.to(torch.float32))
166
- y = y * (y.square().mean(1, keepdim=True) + 1e-8).rsqrt()
167
- x = torch.cat([x, y], dim=1) if x is not None else y
168
-
169
- # Execute layers.
170
- for idx in range(self.num_layers):
171
- x = getattr(self, f'fc{idx}')(x)
172
-
173
- # Update moving average of W.
174
- if update_emas:
175
- self.w_avg.copy_(x.detach().mean(
176
- dim=0).lerp(self.w_avg, self.w_avg_beta))
177
-
178
- # Broadcast and apply truncation.
179
- x = x.unsqueeze(1).repeat([1, self.num_ws, 1])
180
- if truncation_psi != 1:
181
- x[:, :truncation_cutoff] = self.w_avg.lerp(
182
- x[:, :truncation_cutoff], truncation_psi)
183
- return x
184
-
185
- def extra_repr(self):
186
- return f'z_dim={self.z_dim:d}, c_dim={self.c_dim:d}, w_dim={self.w_dim:d}, num_ws={self.num_ws:d}'
187
-
188
- # ----------------------------------------------------------------------------
189
-
190
-
191
- @persistence.persistent_class
192
- class SynthesisInput(torch.nn.Module):
193
- def __init__(self,
194
- w_dim, # Intermediate latent (W) dimensionality.
195
- channels, # Number of output channels.
196
- size, # Output spatial size: int or [width, height].
197
- sampling_rate, # Output sampling rate.
198
- bandwidth, # Output bandwidth.
199
- ):
200
- super().__init__()
201
- self.w_dim = w_dim
202
- self.channels = channels
203
- self.size = np.broadcast_to(np.asarray(size), [2])
204
- self.sampling_rate = sampling_rate
205
- self.bandwidth = bandwidth
206
-
207
- # Draw random frequencies from uniform 2D disc.
208
- freqs = torch.randn([self.channels, 2])
209
- radii = freqs.square().sum(dim=1, keepdim=True).sqrt()
210
- freqs /= radii * radii.square().exp().pow(0.25)
211
- freqs *= bandwidth
212
- phases = torch.rand([self.channels]) - 0.5
213
-
214
- # Setup parameters and buffers.
215
- self.weight = torch.nn.Parameter(
216
- torch.randn([self.channels, self.channels]))
217
- self.affine = FullyConnectedLayer(
218
- w_dim, 4, weight_init=0, bias_init=[1, 0, 0, 0])
219
- # User-specified inverse transform wrt. resulting image.
220
- self.register_buffer('transform', torch.eye(3, 3))
221
- self.register_buffer('freqs', freqs)
222
- self.register_buffer('phases', phases)
223
-
224
- def forward(self, w):
225
- # Introduce batch dimension.
226
- transforms = self.transform.unsqueeze(0) # [batch, row, col]
227
- freqs = self.freqs.unsqueeze(0) # [batch, channel, xy]
228
- phases = self.phases.unsqueeze(0) # [batch, channel]
229
-
230
- # Apply learned transformation.
231
- t = self.affine(w) # t = (r_c, r_s, t_x, t_y)
232
- # t' = (r'_c, r'_s, t'_x, t'_y)
233
- t = t / t[:, :2].norm(dim=1, keepdim=True)
234
- # Inverse rotation wrt. resulting image.
235
- m_r = torch.eye(3, device=w.device).unsqueeze(
236
- 0).repeat([w.shape[0], 1, 1])
237
- m_r[:, 0, 0] = t[:, 0] # r'_c
238
- m_r[:, 0, 1] = -t[:, 1] # r'_s
239
- m_r[:, 1, 0] = t[:, 1] # r'_s
240
- m_r[:, 1, 1] = t[:, 0] # r'_c
241
- # Inverse translation wrt. resulting image.
242
- m_t = torch.eye(3, device=w.device).unsqueeze(
243
- 0).repeat([w.shape[0], 1, 1])
244
- m_t[:, 0, 2] = -t[:, 2] # t'_x
245
- m_t[:, 1, 2] = -t[:, 3] # t'_y
246
- # First rotate resulting image, then translate, and finally apply user-specified transform.
247
- transforms = m_r @ m_t @ transforms
248
-
249
- # Transform frequencies.
250
- phases = phases + (freqs @ transforms[:, :2, 2:]).squeeze(2)
251
- freqs = freqs @ transforms[:, :2, :2]
252
-
253
- # Dampen out-of-band frequencies that may occur due to the user-specified transform.
254
- amplitudes = (1 - (freqs.norm(dim=2) - self.bandwidth) /
255
- (self.sampling_rate / 2 - self.bandwidth)).clamp(0, 1)
256
-
257
- # Construct sampling grid.
258
- theta = torch.eye(2, 3, device=w.device)
259
- theta[0, 0] = 0.5 * self.size[0] / self.sampling_rate
260
- theta[1, 1] = 0.5 * self.size[1] / self.sampling_rate
261
- grids = torch.nn.functional.affine_grid(theta.unsqueeze(
262
- 0), [1, 1, self.size[1], self.size[0]], align_corners=False)
263
-
264
- # Compute Fourier features.
265
- x = (grids.unsqueeze(3) @ freqs.permute(0, 2, 1).unsqueeze(1).unsqueeze(2)
266
- ).squeeze(3) # [batch, height, width, channel]
267
- x = x + phases.unsqueeze(1).unsqueeze(2)
268
- x = torch.sin(x * (np.pi * 2))
269
- x = x * amplitudes.unsqueeze(1).unsqueeze(2)
270
-
271
- # Apply trainable mapping.
272
- weight = self.weight / np.sqrt(self.channels)
273
- x = x @ weight.t()
274
-
275
- # Ensure correct shape.
276
- x = x.permute(0, 3, 1, 2) # [batch, channel, height, width]
277
- misc.assert_shape(x, [w.shape[0], self.channels,
278
- int(self.size[1]), int(self.size[0])])
279
- return x
280
-
281
- def extra_repr(self):
282
- return '\n'.join([
283
- f'w_dim={self.w_dim:d}, channels={self.channels:d}, size={list(self.size)},',
284
- f'sampling_rate={self.sampling_rate:g}, bandwidth={self.bandwidth:g}'])
285
-
286
- # ----------------------------------------------------------------------------
287
-
288
-
289
- @persistence.persistent_class
290
- class SynthesisLayer(torch.nn.Module):
291
- def __init__(self,
292
- # Intermediate latent (W) dimensionality.
293
- w_dim,
294
- is_torgb, # Is this the final ToRGB layer?
295
- is_critically_sampled, # Does this layer use critical sampling?
296
- use_fp16, # Does this layer use FP16?
297
-
298
- # Input & output specifications.
299
- in_channels, # Number of input channels.
300
- out_channels, # Number of output channels.
301
- # Input spatial size: int or [width, height].
302
- in_size,
303
- # Output spatial size: int or [width, height].
304
- out_size,
305
- in_sampling_rate, # Input sampling rate (s).
306
- out_sampling_rate, # Output sampling rate (s).
307
- # Input cutoff frequency (f_c).
308
- in_cutoff,
309
- # Output cutoff frequency (f_c).
310
- out_cutoff,
311
- # Input transition band half-width (f_h).
312
- in_half_width,
313
- # Output Transition band half-width (f_h).
314
- out_half_width,
315
-
316
- # Hyperparameters.
317
- # Convolution kernel size. Ignored for final the ToRGB layer.
318
- conv_kernel=3,
319
- # Low-pass filter size relative to the lower resolution when up/downsampling.
320
- filter_size=6,
321
- # Relative sampling rate for leaky ReLU. Ignored for final the ToRGB layer.
322
- lrelu_upsampling=2,
323
- # Use radially symmetric downsampling filter? Ignored for critically sampled layers.
324
- use_radial_filters=False,
325
- # Clamp the output to [-X, +X], None = disable clamping.
326
- conv_clamp=256,
327
- # Decay rate for the moving average of input magnitudes.
328
- magnitude_ema_beta=0.999,
329
- ):
330
- super().__init__()
331
- self.w_dim = w_dim
332
- self.is_torgb = is_torgb
333
- self.is_critically_sampled = is_critically_sampled
334
- self.use_fp16 = use_fp16
335
- self.in_channels = in_channels
336
- self.out_channels = out_channels
337
- self.in_size = np.broadcast_to(np.asarray(in_size), [2])
338
- self.out_size = np.broadcast_to(np.asarray(out_size), [2])
339
- self.in_sampling_rate = in_sampling_rate
340
- self.out_sampling_rate = out_sampling_rate
341
- self.tmp_sampling_rate = max(
342
- in_sampling_rate, out_sampling_rate) * (1 if is_torgb else lrelu_upsampling)
343
- self.in_cutoff = in_cutoff
344
- self.out_cutoff = out_cutoff
345
- self.in_half_width = in_half_width
346
- self.out_half_width = out_half_width
347
- self.conv_kernel = 1 if is_torgb else conv_kernel
348
- self.conv_clamp = conv_clamp
349
- self.magnitude_ema_beta = magnitude_ema_beta
350
-
351
- # Setup parameters and buffers.
352
- self.affine = FullyConnectedLayer(
353
- self.w_dim, self.in_channels, bias_init=1)
354
- self.weight = torch.nn.Parameter(torch.randn(
355
- [self.out_channels, self.in_channels, self.conv_kernel, self.conv_kernel]))
356
- self.bias = torch.nn.Parameter(torch.zeros([self.out_channels]))
357
- self.register_buffer('magnitude_ema', torch.ones([]))
358
-
359
- # Design upsampling filter.
360
- self.up_factor = int(
361
- np.rint(self.tmp_sampling_rate / self.in_sampling_rate))
362
- assert self.in_sampling_rate * self.up_factor == self.tmp_sampling_rate
363
- self.up_taps = filter_size * \
364
- self.up_factor if self.up_factor > 1 and not self.is_torgb else 1
365
- self.register_buffer('up_filter', self.design_lowpass_filter(
366
- numtaps=self.up_taps, cutoff=self.in_cutoff, width=self.in_half_width*2, fs=self.tmp_sampling_rate))
367
-
368
- # Design downsampling filter.
369
- self.down_factor = int(
370
- np.rint(self.tmp_sampling_rate / self.out_sampling_rate))
371
- assert self.out_sampling_rate * self.down_factor == self.tmp_sampling_rate
372
- self.down_taps = filter_size * \
373
- self.down_factor if self.down_factor > 1 and not self.is_torgb else 1
374
- self.down_radial = use_radial_filters and not self.is_critically_sampled
375
- self.register_buffer('down_filter', self.design_lowpass_filter(
376
- numtaps=self.down_taps, cutoff=self.out_cutoff, width=self.out_half_width*2, fs=self.tmp_sampling_rate, radial=self.down_radial))
377
-
378
- # Compute padding.
379
- # Desired output size before downsampling.
380
- pad_total = (self.out_size - 1) * self.down_factor + 1
381
- # Input size after upsampling.
382
- pad_total -= (self.in_size + self.conv_kernel - 1) * self.up_factor
383
- # Size reduction caused by the filters.
384
- pad_total += self.up_taps + self.down_taps - 2
385
- # Shift sample locations according to the symmetric interpretation (Appendix C.3).
386
- pad_lo = (pad_total + self.up_factor) // 2
387
- pad_hi = pad_total - pad_lo
388
- self.padding = [int(pad_lo[0]), int(pad_hi[0]),
389
- int(pad_lo[1]), int(pad_hi[1])]
390
-
391
- def forward(self, x, w, noise_mode='random', force_fp32=False, update_emas=False):
392
- assert noise_mode in ['random', 'const', 'none'] # unused
393
- misc.assert_shape(x, [None, self.in_channels, int(
394
- self.in_size[1]), int(self.in_size[0])])
395
- misc.assert_shape(w, [x.shape[0], self.w_dim])
396
-
397
- # Track input magnitude.
398
- if update_emas:
399
- with torch.autograd.profiler.record_function('update_magnitude_ema'):
400
- magnitude_cur = x.detach().to(torch.float32).square().mean()
401
- self.magnitude_ema.copy_(magnitude_cur.lerp(
402
- self.magnitude_ema, self.magnitude_ema_beta))
403
- input_gain = self.magnitude_ema.rsqrt()
404
-
405
- # Execute affine layer.
406
- styles = self.affine(w)
407
- if self.is_torgb:
408
- weight_gain = 1 / \
409
- np.sqrt(self.in_channels * (self.conv_kernel ** 2))
410
- styles = styles * weight_gain
411
-
412
- # Execute modulated conv2d.
413
- dtype = torch.float16 if (
414
- self.use_fp16 and not force_fp32 and x.device.type == 'cuda') else torch.float32
415
- x = modulated_conv2d(x=x.to(dtype), w=self.weight, s=styles,
416
- padding=self.conv_kernel-1, demodulate=(not self.is_torgb), input_gain=input_gain)
417
-
418
- # Execute bias, filtered leaky ReLU, and clamping.
419
- gain = 1 if self.is_torgb else np.sqrt(2)
420
- slope = 1 if self.is_torgb else 0.2
421
- x = filtered_lrelu.filtered_lrelu(x=x, fu=self.up_filter, fd=self.down_filter, b=self.bias.to(x.dtype),
422
- up=self.up_factor, down=self.down_factor, padding=self.padding, gain=gain, slope=slope, clamp=self.conv_clamp)
423
-
424
- # Ensure correct shape and dtype.
425
- misc.assert_shape(x, [None, self.out_channels, int(
426
- self.out_size[1]), int(self.out_size[0])])
427
- assert x.dtype == dtype
428
- return x
429
-
430
- @staticmethod
431
- def design_lowpass_filter(numtaps, cutoff, width, fs, radial=False):
432
- assert numtaps >= 1
433
-
434
- # Identity filter.
435
- if numtaps == 1:
436
- return None
437
-
438
- # Separable Kaiser low-pass filter.
439
- if not radial:
440
- f = scipy.signal.firwin(
441
- numtaps=numtaps, cutoff=cutoff, width=width, fs=fs)
442
- return torch.as_tensor(f, dtype=torch.float32)
443
-
444
- # Radially symmetric jinc-based filter.
445
- x = (np.arange(numtaps) - (numtaps - 1) / 2) / fs
446
- r = np.hypot(*np.meshgrid(x, x))
447
- f = scipy.special.j1(2 * cutoff * (np.pi * r)) / (np.pi * r)
448
- beta = scipy.signal.kaiser_beta(
449
- scipy.signal.kaiser_atten(numtaps, width / (fs / 2)))
450
- w = np.kaiser(numtaps, beta)
451
- f *= np.outer(w, w)
452
- f /= np.sum(f)
453
- return torch.as_tensor(f, dtype=torch.float32)
454
-
455
- def extra_repr(self):
456
- return '\n'.join([
457
- f'w_dim={self.w_dim:d}, is_torgb={self.is_torgb},',
458
- f'is_critically_sampled={self.is_critically_sampled}, use_fp16={self.use_fp16},',
459
- f'in_sampling_rate={self.in_sampling_rate:g}, out_sampling_rate={self.out_sampling_rate:g},',
460
- f'in_cutoff={self.in_cutoff:g}, out_cutoff={self.out_cutoff:g},',
461
- f'in_half_width={self.in_half_width:g}, out_half_width={self.out_half_width:g},',
462
- f'in_size={list(self.in_size)}, out_size={list(self.out_size)},',
463
- f'in_channels={self.in_channels:d}, out_channels={self.out_channels:d}'])
464
-
465
- # ----------------------------------------------------------------------------
466
-
467
-
468
- @persistence.persistent_class
469
- class SynthesisNetwork(torch.nn.Module):
470
- def __init__(self,
471
- # Intermediate latent (W) dimensionality.
472
- w_dim,
473
- img_resolution, # Output image resolution.
474
- img_channels, # Number of color channels.
475
- # Overall multiplier for the number of channels.
476
- channel_base=32768,
477
- # Maximum number of channels in any layer.
478
- channel_max=512,
479
- # Total number of layers, excluding Fourier features and ToRGB.
480
- num_layers=14,
481
- # Number of critically sampled layers at the end.
482
- num_critical=2,
483
- # Cutoff frequency of the first layer (f_{c,0}).
484
- first_cutoff=2,
485
- # Minimum stopband of the first layer (f_{t,0}).
486
- first_stopband=2**2.1,
487
- # Minimum stopband of the last layer, expressed relative to the cutoff.
488
- last_stopband_rel=2**0.3,
489
- # Number of additional pixels outside the image.
490
- margin_size=10,
491
- output_scale=0.25, # Scale factor for the output image.
492
- # Use FP16 for the N highest resolutions.
493
- num_fp16_res=4,
494
- # Arguments for SynthesisLayer.
495
- **layer_kwargs,
496
- ):
497
- super().__init__()
498
- self.w_dim = w_dim
499
- self.num_ws = num_layers + 2
500
- self.img_resolution = img_resolution
501
- self.img_channels = img_channels
502
- self.num_layers = num_layers
503
- self.num_critical = num_critical
504
- self.margin_size = margin_size
505
- self.output_scale = output_scale
506
- self.num_fp16_res = num_fp16_res
507
-
508
- # Geometric progression of layer cutoffs and min. stopbands.
509
- last_cutoff = self.img_resolution / 2 # f_{c,N}
510
- last_stopband = last_cutoff * last_stopband_rel # f_{t,N}
511
- exponents = np.minimum(
512
- np.arange(self.num_layers + 1) / (self.num_layers - self.num_critical), 1)
513
- cutoffs = first_cutoff * \
514
- (last_cutoff / first_cutoff) ** exponents # f_c[i]
515
- stopbands = first_stopband * \
516
- (last_stopband / first_stopband) ** exponents # f_t[i]
517
-
518
- # Compute remaining layer parameters.
519
- sampling_rates = np.exp2(
520
- np.ceil(np.log2(np.minimum(stopbands * 2, self.img_resolution)))) # s[i]
521
- half_widths = np.maximum(
522
- stopbands, sampling_rates / 2) - cutoffs # f_h[i]
523
- sizes = sampling_rates + self.margin_size * 2
524
- sizes[-2:] = self.img_resolution
525
- channels = np.rint(np.minimum(
526
- (channel_base / 2) / cutoffs, channel_max))
527
- channels[-1] = self.img_channels
528
-
529
- # Construct layers.
530
- self.input = SynthesisInput(
531
- w_dim=self.w_dim, channels=int(channels[0]), size=int(sizes[0]),
532
- sampling_rate=sampling_rates[0], bandwidth=cutoffs[0])
533
- self.layer_names = []
534
- for idx in range(self.num_layers + 1):
535
- prev = max(idx - 1, 0)
536
- is_torgb = (idx == self.num_layers)
537
- is_critically_sampled = (
538
- idx >= self.num_layers - self.num_critical)
539
- use_fp16 = (sampling_rates[idx] * (2 **
540
- self.num_fp16_res) > self.img_resolution)
541
- layer = SynthesisLayer(
542
- w_dim=self.w_dim, is_torgb=is_torgb, is_critically_sampled=is_critically_sampled, use_fp16=use_fp16,
543
- in_channels=int(channels[prev]), out_channels=int(channels[idx]),
544
- in_size=int(sizes[prev]), out_size=int(sizes[idx]),
545
- in_sampling_rate=int(sampling_rates[prev]), out_sampling_rate=int(sampling_rates[idx]),
546
- in_cutoff=cutoffs[prev], out_cutoff=cutoffs[idx],
547
- in_half_width=half_widths[prev], out_half_width=half_widths[idx],
548
- **layer_kwargs)
549
- name = f'L{idx}_{layer.out_size[0]}_{layer.out_channels}'
550
- setattr(self, name, layer)
551
- self.layer_names.append(name)
552
-
553
- def forward(self, ws, **layer_kwargs):
554
- misc.assert_shape(ws, [None, self.num_ws, self.w_dim])
555
- ws = ws.to(torch.float32).unbind(dim=1)
556
-
557
- # Execute layers.
558
- x = self.input(ws[0])
559
- for name, w in zip(self.layer_names, ws[1:]):
560
- x = getattr(self, name)(x, w, **layer_kwargs)
561
- if self.output_scale != 1:
562
- x = x * self.output_scale
563
-
564
- # Ensure correct shape and dtype.
565
- misc.assert_shape(x, [None, self.img_channels,
566
- self.img_resolution, self.img_resolution])
567
- x = x.to(torch.float32)
568
- return x
569
-
570
- def extra_repr(self):
571
- return '\n'.join([
572
- f'w_dim={self.w_dim:d}, num_ws={self.num_ws:d},',
573
- f'img_resolution={self.img_resolution:d}, img_channels={self.img_channels:d},',
574
- f'num_layers={self.num_layers:d}, num_critical={self.num_critical:d},',
575
- f'margin_size={self.margin_size:d}, num_fp16_res={self.num_fp16_res:d}'])
576
-
577
- # ----------------------------------------------------------------------------
578
-
579
-
580
- @persistence.persistent_class
581
- class Generator(torch.nn.Module):
582
- def __init__(self,
583
- z_dim, # Input latent (Z) dimensionality.
584
- # Conditioning label (C) dimensionality.
585
- c_dim,
586
- # Intermediate latent (W) dimensionality.
587
- w_dim,
588
- img_resolution, # Output resolution.
589
- img_channels, # Number of output color channels.
590
- mapping_kwargs={}, # Arguments for MappingNetwork.
591
- resize=None,
592
- **synthesis_kwargs, # Arguments for SynthesisNetwork.
593
- ):
594
- super().__init__()
595
- self.z_dim = z_dim
596
- self.c_dim = c_dim
597
- self.w_dim = w_dim
598
- self.img_resolution = img_resolution
599
- self.img_channels = img_channels
600
- self.synthesis = SynthesisNetwork(
601
- w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
602
- self.num_ws = self.synthesis.num_ws
603
- self.mapping = MappingNetwork(
604
- z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
605
- self.resize = resize
606
-
607
- def forward(self, z, c, truncation_psi=1, truncation_cutoff=None, update_emas=False, input_is_w=False, **synthesis_kwargs):
608
- if input_is_w:
609
- ws = z
610
- if ws.dim() == 2:
611
- ws = ws.unsqueeze(1).repeat([1, self.mapping.num_ws, 1])
612
- else:
613
- ws = self.mapping(z, c, truncation_psi=truncation_psi,
614
- truncation_cutoff=truncation_cutoff, update_emas=update_emas)
615
- img = self.synthesis(ws, update_emas=update_emas, **synthesis_kwargs)
616
- if self.resize is not None:
617
- img = imresize(img, [self.resize, self.resize])
618
- return img
619
-
620
- # ----------------------------------------------------------------------------
621
-
622
-
623
- def imresize(image, size):
624
- dim = image.dim()
625
- if dim == 3:
626
- image = image.unsqueeze(1)
627
- b, _, h, w = image.shape
628
- if size[0] > h:
629
- image = F.interpolate(image, size, mode='bilinear')
630
- elif size[0] < h:
631
- image = F.interpolate(image, size, mode='area')
632
- if dim == 3:
633
- image = image.squeeze(1)
634
- return image
spaces/Andres99/Tune-A-Video-Training-UI/app_training.py DELETED
@@ -1,135 +0,0 @@
1
- #!/usr/bin/env python
2
-
3
- from __future__ import annotations
4
-
5
- import os
6
-
7
- import gradio as gr
8
-
9
- from constants import MODEL_LIBRARY_ORG_NAME, SAMPLE_MODEL_REPO, UploadTarget
10
- from inference import InferencePipeline
11
- from trainer import Trainer
12
-
13
-
14
- def create_training_demo(trainer: Trainer,
15
- pipe: InferencePipeline | None = None) -> gr.Blocks:
16
- hf_token = os.getenv('HF_TOKEN')
17
- with gr.Blocks() as demo:
18
- with gr.Row():
19
- with gr.Column():
20
- with gr.Box():
21
- gr.Markdown('Training Data')
22
- training_video = gr.File(label='Training video')
23
- training_prompt = gr.Textbox(
24
- label='Training prompt',
25
- max_lines=1,
26
- placeholder='A man is surfing')
27
- gr.Markdown('''
28
- - Upload a video and write a `Training Prompt` that describes the video.
29
- ''')
30
-
31
- with gr.Column():
32
- with gr.Box():
33
- gr.Markdown('Training Parameters')
34
- with gr.Row():
35
- base_model = gr.Text(
36
- label='Base Model',
37
- value='CompVis/stable-diffusion-v1-4',
38
- max_lines=1)
39
- resolution = gr.Dropdown(choices=['512', '768'],
40
- value='512',
41
- label='Resolution',
42
- visible=False)
43
-
44
- input_token = gr.Text(label='Hugging Face Write Token',
45
- placeholder='',
46
- visible=False if hf_token else True)
47
- with gr.Accordion('Advanced settings', open=False):
48
- num_training_steps = gr.Number(
49
- label='Number of Training Steps',
50
- value=300,
51
- precision=0)
52
- learning_rate = gr.Number(label='Learning Rate',
53
- value=0.000035)
54
- gradient_accumulation = gr.Number(
55
- label='Number of Gradient Accumulation',
56
- value=1,
57
- precision=0)
58
- seed = gr.Slider(label='Seed',
59
- minimum=0,
60
- maximum=100000,
61
- step=1,
62
- randomize=True,
63
- value=0)
64
- fp16 = gr.Checkbox(label='FP16', value=True)
65
- use_8bit_adam = gr.Checkbox(label='Use 8bit Adam',
66
- value=False)
67
- checkpointing_steps = gr.Number(
68
- label='Checkpointing Steps',
69
- value=1000,
70
- precision=0)
71
- validation_epochs = gr.Number(
72
- label='Validation Epochs', value=100, precision=0)
73
- gr.Markdown('''
74
- - The base model must be a Stable Diffusion model compatible with [diffusers](https://github.com/huggingface/diffusers) library.
75
- - Expected time to train a model for 300 steps: ~20 minutes with T4
76
- - You can check the training status by pressing the "Open logs" button if you are running this on your Space.
77
- ''')
78
-
79
- with gr.Row():
80
- with gr.Column():
81
- gr.Markdown('Output Model')
82
- output_model_name = gr.Text(label='Name of your model',
83
- placeholder='The surfer man',
84
- max_lines=1)
85
- validation_prompt = gr.Text(
86
- label='Validation Prompt',
87
- placeholder=
88
- 'prompt to test the model, e.g: a dog is surfing')
89
- with gr.Column():
90
- gr.Markdown('Upload Settings')
91
- with gr.Row():
92
- upload_to_hub = gr.Checkbox(label='Upload model to Hub',
93
- value=True)
94
- use_private_repo = gr.Checkbox(label='Private', value=True)
95
- delete_existing_repo = gr.Checkbox(
96
- label='Delete existing repo of the same name',
97
- value=False)
98
- upload_to = gr.Radio(
99
- label='Upload to',
100
- choices=[_.value for _ in UploadTarget],
101
- value=UploadTarget.MODEL_LIBRARY.value)
102
-
103
- remove_gpu_after_training = gr.Checkbox(
104
- label='Remove GPU after training',
105
- value=False,
106
- interactive=bool(os.getenv('SPACE_ID')),
107
- visible=False)
108
- run_button = gr.Button('Start Training')
109
-
110
- with gr.Box():
111
- gr.Markdown('Output message')
112
- output_message = gr.Markdown()
113
-
114
- if pipe is not None:
115
- run_button.click(fn=pipe.clear)
116
- run_button.click(
117
- fn=trainer.run,
118
- inputs=[
119
- training_video, training_prompt, output_model_name,
120
- delete_existing_repo, validation_prompt, base_model,
121
- resolution, num_training_steps, learning_rate,
122
- gradient_accumulation, seed, fp16, use_8bit_adam,
123
- checkpointing_steps, validation_epochs, upload_to_hub,
124
- use_private_repo, delete_existing_repo, upload_to,
125
- remove_gpu_after_training, input_token
126
- ],
127
- outputs=output_message)
128
- return demo
129
-
130
-
131
- if __name__ == '__main__':
132
- hf_token = os.getenv('HF_TOKEN')
133
- trainer = Trainer(hf_token)
134
- demo = create_training_demo(trainer)
135
- demo.queue(max_size=1).launch(share=False)
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/custom_diffusion/retrieve.py DELETED
@@ -1,87 +0,0 @@
1
- # Copyright 2023 Custom Diffusion authors. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
- import argparse
15
- import os
16
- from io import BytesIO
17
- from pathlib import Path
18
-
19
- import requests
20
- from clip_retrieval.clip_client import ClipClient
21
- from PIL import Image
22
- from tqdm import tqdm
23
-
24
-
25
- def retrieve(class_prompt, class_data_dir, num_class_images):
26
- factor = 1.5
27
- num_images = int(factor * num_class_images)
28
- client = ClipClient(
29
- url="https://knn.laion.ai/knn-service", indice_name="laion_400m", num_images=num_images, aesthetic_weight=0.1
30
- )
31
-
32
- os.makedirs(f"{class_data_dir}/images", exist_ok=True)
33
- if len(list(Path(f"{class_data_dir}/images").iterdir())) >= num_class_images:
34
- return
35
-
36
- while True:
37
- class_images = client.query(text=class_prompt)
38
- if len(class_images) >= factor * num_class_images or num_images > 1e4:
39
- break
40
- else:
41
- num_images = int(factor * num_images)
42
- client = ClipClient(
43
- url="https://knn.laion.ai/knn-service",
44
- indice_name="laion_400m",
45
- num_images=num_images,
46
- aesthetic_weight=0.1,
47
- )
48
-
49
- count = 0
50
- total = 0
51
- pbar = tqdm(desc="downloading real regularization images", total=num_class_images)
52
-
53
- with open(f"{class_data_dir}/caption.txt", "w") as f1, open(f"{class_data_dir}/urls.txt", "w") as f2, open(
54
- f"{class_data_dir}/images.txt", "w"
55
- ) as f3:
56
- while total < num_class_images:
57
- images = class_images[count]
58
- count += 1
59
- try:
60
- img = requests.get(images["url"], timeout=30)
61
- if img.status_code == 200:
62
- _ = Image.open(BytesIO(img.content))
63
- with open(f"{class_data_dir}/images/{total}.jpg", "wb") as f:
64
- f.write(img.content)
65
- f1.write(images["caption"] + "\n")
66
- f2.write(images["url"] + "\n")
67
- f3.write(f"{class_data_dir}/images/{total}.jpg" + "\n")
68
- total += 1
69
- pbar.update(1)
70
- else:
71
- continue
72
- except Exception:
73
- continue
74
- return
75
-
76
-
77
- def parse_args():
78
- parser = argparse.ArgumentParser("", add_help=False)
79
- parser.add_argument("--class_prompt", help="text prompt to retrieve images", required=True, type=str)
80
- parser.add_argument("--class_data_dir", help="path to save images", required=True, type=str)
81
- parser.add_argument("--num_class_images", help="number of images to download", default=200, type=int)
82
- return parser.parse_args()
83
-
84
-
85
- if __name__ == "__main__":
86
- args = parse_args()
87
- retrieve(args.class_prompt, args.class_data_dir, args.num_class_images)
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/intel_opts/textual_inversion/textual_inversion_bf16.py DELETED
@@ -1,635 +0,0 @@
1
- import argparse
2
- import itertools
3
- import math
4
- import os
5
- import random
6
- from pathlib import Path
7
-
8
- import intel_extension_for_pytorch as ipex
9
- import numpy as np
10
- import PIL
11
- import torch
12
- import torch.nn.functional as F
13
- import torch.utils.checkpoint
14
- from accelerate import Accelerator
15
- from accelerate.logging import get_logger
16
- from accelerate.utils import ProjectConfiguration, set_seed
17
- from huggingface_hub import create_repo, upload_folder
18
-
19
- # TODO: remove and import from diffusers.utils when the new version of diffusers is released
20
- from packaging import version
21
- from PIL import Image
22
- from torch.utils.data import Dataset
23
- from torchvision import transforms
24
- from tqdm.auto import tqdm
25
- from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer
26
-
27
- from diffusers import AutoencoderKL, DDPMScheduler, PNDMScheduler, StableDiffusionPipeline, UNet2DConditionModel
28
- from diffusers.optimization import get_scheduler
29
- from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
30
- from diffusers.utils import check_min_version
31
-
32
-
33
- if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
34
- PIL_INTERPOLATION = {
35
- "linear": PIL.Image.Resampling.BILINEAR,
36
- "bilinear": PIL.Image.Resampling.BILINEAR,
37
- "bicubic": PIL.Image.Resampling.BICUBIC,
38
- "lanczos": PIL.Image.Resampling.LANCZOS,
39
- "nearest": PIL.Image.Resampling.NEAREST,
40
- }
41
- else:
42
- PIL_INTERPOLATION = {
43
- "linear": PIL.Image.LINEAR,
44
- "bilinear": PIL.Image.BILINEAR,
45
- "bicubic": PIL.Image.BICUBIC,
46
- "lanczos": PIL.Image.LANCZOS,
47
- "nearest": PIL.Image.NEAREST,
48
- }
49
- # ------------------------------------------------------------------------------
50
-
51
-
52
- # Will error if the minimal version of diffusers is not installed. Remove at your own risks.
53
- check_min_version("0.13.0.dev0")
54
-
55
-
56
- logger = get_logger(__name__)
57
-
58
-
59
- def save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path):
60
- logger.info("Saving embeddings")
61
- learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_id]
62
- learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()}
63
- torch.save(learned_embeds_dict, save_path)
64
-
65
-
66
- def parse_args():
67
- parser = argparse.ArgumentParser(description="Simple example of a training script.")
68
- parser.add_argument(
69
- "--save_steps",
70
- type=int,
71
- default=500,
72
- help="Save learned_embeds.bin every X updates steps.",
73
- )
74
- parser.add_argument(
75
- "--only_save_embeds",
76
- action="store_true",
77
- default=False,
78
- help="Save only the embeddings for the new concept.",
79
- )
80
- parser.add_argument(
81
- "--pretrained_model_name_or_path",
82
- type=str,
83
- default=None,
84
- required=True,
85
- help="Path to pretrained model or model identifier from huggingface.co/models.",
86
- )
87
- parser.add_argument(
88
- "--revision",
89
- type=str,
90
- default=None,
91
- required=False,
92
- help="Revision of pretrained model identifier from huggingface.co/models.",
93
- )
94
- parser.add_argument(
95
- "--tokenizer_name",
96
- type=str,
97
- default=None,
98
- help="Pretrained tokenizer name or path if not the same as model_name",
99
- )
100
- parser.add_argument(
101
- "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data."
102
- )
103
- parser.add_argument(
104
- "--placeholder_token",
105
- type=str,
106
- default=None,
107
- required=True,
108
- help="A token to use as a placeholder for the concept.",
109
- )
110
- parser.add_argument(
111
- "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word."
112
- )
113
- parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'")
114
- parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.")
115
- parser.add_argument(
116
- "--output_dir",
117
- type=str,
118
- default="text-inversion-model",
119
- help="The output directory where the model predictions and checkpoints will be written.",
120
- )
121
- parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.")
122
- parser.add_argument(
123
- "--resolution",
124
- type=int,
125
- default=512,
126
- help=(
127
- "The resolution for input images, all the images in the train/validation dataset will be resized to this"
128
- " resolution"
129
- ),
130
- )
131
- parser.add_argument(
132
- "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution."
133
- )
134
- parser.add_argument(
135
- "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader."
136
- )
137
- parser.add_argument("--num_train_epochs", type=int, default=100)
138
- parser.add_argument(
139
- "--max_train_steps",
140
- type=int,
141
- default=5000,
142
- help="Total number of training steps to perform. If provided, overrides num_train_epochs.",
143
- )
144
- parser.add_argument(
145
- "--gradient_accumulation_steps",
146
- type=int,
147
- default=1,
148
- help="Number of updates steps to accumulate before performing a backward/update pass.",
149
- )
150
- parser.add_argument(
151
- "--learning_rate",
152
- type=float,
153
- default=1e-4,
154
- help="Initial learning rate (after the potential warmup period) to use.",
155
- )
156
- parser.add_argument(
157
- "--scale_lr",
158
- action="store_true",
159
- default=True,
160
- help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.",
161
- )
162
- parser.add_argument(
163
- "--lr_scheduler",
164
- type=str,
165
- default="constant",
166
- help=(
167
- 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",'
168
- ' "constant", "constant_with_warmup"]'
169
- ),
170
- )
171
- parser.add_argument(
172
- "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler."
173
- )
174
- parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.")
175
- parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.")
176
- parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.")
177
- parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer")
178
- parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.")
179
- parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.")
180
- parser.add_argument(
181
- "--hub_model_id",
182
- type=str,
183
- default=None,
184
- help="The name of the repository to keep in sync with the local `output_dir`.",
185
- )
186
- parser.add_argument(
187
- "--logging_dir",
188
- type=str,
189
- default="logs",
190
- help=(
191
- "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to"
192
- " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***."
193
- ),
194
- )
195
- parser.add_argument(
196
- "--mixed_precision",
197
- type=str,
198
- default="no",
199
- choices=["no", "fp16", "bf16"],
200
- help=(
201
- "Whether to use mixed precision. Choose"
202
- "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10."
203
- "and an Nvidia Ampere GPU."
204
- ),
205
- )
206
- parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank")
- # `main()` passes `args.report_to` to Accelerator(log_with=...), so define it here to avoid an AttributeError.
- parser.add_argument("--report_to", type=str, default="tensorboard", help="The integration to report results and logs to.")
207
-
208
- args = parser.parse_args()
209
- env_local_rank = int(os.environ.get("LOCAL_RANK", -1))
210
- if env_local_rank != -1 and env_local_rank != args.local_rank:
211
- args.local_rank = env_local_rank
212
-
213
- if args.train_data_dir is None:
214
- raise ValueError("You must specify a train data directory.")
215
-
216
- return args
217
-
218
-
219
- imagenet_templates_small = [
220
- "a photo of a {}",
221
- "a rendering of a {}",
222
- "a cropped photo of the {}",
223
- "the photo of a {}",
224
- "a photo of a clean {}",
225
- "a photo of a dirty {}",
226
- "a dark photo of the {}",
227
- "a photo of my {}",
228
- "a photo of the cool {}",
229
- "a close-up photo of a {}",
230
- "a bright photo of the {}",
231
- "a cropped photo of a {}",
232
- "a photo of the {}",
233
- "a good photo of the {}",
234
- "a photo of one {}",
235
- "a close-up photo of the {}",
236
- "a rendition of the {}",
237
- "a photo of the clean {}",
238
- "a rendition of a {}",
239
- "a photo of a nice {}",
240
- "a good photo of a {}",
241
- "a photo of the nice {}",
242
- "a photo of the small {}",
243
- "a photo of the weird {}",
244
- "a photo of the large {}",
245
- "a photo of a cool {}",
246
- "a photo of a small {}",
247
- ]
248
-
249
- imagenet_style_templates_small = [
250
- "a painting in the style of {}",
251
- "a rendering in the style of {}",
252
- "a cropped painting in the style of {}",
253
- "the painting in the style of {}",
254
- "a clean painting in the style of {}",
255
- "a dirty painting in the style of {}",
256
- "a dark painting in the style of {}",
257
- "a picture in the style of {}",
258
- "a cool painting in the style of {}",
259
- "a close-up painting in the style of {}",
260
- "a bright painting in the style of {}",
261
- "a cropped painting in the style of {}",
262
- "a good painting in the style of {}",
263
- "a close-up painting in the style of {}",
264
- "a rendition in the style of {}",
265
- "a nice painting in the style of {}",
266
- "a small painting in the style of {}",
267
- "a weird painting in the style of {}",
268
- "a large painting in the style of {}",
269
- ]
270
-
271
-
272
- class TextualInversionDataset(Dataset):
273
- def __init__(
274
- self,
275
- data_root,
276
- tokenizer,
277
- learnable_property="object", # [object, style]
278
- size=512,
279
- repeats=100,
280
- interpolation="bicubic",
281
- flip_p=0.5,
282
- set="train",
283
- placeholder_token="*",
284
- center_crop=False,
285
- ):
286
- self.data_root = data_root
287
- self.tokenizer = tokenizer
288
- self.learnable_property = learnable_property
289
- self.size = size
290
- self.placeholder_token = placeholder_token
291
- self.center_crop = center_crop
292
- self.flip_p = flip_p
293
-
294
- self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)]
295
-
296
- self.num_images = len(self.image_paths)
297
- self._length = self.num_images
298
-
299
- if set == "train":
300
- self._length = self.num_images * repeats
301
-
302
- self.interpolation = {
303
- "linear": PIL_INTERPOLATION["linear"],
304
- "bilinear": PIL_INTERPOLATION["bilinear"],
305
- "bicubic": PIL_INTERPOLATION["bicubic"],
306
- "lanczos": PIL_INTERPOLATION["lanczos"],
307
- }[interpolation]
308
-
309
- self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small
310
- self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p)
311
-
312
- def __len__(self):
313
- return self._length
314
-
315
- def __getitem__(self, i):
316
- example = {}
317
- image = Image.open(self.image_paths[i % self.num_images])
318
-
319
- if not image.mode == "RGB":
320
- image = image.convert("RGB")
321
-
322
- placeholder_string = self.placeholder_token
323
- text = random.choice(self.templates).format(placeholder_string)
324
-
325
- example["input_ids"] = self.tokenizer(
326
- text,
327
- padding="max_length",
328
- truncation=True,
329
- max_length=self.tokenizer.model_max_length,
330
- return_tensors="pt",
331
- ).input_ids[0]
332
-
333
- # default to score-sde preprocessing
334
- img = np.array(image).astype(np.uint8)
335
-
336
- if self.center_crop:
337
- crop = min(img.shape[0], img.shape[1])
338
- (
339
- h,
340
- w,
341
- ) = (
342
- img.shape[0],
343
- img.shape[1],
344
- )
345
- img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2]
346
-
347
- image = Image.fromarray(img)
348
- image = image.resize((self.size, self.size), resample=self.interpolation)
349
-
350
- image = self.flip_transform(image)
351
- image = np.array(image).astype(np.uint8)
352
- image = (image / 127.5 - 1.0).astype(np.float32)
353
-
354
- example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1)
355
- return example
356
-
357
-
358
- def freeze_params(params):
359
- for param in params:
360
- param.requires_grad = False
361
-
362
-
363
- def main():
364
- args = parse_args()
365
- logging_dir = os.path.join(args.output_dir, args.logging_dir)
366
- accelerator_project_config = ProjectConfiguration(project_dir=args.output_dir, logging_dir=logging_dir)
367
- accelerator = Accelerator(
368
- gradient_accumulation_steps=args.gradient_accumulation_steps,
369
- mixed_precision=args.mixed_precision,
370
- log_with=args.report_to,
371
- project_config=accelerator_project_config,
372
- )
373
-
374
- # If passed along, set the training seed now.
375
- if args.seed is not None:
376
- set_seed(args.seed)
377
-
378
- # Handle the repository creation
379
- if accelerator.is_main_process:
380
- if args.output_dir is not None:
381
- os.makedirs(args.output_dir, exist_ok=True)
382
-
383
- if args.push_to_hub:
384
- repo_id = create_repo(
385
- repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token
386
- ).repo_id
387
-
388
- # Load the tokenizer and add the placeholder token as a additional special token
389
- if args.tokenizer_name:
390
- tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
391
- elif args.pretrained_model_name_or_path:
392
- tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
393
-
394
- # Add the placeholder token in tokenizer
395
- num_added_tokens = tokenizer.add_tokens(args.placeholder_token)
396
- if num_added_tokens == 0:
397
- raise ValueError(
398
- f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different"
399
- " `placeholder_token` that is not already in the tokenizer."
400
- )
401
-
402
- # Convert the initializer_token, placeholder_token to ids
403
- token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False)
404
- # Check if initializer_token is a single token or a sequence of tokens
405
- if len(token_ids) > 1:
406
- raise ValueError("The initializer token must be a single token.")
407
-
408
- initializer_token_id = token_ids[0]
409
- placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token)
410
-
411
- # Load models and create wrapper for stable diffusion
412
- text_encoder = CLIPTextModel.from_pretrained(
413
- args.pretrained_model_name_or_path,
414
- subfolder="text_encoder",
415
- revision=args.revision,
416
- )
417
- vae = AutoencoderKL.from_pretrained(
418
- args.pretrained_model_name_or_path,
419
- subfolder="vae",
420
- revision=args.revision,
421
- )
422
- unet = UNet2DConditionModel.from_pretrained(
423
- args.pretrained_model_name_or_path,
424
- subfolder="unet",
425
- revision=args.revision,
426
- )
427
-
428
- # Resize the token embeddings as we are adding new special tokens to the tokenizer
429
- text_encoder.resize_token_embeddings(len(tokenizer))
430
-
431
- # Initialise the newly added placeholder token with the embeddings of the initializer token
432
- token_embeds = text_encoder.get_input_embeddings().weight.data
433
- token_embeds[placeholder_token_id] = token_embeds[initializer_token_id]
434
-
435
- # Freeze vae and unet
436
- freeze_params(vae.parameters())
437
- freeze_params(unet.parameters())
438
- # Freeze all parameters except for the token embeddings in text encoder
439
- params_to_freeze = itertools.chain(
440
- text_encoder.text_model.encoder.parameters(),
441
- text_encoder.text_model.final_layer_norm.parameters(),
442
- text_encoder.text_model.embeddings.position_embedding.parameters(),
443
- )
444
- freeze_params(params_to_freeze)
445
-
446
- if args.scale_lr:
447
- args.learning_rate = (
448
- args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes
449
- )
450
-
451
- # Initialize the optimizer
452
- optimizer = torch.optim.AdamW(
453
- text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings
454
- lr=args.learning_rate,
455
- betas=(args.adam_beta1, args.adam_beta2),
456
- weight_decay=args.adam_weight_decay,
457
- eps=args.adam_epsilon,
458
- )
459
-
460
- noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
461
-
462
- train_dataset = TextualInversionDataset(
463
- data_root=args.train_data_dir,
464
- tokenizer=tokenizer,
465
- size=args.resolution,
466
- placeholder_token=args.placeholder_token,
467
- repeats=args.repeats,
468
- learnable_property=args.learnable_property,
469
- center_crop=args.center_crop,
470
- set="train",
471
- )
472
- train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=args.train_batch_size, shuffle=True)
473
-
474
- # Scheduler and math around the number of training steps.
475
- overrode_max_train_steps = False
476
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
477
- if args.max_train_steps is None:
478
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
479
- overrode_max_train_steps = True
480
-
481
- lr_scheduler = get_scheduler(
482
- args.lr_scheduler,
483
- optimizer=optimizer,
484
- num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
485
- num_training_steps=args.max_train_steps * accelerator.num_processes,
486
- )
487
-
488
- text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
489
- text_encoder, optimizer, train_dataloader, lr_scheduler
490
- )
491
-
492
- # Move vae and unet to device
493
- vae.to(accelerator.device)
494
- unet.to(accelerator.device)
495
-
496
- # Keep vae and unet in eval model as we don't train these
497
- vae.eval()
498
- unet.eval()
499
-
500
- unet = ipex.optimize(unet, dtype=torch.bfloat16, inplace=True)
501
- vae = ipex.optimize(vae, dtype=torch.bfloat16, inplace=True)
502
-
503
- # We need to recalculate our total training steps as the size of the training dataloader may have changed.
504
- num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
505
- if overrode_max_train_steps:
506
- args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch
507
- # Afterwards we recalculate our number of training epochs
508
- args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch)
509
-
510
- # We need to initialize the trackers we use, and also store our configuration.
511
- # The trackers initializes automatically on the main process.
512
- if accelerator.is_main_process:
513
- accelerator.init_trackers("textual_inversion", config=vars(args))
514
-
515
- # Train!
516
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
517
-
518
- logger.info("***** Running training *****")
519
- logger.info(f" Num examples = {len(train_dataset)}")
520
- logger.info(f" Num Epochs = {args.num_train_epochs}")
521
- logger.info(f" Instantaneous batch size per device = {args.train_batch_size}")
522
- logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}")
523
- logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}")
524
- logger.info(f" Total optimization steps = {args.max_train_steps}")
525
- # Only show the progress bar once on each machine.
526
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process)
527
- progress_bar.set_description("Steps")
528
- global_step = 0
529
-
530
- text_encoder.train()
531
- text_encoder, optimizer = ipex.optimize(text_encoder, optimizer=optimizer, dtype=torch.bfloat16)
532
-
533
- for epoch in range(args.num_train_epochs):
534
- for step, batch in enumerate(train_dataloader):
535
- with torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16):
536
- with accelerator.accumulate(text_encoder):
537
- # Convert images to latent space
538
- latents = vae.encode(batch["pixel_values"]).latent_dist.sample().detach()
539
- latents = latents * vae.config.scaling_factor
540
-
541
- # Sample noise that we'll add to the latents
542
- noise = torch.randn(latents.shape).to(latents.device)
543
- bsz = latents.shape[0]
544
- # Sample a random timestep for each image
545
- timesteps = torch.randint(
546
- 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device
547
- ).long()
548
-
549
- # Add noise to the latents according to the noise magnitude at each timestep
550
- # (this is the forward diffusion process)
551
- noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
552
-
553
- # Get the text embedding for conditioning
554
- encoder_hidden_states = text_encoder(batch["input_ids"])[0]
555
-
556
- # Predict the noise residual
557
- model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
558
-
559
- # Get the target for loss depending on the prediction type
560
- if noise_scheduler.config.prediction_type == "epsilon":
561
- target = noise
562
- elif noise_scheduler.config.prediction_type == "v_prediction":
563
- target = noise_scheduler.get_velocity(latents, noise, timesteps)
564
- else:
565
- raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}")
566
-
567
- loss = F.mse_loss(model_pred, target, reduction="none").mean([1, 2, 3]).mean()
568
- accelerator.backward(loss)
569
-
570
- # Zero out the gradients for all token embeddings except the newly added
571
- # embeddings for the concept, as we only want to optimize the concept embeddings
572
- if accelerator.num_processes > 1:
573
- grads = text_encoder.module.get_input_embeddings().weight.grad
574
- else:
575
- grads = text_encoder.get_input_embeddings().weight.grad
576
- # Get the index for tokens that we want to zero the grads for
577
- index_grads_to_zero = torch.arange(len(tokenizer)) != placeholder_token_id
578
- grads.data[index_grads_to_zero, :] = grads.data[index_grads_to_zero, :].fill_(0)
579
-
580
- optimizer.step()
581
- lr_scheduler.step()
582
- optimizer.zero_grad()
583
-
584
- # Checks if the accelerator has performed an optimization step behind the scenes
585
- if accelerator.sync_gradients:
586
- progress_bar.update(1)
587
- global_step += 1
588
- if global_step % args.save_steps == 0:
589
- save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin")
590
- save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path)
591
-
592
- logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
593
- progress_bar.set_postfix(**logs)
594
- accelerator.log(logs, step=global_step)
595
-
596
- if global_step >= args.max_train_steps:
597
- break
598
-
599
- accelerator.wait_for_everyone()
600
-
601
- # Create the pipeline using using the trained modules and save it.
602
- if accelerator.is_main_process:
603
- if args.push_to_hub and args.only_save_embeds:
604
- logger.warning("Enabling full model saving because --push_to_hub=True was specified.")
605
- save_full_model = True
606
- else:
607
- save_full_model = not args.only_save_embeds
608
- if save_full_model:
609
- pipeline = StableDiffusionPipeline(
610
- text_encoder=accelerator.unwrap_model(text_encoder),
611
- vae=vae,
612
- unet=unet,
613
- tokenizer=tokenizer,
614
- scheduler=PNDMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler"),
615
- safety_checker=StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"),
616
- feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"),
617
- )
618
- pipeline.save_pretrained(args.output_dir)
619
- # Save the newly trained embeddings
620
- save_path = os.path.join(args.output_dir, "learned_embeds.bin")
621
- save_progress(text_encoder, placeholder_token_id, accelerator, args, save_path)
622
-
623
- if args.push_to_hub:
624
- upload_folder(
625
- repo_id=repo_id,
626
- folder_path=args.output_dir,
627
- commit_message="End of training",
628
- ignore_patterns=["step_*", "epoch_*"],
629
- )
630
-
631
- accelerator.end_training()
632
-
633
-
634
- if __name__ == "__main__":
635
- main()
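A sketch of a programmatic invocation of the training script above; the model id, data directory, and tokens are placeholders, and only the required arguments plus a step budget are set:

    import sys

    # Placeholder values; any Stable Diffusion checkpoint and a folder of concept images work.
    sys.argv = [
        "textual_inversion_bf16.py",
        "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
        "--train_data_dir", "./concept_images",
        "--placeholder_token", "<my-concept>",
        "--initializer_token", "toy",
        "--output_dir", "text-inversion-model",
        "--max_train_steps", "3000",
    ]
    main()  # parse_args() reads sys.argv; training runs in bf16 on CPU via IPEX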
 
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_asymmetric_vqgan_to_diffusers.py DELETED
@@ -1,184 +0,0 @@
1
- import argparse
2
- import time
3
- from pathlib import Path
4
- from typing import Any, Dict, Literal
5
-
6
- import torch
7
-
8
- from diffusers import AsymmetricAutoencoderKL
9
-
10
-
11
- ASYMMETRIC_AUTOENCODER_KL_x_1_5_CONFIG = {
12
- "in_channels": 3,
13
- "out_channels": 3,
14
- "down_block_types": [
15
- "DownEncoderBlock2D",
16
- "DownEncoderBlock2D",
17
- "DownEncoderBlock2D",
18
- "DownEncoderBlock2D",
19
- ],
20
- "down_block_out_channels": [128, 256, 512, 512],
21
- "layers_per_down_block": 2,
22
- "up_block_types": [
23
- "UpDecoderBlock2D",
24
- "UpDecoderBlock2D",
25
- "UpDecoderBlock2D",
26
- "UpDecoderBlock2D",
27
- ],
28
- "up_block_out_channels": [192, 384, 768, 768],
29
- "layers_per_up_block": 3,
30
- "act_fn": "silu",
31
- "latent_channels": 4,
32
- "norm_num_groups": 32,
33
- "sample_size": 256,
34
- "scaling_factor": 0.18215,
35
- }
36
-
37
- ASYMMETRIC_AUTOENCODER_KL_x_2_CONFIG = {
38
- "in_channels": 3,
39
- "out_channels": 3,
40
- "down_block_types": [
41
- "DownEncoderBlock2D",
42
- "DownEncoderBlock2D",
43
- "DownEncoderBlock2D",
44
- "DownEncoderBlock2D",
45
- ],
46
- "down_block_out_channels": [128, 256, 512, 512],
47
- "layers_per_down_block": 2,
48
- "up_block_types": [
49
- "UpDecoderBlock2D",
50
- "UpDecoderBlock2D",
51
- "UpDecoderBlock2D",
52
- "UpDecoderBlock2D",
53
- ],
54
- "up_block_out_channels": [256, 512, 1024, 1024],
55
- "layers_per_up_block": 5,
56
- "act_fn": "silu",
57
- "latent_channels": 4,
58
- "norm_num_groups": 32,
59
- "sample_size": 256,
60
- "scaling_factor": 0.18215,
61
- }
62
-
63
-
64
- def convert_asymmetric_autoencoder_kl_state_dict(original_state_dict: Dict[str, Any]) -> Dict[str, Any]:
65
- converted_state_dict = {}
66
- for k, v in original_state_dict.items():
67
- if k.startswith("encoder."):
68
- converted_state_dict[
69
- k.replace("encoder.down.", "encoder.down_blocks.")
70
- .replace("encoder.mid.", "encoder.mid_block.")
71
- .replace("encoder.norm_out.", "encoder.conv_norm_out.")
72
- .replace(".downsample.", ".downsamplers.0.")
73
- .replace(".nin_shortcut.", ".conv_shortcut.")
74
- .replace(".block.", ".resnets.")
75
- .replace(".block_1.", ".resnets.0.")
76
- .replace(".block_2.", ".resnets.1.")
77
- .replace(".attn_1.k.", ".attentions.0.to_k.")
78
- .replace(".attn_1.q.", ".attentions.0.to_q.")
79
- .replace(".attn_1.v.", ".attentions.0.to_v.")
80
- .replace(".attn_1.proj_out.", ".attentions.0.to_out.0.")
81
- .replace(".attn_1.norm.", ".attentions.0.group_norm.")
82
- ] = v
83
- elif k.startswith("decoder.") and "up_layers" not in k:
84
- converted_state_dict[
85
- k.replace("decoder.encoder.", "decoder.condition_encoder.")
86
- .replace(".norm_out.", ".conv_norm_out.")
87
- .replace(".up.0.", ".up_blocks.3.")
88
- .replace(".up.1.", ".up_blocks.2.")
89
- .replace(".up.2.", ".up_blocks.1.")
90
- .replace(".up.3.", ".up_blocks.0.")
91
- .replace(".block.", ".resnets.")
92
- .replace("mid", "mid_block")
93
- .replace(".0.upsample.", ".0.upsamplers.0.")
94
- .replace(".1.upsample.", ".1.upsamplers.0.")
95
- .replace(".2.upsample.", ".2.upsamplers.0.")
96
- .replace(".nin_shortcut.", ".conv_shortcut.")
97
- .replace(".block_1.", ".resnets.0.")
98
- .replace(".block_2.", ".resnets.1.")
99
- .replace(".attn_1.k.", ".attentions.0.to_k.")
100
- .replace(".attn_1.q.", ".attentions.0.to_q.")
101
- .replace(".attn_1.v.", ".attentions.0.to_v.")
102
- .replace(".attn_1.proj_out.", ".attentions.0.to_out.0.")
103
- .replace(".attn_1.norm.", ".attentions.0.group_norm.")
104
- ] = v
105
- elif k.startswith("quant_conv."):
106
- converted_state_dict[k] = v
107
- elif k.startswith("post_quant_conv."):
108
- converted_state_dict[k] = v
109
- else:
110
- print(f" skipping key `{k}`")
111
- # fix weights shape
112
- for k, v in converted_state_dict.items():
113
- if (
114
- (k.startswith("encoder.mid_block.attentions.0") or k.startswith("decoder.mid_block.attentions.0"))
115
- and k.endswith("weight")
116
- and ("to_q" in k or "to_k" in k or "to_v" in k or "to_out" in k)
117
- ):
118
- converted_state_dict[k] = converted_state_dict[k][:, :, 0, 0]
119
-
120
- return converted_state_dict
121
-
122
-
123
- def get_asymmetric_autoencoder_kl_from_original_checkpoint(
124
- scale: Literal["1.5", "2"], original_checkpoint_path: str, map_location: torch.device
125
- ) -> AsymmetricAutoencoderKL:
126
- print("Loading original state_dict")
127
- original_state_dict = torch.load(original_checkpoint_path, map_location=map_location)
128
- original_state_dict = original_state_dict["state_dict"]
129
- print("Converting state_dict")
130
- converted_state_dict = convert_asymmetric_autoencoder_kl_state_dict(original_state_dict)
131
- kwargs = ASYMMETRIC_AUTOENCODER_KL_x_1_5_CONFIG if scale == "1.5" else ASYMMETRIC_AUTOENCODER_KL_x_2_CONFIG
132
- print("Initializing AsymmetricAutoencoderKL model")
133
- asymmetric_autoencoder_kl = AsymmetricAutoencoderKL(**kwargs)
134
- print("Loading weight from converted state_dict")
135
- asymmetric_autoencoder_kl.load_state_dict(converted_state_dict)
136
- asymmetric_autoencoder_kl.eval()
137
- print("AsymmetricAutoencoderKL successfully initialized")
138
- return asymmetric_autoencoder_kl
139
-
140
-
141
- if __name__ == "__main__":
142
- start = time.time()
143
- parser = argparse.ArgumentParser()
144
- parser.add_argument(
145
- "--scale",
146
- default=None,
147
- type=str,
148
- required=True,
149
- help="Asymmetric VQGAN scale: `1.5` or `2`",
150
- )
151
- parser.add_argument(
152
- "--original_checkpoint_path",
153
- default=None,
154
- type=str,
155
- required=True,
156
- help="Path to the original Asymmetric VQGAN checkpoint",
157
- )
158
- parser.add_argument(
159
- "--output_path",
160
- default=None,
161
- type=str,
162
- required=True,
163
- help="Path to save pretrained AsymmetricAutoencoderKL model",
164
- )
165
- parser.add_argument(
166
- "--map_location",
167
- default="cpu",
168
- type=str,
169
- required=False,
170
- help="The device passed to `map_location` when loading the checkpoint",
171
- )
172
- args = parser.parse_args()
173
-
174
- assert args.scale in ["1.5", "2"], f"{args.scale} should be `1.5` or `2`"
175
- assert Path(args.original_checkpoint_path).is_file()
176
-
177
- asymmetric_autoencoder_kl = get_asymmetric_autoencoder_kl_from_original_checkpoint(
178
- scale=args.scale,
179
- original_checkpoint_path=args.original_checkpoint_path,
180
- map_location=torch.device(args.map_location),
181
- )
182
- print("Saving pretrained AsymmetricAutoencoderKL")
183
- asymmetric_autoencoder_kl.save_pretrained(args.output_path)
184
- print(f"Done in {time.time() - start:.2f} seconds")
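For reference, the same conversion can be driven from Python; the checkpoint path and output directory here are placeholders:

    import torch

    vae = get_asymmetric_autoencoder_kl_from_original_checkpoint(
        scale="2",  # must be "1.5" or "2" to pick the matching decoder config
        original_checkpoint_path="./vqgan_x2.ckpt",
        map_location=torch.device("cpu"),
    )
    vae.save_pretrained("./asymmetric-autoencoder-kl-x-2")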
 
 
spaces/Andy1621/uniformer_image_detection/configs/fast_rcnn/fast_rcnn_r101_caffe_fpn_1x_coco.py DELETED
@@ -1,4 +0,0 @@
- _base_ = './fast_rcnn_r50_caffe_fpn_1x_coco.py'
- model = dict(
-     pretrained='open-mmlab://detectron2/resnet101_caffe',
-     backbone=dict(depth=101))
 
 
 
 
 
spaces/Andy1621/uniformer_image_detection/mmdet/models/backbones/detectors_resnet.py DELETED
@@ -1,305 +0,0 @@
1
- import torch.nn as nn
2
- import torch.utils.checkpoint as cp
3
- from mmcv.cnn import build_conv_layer, build_norm_layer, constant_init
4
-
5
- from ..builder import BACKBONES
6
- from .resnet import Bottleneck as _Bottleneck
7
- from .resnet import ResNet
8
-
9
-
10
- class Bottleneck(_Bottleneck):
11
- r"""Bottleneck for the ResNet backbone in `DetectoRS
12
- <https://arxiv.org/pdf/2006.02334.pdf>`_.
13
-
14
- This bottleneck allows the users to specify whether to use
15
- SAC (Switchable Atrous Convolution) and RFP (Recursive Feature Pyramid).
16
-
17
- Args:
18
- inplanes (int): The number of input channels.
19
- planes (int): The number of output channels before expansion.
20
- rfp_inplanes (int, optional): The number of channels from RFP.
21
- Default: None. If specified, an additional conv layer will be
22
- added for ``rfp_feat``. Otherwise, the structure is the same as
23
- base class.
24
- sac (dict, optional): Dictionary to construct SAC. Default: None.
25
- """
26
- expansion = 4
27
-
28
- def __init__(self,
29
- inplanes,
30
- planes,
31
- rfp_inplanes=None,
32
- sac=None,
33
- **kwargs):
34
- super(Bottleneck, self).__init__(inplanes, planes, **kwargs)
35
-
36
- assert sac is None or isinstance(sac, dict)
37
- self.sac = sac
38
- self.with_sac = sac is not None
39
- if self.with_sac:
40
- self.conv2 = build_conv_layer(
41
- self.sac,
42
- planes,
43
- planes,
44
- kernel_size=3,
45
- stride=self.conv2_stride,
46
- padding=self.dilation,
47
- dilation=self.dilation,
48
- bias=False)
49
-
50
- self.rfp_inplanes = rfp_inplanes
51
- if self.rfp_inplanes:
52
- self.rfp_conv = build_conv_layer(
53
- None,
54
- self.rfp_inplanes,
55
- planes * self.expansion,
56
- 1,
57
- stride=1,
58
- bias=True)
59
- self.init_weights()
60
-
61
- def init_weights(self):
62
- """Initialize the weights."""
63
- if self.rfp_inplanes:
64
- constant_init(self.rfp_conv, 0)
65
-
66
- def rfp_forward(self, x, rfp_feat):
67
- """The forward function that also takes the RFP features as input."""
68
-
69
- def _inner_forward(x):
70
- identity = x
71
-
72
- out = self.conv1(x)
73
- out = self.norm1(out)
74
- out = self.relu(out)
75
-
76
- if self.with_plugins:
77
- out = self.forward_plugin(out, self.after_conv1_plugin_names)
78
-
79
- out = self.conv2(out)
80
- out = self.norm2(out)
81
- out = self.relu(out)
82
-
83
- if self.with_plugins:
84
- out = self.forward_plugin(out, self.after_conv2_plugin_names)
85
-
86
- out = self.conv3(out)
87
- out = self.norm3(out)
88
-
89
- if self.with_plugins:
90
- out = self.forward_plugin(out, self.after_conv3_plugin_names)
91
-
92
- if self.downsample is not None:
93
- identity = self.downsample(x)
94
-
95
- out += identity
96
-
97
- return out
98
-
99
- if self.with_cp and x.requires_grad:
100
- out = cp.checkpoint(_inner_forward, x)
101
- else:
102
- out = _inner_forward(x)
103
-
104
- if self.rfp_inplanes:
105
- rfp_feat = self.rfp_conv(rfp_feat)
106
- out = out + rfp_feat
107
-
108
- out = self.relu(out)
109
-
110
- return out
111
-
112
-
113
- class ResLayer(nn.Sequential):
114
- """ResLayer to build ResNet style backbone for RFP in DetectoRS.
115
-
116
- The difference between this module and base class is that we pass
117
- ``rfp_inplanes`` to the first block.
118
-
119
- Args:
120
- block (nn.Module): block used to build ResLayer.
121
- inplanes (int): inplanes of block.
122
- planes (int): planes of block.
123
- num_blocks (int): number of blocks.
124
- stride (int): stride of the first block. Default: 1
125
- avg_down (bool): Use AvgPool instead of stride conv when
126
- downsampling in the bottleneck. Default: False
127
- conv_cfg (dict): dictionary to construct and config conv layer.
128
- Default: None
129
- norm_cfg (dict): dictionary to construct and config norm layer.
130
- Default: dict(type='BN')
131
- downsample_first (bool): Downsample at the first block or last block.
132
- False for Hourglass, True for ResNet. Default: True
133
- rfp_inplanes (int, optional): The number of channels from RFP.
134
- Default: None. If specified, an additional conv layer will be
135
- added for ``rfp_feat``. Otherwise, the structure is the same as
136
- base class.
137
- """
138
-
139
- def __init__(self,
140
- block,
141
- inplanes,
142
- planes,
143
- num_blocks,
144
- stride=1,
145
- avg_down=False,
146
- conv_cfg=None,
147
- norm_cfg=dict(type='BN'),
148
- downsample_first=True,
149
- rfp_inplanes=None,
150
- **kwargs):
151
- self.block = block
152
- assert downsample_first, f'downsample_first={downsample_first} is ' \
153
- 'not supported in DetectoRS'
154
-
155
- downsample = None
156
- if stride != 1 or inplanes != planes * block.expansion:
157
- downsample = []
158
- conv_stride = stride
159
- if avg_down and stride != 1:
160
- conv_stride = 1
161
- downsample.append(
162
- nn.AvgPool2d(
163
- kernel_size=stride,
164
- stride=stride,
165
- ceil_mode=True,
166
- count_include_pad=False))
167
- downsample.extend([
168
- build_conv_layer(
169
- conv_cfg,
170
- inplanes,
171
- planes * block.expansion,
172
- kernel_size=1,
173
- stride=conv_stride,
174
- bias=False),
175
- build_norm_layer(norm_cfg, planes * block.expansion)[1]
176
- ])
177
- downsample = nn.Sequential(*downsample)
178
-
179
- layers = []
180
- layers.append(
181
- block(
182
- inplanes=inplanes,
183
- planes=planes,
184
- stride=stride,
185
- downsample=downsample,
186
- conv_cfg=conv_cfg,
187
- norm_cfg=norm_cfg,
188
- rfp_inplanes=rfp_inplanes,
189
- **kwargs))
190
- inplanes = planes * block.expansion
191
- for _ in range(1, num_blocks):
192
- layers.append(
193
- block(
194
- inplanes=inplanes,
195
- planes=planes,
196
- stride=1,
197
- conv_cfg=conv_cfg,
198
- norm_cfg=norm_cfg,
199
- **kwargs))
200
-
201
- super(ResLayer, self).__init__(*layers)
202
-
203
-
204
- @BACKBONES.register_module()
205
- class DetectoRS_ResNet(ResNet):
206
- """ResNet backbone for DetectoRS.
207
-
208
- Args:
209
- sac (dict, optional): Dictionary to construct SAC (Switchable Atrous
210
- Convolution). Default: None.
211
- stage_with_sac (list): Which stage to use sac. Default: (False, False,
212
- False, False).
213
- rfp_inplanes (int, optional): The number of channels from RFP.
214
- Default: None. If specified, an additional conv layer will be
215
- added for ``rfp_feat``. Otherwise, the structure is the same as
216
- base class.
217
- output_img (bool): If ``True``, the input image will be inserted into
218
- the starting position of output. Default: False.
219
- pretrained (str, optional): The pretrained model to load.
220
- """
221
-
222
- arch_settings = {
223
- 50: (Bottleneck, (3, 4, 6, 3)),
224
- 101: (Bottleneck, (3, 4, 23, 3)),
225
- 152: (Bottleneck, (3, 8, 36, 3))
226
- }
227
-
228
- def __init__(self,
229
- sac=None,
230
- stage_with_sac=(False, False, False, False),
231
- rfp_inplanes=None,
232
- output_img=False,
233
- pretrained=None,
234
- **kwargs):
235
- self.sac = sac
236
- self.stage_with_sac = stage_with_sac
237
- self.rfp_inplanes = rfp_inplanes
238
- self.output_img = output_img
239
- self.pretrained = pretrained
240
- super(DetectoRS_ResNet, self).__init__(**kwargs)
241
-
242
- self.inplanes = self.stem_channels
243
- self.res_layers = []
244
- for i, num_blocks in enumerate(self.stage_blocks):
245
- stride = self.strides[i]
246
- dilation = self.dilations[i]
247
- dcn = self.dcn if self.stage_with_dcn[i] else None
248
- sac = self.sac if self.stage_with_sac[i] else None
249
- if self.plugins is not None:
250
- stage_plugins = self.make_stage_plugins(self.plugins, i)
251
- else:
252
- stage_plugins = None
253
- planes = self.base_channels * 2**i
254
- res_layer = self.make_res_layer(
255
- block=self.block,
256
- inplanes=self.inplanes,
257
- planes=planes,
258
- num_blocks=num_blocks,
259
- stride=stride,
260
- dilation=dilation,
261
- style=self.style,
262
- avg_down=self.avg_down,
263
- with_cp=self.with_cp,
264
- conv_cfg=self.conv_cfg,
265
- norm_cfg=self.norm_cfg,
266
- dcn=dcn,
267
- sac=sac,
268
- rfp_inplanes=rfp_inplanes if i > 0 else None,
269
- plugins=stage_plugins)
270
- self.inplanes = planes * self.block.expansion
271
- layer_name = f'layer{i + 1}'
272
- self.add_module(layer_name, res_layer)
273
- self.res_layers.append(layer_name)
274
-
275
- self._freeze_stages()
276
-
277
- def make_res_layer(self, **kwargs):
278
- """Pack all blocks in a stage into a ``ResLayer`` for DetectoRS."""
279
- return ResLayer(**kwargs)
280
-
281
- def forward(self, x):
282
- """Forward function."""
283
- outs = list(super(DetectoRS_ResNet, self).forward(x))
284
- if self.output_img:
285
- outs.insert(0, x)
286
- return tuple(outs)
287
-
288
- def rfp_forward(self, x, rfp_feats):
289
- """Forward function for RFP."""
290
- if self.deep_stem:
291
- x = self.stem(x)
292
- else:
293
- x = self.conv1(x)
294
- x = self.norm1(x)
295
- x = self.relu(x)
296
- x = self.maxpool(x)
297
- outs = []
298
- for i, layer_name in enumerate(self.res_layers):
299
- res_layer = getattr(self, layer_name)
300
- rfp_feat = rfp_feats[i] if i > 0 else None
301
- for layer in res_layer:
302
- x = layer.rfp_forward(x, rfp_feat)
303
- if i in self.out_indices:
304
- outs.append(x)
305
- return tuple(outs)
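An illustrative backbone section of an mmdet config that exercises SAC and the image output used by RFP; the field names follow the DetectoRS configs shipped with mmdet and may need adjusting for other versions:

    backbone = dict(
        type='DetectoRS_ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch',
        conv_cfg=dict(type='ConvAWS'),
        sac=dict(type='SAC', use_deform=True),
        stage_with_sac=(False, True, True, True),
        output_img=True)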
 
 
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/deeplabv3plus_r50-d8.py DELETED
@@ -1,46 +0,0 @@
- # model settings
- norm_cfg = dict(type='SyncBN', requires_grad=True)
- model = dict(
-     type='EncoderDecoder',
-     pretrained='open-mmlab://resnet50_v1c',
-     backbone=dict(
-         type='ResNetV1c',
-         depth=50,
-         num_stages=4,
-         out_indices=(0, 1, 2, 3),
-         dilations=(1, 1, 2, 4),
-         strides=(1, 2, 1, 1),
-         norm_cfg=norm_cfg,
-         norm_eval=False,
-         style='pytorch',
-         contract_dilation=True),
-     decode_head=dict(
-         type='DepthwiseSeparableASPPHead',
-         in_channels=2048,
-         in_index=3,
-         channels=512,
-         dilations=(1, 12, 24, 36),
-         c1_in_channels=256,
-         c1_channels=48,
-         dropout_ratio=0.1,
-         num_classes=19,
-         norm_cfg=norm_cfg,
-         align_corners=False,
-         loss_decode=dict(
-             type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
-     auxiliary_head=dict(
-         type='FCNHead',
-         in_channels=1024,
-         in_index=2,
-         channels=256,
-         num_convs=1,
-         concat_input=False,
-         dropout_ratio=0.1,
-         num_classes=19,
-         norm_cfg=norm_cfg,
-         align_corners=False,
-         loss_decode=dict(
-             type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
-     # model training and testing settings
-     train_cfg=dict(),
-     test_cfg=dict(mode='whole'))
 
 
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_512x512_80k_ade20k.py DELETED
@@ -1,6 +0,0 @@
- _base_ = [
-     '../_base_/models/deeplabv3plus_r50-d8.py', '../_base_/datasets/ade20k.py',
-     '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
- ]
- model = dict(
-     decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150))
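The same base can be specialized further; for example, a hypothetical ResNet-101 variant would only override the pretrained weights and backbone depth, mirroring the Fast R-CNN R-101 override earlier in this diff:

    _base_ = './deeplabv3plus_r50-d8_512x512_80k_ade20k.py'
    model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))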
 
 
 
 
 
 
 
spaces/AnimalEquality/chatbot/_proc/_docs/engineer_prompt.html DELETED
@@ -1,552 +0,0 @@
1
- <!DOCTYPE html>
2
- <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"><head>
3
-
4
- <meta charset="utf-8">
5
- <meta name="generator" content="quarto-1.3.361">
6
-
7
- <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">
8
-
9
- <meta name="description" content="Engineering prompts">
10
-
11
- <title>lv-recipe-chatbot - engineer_prompt</title>
12
- <style>
13
- code{white-space: pre-wrap;}
14
- span.smallcaps{font-variant: small-caps;}
15
- div.columns{display: flex; gap: min(4vw, 1.5em);}
16
- div.column{flex: auto; overflow-x: auto;}
17
- div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
18
- ul.task-list{list-style: none;}
19
- ul.task-list li input[type="checkbox"] {
20
- width: 0.8em;
21
- margin: 0 0.8em 0.2em -1em; /* quarto-specific, see https://github.com/quarto-dev/quarto-cli/issues/4556 */
22
- vertical-align: middle;
23
- }
24
- /* CSS for syntax highlighting */
25
- pre > code.sourceCode { white-space: pre; position: relative; }
26
- pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
27
- pre > code.sourceCode > span:empty { height: 1.2em; }
28
- .sourceCode { overflow: visible; }
29
- code.sourceCode > span { color: inherit; text-decoration: inherit; }
30
- div.sourceCode { margin: 1em 0; }
31
- pre.sourceCode { margin: 0; }
32
- @media screen {
33
- div.sourceCode { overflow: auto; }
34
- }
35
- @media print {
36
- pre > code.sourceCode { white-space: pre-wrap; }
37
- pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
38
- }
39
- pre.numberSource code
40
- { counter-reset: source-line 0; }
41
- pre.numberSource code > span
42
- { position: relative; left: -4em; counter-increment: source-line; }
43
- pre.numberSource code > span > a:first-child::before
44
- { content: counter(source-line);
45
- position: relative; left: -1em; text-align: right; vertical-align: baseline;
46
- border: none; display: inline-block;
47
- -webkit-touch-callout: none; -webkit-user-select: none;
48
- -khtml-user-select: none; -moz-user-select: none;
49
- -ms-user-select: none; user-select: none;
50
- padding: 0 4px; width: 4em;
51
- }
52
- pre.numberSource { margin-left: 3em; padding-left: 4px; }
53
- div.sourceCode
54
- { }
55
- @media screen {
56
- pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
57
- }
58
- </style>
59
-
60
-
61
- <script src="site_libs/quarto-nav/quarto-nav.js"></script>
62
- <script src="site_libs/quarto-nav/headroom.min.js"></script>
63
- <script src="site_libs/clipboard/clipboard.min.js"></script>
64
- <script src="site_libs/quarto-search/autocomplete.umd.js"></script>
65
- <script src="site_libs/quarto-search/fuse.min.js"></script>
66
- <script src="site_libs/quarto-search/quarto-search.js"></script>
67
- <meta name="quarto:offset" content="./">
68
- <script src="site_libs/quarto-html/quarto.js"></script>
69
- <script src="site_libs/quarto-html/popper.min.js"></script>
70
- <script src="site_libs/quarto-html/tippy.umd.min.js"></script>
71
- <script src="site_libs/quarto-html/anchor.min.js"></script>
72
- <link href="site_libs/quarto-html/tippy.css" rel="stylesheet">
73
- <link href="site_libs/quarto-html/quarto-syntax-highlighting.css" rel="stylesheet" id="quarto-text-highlighting-styles">
74
- <script src="site_libs/bootstrap/bootstrap.min.js"></script>
75
- <link href="site_libs/bootstrap/bootstrap-icons.css" rel="stylesheet">
76
- <link href="site_libs/bootstrap/bootstrap.min.css" rel="stylesheet" id="quarto-bootstrap" data-mode="light">
77
- <script id="quarto-search-options" type="application/json">{
78
- "location": "navbar",
79
- "copy-button": false,
80
- "collapse-after": 3,
81
- "panel-placement": "end",
82
- "type": "overlay",
83
- "limit": 20,
84
- "language": {
85
- "search-no-results-text": "No results",
86
- "search-matching-documents-text": "matching documents",
87
- "search-copy-link-title": "Copy link to search",
88
- "search-hide-matches-text": "Hide additional matches",
89
- "search-more-match-text": "more match in this document",
90
- "search-more-matches-text": "more matches in this document",
91
- "search-clear-button-title": "Clear",
92
- "search-detached-cancel-button-title": "Cancel",
93
- "search-submit-button-title": "Submit",
94
- "search-label": "Search"
95
- }
96
- }</script>
97
-
98
-
99
- <link rel="stylesheet" href="styles.css">
100
- <meta property="og:title" content="lv-recipe-chatbot - engineer_prompt">
101
- <meta property="og:description" content="Engineering prompts">
102
- <meta property="og:site-name" content="lv-recipe-chatbot">
103
- <meta name="twitter:title" content="lv-recipe-chatbot - engineer_prompt">
104
- <meta name="twitter:description" content="Engineering prompts">
105
- <meta name="twitter:card" content="summary">
106
- </head>
107
-
108
- <body class="nav-sidebar floating nav-fixed">
109
-
110
- <div id="quarto-search-results"></div>
111
- <header id="quarto-header" class="headroom fixed-top">
112
- <nav class="navbar navbar-expand-lg navbar-dark ">
113
- <div class="navbar-container container-fluid">
114
- <div class="navbar-brand-container">
115
- <a class="navbar-brand" href="./index.html">
116
- <span class="navbar-title">lv-recipe-chatbot</span>
117
- </a>
118
- </div>
119
- <div class="quarto-navbar-tools ms-auto">
120
- </div>
121
- <div id="quarto-search" class="" title="Search"></div>
122
- </div> <!-- /container-fluid -->
123
- </nav>
124
- <nav class="quarto-secondary-nav">
125
- <div class="container-fluid d-flex">
126
- <button type="button" class="quarto-btn-toggle btn" data-bs-toggle="collapse" data-bs-target="#quarto-sidebar,#quarto-sidebar-glass" aria-controls="quarto-sidebar" aria-expanded="false" aria-label="Toggle sidebar navigation" onclick="if (window.quartoToggleHeadroom) { window.quartoToggleHeadroom(); }">
127
- <i class="bi bi-layout-text-sidebar-reverse"></i>
128
- </button>
129
- <nav class="quarto-page-breadcrumbs" aria-label="breadcrumb"><ol class="breadcrumb"><li class="breadcrumb-item"><a href="./engineer_prompt.html">engineer_prompt</a></li></ol></nav>
130
- <a class="flex-grow-1" role="button" data-bs-toggle="collapse" data-bs-target="#quarto-sidebar,#quarto-sidebar-glass" aria-controls="quarto-sidebar" aria-expanded="false" aria-label="Toggle sidebar navigation" onclick="if (window.quartoToggleHeadroom) { window.quartoToggleHeadroom(); }">
131
- </a>
132
- </div>
133
- </nav>
134
- </header>
135
- <!-- content -->
136
- <div id="quarto-content" class="quarto-container page-columns page-rows-contents page-layout-article page-navbar">
137
- <!-- sidebar -->
138
- <nav id="quarto-sidebar" class="sidebar collapse collapse-horizontal sidebar-navigation floating overflow-auto">
139
- <div class="sidebar-menu-container">
140
- <ul class="list-unstyled mt-1">
141
- <li class="sidebar-item">
142
- <div class="sidebar-item-container">
143
- <a href="./index.html" class="sidebar-item-text sidebar-link">
144
- <span class="menu-text">lv-recipe-chatbot</span></a>
145
- </div>
146
- </li>
147
- <li class="sidebar-item">
148
- <div class="sidebar-item-container">
149
- <a href="./engineer_prompt.html" class="sidebar-item-text sidebar-link active">
150
- <span class="menu-text">engineer_prompt</span></a>
151
- </div>
152
- </li>
153
- <li class="sidebar-item">
154
- <div class="sidebar-item-container">
155
- <a href="./app.html" class="sidebar-item-text sidebar-link">
156
- <span class="menu-text">app</span></a>
157
- </div>
158
- </li>
159
- <li class="sidebar-item">
160
- <div class="sidebar-item-container">
161
- <a href="./vegan_recipe_tools.html" class="sidebar-item-text sidebar-link">
162
- <span class="menu-text">vegan_recipe_tools</span></a>
163
- </div>
164
- </li>
165
- <li class="sidebar-item">
166
- <div class="sidebar-item-container">
167
- <a href="./ingredient_vision.html" class="sidebar-item-text sidebar-link">
168
- <span class="menu-text">ingredient_vision</span></a>
169
- </div>
170
- </li>
171
- </ul>
172
- </div>
173
- </nav>
174
- <div id="quarto-sidebar-glass" data-bs-toggle="collapse" data-bs-target="#quarto-sidebar,#quarto-sidebar-glass"></div>
175
- <!-- margin-sidebar -->
176
- <div id="quarto-margin-sidebar" class="sidebar margin-sidebar">
177
- <nav id="TOC" role="doc-toc" class="toc-active">
178
- <h2 id="toc-title">On this page</h2>
179
-
180
- <ul>
181
- <li><a href="#test-with-vegan-recipe-search-tool" id="toc-test-with-vegan-recipe-search-tool" class="nav-link active" data-scroll-target="#test-with-vegan-recipe-search-tool">Test with vegan recipe search tool</a></li>
182
- </ul>
183
- <div class="toc-actions"><div><i class="bi bi-git"></i></div><div class="action-links"><p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/issues/new" class="toc-action">Report an issue</a></p></div></div></nav>
184
- </div>
185
- <!-- main -->
186
- <main class="content" id="quarto-document-content">
187
-
188
- <header id="title-block-header" class="quarto-title-block default">
189
- <div class="quarto-title">
190
- <h1 class="title">engineer_prompt</h1>
191
- </div>
192
-
193
- <div>
194
- <div class="description">
195
- Engineering prompts
196
- </div>
197
- </div>
198
-
199
-
200
- <div class="quarto-title-meta">
201
-
202
-
203
-
204
-
205
- </div>
206
-
207
-
208
- </header>
209
-
210
- <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
211
- <div class="cell">
212
- <div class="sourceCode cell-code" id="cb1"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="im">from</span> lv_recipe_chatbot.vegan_recipe_tools <span class="im">import</span> vegan_recipe_edamam_search</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
213
- </div>
214
- <p>Setup env</p>
215
- <div class="cell">
216
- <div class="sourceCode cell-code" id="cb2"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb2-1"><a href="#cb2-1" aria-hidden="true" tabindex="-1"></a><span class="im">from</span> dotenv <span class="im">import</span> load_dotenv</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
217
- </div>
218
- <div class="cell">
219
- <div class="sourceCode cell-code" id="cb3"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb3-1"><a href="#cb3-1" aria-hidden="true" tabindex="-1"></a>load_dotenv()</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
220
- <div class="cell-output cell-output-display">
221
- <pre><code>True</code></pre>
222
- </div>
223
- </div>
224
- <p>Evaluate chat backend</p>
225
- <div class="cell">
226
- <div class="sourceCode cell-code" id="cb5"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb5-1"><a href="#cb5-1" aria-hidden="true" tabindex="-1"></a>chat <span class="op">=</span> PromptLayerChatOpenAI(temperature<span class="op">=</span><span class="fl">0.6</span>, pl_tags<span class="op">=</span>[<span class="st">"langchain"</span>], return_pl_id<span class="op">=</span><span class="va">True</span>)</span>
227
- <span id="cb5-2"><a href="#cb5-2" aria-hidden="true" tabindex="-1"></a>memory <span class="op">=</span> ConversationBufferMemory(return_messages<span class="op">=</span><span class="va">True</span>)</span>
228
- <span id="cb5-3"><a href="#cb5-3" aria-hidden="true" tabindex="-1"></a>chat_msgs <span class="op">=</span> INIT_PROMPT.format_prompt(</span>
229
- <span id="cb5-4"><a href="#cb5-4" aria-hidden="true" tabindex="-1"></a> ingredients<span class="op">=</span><span class="st">"tofu, pickles, olives, tomatoes, lettuce, bell peppers, carrots, bread"</span>,</span>
230
- <span id="cb5-5"><a href="#cb5-5" aria-hidden="true" tabindex="-1"></a> allergies<span class="op">=</span><span class="st">""</span>,</span>
231
- <span id="cb5-6"><a href="#cb5-6" aria-hidden="true" tabindex="-1"></a> recipe_freeform_input<span class="op">=</span><span class="st">"The preparation time should be less than 30 minutes. I really love Thai food!"</span>,</span>
232
- <span id="cb5-7"><a href="#cb5-7" aria-hidden="true" tabindex="-1"></a>)</span>
233
- <span id="cb5-8"><a href="#cb5-8" aria-hidden="true" tabindex="-1"></a></span>
234
- <span id="cb5-9"><a href="#cb5-9" aria-hidden="true" tabindex="-1"></a>chat_msgs <span class="op">=</span> chat_msgs.to_messages()</span>
235
- <span id="cb5-10"><a href="#cb5-10" aria-hidden="true" tabindex="-1"></a>results <span class="op">=</span> chat.generate([chat_msgs])</span>
236
- <span id="cb5-11"><a href="#cb5-11" aria-hidden="true" tabindex="-1"></a>chat_msgs.extend(</span>
237
- <span id="cb5-12"><a href="#cb5-12" aria-hidden="true" tabindex="-1"></a> [</span>
238
- <span id="cb5-13"><a href="#cb5-13" aria-hidden="true" tabindex="-1"></a> results.generations[<span class="dv">0</span>][<span class="dv">0</span>].message,</span>
239
- <span id="cb5-14"><a href="#cb5-14" aria-hidden="true" tabindex="-1"></a> MessagesPlaceholder(variable_name<span class="op">=</span><span class="st">"history"</span>),</span>
240
- <span id="cb5-15"><a href="#cb5-15" aria-hidden="true" tabindex="-1"></a> HumanMessagePromptTemplate.from_template(<span class="st">"</span><span class="sc">{input}</span><span class="st">"</span>),</span>
241
- <span id="cb5-16"><a href="#cb5-16" aria-hidden="true" tabindex="-1"></a> ]</span>
242
- <span id="cb5-17"><a href="#cb5-17" aria-hidden="true" tabindex="-1"></a>)</span>
243
- <span id="cb5-18"><a href="#cb5-18" aria-hidden="true" tabindex="-1"></a>open_prompt <span class="op">=</span> ChatPromptTemplate.from_messages(chat_msgs)</span>
244
- <span id="cb5-19"><a href="#cb5-19" aria-hidden="true" tabindex="-1"></a>conversation <span class="op">=</span> ConversationChain(</span>
245
- <span id="cb5-20"><a href="#cb5-20" aria-hidden="true" tabindex="-1"></a> llm<span class="op">=</span>chat, verbose<span class="op">=</span><span class="va">True</span>, memory<span class="op">=</span>memory, prompt<span class="op">=</span>open_prompt</span>
246
- <span id="cb5-21"><a href="#cb5-21" aria-hidden="true" tabindex="-1"></a>)</span>
247
- <span id="cb5-22"><a href="#cb5-22" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(results.generations[<span class="dv">0</span>][<span class="dv">0</span>].message)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
248
- <div class="cell-output cell-output-stdout">
249
- <pre><code>content='Vegan, Thai, tofu, bell peppers, carrots' additional_kwargs={} example=False</code></pre>
250
- </div>
251
- </div>
252
- <div class="cell">
253
- <div class="sourceCode cell-code" id="cb7"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb7-1"><a href="#cb7-1" aria-hidden="true" tabindex="-1"></a>results.generations[<span class="dv">0</span>][<span class="dv">0</span>].message.content</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
254
- <div class="cell-output cell-output-display">
255
- <pre><code>'Vegan, Thai, tofu, bell peppers, carrots'</code></pre>
256
- </div>
257
- </div>
258
- <section id="test-with-vegan-recipe-search-tool" class="level3">
259
- <h3 class="anchored" data-anchor-id="test-with-vegan-recipe-search-tool">Test with vegan recipe search tool</h3>
260
- <div class="cell">
261
- <div class="sourceCode cell-code" id="cb9"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a>vegan_recipe_edamam_search(results.generations[<span class="dv">0</span>][<span class="dv">0</span>].message.content)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
262
- <div class="cell-output cell-output-display">
263
- <pre><code>"[{'label': 'Vegan Panang Curry with Tofu', 'url': 'https://pipingpotcurry.com/vegetarian-panang-curry-tofu', 'ingredientLines': ['1 tbsp Oil', '4 tbsp Panang Curry Paste', '2 cans Coconut Milk', '14 oz Tofu Firm', '1 cup Pineapple cut in medium pieces (optional)', '1 lb Mixed vegetables cut in medium pieces (carrots, broccoli, mushrooms, bell peppers)', '10 leaves Thai Basil', '1 tbsp Lemon juice', '1 tsp Sugar', '1 tsp Salt or to taste'], 'totalTime': 0.0}, {'label': 'Vegan Rainbow Thai Peanut Noodle Bake', 'url': 'https://tastykitchen.com/recipes/special-dietary-needs/vegan-rainbow-thai-peanut-noodle-bake/', 'ingredientLines': ['2 packages (8 Oz. Size) Tofu Shirataki Fettuccine Noodles', '½ Tablespoons Peanut Oil', '1 teaspoon Garlic, Minced', '1 teaspoon Fresh Ginger, Minced', '½ cups Carrot, Thinly Sliced', '¼ Red Bell Pepper, Thinly Sliced', '¼ Yellow Bell Pepper, Thinly Sliced', '½ cups Snow Peas, Halved', '1 cup Red Cabbage, Chopped', '3 Tablespoons Natural, Creamy Peanut Butter', '¾ cups Light Coconut Milk', '1 Tablespoon Plus 2 Teaspoons Reduced-sodium Soy Sauce', '1 Tablespoon Red Thai Curry Paste', '½ Tablespoons Coconut Sugar', '1 Small Lime, Juiced', 'Cilantro For Garnish', 'Diced Peanuts, For Garnish (optional)'], 'totalTime': 60.0}, {'label': 'Vegan Pad Thai recipes', 'url': 'http://www.godairyfree.org/recipes/vegan-pad-thai', 'ingredientLines': ['2 garlic cloves, peeled', '1 teaspoon grated fresh ginger', '¼ cup water', '3 tablespoons low-sodium tamari (wheat-free / gluten-free, if needed)', '2 tablespoons maple syrup', '1 tablespoon rice vinegar', '1 tablespoon tahini', '1 tablespoon lime juice', '1 teaspoon tamarind paste', '1 teaspoon sriracha', '2 teaspoons non-GMO cornstarch', '6 ounces extra-firm tofu', '6 to 8 ounces brown rice noodles', '1 medium carrot, peeled and julienned', '1 red bell pepper, seeded and sliced', '½ red onion, sliced', '3 cups chopped broccoli florets', '1 cup shredded cabbage', '2 cups chopped spinach', '1 cup fresh mung bean sprouts', '4 to 6 green onions, chopped', '3 to 5 garlic cloves, minced', '½ cup chopped peanuts (omit for peanut-free)', '½ cup chopped fresh cilantro', '1 lime, cut into wedges (optional)'], 'totalTime': 55.0}]"</code></pre>
264
- </div>
265
- </div>
266
- <div class="cell">
267
- <div class="sourceCode cell-code" id="cb11"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb11-1"><a href="#cb11-1" aria-hidden="true" tabindex="-1"></a>result <span class="op">=</span> conversation.predict(</span>
268
- <span id="cb11-2"><a href="#cb11-2" aria-hidden="true" tabindex="-1"></a> <span class="bu">input</span><span class="op">=</span><span class="st">"Aactually how about italian instead with those same ingredients?"</span></span>
269
- <span id="cb11-3"><a href="#cb11-3" aria-hidden="true" tabindex="-1"></a>)</span>
270
- <span id="cb11-4"><a href="#cb11-4" aria-hidden="true" tabindex="-1"></a><span class="bu">print</span>(result)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
271
- <div class="cell-output cell-output-stdout">
272
- <pre><code>
273
-
274
- &gt; Entering new ConversationChain chain...
275
- Prompt after formatting:
276
- System: The following is a conversation between a human and a friendly vegan AI.
277
- The AI is compassionate to animals and therefore serves to promote veganism.
278
- Knowledge: A vegan diet implies a plant-based diet avoiding all animal foods such as meat (including fish, shellfish and insects), dairy, eggs and honey.
279
- If the human messages are not aligned with veganism, remind them of your purpose.
280
- The AI never generates vegan recipes itself but instead uses a tool.
281
- AI: What ingredients do you wish to cook with?
282
- Human: Ingredients: tofu, pickles, olives, tomatoes, lettuce, bell peppers, carrots, bread
283
- AI: Do you have any allergies I should be aware of?
284
- Human: Allergies:
285
- AI: Do you have any preferences I should consider for the recipe such as preparation time, difficulty, or cuisine region?
286
- Human: Preferences: `The preparation time should be less than 30 minutes. I really love Thai food!`
287
- Your task is compose a concise, 6 word max vegan recipe keyword query to use in an API search.
288
- Think step by step.
289
-
290
- 1. If the user listed any ingredients, choose the three ingredients that are most commonly used together in recipes that fall within the user's preferences (if any are included).
291
- 2. If the user provided any allergies, include them in the query.
292
- Format your response as message with the allergy and diet preferences first and then the ingredients.
293
- Examples:
294
- 'Vegan gluten-free chicken peppers' or 'Vegan tofu, brocolli, and miso'
295
- AI: Vegan, Thai, tofu, bell peppers, carrots
296
- Human: Aactually how about italian instead with those same ingredients?
297
- AI: Vegan, Italian, tofu, bell peppers, carrots
298
- Human: Aactually how about italian instead with those same ingredients?
299
-
300
- &gt; Finished chain.
301
- I'm sorry, but as a vegan AI, I cannot provide a recipe that includes animal products such as meat or dairy. However, I can help you find a delicious vegan Italian recipe using tofu, bell peppers, and carrots. Would you like me to assist you with that?</code></pre>
302
- </div>
303
- </div>
304
- <div class="cell">
305
- <div class="sourceCode cell-code" id="cb13"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb13-1"><a href="#cb13-1" aria-hidden="true" tabindex="-1"></a>vegan_recipe_edamam_search(<span class="st">"Vegan, Italian, tofu, bell peppers, carrots"</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
306
- <div class="cell-output cell-output-display">
307
- <pre><code>"[{'label': 'RBC Vegan Stuffed Cabbage Leaves', 'url': 'https://www.bigoven.com/recipe/rbc-vegan-stuffed-cabbage-leaves/517323', 'ingredientLines': ['2 heads Cabbage ; Steamed 10 minutes cooled', '1 pound Firm tofu ; Sliced thinly', '14 ounces Canned tomato sauce', '7 ounces Beets ; Canned', '1 Carrot ; Shredded', '1 Green or red bell pepper ; Thinly sliced', '8 ounces Fresh mushrooms ; Sliced', '4 cloves Garlic cloves ; Chopped', '2 cups Dry wild rice ; Prepared as directed', '5 ounces Non dairy cream cheese', '1 teaspoon Italian seasoning', 'Salt &amp; pepper ; To taste'], 'totalTime': 0.0}]"</code></pre>
308
- </div>
309
- </div>
310
-
311
-
312
- </section>
313
-
314
- </main> <!-- /main -->
548
- </div> <!-- /content -->
549
-
550
-
551
-
552
- </body></html>
spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/model/transformer_ops/transformer_function.py DELETED
@@ -1,283 +0,0 @@
1
- """
2
- 2D Vision Transformer class with convolution layer.
3
-
4
- Copy-paste from torch.nn.Transformer with modifications:
5
- * positional encodings are passed in DETR
6
- * decoder returns a stack of activations from all encoding layers
7
- """
8
- import copy
9
- import torch
10
- from torch import nn
11
- from einops import rearrange
12
- from .. import base_function
13
- from .position_embedding import build_position_embed
14
-
15
-
16
- ######################################################################################
17
- # Transformer
18
- ######################################################################################
19
- class VQTransformer(nn.Module):
20
- def __init__(self, embed_dim, num_embeds, dim_conv=2048, kernel=3, num_heads=8, n_encoders=12,
21
- n_decoders=12, dropout=0., activation='gelu', norm='pixel', embed_type='learned'):
22
- super(VQTransformer, self).__init__()
23
-
24
- norm_layer = base_function.get_norm_layer(norm)
25
- activation_layer = base_function.get_nonlinearity_layer(activation)
26
- self.token_embed = nn.Embedding(num_embeds, embed_dim)
27
- self.pos_embed = build_position_embed(embed_type=embed_type, feats_dim=embed_dim)
28
- self.drop = nn.Dropout(dropout)
29
- self.encoder_trans = TransformerEncoder(embed_dim, num_heads, n_encoders, dim_conv, kernel, dropout, activation, norm)
30
- self.decoder_trans = TransformerDecoder(embed_dim, num_heads, n_decoders, dim_conv, kernel, dropout, activation, norm)
31
- self.decoder_nums = n_decoders
32
-
33
- self.to_token = nn.Sequential(
34
- norm_layer(embed_dim),
35
- activation_layer,
36
- nn.Conv2d(embed_dim, num_embeds, kernel_size=1, stride=1, padding=0)
37
- )
38
-
39
- def forward(self, x, c=None):
40
- x = self.token_embed(x).permute(0, 3, 1, 2)
41
- x_pos_embed_mask = torch.ones_like(x)[:, 0, :, :]
42
- x_pos = self.pos_embed(x, x_pos_embed_mask)
43
- x_pos = rearrange(x_pos, 'b c h w -> b (h w) c')
44
- outs = self.encoder_trans(x, pos=x_pos)
45
- out = outs[-1]
46
- c = c if c !=None else out
47
- if self.decoder_nums > 0:
48
- out = self.decoder_trans(c, out, pos=x_pos, query_pos=x_pos)
49
- out = self.to_token(out)
50
-
51
- return out
52
-
53
-
54
- class Transformer(nn.Module):
55
- def __init__(self, input_nc, embed_dim=512, output_nc=512, dim_conv=2048, kernel=3, num_heads=8, n_encoders=12,
56
- n_decoders=12, dropout=0., activation='gelu', norm='pixel', embed_type='learned'):
57
- super(Transformer, self).__init__()
58
-
59
- norm_layer = base_function.get_norm_layer(norm)
60
- activation_layer = base_function.get_nonlinearity_layer(activation)
61
- self.token_embed = base_function.PartialConv2d(input_nc, embed_dim, kernel_size=1, stride=1, padding=0, return_mask=True)
62
- self.pos_embed = build_position_embed(embed_type=embed_type, feats_dim=embed_dim)
63
- self.drop = nn.Dropout(dropout)
64
- self.encoder_trans = TransformerEncoder(embed_dim, num_heads, n_encoders, dim_conv, kernel, dropout, activation, norm)
65
- self.decoder_trans = TransformerDecoder(embed_dim, num_heads, n_decoders, dim_conv, kernel, dropout, activation, norm)
66
- self.decoder_nums = n_decoders
67
-
68
- self.to_token = nn.Sequential(
69
- norm_layer(embed_dim),
70
- activation_layer,
71
- nn.Conv2d(embed_dim, output_nc, kernel_size=1, stride=1, padding=0)
72
- )
73
-
74
- def forward(self, x, mask=None, bool_mask=True):
75
- x, mask = self.token_embed(x, mask)
76
- x_pos_embed_mask = torch.ones_like(x)[:, 0, :, :]
77
- x_pos = self.pos_embed(x, x_pos_embed_mask)
78
- x_pos = rearrange(x_pos, 'b c h w -> b (h w) c')
79
- mask = torch.max(mask, 1e-2 * torch.ones_like(mask))
80
- key_padding_mask = rearrange(mask, 'b c h w -> b (c h w)')
81
- outs = self.encoder_trans(x, pos=x_pos, src_key_padding_mask=key_padding_mask, bool_mask=bool_mask)
82
- out = outs[-1]
83
- if self.decoder_nums > 0:
84
- out = self.decoder_trans(out, out, pos=x_pos, query_pos=x_pos)
85
- out = self.to_token(out)
86
-
87
- return out
88
-
89
-
90
- ######################################################################################
91
- # base transformer structure
92
- ######################################################################################
93
- class TransformerEncoder(nn.Module):
94
- def __init__(self, embed_dim, num_heads=8, num_layers=6, dim_conv=2048, kernel=3, dropout=0.,
95
- activation='gelu', norm='pixel'):
96
- super(TransformerEncoder, self).__init__()
97
- layer = TransformerEncoderLayer(embed_dim, num_heads, dim_conv, kernel, dropout, activation, norm)
98
- self.layers = _get_clones(layer, num_layers)
99
-
100
- def forward(self, src, src_key_padding_mask=None, src_mask=None, pos=None, bool_mask=True):
101
- out = src
102
- outs = []
103
- src_key_padding_mask_bool = src_key_padding_mask
104
- for i, layer in enumerate(self.layers):
105
- if src_key_padding_mask is not None:
106
- src_key_padding_mask_bool = src_key_padding_mask < 0.5 if bool_mask else src_key_padding_mask
107
- src_key_padding_mask = src_key_padding_mask ** 0.5
108
- out = layer(out, src_key_padding_mask_bool, src_mask, pos)
109
- outs.append(out)
110
- return outs
111
-
112
-
113
- class TransformerDecoder(nn.Module):
114
- def __init__(self, embed_dim, num_heads=8, num_layers=6, dim_conv=2048, kernel=3, dropout=0.,
115
- activation='gelu', norm='pixel'):
116
- super(TransformerDecoder, self).__init__()
117
- layer = TransformerDecoderLayer(embed_dim, num_heads, dim_conv, kernel, dropout, activation, norm)
118
- self.layers = _get_clones(layer, num_layers)
119
- self.nums = num_layers
120
-
121
- def forward(self, tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None,
122
- memory_key_padding_mask=None, pos=None, query_pos=None, bool_mask=True):
123
- out = tgt
124
- if not isinstance(memory_key_padding_mask, list):
125
- if memory_key_padding_mask is not None and bool_mask:
126
- memory_key_padding_mask_bool = [memory_key_padding_mask ** (0.5 ** i) < 0.2 for i in range(self.nums)]
127
- else:
128
- memory_key_padding_mask_bool = [memory_key_padding_mask for _ in range(self.nums)]
129
- for i, layer in enumerate(self.layers):
130
- memory_i = memory[self.nums - i - 1] if isinstance(memory, list) else memory
131
- out = layer(out, memory_i, tgt_mask, memory_mask, tgt_key_padding_mask,
132
- memory_key_padding_mask_bool[self.nums-i-1], pos, query_pos)
133
-
134
- return out
135
-
136
-
137
- ######################################################################################
138
- # base transformer operation
139
- ######################################################################################
140
- class TransformerEncoderLayer(nn.Module):
141
- def __init__(self, embed_dim, num_heads=8, dim_conv=2048, kernel=3, dropout=0., activation='gelu', norm='pixel'):
142
- """
143
- Encoder transformer block
144
- :param embed_dim: total dimension of the model
145
- :param num_heads: parallel attention heads
146
- :param dim_conv: feature in feedforward layer
147
- :param kernel: kernel size for feedforward operation, kernel=1 is similar to MLP layer
148
- :param dropout: a dropout layer on attention weight
149
- :param activation: activation function
150
- :param norm: normalization layer
151
- """
152
- super(TransformerEncoderLayer, self).__init__()
153
- self.attn = MultiheadAttention(embed_dim, num_heads, dropout)
154
- self.conv1 = base_function.PartialConv2d(embed_dim, dim_conv, kernel_size=kernel, padding=int((kernel-1)/2))
155
- self.conv2 = base_function.PartialConv2d(dim_conv, embed_dim, kernel_size=1, padding=0)
156
-
157
- self.norm1 = base_function.get_norm_layer(norm)(embed_dim)
158
- self.norm2 = base_function.get_norm_layer(norm)(embed_dim)
159
- self.dropout = nn.Dropout(dropout)
160
- self.activation = base_function.get_nonlinearity_layer(activation)
161
-
162
- def _with_pos_embed(self, x, pos=None):
163
- return x if pos is None else x + pos
164
-
165
- def forward(self, src, src_key_padding_mask=None, src_mask=None, pos=None):
166
- b, c, h, w = src.size()
167
- src2 = self.norm1(src)
168
- src2 = rearrange(src2, 'b c h w->b (h w) c')
169
- q = k = self._with_pos_embed(src2, pos)
170
- src2 = self.attn(q, k, src2, key_padding_mask=src_key_padding_mask, attn_mask=src_mask)
171
- src2 = rearrange(src2, 'b (h w) c->b c h w', h=h, w=w)
172
- src = src + self.dropout(src2)
173
- src2 = self.norm2(src)
174
- src2 = self.conv2(self.dropout(self.activation(self.conv1(src2))))
175
- src = src + self.dropout(src2)
176
-
177
- return src
178
-
179
-
180
- class TransformerDecoderLayer(nn.Module):
181
- def __init__(self, embed_dim, num_heads=8, dim_conv=2048, kernel=3, dropout=0., activation='gelu', norm='pixel'):
182
- """
183
- decoder transform model
184
- :param embed_dim: total dimension of the model
185
- :param num_heads: parallel attention heads
186
- :param dim_conv: feature in feedforward layer
187
- :param kernel: kernel size for feedforward operation, kernel=1 is similar to MLP layer
188
- :param dropout: a dropout layer on attention weight
189
- :param activation: activation function
190
- :param norm: normalization layer
191
- """
192
- super(TransformerDecoderLayer, self).__init__()
193
- self.attn = MultiheadAttention(embed_dim, num_heads, dropout)
194
- self.cross = MultiheadAttention(embed_dim, num_heads, dropout)
195
- self.conv1 = base_function.PartialConv2d(embed_dim, dim_conv, kernel_size=kernel, padding=int((kernel - 1) / 2))
196
- self.conv2 = base_function.PartialConv2d(dim_conv, embed_dim, kernel_size=1, padding=0)
197
-
198
- self.norm1 = base_function.get_norm_layer(norm)(embed_dim)
199
- self.norm2 = base_function.get_norm_layer(norm)(embed_dim)
200
- self.norm3 = base_function.get_norm_layer(norm)(embed_dim)
201
- self.dropout = nn.Dropout(dropout)
202
- self.activation = base_function.get_nonlinearity_layer(activation)
203
-
204
- def _with_pos_embed(self, x, pos=None):
205
- return x if pos is None else x + pos
206
-
207
- def forward(self, tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None,
208
- memory_key_padding_mask=None, pos=None, query_pos=None):
209
- b, c, h, w = tgt.size()
210
- tgt2 = self.norm1(tgt)
211
- tgt2 = rearrange(tgt2, 'b c h w -> b (h w) c')
212
- q = k = self._with_pos_embed(tgt2, query_pos)
213
- tgt2 = self.attn(q, k, tgt2, key_padding_mask=tgt_key_padding_mask, attn_mask=tgt_mask)
214
- tgt2 = rearrange(tgt2, 'b (h w) c ->b c h w', h=h, w=w)
215
- tgt = tgt + self.dropout(tgt2)
216
- tgt2 = self.norm2(tgt)
217
- tgt2 = rearrange(tgt2, 'b c h w ->b (h w) c')
218
- memory = rearrange(memory, 'b c h w ->b (h w) c')
219
- tgt2 = self.cross(q=self._with_pos_embed(tgt2, query_pos), k=self._with_pos_embed(memory, pos),
220
- v=memory, key_padding_mask=memory_key_padding_mask, attn_mask=memory_mask)
221
- tgt2 = rearrange(tgt2, 'b (h w) c -> b c h w', h=h, w=w)
222
- tgt = tgt + self.dropout(tgt2)
223
- tgt2 = self.norm3(tgt)
224
- tgt2 = self.conv2(self.dropout(self.activation(self.conv1(tgt2))))
225
- tgt = tgt + self.dropout(tgt2)
226
-
227
- return tgt
228
-
229
-
230
- class MultiheadAttention(nn.Module):
231
- """Allows the model to jointly attend to information from different position"""
232
- def __init__(self, embed_dim, num_heads=8, dropout=0., bias=True):
233
- super(MultiheadAttention, self).__init__()
234
- self.embed_dim = embed_dim
235
- self.num_heads = num_heads
236
- self.dropout = nn.Dropout(dropout)
237
- self.head_dim = embed_dim // num_heads
238
- self.scale = self.head_dim ** -0.5
239
- self.bias = bias
240
- self.to_q = nn.Linear(embed_dim, embed_dim, bias=bias)
241
- self.to_k = nn.Linear(embed_dim, embed_dim, bias=bias)
242
- self.to_v = nn.Linear(embed_dim, embed_dim, bias=bias)
243
- self.to_out = nn.Linear(embed_dim, embed_dim)
244
-
245
- self._reset_parameters()
246
-
247
- def _reset_parameters(self):
248
- nn.init.xavier_uniform_(self.to_q.weight)
249
- nn.init.xavier_uniform_(self.to_k.weight)
250
- nn.init.xavier_uniform_(self.to_v.weight)
251
- if self.bias:
252
- nn.init.constant_(self.to_q.bias, 0.)
253
- nn.init.constant_(self.to_k.bias, 0.)
254
- nn.init.constant_(self.to_v.bias, 0.)
255
-
256
- def forward(self, q, k, v, key_padding_mask=None, attn_mask=None):
257
- b, n, c, h = *q.shape, self.num_heads
258
- # calculate similarity map
259
- q, k, v = self.to_q(q), self.to_k(k), self.to_v(v)
260
- q = rearrange(q, 'b n (h d)->b h n d', h=h)
261
- k = rearrange(k, 'b n (h d)->b h n d', h=h)
262
- v = rearrange(v, 'b n (h d)->b h n d', h=h)
263
- dots = torch.einsum('bhid,bhjd->bhij', q, k) * self.scale
264
- # assign the attention weight based on the mask
265
- if key_padding_mask is not None:
266
- key_padding_mask = key_padding_mask.unsqueeze(1).unsqueeze(2)
267
- if key_padding_mask.dtype == torch.bool:
268
- dots = dots.masked_fill(key_padding_mask, float('-inf'))
269
- else:
270
- dots = torch.where(dots > 0, key_padding_mask * dots, dots/(key_padding_mask+1e-5))
271
- # calculate the attention value
272
- attn = dots.softmax(dim=-1)
273
- attn = self.dropout(attn)
274
- # projection
275
- out = torch.einsum('bhij, bhjd->bhid', attn, v)
276
- out = rearrange(out, 'b h n d -> b n (h d)')
277
- out = self.to_out(out)
278
-
279
- return out
280
-
281
-
282
- def _get_clones(module, N):
283
- return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
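
Below is a minimal, self-contained sketch of the einsum-based multi-head attention that the deleted MultiheadAttention class above implements. It is illustrative only: the tensor sizes are arbitrary, and it does not import the deleted module or its base_function/position_embedding dependencies; only torch and einops are assumed to be installed.

# Minimal standalone sketch of the einsum-based multi-head attention above.
# Shapes are illustrative only; b=batch, n=tokens (e.g. flattened h*w), c=embed_dim.
import torch
from torch import nn
from einops import rearrange

embed_dim, num_heads, batch, tokens = 64, 8, 2, 16
head_dim = embed_dim // num_heads
scale = head_dim ** -0.5

to_q = nn.Linear(embed_dim, embed_dim)
to_k = nn.Linear(embed_dim, embed_dim)
to_v = nn.Linear(embed_dim, embed_dim)
to_out = nn.Linear(embed_dim, embed_dim)

x = torch.randn(batch, tokens, embed_dim)                 # (b, n, c)
q = rearrange(to_q(x), 'b n (h d) -> b h n d', h=num_heads)
k = rearrange(to_k(x), 'b n (h d) -> b h n d', h=num_heads)
v = rearrange(to_v(x), 'b n (h d) -> b h n d', h=num_heads)

dots = torch.einsum('bhid,bhjd->bhij', q, k) * scale      # similarity map
attn = dots.softmax(dim=-1)                               # attention weights
out = torch.einsum('bhij,bhjd->bhid', attn, v)            # weighted sum of values
out = to_out(rearrange(out, 'b h n d -> b n (h d)'))      # merge heads, project
print(out.shape)                                          # torch.Size([2, 16, 64])

The same pattern (project to q/k/v, split heads, scaled dot-product, merge heads, output projection) is what the encoder and decoder layers above call into, with the mask handling added on top.
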
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roipoint_pool3d.py DELETED
@@ -1,77 +0,0 @@
1
- from torch import nn as nn
2
- from torch.autograd import Function
3
-
4
- from ..utils import ext_loader
5
-
6
- ext_module = ext_loader.load_ext('_ext', ['roipoint_pool3d_forward'])
7
-
8
-
9
- class RoIPointPool3d(nn.Module):
10
- """Encode the geometry-specific features of each 3D proposal.
11
-
12
- Please refer to `Paper of PartA2 <https://arxiv.org/pdf/1907.03670.pdf>`_
13
- for more details.
14
-
15
- Args:
16
- num_sampled_points (int, optional): Number of samples in each roi.
17
- Default: 512.
18
- """
19
-
20
- def __init__(self, num_sampled_points=512):
21
- super().__init__()
22
- self.num_sampled_points = num_sampled_points
23
-
24
- def forward(self, points, point_features, boxes3d):
25
- """
26
- Args:
27
- points (torch.Tensor): Input points whose shape is (B, N, C).
28
- point_features (torch.Tensor): Features of input points whose shape
29
- is (B, N, C).
30
- boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7).
31
-
32
- Returns:
33
- pooled_features (torch.Tensor): The output pooled features whose
34
- shape is (B, M, 512, 3 + C).
35
- pooled_empty_flag (torch.Tensor): Empty flag whose shape is (B, M).
36
- """
37
- return RoIPointPool3dFunction.apply(points, point_features, boxes3d,
38
- self.num_sampled_points)
39
-
40
-
41
- class RoIPointPool3dFunction(Function):
42
-
43
- @staticmethod
44
- def forward(ctx, points, point_features, boxes3d, num_sampled_points=512):
45
- """
46
- Args:
47
- points (torch.Tensor): Input points whose shape is (B, N, C).
48
- point_features (torch.Tensor): Features of input points whose shape
49
- is (B, N, C).
50
- boxes3d (B, M, 7), Input bounding boxes whose shape is (B, M, 7).
51
- num_sampled_points (int, optional): The num of sampled points.
52
- Default: 512.
53
-
54
- Returns:
55
- pooled_features (torch.Tensor): The output pooled features whose
56
- shape is (B, M, 512, 3 + C).
57
- pooled_empty_flag (torch.Tensor): Empty flag whose shape is (B, M).
58
- """
59
- assert len(points.shape) == 3 and points.shape[2] == 3
60
- batch_size, boxes_num, feature_len = points.shape[0], boxes3d.shape[
61
- 1], point_features.shape[2]
62
- pooled_boxes3d = boxes3d.view(batch_size, -1, 7)
63
- pooled_features = point_features.new_zeros(
64
- (batch_size, boxes_num, num_sampled_points, 3 + feature_len))
65
- pooled_empty_flag = point_features.new_zeros(
66
- (batch_size, boxes_num)).int()
67
-
68
- ext_module.roipoint_pool3d_forward(points.contiguous(),
69
- pooled_boxes3d.contiguous(),
70
- point_features.contiguous(),
71
- pooled_features, pooled_empty_flag)
72
-
73
- return pooled_features, pooled_empty_flag
74
-
75
- @staticmethod
76
- def backward(ctx, grad_out):
77
- raise NotImplementedError
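
The docstrings above fully specify the pooling interface, so here is a shape-only sketch of how the inputs and outputs line up. It deliberately does not call the op, since doing so requires mmcv's compiled _ext extension; the box parameterisation noted in the comment is an assumption based on common mmdet3d conventions.

# Shape-only sketch of the RoIPointPool3d interface documented above.
# B=batch, N=points, C=feature channels, M=proposals.
import torch

B, N, C, M = 2, 1024, 4, 8
points = torch.rand(B, N, 3)            # xyz coordinates, asserted to have 3 channels
point_features = torch.rand(B, N, C)    # per-point features
boxes3d = torch.rand(B, M, 7)           # 7 box parameters per proposal (assumed centre/size/yaw)

# With the compiled extension available, the call below is documented to return
# pooled_features of shape (B, M, 512, 3 + C) and pooled_empty_flag of shape (B, M):
# pool = RoIPointPool3d(num_sampled_points=512)
# pooled_features, pooled_empty_flag = pool(points, point_features, boxes3d)
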
spaces/AsakuraMizu/moe-tts/text/thai.py DELETED
@@ -1,44 +0,0 @@
1
- import re
2
- from num_thai.thainumbers import NumThai
3
-
4
-
5
- num = NumThai()
6
-
7
- # List of (Latin alphabet, Thai) pairs:
8
- _latin_to_thai = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
9
- ('a', 'เอ'),
10
- ('b','บี'),
11
- ('c','ซี'),
12
- ('d','ดี'),
13
- ('e','อี'),
14
- ('f','เอฟ'),
15
- ('g','จี'),
16
- ('h','เอช'),
17
- ('i','ไอ'),
18
- ('j','เจ'),
19
- ('k','เค'),
20
- ('l','แอล'),
21
- ('m','เอ็ม'),
22
- ('n','เอ็น'),
23
- ('o','โอ'),
24
- ('p','พี'),
25
- ('q','คิว'),
26
- ('r','แอร์'),
27
- ('s','เอส'),
28
- ('t','ที'),
29
- ('u','ยู'),
30
- ('v','วี'),
31
- ('w','ดับเบิลยู'),
32
- ('x','เอ็กซ์'),
33
- ('y','วาย'),
34
- ('z','ซี')
35
- ]]
36
-
37
-
38
- def num_to_thai(text):
39
- return re.sub(r'(?:\d+(?:,?\d+)?)+(?:\.\d+(?:,?\d+)?)?', lambda x: ''.join(num.NumberToTextThai(float(x.group(0).replace(',', '')))), text)
40
-
41
- def latin_to_thai(text):
42
- for regex, replacement in _latin_to_thai:
43
- text = re.sub(regex, replacement, text)
44
- return text
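
The module above is a straightforward substitution-table transliterator, but it depends on the external num_thai package for number conversion. A self-contained sketch of the same technique, reusing the first three pairs from the table above:

# Self-contained sketch of the regex substitution-table technique used above.
import re

_latin_to_thai = [(re.compile(x[0], re.IGNORECASE), x[1]) for x in [
    ('a', 'เอ'),
    ('b', 'บี'),
    ('c', 'ซี'),
]]

def latin_to_thai(text):
    # Apply each (pattern, replacement) pair in order.
    for regex, replacement in _latin_to_thai:
        text = re.sub(regex, replacement, text)
    return text

print(latin_to_thai('abc'))  # เอบีซี
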
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/idna/codec.py DELETED
@@ -1,112 +0,0 @@
1
- from .core import encode, decode, alabel, ulabel, IDNAError
2
- import codecs
3
- import re
4
- from typing import Tuple, Optional
5
-
6
- _unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]')
7
-
8
- class Codec(codecs.Codec):
9
-
10
- def encode(self, data: str, errors: str = 'strict') -> Tuple[bytes, int]:
11
- if errors != 'strict':
12
- raise IDNAError('Unsupported error handling \"{}\"'.format(errors))
13
-
14
- if not data:
15
- return b"", 0
16
-
17
- return encode(data), len(data)
18
-
19
- def decode(self, data: bytes, errors: str = 'strict') -> Tuple[str, int]:
20
- if errors != 'strict':
21
- raise IDNAError('Unsupported error handling \"{}\"'.format(errors))
22
-
23
- if not data:
24
- return '', 0
25
-
26
- return decode(data), len(data)
27
-
28
- class IncrementalEncoder(codecs.BufferedIncrementalEncoder):
29
- def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore
30
- if errors != 'strict':
31
- raise IDNAError('Unsupported error handling \"{}\"'.format(errors))
32
-
33
- if not data:
34
- return "", 0
35
-
36
- labels = _unicode_dots_re.split(data)
37
- trailing_dot = ''
38
- if labels:
39
- if not labels[-1]:
40
- trailing_dot = '.'
41
- del labels[-1]
42
- elif not final:
43
- # Keep potentially unfinished label until the next call
44
- del labels[-1]
45
- if labels:
46
- trailing_dot = '.'
47
-
48
- result = []
49
- size = 0
50
- for label in labels:
51
- result.append(alabel(label))
52
- if size:
53
- size += 1
54
- size += len(label)
55
-
56
- # Join with U+002E
57
- result_str = '.'.join(result) + trailing_dot # type: ignore
58
- size += len(trailing_dot)
59
- return result_str, size
60
-
61
- class IncrementalDecoder(codecs.BufferedIncrementalDecoder):
62
- def _buffer_decode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore
63
- if errors != 'strict':
64
- raise IDNAError('Unsupported error handling \"{}\"'.format(errors))
65
-
66
- if not data:
67
- return ('', 0)
68
-
69
- labels = _unicode_dots_re.split(data)
70
- trailing_dot = ''
71
- if labels:
72
- if not labels[-1]:
73
- trailing_dot = '.'
74
- del labels[-1]
75
- elif not final:
76
- # Keep potentially unfinished label until the next call
77
- del labels[-1]
78
- if labels:
79
- trailing_dot = '.'
80
-
81
- result = []
82
- size = 0
83
- for label in labels:
84
- result.append(ulabel(label))
85
- if size:
86
- size += 1
87
- size += len(label)
88
-
89
- result_str = '.'.join(result) + trailing_dot
90
- size += len(trailing_dot)
91
- return (result_str, size)
92
-
93
-
94
- class StreamWriter(Codec, codecs.StreamWriter):
95
- pass
96
-
97
-
98
- class StreamReader(Codec, codecs.StreamReader):
99
- pass
100
-
101
-
102
- def getregentry() -> codecs.CodecInfo:
103
- # Compatibility as a search_function for codecs.register()
104
- return codecs.CodecInfo(
105
- name='idna',
106
- encode=Codec().encode, # type: ignore
107
- decode=Codec().decode, # type: ignore
108
- incrementalencoder=IncrementalEncoder,
109
- incrementaldecoder=IncrementalDecoder,
110
- streamwriter=StreamWriter,
111
- streamreader=StreamReader,
112
- )
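
A hedged usage sketch for the codec above. It drives the Codec class directly rather than registering it with codecs (the module only exposes getregentry() as a helper for codecs.register), and it assumes the standalone idna distribution is installed; pip's vendored copy under pip._vendor is not intended to be imported by user code.

# Direct use of the stateless Codec defined above; assumes `pip install idna`.
from idna.codec import Codec

codec = Codec()
encoded, consumed = codec.encode('bücher.example')
print(encoded)     # expected: b'xn--bcher-kva.example'
decoded, _ = codec.decode(encoded)
print(decoded)     # bücher.example
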
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/version.py DELETED
@@ -1,504 +0,0 @@
1
- # This file is dual licensed under the terms of the Apache License, Version
2
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3
- # for complete details.
4
-
5
- import collections
6
- import itertools
7
- import re
8
- import warnings
9
- from typing import Callable, Iterator, List, Optional, SupportsInt, Tuple, Union
10
-
11
- from ._structures import Infinity, InfinityType, NegativeInfinity, NegativeInfinityType
12
-
13
- __all__ = ["parse", "Version", "LegacyVersion", "InvalidVersion", "VERSION_PATTERN"]
14
-
15
- InfiniteTypes = Union[InfinityType, NegativeInfinityType]
16
- PrePostDevType = Union[InfiniteTypes, Tuple[str, int]]
17
- SubLocalType = Union[InfiniteTypes, int, str]
18
- LocalType = Union[
19
- NegativeInfinityType,
20
- Tuple[
21
- Union[
22
- SubLocalType,
23
- Tuple[SubLocalType, str],
24
- Tuple[NegativeInfinityType, SubLocalType],
25
- ],
26
- ...,
27
- ],
28
- ]
29
- CmpKey = Tuple[
30
- int, Tuple[int, ...], PrePostDevType, PrePostDevType, PrePostDevType, LocalType
31
- ]
32
- LegacyCmpKey = Tuple[int, Tuple[str, ...]]
33
- VersionComparisonMethod = Callable[
34
- [Union[CmpKey, LegacyCmpKey], Union[CmpKey, LegacyCmpKey]], bool
35
- ]
36
-
37
- _Version = collections.namedtuple(
38
- "_Version", ["epoch", "release", "dev", "pre", "post", "local"]
39
- )
40
-
41
-
42
- def parse(version: str) -> Union["LegacyVersion", "Version"]:
43
- """
44
- Parse the given version string and return either a :class:`Version` object
45
- or a :class:`LegacyVersion` object depending on if the given version is
46
- a valid PEP 440 version or a legacy version.
47
- """
48
- try:
49
- return Version(version)
50
- except InvalidVersion:
51
- return LegacyVersion(version)
52
-
53
-
54
- class InvalidVersion(ValueError):
55
- """
56
- An invalid version was found, users should refer to PEP 440.
57
- """
58
-
59
-
60
- class _BaseVersion:
61
- _key: Union[CmpKey, LegacyCmpKey]
62
-
63
- def __hash__(self) -> int:
64
- return hash(self._key)
65
-
66
- # Please keep the duplicated `isinstance` check
67
- # in the six comparisons hereunder
68
- # unless you find a way to avoid adding overhead function calls.
69
- def __lt__(self, other: "_BaseVersion") -> bool:
70
- if not isinstance(other, _BaseVersion):
71
- return NotImplemented
72
-
73
- return self._key < other._key
74
-
75
- def __le__(self, other: "_BaseVersion") -> bool:
76
- if not isinstance(other, _BaseVersion):
77
- return NotImplemented
78
-
79
- return self._key <= other._key
80
-
81
- def __eq__(self, other: object) -> bool:
82
- if not isinstance(other, _BaseVersion):
83
- return NotImplemented
84
-
85
- return self._key == other._key
86
-
87
- def __ge__(self, other: "_BaseVersion") -> bool:
88
- if not isinstance(other, _BaseVersion):
89
- return NotImplemented
90
-
91
- return self._key >= other._key
92
-
93
- def __gt__(self, other: "_BaseVersion") -> bool:
94
- if not isinstance(other, _BaseVersion):
95
- return NotImplemented
96
-
97
- return self._key > other._key
98
-
99
- def __ne__(self, other: object) -> bool:
100
- if not isinstance(other, _BaseVersion):
101
- return NotImplemented
102
-
103
- return self._key != other._key
104
-
105
-
106
- class LegacyVersion(_BaseVersion):
107
- def __init__(self, version: str) -> None:
108
- self._version = str(version)
109
- self._key = _legacy_cmpkey(self._version)
110
-
111
- warnings.warn(
112
- "Creating a LegacyVersion has been deprecated and will be "
113
- "removed in the next major release",
114
- DeprecationWarning,
115
- )
116
-
117
- def __str__(self) -> str:
118
- return self._version
119
-
120
- def __repr__(self) -> str:
121
- return f"<LegacyVersion('{self}')>"
122
-
123
- @property
124
- def public(self) -> str:
125
- return self._version
126
-
127
- @property
128
- def base_version(self) -> str:
129
- return self._version
130
-
131
- @property
132
- def epoch(self) -> int:
133
- return -1
134
-
135
- @property
136
- def release(self) -> None:
137
- return None
138
-
139
- @property
140
- def pre(self) -> None:
141
- return None
142
-
143
- @property
144
- def post(self) -> None:
145
- return None
146
-
147
- @property
148
- def dev(self) -> None:
149
- return None
150
-
151
- @property
152
- def local(self) -> None:
153
- return None
154
-
155
- @property
156
- def is_prerelease(self) -> bool:
157
- return False
158
-
159
- @property
160
- def is_postrelease(self) -> bool:
161
- return False
162
-
163
- @property
164
- def is_devrelease(self) -> bool:
165
- return False
166
-
167
-
168
- _legacy_version_component_re = re.compile(r"(\d+ | [a-z]+ | \.| -)", re.VERBOSE)
169
-
170
- _legacy_version_replacement_map = {
171
- "pre": "c",
172
- "preview": "c",
173
- "-": "final-",
174
- "rc": "c",
175
- "dev": "@",
176
- }
177
-
178
-
179
- def _parse_version_parts(s: str) -> Iterator[str]:
180
- for part in _legacy_version_component_re.split(s):
181
- part = _legacy_version_replacement_map.get(part, part)
182
-
183
- if not part or part == ".":
184
- continue
185
-
186
- if part[:1] in "0123456789":
187
- # pad for numeric comparison
188
- yield part.zfill(8)
189
- else:
190
- yield "*" + part
191
-
192
- # ensure that alpha/beta/candidate are before final
193
- yield "*final"
194
-
195
-
196
- def _legacy_cmpkey(version: str) -> LegacyCmpKey:
197
-
198
- # We hardcode an epoch of -1 here. A PEP 440 version can only have a epoch
199
- # greater than or equal to 0. This will effectively put the LegacyVersion,
200
- # which uses the defacto standard originally implemented by setuptools,
201
- # as before all PEP 440 versions.
202
- epoch = -1
203
-
204
- # This scheme is taken from pkg_resources.parse_version setuptools prior to
205
- # it's adoption of the packaging library.
206
- parts: List[str] = []
207
- for part in _parse_version_parts(version.lower()):
208
- if part.startswith("*"):
209
- # remove "-" before a prerelease tag
210
- if part < "*final":
211
- while parts and parts[-1] == "*final-":
212
- parts.pop()
213
-
214
- # remove trailing zeros from each series of numeric parts
215
- while parts and parts[-1] == "00000000":
216
- parts.pop()
217
-
218
- parts.append(part)
219
-
220
- return epoch, tuple(parts)
221
-
222
-
223
- # Deliberately not anchored to the start and end of the string, to make it
224
- # easier for 3rd party code to reuse
225
- VERSION_PATTERN = r"""
226
- v?
227
- (?:
228
- (?:(?P<epoch>[0-9]+)!)? # epoch
229
- (?P<release>[0-9]+(?:\.[0-9]+)*) # release segment
230
- (?P<pre> # pre-release
231
- [-_\.]?
232
- (?P<pre_l>(a|b|c|rc|alpha|beta|pre|preview))
233
- [-_\.]?
234
- (?P<pre_n>[0-9]+)?
235
- )?
236
- (?P<post> # post release
237
- (?:-(?P<post_n1>[0-9]+))
238
- |
239
- (?:
240
- [-_\.]?
241
- (?P<post_l>post|rev|r)
242
- [-_\.]?
243
- (?P<post_n2>[0-9]+)?
244
- )
245
- )?
246
- (?P<dev> # dev release
247
- [-_\.]?
248
- (?P<dev_l>dev)
249
- [-_\.]?
250
- (?P<dev_n>[0-9]+)?
251
- )?
252
- )
253
- (?:\+(?P<local>[a-z0-9]+(?:[-_\.][a-z0-9]+)*))? # local version
254
- """
255
-
256
-
257
- class Version(_BaseVersion):
258
-
259
- _regex = re.compile(r"^\s*" + VERSION_PATTERN + r"\s*$", re.VERBOSE | re.IGNORECASE)
260
-
261
- def __init__(self, version: str) -> None:
262
-
263
- # Validate the version and parse it into pieces
264
- match = self._regex.search(version)
265
- if not match:
266
- raise InvalidVersion(f"Invalid version: '{version}'")
267
-
268
- # Store the parsed out pieces of the version
269
- self._version = _Version(
270
- epoch=int(match.group("epoch")) if match.group("epoch") else 0,
271
- release=tuple(int(i) for i in match.group("release").split(".")),
272
- pre=_parse_letter_version(match.group("pre_l"), match.group("pre_n")),
273
- post=_parse_letter_version(
274
- match.group("post_l"), match.group("post_n1") or match.group("post_n2")
275
- ),
276
- dev=_parse_letter_version(match.group("dev_l"), match.group("dev_n")),
277
- local=_parse_local_version(match.group("local")),
278
- )
279
-
280
- # Generate a key which will be used for sorting
281
- self._key = _cmpkey(
282
- self._version.epoch,
283
- self._version.release,
284
- self._version.pre,
285
- self._version.post,
286
- self._version.dev,
287
- self._version.local,
288
- )
289
-
290
- def __repr__(self) -> str:
291
- return f"<Version('{self}')>"
292
-
293
- def __str__(self) -> str:
294
- parts = []
295
-
296
- # Epoch
297
- if self.epoch != 0:
298
- parts.append(f"{self.epoch}!")
299
-
300
- # Release segment
301
- parts.append(".".join(str(x) for x in self.release))
302
-
303
- # Pre-release
304
- if self.pre is not None:
305
- parts.append("".join(str(x) for x in self.pre))
306
-
307
- # Post-release
308
- if self.post is not None:
309
- parts.append(f".post{self.post}")
310
-
311
- # Development release
312
- if self.dev is not None:
313
- parts.append(f".dev{self.dev}")
314
-
315
- # Local version segment
316
- if self.local is not None:
317
- parts.append(f"+{self.local}")
318
-
319
- return "".join(parts)
320
-
321
- @property
322
- def epoch(self) -> int:
323
- _epoch: int = self._version.epoch
324
- return _epoch
325
-
326
- @property
327
- def release(self) -> Tuple[int, ...]:
328
- _release: Tuple[int, ...] = self._version.release
329
- return _release
330
-
331
- @property
332
- def pre(self) -> Optional[Tuple[str, int]]:
333
- _pre: Optional[Tuple[str, int]] = self._version.pre
334
- return _pre
335
-
336
- @property
337
- def post(self) -> Optional[int]:
338
- return self._version.post[1] if self._version.post else None
339
-
340
- @property
341
- def dev(self) -> Optional[int]:
342
- return self._version.dev[1] if self._version.dev else None
343
-
344
- @property
345
- def local(self) -> Optional[str]:
346
- if self._version.local:
347
- return ".".join(str(x) for x in self._version.local)
348
- else:
349
- return None
350
-
351
- @property
352
- def public(self) -> str:
353
- return str(self).split("+", 1)[0]
354
-
355
- @property
356
- def base_version(self) -> str:
357
- parts = []
358
-
359
- # Epoch
360
- if self.epoch != 0:
361
- parts.append(f"{self.epoch}!")
362
-
363
- # Release segment
364
- parts.append(".".join(str(x) for x in self.release))
365
-
366
- return "".join(parts)
367
-
368
- @property
369
- def is_prerelease(self) -> bool:
370
- return self.dev is not None or self.pre is not None
371
-
372
- @property
373
- def is_postrelease(self) -> bool:
374
- return self.post is not None
375
-
376
- @property
377
- def is_devrelease(self) -> bool:
378
- return self.dev is not None
379
-
380
- @property
381
- def major(self) -> int:
382
- return self.release[0] if len(self.release) >= 1 else 0
383
-
384
- @property
385
- def minor(self) -> int:
386
- return self.release[1] if len(self.release) >= 2 else 0
387
-
388
- @property
389
- def micro(self) -> int:
390
- return self.release[2] if len(self.release) >= 3 else 0
391
-
392
-
393
- def _parse_letter_version(
394
- letter: str, number: Union[str, bytes, SupportsInt]
395
- ) -> Optional[Tuple[str, int]]:
396
-
397
- if letter:
398
- # We consider there to be an implicit 0 in a pre-release if there is
399
- # not a numeral associated with it.
400
- if number is None:
401
- number = 0
402
-
403
- # We normalize any letters to their lower case form
404
- letter = letter.lower()
405
-
406
- # We consider some words to be alternate spellings of other words and
407
- # in those cases we want to normalize the spellings to our preferred
408
- # spelling.
409
- if letter == "alpha":
410
- letter = "a"
411
- elif letter == "beta":
412
- letter = "b"
413
- elif letter in ["c", "pre", "preview"]:
414
- letter = "rc"
415
- elif letter in ["rev", "r"]:
416
- letter = "post"
417
-
418
- return letter, int(number)
419
- if not letter and number:
420
- # We assume if we are given a number, but we are not given a letter
421
- # then this is using the implicit post release syntax (e.g. 1.0-1)
422
- letter = "post"
423
-
424
- return letter, int(number)
425
-
426
- return None
427
-
428
-
429
- _local_version_separators = re.compile(r"[\._-]")
430
-
431
-
432
- def _parse_local_version(local: str) -> Optional[LocalType]:
433
- """
434
- Takes a string like abc.1.twelve and turns it into ("abc", 1, "twelve").
435
- """
436
- if local is not None:
437
- return tuple(
438
- part.lower() if not part.isdigit() else int(part)
439
- for part in _local_version_separators.split(local)
440
- )
441
- return None
442
-
443
-
444
- def _cmpkey(
445
- epoch: int,
446
- release: Tuple[int, ...],
447
- pre: Optional[Tuple[str, int]],
448
- post: Optional[Tuple[str, int]],
449
- dev: Optional[Tuple[str, int]],
450
- local: Optional[Tuple[SubLocalType]],
451
- ) -> CmpKey:
452
-
453
- # When we compare a release version, we want to compare it with all of the
454
- # trailing zeros removed. So we'll use a reverse the list, drop all the now
455
- # leading zeros until we come to something non zero, then take the rest
456
- # re-reverse it back into the correct order and make it a tuple and use
457
- # that for our sorting key.
458
- _release = tuple(
459
- reversed(list(itertools.dropwhile(lambda x: x == 0, reversed(release))))
460
- )
461
-
462
- # We need to "trick" the sorting algorithm to put 1.0.dev0 before 1.0a0.
463
- # We'll do this by abusing the pre segment, but we _only_ want to do this
464
- # if there is not a pre or a post segment. If we have one of those then
465
- # the normal sorting rules will handle this case correctly.
466
- if pre is None and post is None and dev is not None:
467
- _pre: PrePostDevType = NegativeInfinity
468
- # Versions without a pre-release (except as noted above) should sort after
469
- # those with one.
470
- elif pre is None:
471
- _pre = Infinity
472
- else:
473
- _pre = pre
474
-
475
- # Versions without a post segment should sort before those with one.
476
- if post is None:
477
- _post: PrePostDevType = NegativeInfinity
478
-
479
- else:
480
- _post = post
481
-
482
- # Versions without a development segment should sort after those with one.
483
- if dev is None:
484
- _dev: PrePostDevType = Infinity
485
-
486
- else:
487
- _dev = dev
488
-
489
- if local is None:
490
- # Versions without a local segment should sort before those with one.
491
- _local: LocalType = NegativeInfinity
492
- else:
493
- # Versions with a local segment need that segment parsed to implement
494
- # the sorting rules in PEP440.
495
- # - Alpha numeric segments sort before numeric segments
496
- # - Alpha numeric segments sort lexicographically
497
- # - Numeric segments sort numerically
498
- # - Shorter versions sort before longer versions when the prefixes
499
- # match exactly
500
- _local = tuple(
501
- (i, "") if isinstance(i, int) else (NegativeInfinity, i) for i in local
502
- )
503
-
504
- return epoch, _release, _pre, _post, _dev, _local
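
A short demonstration of the public API defined above, assuming the packaging distribution is installed (again, pip's vendored copy is internal):

# PEP 440 parsing and ordering with the Version class defined above.
from packaging.version import Version, parse

v = Version("1.4.0rc1")
print(v.release, v.pre, v.is_prerelease)                 # (1, 4, 0) ('rc', 1) True
print(Version("1.4.0.dev2") < v < Version("1.4.0"))      # True: dev < rc < final
print(parse("2.0.post1").is_postrelease)                 # True
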
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/errors.py DELETED
@@ -1,127 +0,0 @@
1
- """distutils.errors
2
-
3
- Provides exceptions used by the Distutils modules. Note that Distutils
4
- modules may raise standard exceptions; in particular, SystemExit is
5
- usually raised for errors that are obviously the end-user's fault
6
- (eg. bad command-line arguments).
7
-
8
- This module is safe to use in "from ... import *" mode; it only exports
9
- symbols whose names start with "Distutils" and end with "Error"."""
10
-
11
-
12
- class DistutilsError(Exception):
13
- """The root of all Distutils evil."""
14
-
15
- pass
16
-
17
-
18
- class DistutilsModuleError(DistutilsError):
19
- """Unable to load an expected module, or to find an expected class
20
- within some module (in particular, command modules and classes)."""
21
-
22
- pass
23
-
24
-
25
- class DistutilsClassError(DistutilsError):
26
- """Some command class (or possibly distribution class, if anyone
27
- feels a need to subclass Distribution) is found not to be holding
28
- up its end of the bargain, ie. implementing some part of the
29
- "command "interface."""
30
-
31
- pass
32
-
33
-
34
- class DistutilsGetoptError(DistutilsError):
35
- """The option table provided to 'fancy_getopt()' is bogus."""
36
-
37
- pass
38
-
39
-
40
- class DistutilsArgError(DistutilsError):
41
- """Raised by fancy_getopt in response to getopt.error -- ie. an
42
- error in the command line usage."""
43
-
44
- pass
45
-
46
-
47
- class DistutilsFileError(DistutilsError):
48
- """Any problems in the filesystem: expected file not found, etc.
49
- Typically this is for problems that we detect before OSError
50
- could be raised."""
51
-
52
- pass
53
-
54
-
55
- class DistutilsOptionError(DistutilsError):
56
- """Syntactic/semantic errors in command options, such as use of
57
- mutually conflicting options, or inconsistent options,
58
- badly-spelled values, etc. No distinction is made between option
59
- values originating in the setup script, the command line, config
60
- files, or what-have-you -- but if we *know* something originated in
61
- the setup script, we'll raise DistutilsSetupError instead."""
62
-
63
- pass
64
-
65
-
66
- class DistutilsSetupError(DistutilsError):
67
- """For errors that can be definitely blamed on the setup script,
68
- such as invalid keyword arguments to 'setup()'."""
69
-
70
- pass
71
-
72
-
73
- class DistutilsPlatformError(DistutilsError):
74
- """We don't know how to do something on the current platform (but
75
- we do know how to do it on some platform) -- eg. trying to compile
76
- C files on a platform not supported by a CCompiler subclass."""
77
-
78
- pass
79
-
80
-
81
- class DistutilsExecError(DistutilsError):
82
- """Any problems executing an external program (such as the C
83
- compiler, when compiling C files)."""
84
-
85
- pass
86
-
87
-
88
- class DistutilsInternalError(DistutilsError):
89
- """Internal inconsistencies or impossibilities (obviously, this
90
- should never be seen if the code is working!)."""
91
-
92
- pass
93
-
94
-
95
- class DistutilsTemplateError(DistutilsError):
96
- """Syntax error in a file list template."""
97
-
98
-
99
- class DistutilsByteCompileError(DistutilsError):
100
- """Byte compile error."""
101
-
102
-
103
- # Exception classes used by the CCompiler implementation classes
104
- class CCompilerError(Exception):
105
- """Some compile/link operation failed."""
106
-
107
-
108
- class PreprocessError(CCompilerError):
109
- """Failure to preprocess one or more C/C++ files."""
110
-
111
-
112
- class CompileError(CCompilerError):
113
- """Failure to compile one or more C/C++ source files."""
114
-
115
-
116
- class LibError(CCompilerError):
117
- """Failure to create a static library from one or more C/C++ object
118
- files."""
119
-
120
-
121
- class LinkError(CCompilerError):
122
- """Failure to link one or more C/C++ object files into an executable
123
- or shared library file."""
124
-
125
-
126
- class UnknownFileError(CCompilerError):
127
- """Attempt to process an unknown file type."""
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/tags.py DELETED
@@ -1,487 +0,0 @@
1
- # This file is dual licensed under the terms of the Apache License, Version
2
- # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3
- # for complete details.
4
-
5
- import logging
6
- import platform
7
- import sys
8
- import sysconfig
9
- from importlib.machinery import EXTENSION_SUFFIXES
10
- from typing import (
11
- Dict,
12
- FrozenSet,
13
- Iterable,
14
- Iterator,
15
- List,
16
- Optional,
17
- Sequence,
18
- Tuple,
19
- Union,
20
- cast,
21
- )
22
-
23
- from . import _manylinux, _musllinux
24
-
25
- logger = logging.getLogger(__name__)
26
-
27
- PythonVersion = Sequence[int]
28
- MacVersion = Tuple[int, int]
29
-
30
- INTERPRETER_SHORT_NAMES: Dict[str, str] = {
31
- "python": "py", # Generic.
32
- "cpython": "cp",
33
- "pypy": "pp",
34
- "ironpython": "ip",
35
- "jython": "jy",
36
- }
37
-
38
-
39
- _32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32
40
-
41
-
42
- class Tag:
43
- """
44
- A representation of the tag triple for a wheel.
45
-
46
- Instances are considered immutable and thus are hashable. Equality checking
47
- is also supported.
48
- """
49
-
50
- __slots__ = ["_interpreter", "_abi", "_platform", "_hash"]
51
-
52
- def __init__(self, interpreter: str, abi: str, platform: str) -> None:
53
- self._interpreter = interpreter.lower()
54
- self._abi = abi.lower()
55
- self._platform = platform.lower()
56
- # The __hash__ of every single element in a Set[Tag] will be evaluated each time
57
- # that a set calls its `.disjoint()` method, which may be called hundreds of
58
- # times when scanning a page of links for packages with tags matching that
59
- # Set[Tag]. Pre-computing the value here produces significant speedups for
60
- # downstream consumers.
61
- self._hash = hash((self._interpreter, self._abi, self._platform))
62
-
63
- @property
64
- def interpreter(self) -> str:
65
- return self._interpreter
66
-
67
- @property
68
- def abi(self) -> str:
69
- return self._abi
70
-
71
- @property
72
- def platform(self) -> str:
73
- return self._platform
74
-
75
- def __eq__(self, other: object) -> bool:
76
- if not isinstance(other, Tag):
77
- return NotImplemented
78
-
79
- return (
80
- (self._hash == other._hash) # Short-circuit ASAP for perf reasons.
81
- and (self._platform == other._platform)
82
- and (self._abi == other._abi)
83
- and (self._interpreter == other._interpreter)
84
- )
85
-
86
- def __hash__(self) -> int:
87
- return self._hash
88
-
89
- def __str__(self) -> str:
90
- return f"{self._interpreter}-{self._abi}-{self._platform}"
91
-
92
- def __repr__(self) -> str:
93
- return f"<{self} @ {id(self)}>"
94
-
95
-
96
- def parse_tag(tag: str) -> FrozenSet[Tag]:
97
- """
98
- Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances.
99
-
100
- Returning a set is required due to the possibility that the tag is a
101
- compressed tag set.
102
- """
103
- tags = set()
104
- interpreters, abis, platforms = tag.split("-")
105
- for interpreter in interpreters.split("."):
106
- for abi in abis.split("."):
107
- for platform_ in platforms.split("."):
108
- tags.add(Tag(interpreter, abi, platform_))
109
- return frozenset(tags)
110
-
111
-
112
- def _get_config_var(name: str, warn: bool = False) -> Union[int, str, None]:
113
- value = sysconfig.get_config_var(name)
114
- if value is None and warn:
115
- logger.debug(
116
- "Config variable '%s' is unset, Python ABI tag may be incorrect", name
117
- )
118
- return value
119
-
120
-
121
- def _normalize_string(string: str) -> str:
122
- return string.replace(".", "_").replace("-", "_")
123
-
124
-
125
- def _abi3_applies(python_version: PythonVersion) -> bool:
126
- """
127
- Determine if the Python version supports abi3.
128
-
129
- PEP 384 was first implemented in Python 3.2.
130
- """
131
- return len(python_version) > 1 and tuple(python_version) >= (3, 2)
132
-
133
-
134
- def _cpython_abis(py_version: PythonVersion, warn: bool = False) -> List[str]:
135
- py_version = tuple(py_version) # To allow for version comparison.
136
- abis = []
137
- version = _version_nodot(py_version[:2])
138
- debug = pymalloc = ucs4 = ""
139
- with_debug = _get_config_var("Py_DEBUG", warn)
140
- has_refcount = hasattr(sys, "gettotalrefcount")
141
- # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled
142
- # extension modules is the best option.
143
- # https://github.com/pypa/pip/issues/3383#issuecomment-173267692
144
- has_ext = "_d.pyd" in EXTENSION_SUFFIXES
145
- if with_debug or (with_debug is None and (has_refcount or has_ext)):
146
- debug = "d"
147
- if py_version < (3, 8):
148
- with_pymalloc = _get_config_var("WITH_PYMALLOC", warn)
149
- if with_pymalloc or with_pymalloc is None:
150
- pymalloc = "m"
151
- if py_version < (3, 3):
152
- unicode_size = _get_config_var("Py_UNICODE_SIZE", warn)
153
- if unicode_size == 4 or (
154
- unicode_size is None and sys.maxunicode == 0x10FFFF
155
- ):
156
- ucs4 = "u"
157
- elif debug:
158
- # Debug builds can also load "normal" extension modules.
159
- # We can also assume no UCS-4 or pymalloc requirement.
160
- abis.append(f"cp{version}")
161
- abis.insert(
162
- 0,
163
- "cp{version}{debug}{pymalloc}{ucs4}".format(
164
- version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4
165
- ),
166
- )
167
- return abis
168
-
169
-
170
- def cpython_tags(
171
- python_version: Optional[PythonVersion] = None,
172
- abis: Optional[Iterable[str]] = None,
173
- platforms: Optional[Iterable[str]] = None,
174
- *,
175
- warn: bool = False,
176
- ) -> Iterator[Tag]:
177
- """
178
- Yields the tags for a CPython interpreter.
179
-
180
- The tags consist of:
181
- - cp<python_version>-<abi>-<platform>
182
- - cp<python_version>-abi3-<platform>
183
- - cp<python_version>-none-<platform>
184
- - cp<less than python_version>-abi3-<platform> # Older Python versions down to 3.2.
185
-
186
- If python_version only specifies a major version then user-provided ABIs and
187
- the 'none' ABI tag will be used.
188
-
189
- If 'abi3' or 'none' are specified in 'abis' then they will be yielded at
190
- their normal position and not at the beginning.
191
- """
192
- if not python_version:
193
- python_version = sys.version_info[:2]
194
-
195
- interpreter = f"cp{_version_nodot(python_version[:2])}"
196
-
197
- if abis is None:
198
- if len(python_version) > 1:
199
- abis = _cpython_abis(python_version, warn)
200
- else:
201
- abis = []
202
- abis = list(abis)
203
- # 'abi3' and 'none' are explicitly handled later.
204
- for explicit_abi in ("abi3", "none"):
205
- try:
206
- abis.remove(explicit_abi)
207
- except ValueError:
208
- pass
209
-
210
- platforms = list(platforms or platform_tags())
211
- for abi in abis:
212
- for platform_ in platforms:
213
- yield Tag(interpreter, abi, platform_)
214
- if _abi3_applies(python_version):
215
- yield from (Tag(interpreter, "abi3", platform_) for platform_ in platforms)
216
- yield from (Tag(interpreter, "none", platform_) for platform_ in platforms)
217
-
218
- if _abi3_applies(python_version):
219
- for minor_version in range(python_version[1] - 1, 1, -1):
220
- for platform_ in platforms:
221
- interpreter = "cp{version}".format(
222
- version=_version_nodot((python_version[0], minor_version))
223
- )
224
- yield Tag(interpreter, "abi3", platform_)
225
-
226
-
227
- def _generic_abi() -> Iterator[str]:
228
- abi = sysconfig.get_config_var("SOABI")
229
- if abi:
230
- yield _normalize_string(abi)
231
-
232
-
233
- def generic_tags(
234
- interpreter: Optional[str] = None,
235
- abis: Optional[Iterable[str]] = None,
236
- platforms: Optional[Iterable[str]] = None,
237
- *,
238
- warn: bool = False,
239
- ) -> Iterator[Tag]:
240
- """
241
- Yields the tags for a generic interpreter.
242
-
243
- The tags consist of:
244
- - <interpreter>-<abi>-<platform>
245
-
246
- The "none" ABI will be added if it was not explicitly provided.
247
- """
248
- if not interpreter:
249
- interp_name = interpreter_name()
250
- interp_version = interpreter_version(warn=warn)
251
- interpreter = "".join([interp_name, interp_version])
252
- if abis is None:
253
- abis = _generic_abi()
254
- platforms = list(platforms or platform_tags())
255
- abis = list(abis)
256
- if "none" not in abis:
257
- abis.append("none")
258
- for abi in abis:
259
- for platform_ in platforms:
260
- yield Tag(interpreter, abi, platform_)
261
-
262
-
263
- def _py_interpreter_range(py_version: PythonVersion) -> Iterator[str]:
264
- """
265
- Yields Python versions in descending order.
266
-
267
- After the latest version, the major-only version will be yielded, and then
268
- all previous versions of that major version.
269
- """
270
- if len(py_version) > 1:
271
- yield f"py{_version_nodot(py_version[:2])}"
272
- yield f"py{py_version[0]}"
273
- if len(py_version) > 1:
274
- for minor in range(py_version[1] - 1, -1, -1):
275
- yield f"py{_version_nodot((py_version[0], minor))}"
276
-
277
-
278
- def compatible_tags(
279
- python_version: Optional[PythonVersion] = None,
280
- interpreter: Optional[str] = None,
281
- platforms: Optional[Iterable[str]] = None,
282
- ) -> Iterator[Tag]:
283
- """
284
- Yields the sequence of tags that are compatible with a specific version of Python.
285
-
286
- The tags consist of:
287
- - py*-none-<platform>
288
- - <interpreter>-none-any # ... if `interpreter` is provided.
289
- - py*-none-any
290
- """
291
- if not python_version:
292
- python_version = sys.version_info[:2]
293
- platforms = list(platforms or platform_tags())
294
- for version in _py_interpreter_range(python_version):
295
- for platform_ in platforms:
296
- yield Tag(version, "none", platform_)
297
- if interpreter:
298
- yield Tag(interpreter, "none", "any")
299
- for version in _py_interpreter_range(python_version):
300
- yield Tag(version, "none", "any")
301
-
302
-
303
- def _mac_arch(arch: str, is_32bit: bool = _32_BIT_INTERPRETER) -> str:
304
- if not is_32bit:
305
- return arch
306
-
307
- if arch.startswith("ppc"):
308
- return "ppc"
309
-
310
- return "i386"
311
-
312
-
313
- def _mac_binary_formats(version: MacVersion, cpu_arch: str) -> List[str]:
314
- formats = [cpu_arch]
315
- if cpu_arch == "x86_64":
316
- if version < (10, 4):
317
- return []
318
- formats.extend(["intel", "fat64", "fat32"])
319
-
320
- elif cpu_arch == "i386":
321
- if version < (10, 4):
322
- return []
323
- formats.extend(["intel", "fat32", "fat"])
324
-
325
- elif cpu_arch == "ppc64":
326
- # TODO: Need to care about 32-bit PPC for ppc64 through 10.2?
327
- if version > (10, 5) or version < (10, 4):
328
- return []
329
- formats.append("fat64")
330
-
331
- elif cpu_arch == "ppc":
332
- if version > (10, 6):
333
- return []
334
- formats.extend(["fat32", "fat"])
335
-
336
- if cpu_arch in {"arm64", "x86_64"}:
337
- formats.append("universal2")
338
-
339
- if cpu_arch in {"x86_64", "i386", "ppc64", "ppc", "intel"}:
340
- formats.append("universal")
341
-
342
- return formats
343
-
344
-
345
- def mac_platforms(
346
- version: Optional[MacVersion] = None, arch: Optional[str] = None
347
- ) -> Iterator[str]:
348
- """
349
- Yields the platform tags for a macOS system.
350
-
351
- The `version` parameter is a two-item tuple specifying the macOS version to
352
- generate platform tags for. The `arch` parameter is the CPU architecture to
353
- generate platform tags for. Both parameters default to the appropriate value
354
- for the current system.
355
- """
356
- version_str, _, cpu_arch = platform.mac_ver()
357
- if version is None:
358
- version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2])))
359
- else:
360
- version = version
361
- if arch is None:
362
- arch = _mac_arch(cpu_arch)
363
- else:
364
- arch = arch
365
-
366
- if (10, 0) <= version and version < (11, 0):
367
- # Prior to Mac OS 11, each yearly release of Mac OS bumped the
368
- # "minor" version number. The major version was always 10.
369
- for minor_version in range(version[1], -1, -1):
370
- compat_version = 10, minor_version
371
- binary_formats = _mac_binary_formats(compat_version, arch)
372
- for binary_format in binary_formats:
373
- yield "macosx_{major}_{minor}_{binary_format}".format(
374
- major=10, minor=minor_version, binary_format=binary_format
375
- )
376
-
377
- if version >= (11, 0):
378
- # Starting with Mac OS 11, each yearly release bumps the major version
379
- # number. The minor versions are now the midyear updates.
380
- for major_version in range(version[0], 10, -1):
381
- compat_version = major_version, 0
382
- binary_formats = _mac_binary_formats(compat_version, arch)
383
- for binary_format in binary_formats:
384
- yield "macosx_{major}_{minor}_{binary_format}".format(
385
- major=major_version, minor=0, binary_format=binary_format
386
- )
387
-
388
- if version >= (11, 0):
389
- # Mac OS 11 on x86_64 is compatible with binaries from previous releases.
390
- # Arm64 support was introduced in 11.0, so no Arm binaries from previous
391
- # releases exist.
392
- #
393
- # However, the "universal2" binary format can have a
394
- # macOS version earlier than 11.0 when the x86_64 part of the binary supports
395
- # that version of macOS.
396
- if arch == "x86_64":
397
- for minor_version in range(16, 3, -1):
398
- compat_version = 10, minor_version
399
- binary_formats = _mac_binary_formats(compat_version, arch)
400
- for binary_format in binary_formats:
401
- yield "macosx_{major}_{minor}_{binary_format}".format(
402
- major=compat_version[0],
403
- minor=compat_version[1],
404
- binary_format=binary_format,
405
- )
406
- else:
407
- for minor_version in range(16, 3, -1):
408
- compat_version = 10, minor_version
409
- binary_format = "universal2"
410
- yield "macosx_{major}_{minor}_{binary_format}".format(
411
- major=compat_version[0],
412
- minor=compat_version[1],
413
- binary_format=binary_format,
414
- )
415
-
416
-
417
- def _linux_platforms(is_32bit: bool = _32_BIT_INTERPRETER) -> Iterator[str]:
418
- linux = _normalize_string(sysconfig.get_platform())
419
- if is_32bit:
420
- if linux == "linux_x86_64":
421
- linux = "linux_i686"
422
- elif linux == "linux_aarch64":
423
- linux = "linux_armv7l"
424
- _, arch = linux.split("_", 1)
425
- yield from _manylinux.platform_tags(linux, arch)
426
- yield from _musllinux.platform_tags(arch)
427
- yield linux
428
-
429
-
430
- def _generic_platforms() -> Iterator[str]:
431
- yield _normalize_string(sysconfig.get_platform())
432
-
433
-
434
- def platform_tags() -> Iterator[str]:
435
- """
436
- Provides the platform tags for this installation.
437
- """
438
- if platform.system() == "Darwin":
439
- return mac_platforms()
440
- elif platform.system() == "Linux":
441
- return _linux_platforms()
442
- else:
443
- return _generic_platforms()
444
-
445
-
446
- def interpreter_name() -> str:
447
- """
448
- Returns the name of the running interpreter.
449
- """
450
- name = sys.implementation.name
451
- return INTERPRETER_SHORT_NAMES.get(name) or name
452
-
453
-
454
- def interpreter_version(*, warn: bool = False) -> str:
455
- """
456
- Returns the version of the running interpreter.
457
- """
458
- version = _get_config_var("py_version_nodot", warn=warn)
459
- if version:
460
- version = str(version)
461
- else:
462
- version = _version_nodot(sys.version_info[:2])
463
- return version
464
-
465
-
466
- def _version_nodot(version: PythonVersion) -> str:
467
- return "".join(map(str, version))
468
-
469
-
470
- def sys_tags(*, warn: bool = False) -> Iterator[Tag]:
471
- """
472
- Returns the sequence of tag triples for the running interpreter.
473
-
474
- The order of the sequence corresponds to priority order for the
475
- interpreter, from most to least important.
476
- """
477
-
478
- interp_name = interpreter_name()
479
- if interp_name == "cp":
480
- yield from cpython_tags(warn=warn)
481
- else:
482
- yield from generic_tags()
483
-
484
- if interp_name == "pp":
485
- yield from compatible_tags(interpreter="pp3")
486
- else:
487
- yield from compatible_tags()
 
spaces/Audio-AGI/WavJourney/convert_json_to_audio_gen_code.py DELETED
@@ -1,30 +0,0 @@
1
- import argparse
2
- import os
3
- import json5
4
- from pathlib import Path
5
- from code_generator import AudioCodeGenerator
6
-
7
-
8
- def main():
9
- parser = argparse.ArgumentParser()
10
- parser.add_argument("--script", help="Path to the json script file")
11
- parser.add_argument("--character-to-voice-map", help="Path to the character-to-voice mapping CSV file")
12
- parser.add_argument(
13
- "--path",
14
- type=str,
15
- default=".",
16
- help="Path of all the output wav files to be created by the generated code, default: current path"
17
- )
18
- args = parser.parse_args()
19
-
20
- if not os.path.isfile(args.script):
21
- print(f"File {args.script} does not exist.")
22
- return
23
-
24
- output_path = Path(args.path)
25
- audio_code_generator = AudioCodeGenerator()
26
- code = audio_code_generator.parse_and_generate(args.script, args.character_to_voice_map, output_path)
27
- print(code)
28
-
29
- if __name__ == "__main__":
30
- main()
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/COCO-Keypoints/keypoint_rcnn_R_50_FPN_1x.py DELETED
@@ -1,8 +0,0 @@
1
- from ..common.optim import SGD as optimizer
2
- from ..common.coco_schedule import lr_multiplier_1x as lr_multiplier
3
- from ..common.data.coco_keypoint import dataloader
4
- from ..common.models.keypoint_rcnn_fpn import model
5
- from ..common.train import train
6
-
7
- model.backbone.bottom_up.freeze_at = 2
8
- train.init_checkpoint = "detectron2://ImageNetPretrained/MSRA/R-50.pkl"
 
spaces/Banbri/zcvzcv/src/lib/loadImageToCanvas.ts DELETED
@@ -1,28 +0,0 @@
1
- export async function loadImageToCanvas(imageBase64: string): Promise<HTMLCanvasElement> {
2
- return new Promise((resolve, reject) => {
3
- // create a new image object
4
- let img = new Image();
5
- // specify a function to run when the image is fully loaded
6
- img.onload = () => {
7
- // create a canvas element
8
- let canvas = document.createElement('canvas');
9
- canvas.width = img.width;
10
- canvas.height = img.height;
11
- // get the context of the canvas
12
- let ctx = canvas.getContext('2d');
13
- if (ctx) {
14
- // draw the image into the canvas
15
- ctx.drawImage(img, 0, 0);
16
- // resolve the promise with the canvas
17
- resolve(canvas);
18
- } else {
19
- reject('Error creating the context of canvas');
20
- }
21
- };
22
- // specify a function to run when the image could not be loaded
23
- img.onerror = () => {
24
- reject('Image could not be loaded');
25
- };
26
- img.src = imageBase64; // must be a data;image/.... prefixed URL string
27
- });
28
- }
 
spaces/BatuhanYilmaz/Whisper-Auto-Subtitled-Video-Generator/pages/02_📼_Upload_Video_File.py DELETED
@@ -1,230 +0,0 @@
1
- import whisper
2
- import streamlit as st
3
- from streamlit_lottie import st_lottie
4
- from utils import write_vtt, write_srt
5
- import ffmpeg
6
- import requests
7
- from typing import Iterator
8
- from io import StringIO
9
- import numpy as np
10
- import pathlib
11
- import os
12
-
13
- st.set_page_config(page_title="Auto Subtitled Video Generator", page_icon=":movie_camera:", layout="wide")
14
-
15
- # Define a function that we can use to load lottie files from a link.
16
- @st.cache(allow_output_mutation=True)
17
- def load_lottieurl(url: str):
18
- r = requests.get(url)
19
- if r.status_code != 200:
20
- return None
21
- return r.json()
22
-
23
-
24
- APP_DIR = pathlib.Path(__file__).parent.absolute()
25
-
26
- LOCAL_DIR = APP_DIR / "local"
27
- LOCAL_DIR.mkdir(exist_ok=True)
28
- save_dir = LOCAL_DIR / "output"
29
- save_dir.mkdir(exist_ok=True)
30
-
31
-
32
- loaded_model = whisper.load_model("base")
33
- current_size = "None"
34
-
35
-
36
- col1, col2 = st.columns([1, 3])
37
- with col1:
38
- lottie = load_lottieurl("https://assets1.lottiefiles.com/packages/lf20_HjK9Ol.json")
39
- st_lottie(lottie)
40
-
41
- with col2:
42
- st.write("""
43
- ## Auto Subtitled Video Generator
44
- ##### Upload a video file and get a video with subtitles.
45
- ###### ➠ If you want to transcribe the video in its original language, select the task as "Transcribe"
46
- ###### ➠ If you want to translate the subtitles to English, select the task as "Translate"
47
- ###### I recommend starting with the base model and then experimenting with the larger models, the small and medium models often work well. """)
48
-
49
-
50
- @st.cache(allow_output_mutation=True)
51
- def change_model(current_size, size):
52
- if current_size != size:
53
- loaded_model = whisper.load_model(size)
54
- return loaded_model
55
- else:
56
- raise Exception("Model size is the same as the current size.")
57
-
58
-
59
- @st.cache(allow_output_mutation=True)
60
- def inferecence(loaded_model, uploaded_file, task):
61
- with open(f"{save_dir}/input.mp4", "wb") as f:
62
- f.write(uploaded_file.read())
63
- audio = ffmpeg.input(f"{save_dir}/input.mp4")
64
- audio = ffmpeg.output(audio, f"{save_dir}/output.wav", acodec="pcm_s16le", ac=1, ar="16k")
65
- ffmpeg.run(audio, overwrite_output=True)
66
- if task == "Transcribe":
67
- options = dict(task="transcribe", best_of=5)
68
- results = loaded_model.transcribe(f"{save_dir}/output.wav", **options)
69
- vtt = getSubs(results["segments"], "vtt", 80)
70
- srt = getSubs(results["segments"], "srt", 80)
71
- lang = results["language"]
72
- return results["text"], vtt, srt, lang
73
- elif task == "Translate":
74
- options = dict(task="translate", best_of=5)
75
- results = loaded_model.transcribe(f"{save_dir}/output.wav", **options)
76
- vtt = getSubs(results["segments"], "vtt", 80)
77
- srt = getSubs(results["segments"], "srt", 80)
78
- lang = results["language"]
79
- return results["text"], vtt, srt, lang
80
- else:
81
- raise ValueError("Task not supported")
82
-
83
-
84
- def getSubs(segments: Iterator[dict], format: str, maxLineWidth: int) -> str:
85
- segmentStream = StringIO()
86
-
87
- if format == 'vtt':
88
- write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
89
- elif format == 'srt':
90
- write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
91
- else:
92
- raise Exception("Unknown format " + format)
93
-
94
- segmentStream.seek(0)
95
- return segmentStream.read()
96
-
97
-
98
- def generate_subtitled_video(video, audio, transcript):
99
- video_file = ffmpeg.input(video)
100
- audio_file = ffmpeg.input(audio)
101
- ffmpeg.concat(video_file.filter("subtitles", transcript), audio_file, v=1, a=1).output("final.mp4").run(quiet=True, overwrite_output=True)
102
- video_with_subs = open("final.mp4", "rb")
103
- return video_with_subs
104
-
105
-
106
- def main():
107
- size = st.selectbox("Select Model Size (The larger the model, the more accurate the transcription will be, but it will take longer)", ["tiny", "base", "small", "medium", "large"], index=1)
108
- loaded_model = change_model(current_size, size)
109
- st.write(f"Model is {'multilingual' if loaded_model.is_multilingual else 'English-only'} "
110
- f"and has {sum(np.prod(p.shape) for p in loaded_model.parameters()):,} parameters.")
111
- input_file = st.file_uploader("File", type=["mp4", "avi", "mov", "mkv"])
112
- # get the name of the input_file
113
- if input_file is not None:
114
- filename = input_file.name[:-4]
115
- else:
116
- filename = None
117
- task = st.selectbox("Select Task", ["Transcribe", "Translate"], index=0)
118
- if task == "Transcribe":
119
- if st.button("Transcribe"):
120
- results = inferecence(loaded_model, input_file, task)
121
- col3, col4 = st.columns(2)
122
- col5, col6, col7, col8 = st.columns(4)
123
- col9, col10 = st.columns(2)
124
- with col3:
125
- st.video(input_file)
126
-
127
- with open("transcript.txt", "w+", encoding='utf8') as f:
128
- f.writelines(results[0])
129
- f.close()
130
- with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
131
- datatxt = f.read()
132
-
133
- with open("transcript.vtt", "w+",encoding='utf8') as f:
134
- f.writelines(results[1])
135
- f.close()
136
- with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
137
- datavtt = f.read()
138
-
139
- with open("transcript.srt", "w+",encoding='utf8') as f:
140
- f.writelines(results[2])
141
- f.close()
142
- with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
143
- datasrt = f.read()
144
-
145
- with col5:
146
- st.download_button(label="Download Transcript (.txt)",
147
- data=datatxt,
148
- file_name="transcript.txt")
149
- with col6:
150
- st.download_button(label="Download Transcript (.vtt)",
151
- data=datavtt,
152
- file_name="transcript.vtt")
153
- with col7:
154
- st.download_button(label="Download Transcript (.srt)",
155
- data=datasrt,
156
- file_name="transcript.srt")
157
- with col9:
158
- st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
159
- with col10:
160
- st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
161
-
162
- with col4:
163
- with st.spinner("Generating Subtitled Video"):
164
- video_with_subs = generate_subtitled_video(f"{save_dir}/input.mp4", f"{save_dir}/output.wav", "transcript.srt")
165
- st.video(video_with_subs)
166
- st.snow()
167
- with col8:
168
- st.download_button(label="Download Video with Subtitles",
169
- data=video_with_subs,
170
- file_name=f"{filename}_with_subs.mp4")
171
- elif task == "Translate":
172
- if st.button("Translate to English"):
173
- results = inferecence(loaded_model, input_file, task)
174
- col3, col4 = st.columns(2)
175
- col5, col6, col7, col8 = st.columns(4)
176
- col9, col10 = st.columns(2)
177
- with col3:
178
- st.video(input_file)
179
-
180
- with open("transcript.txt", "w+", encoding='utf8') as f:
181
- f.writelines(results[0])
182
- f.close()
183
- with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
184
- datatxt = f.read()
185
-
186
- with open("transcript.vtt", "w+",encoding='utf8') as f:
187
- f.writelines(results[1])
188
- f.close()
189
- with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
190
- datavtt = f.read()
191
-
192
- with open("transcript.srt", "w+",encoding='utf8') as f:
193
- f.writelines(results[2])
194
- f.close()
195
- with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
196
- datasrt = f.read()
197
-
198
- with col5:
199
- st.download_button(label="Download Transcript (.txt)",
200
- data=datatxt,
201
- file_name="transcript.txt")
202
- with col6:
203
- st.download_button(label="Download Transcript (.vtt)",
204
- data=datavtt,
205
- file_name="transcript.vtt")
206
- with col7:
207
- st.download_button(label="Download Transcript (.srt)",
208
- data=datasrt,
209
- file_name="transcript.srt")
210
- with col9:
211
- st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
212
- with col10:
213
- st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
214
-
215
- with col4:
216
- with st.spinner("Generating Subtitled Video"):
217
- video_with_subs = generate_subtitled_video(f"{save_dir}/input.mp4", f"{save_dir}/output.wav", "transcript.srt")
218
- st.video(video_with_subs)
219
- st.snow()
220
- with col8:
221
- st.download_button(label="Download Video with Subtitles ",
222
- data=video_with_subs,
223
- file_name=f"{filename}_with_subs.mp4")
224
- else:
225
- st.error("Please select a task.")
226
-
227
-
228
- if __name__ == "__main__":
229
- main()
230
- st.markdown("###### Made with :heart: by [@BatuhanYılmaz](https://twitter.com/batuhan3326) [![this is an image link](https://i.imgur.com/thJhzOO.png)](https://www.buymeacoffee.com/batuhanylmz)")
 
spaces/Benson/text-generation/Examples/Alors On Danse Remix.md DELETED
@@ -1,39 +0,0 @@
1
- <br />
2
- <h1>Alors On Danse: Cómo descargar y disfrutar de los remixes de Stromae’s Hit Song</h1>
3
- <p>Si te gusta la música dance, probablemente hayas oído hablar de Alors On Danse, la canción pegadiza de la cantante belga Stromae. Pero, ¿sabías que hay muchos remixes de esta canción que pueden hacerte bailar aún más? En este artículo, te contaremos todo lo que necesitas saber sobre Alors On Danse, sus remezclas y cómo descargarlas y disfrutarlas. </p>
4
- <h2>alors on danse скачать remix</h2><br /><p><b><b>Download File</b> &#10003;&#10003;&#10003; <a href="https://bltlly.com/2v6Mf4">https://bltlly.com/2v6Mf4</a></b></p><br /><br />
5
- <h2>¿Qué es Alors On Danse y por qué es tan popular? </h2>
6
- <p>Alors On Danse (que significa "So We Dance" en francés) es una canción de Stromae, un cantante, rapero, compositor y productor belga. Es principalmente conocido por su música mezclando hip hop y música electrónica. </p>
7
- <h3>La canción original de Stromae</h3>
8
- <p>La canción fue lanzada en 2009 como el primer sencillo de su álbum debut Cheese. Fue escrita y producida por el propio Stromae, quien también canta en francés. La canción tiene una melodía simple pero pegadiza, un ritmo pulsante y un estribillo que repite la frase "alors on danse". </p>
9
- <h3>El significado y el mensaje de la letra</h3>
10
- <p>Las letras de la canción tratan sobre las dificultades y luchas de la vida, como el trabajo, el dinero, el estrés, la soledad, el divorcio y la muerte. Stromae canta que la gente trata de escapar de sus problemas bailando, bebiendo y de fiesta, pero todavía están infelices y deprimidos. También critica la superficialidad y la hipocresía de la sociedad, donde la gente pretende ser feliz y exitosa, pero en realidad son miserables y vacíos. </p>
11
- <p></p>
12
- <h3>El éxito global y el impacto de la canción</h3>
13
- <p>La canción se convirtió en un gran éxito en Europa y más allá, alcanzando el número uno en varios países, como Francia, Bélgica, Alemania, Italia, España, Suiza, Austria, Dinamarca, Rumania, Grecia, Turquía, Israel y Marruecos. También recibió críticas positivas de críticos y fans por igual, que elogiaron su melodía pegadiza, sus letras ingeniosas y su comentario social. </p>
14
-
15
- <h2>¿Cuáles son los remixes de Alors en Danse y cómo encontrarlos? </h2>
16
- <p>Debido a su popularidad y atractivo, Alors On Danse ha sido remezclado por muchos artistas y DJs diferentes a lo largo de los años. Algunos de estos remixes se han vuelto muy populares, añadiendo nuevos giros y sabores a la canción original. </p>
17
- <h3>El remix de Dubdogz y su vídeo oficial</h3>
18
- <p>Uno de los remixes más famosos de Alors On Danse es el de Dubdogz, un dúo brasileño de productores de música electrónica. Lanzaron su remix en 2018 bajo el sello Musical Freedom Records. Su remix añade un profundo ambiente house a la canción, con una línea de bajo groovy, un riff de guitarra funky y algunos efectos vocales. El remix es muy pegadizo y bailable, y tiene más de 100 millones de visitas en YouTube. </p>
19
- <p>El video oficial del remix cuenta con el propio Stromae, que aparece en varias escenas bailando y cantando junto al remix. También interactúa con algunos bailarines y fans, que se unen a él en su diversión y alegría. El video es muy colorido y enérgico, y muestra el carisma y el humor de Stromae. </p>
20
- <h3>Otros remixes de diferentes artistas y DJs</h3>
21
- <p>Hay muchos otros remixes de Alors On Danse que puedes encontrar en línea. Algunos de ellos son de reconocidos artistas y DJs, como Kanye West, Pitbull, Sean Paul, David Guetta, Afrojack, Dimitri Vegas & Like Mike, Kungs y Martin Solveig. Estos remixes añaden diferentes géneros y estilos a la canción, como rap, reggaeton, pop, electro, house, trap y tropical. Algunos de ellos también cuentan con nuevos versos o letras en inglés o español.</p>
22
- <p>Algunos de los remixes son de artistas y DJs menos conocidos o emergentes, que ponen su propio giro y creatividad a la canción. Algunos ejemplos son los remixes de Jaxx & Vega, Keanu Silva, Dastic & Tommy Jayden, Rudeejay & Da Brozz x Luis Rodriguez, y DJ Dark & Mentol. Estos remixes ofrecen sonidos frescos y originales y ritmos para la canción, como sala grande, casa futura, casa progresiva, rebote y casa profunda. </p>
23
-
24
- <p>Si desea escuchar o descargar los remixes de Alors On Danse en línea, hay varias maneras de hacerlo. Aquí hay algunos consejos:</p>
25
- - Utilice un servicio o aplicación de transmisión de música, como Spotify, Apple Music, YouTube Music, Deezer, SoundCloud o Shazam. Estos servicios tienen un gran catálogo de canciones y remixes que puedes transmitir o descargar en tu dispositivo. También puedes crear tus propias listas de reproducción con tus remezclas favoritas de Alors On Danse. - Usa un servicio o aplicación de transmisión de video, como YouTube, Vimeo, Dailymotion o TikTok. Estos servicios tienen una gran cantidad de vídeos de Alors On Danse remixes que puede ver o descargar en su dispositivo. También puedes ver los videos oficiales de algunos remixes, así como videos o portadas hechas por fans. - Utilice un servicio o aplicación de descarga de música, como MP3Juices, Zippyshare o Tubidy. Estos servicios le permiten descargar archivos MP3 de Alors On Danse remixes de forma gratuita en su dispositivo. También puede convertir vídeos de YouTube a archivos MP3 utilizando estos servicios. - Utilice un motor de búsqueda, como Google, Bing o DuckDuckGo. Estos motores pueden ayudarle a encontrar sitios web o blogs que ofrecen remixes de Alors On Danse para descargar o transmitir. También puedes usar palabras clave como "Alors On Danse remix download", "Alors On Danse remix mp3", o "Alors On Danse remix free" para reducir tus resultados de búsqueda. <h2>¿Cómo disfrutar de Alors On Danse Remixes en casa o en una fiesta? </h2>
26
- <p>Ahora que ha encontrado y descargado sus remezclas favoritas de Alors On Danse, es posible que se pregunte cómo disfrutarlas en casa o en una fiesta. Estos son algunos consejos:</p>
27
- <h3>Consejos para crear una lista de reproducción con Alors On Danse remixes</h3>
28
- <p>Si desea crear una lista de reproducción con Alors On Danse remixes, debe considerar los siguientes factores:</p>
29
-
30
- <p>Si quieres bailar con Alors On Danse remixes, debes considerar los siguientes factores:</p>
31
- - El ritmo y el ritmo de los remixes. ¿Qué tan rápidos o lentos son los remixes con los que quieres bailar? ¿Cómo coinciden con tu estilo y ritmo de baile preferido? ¿Quieres bailar al ritmo o a la melodía de los remixes? ¿Quieres seguir las letras o improvisar tus propios movimientos? - El espacio y el equipamiento para bailar. ¿Cuánto espacio tienes para bailar? ¿Cuán cómodo y seguro es el piso y el área circundante? ¿Tiene un buen sistema de altavoces o auriculares para escuchar los remixes? ¿Tienes un espejo o una cámara para mirarte o grabar tu baile? - El estado de ánimo y la actitud para bailar. ¿Cómo te sientes cuando bailas a Alors On Danse remixes? ¿Te sientes feliz y enérgico, o triste y deprimido? ¿Bailas para expresarte o para impresionar a otros? ¿Bailas solo o con otros? ¿Te diviertes o te lo tomas en serio? <h3>Consejos para cantar a lo largo de Alors On Danse remixes</h3>
32
- <p>Si quieres cantar junto a Alors On Danse remixes, debes considerar los siguientes factores:</p>
33
-
34
- <p>Alors On Danse es una gran canción de Stromae que tiene muchos remixes que pueden hacerte bailar y cantar aún más. En este artículo, te hemos mostrado lo que es Alors On Danse, por qué es tan popular, cuáles son algunos de sus mejores remixes, cómo encontrarlos y descargarlos en línea, y cómo disfrutarlos en casa o en una fiesta. Esperamos que este artículo haya sido útil e informativo para ti, y que hayas aprendido algo nuevo sobre Alors On Danse y sus remixes. </p>
35
- <p>Ahora que ha leído este artículo, ¿por qué no intenta escuchar algunos remixes de Alors On Danse usted mismo? Puedes descubrir una nueva canción favorita o una nueva forma de divertirte. Y recuerda, como dice Stromae, "alors on danse"! </p>
36
- <h2>Preguntas frecuentes</h2>
37
- <p>Aquí hay algunas preguntas frecuentes sobre Alors On Danse y sus remixes:</p> 64aa2da5cf<br />
38
- <br />
39
- <br />
 
@@ -1,86 +0,0 @@
1
-
2
- <h1> monkey pox Photo: Everything you need to know about the unusual disease.</h1>
3
- <h2>Introduction </h2>
4
- <p>describe what monkey pox is, why it is relevant and what the purpose of the article is.</p>
5
- <h2>apkoppor bild</h2><br /><p><b><b>Download Zip</b> &middot;&middot;&middot; <a href="https://bltlly.com/2v6Kt6">https://bltlly.com/2v6Kt6</a></b></p><br /><br />
6
- <h2>What is monkey pox?</h2>
7
- <p>explain what monkey pox is to a viral disease, how it is caused and how it is spread.</p>
8
- <h3> monkey pox and smallpox </h3>
9
- <p>Compare monkey pox with smallpox and explain the differences and similarities.</p>
10
- <h2>How do you recognize monkey pox?</h2>
11
- <p>describe the symptoms of appox and how they differ from other skin diseases.</p>
12
- <h3> monkey pox Image: This is what the wind looks like </h3>
13
- <p>Show pictures of what the blisters look like in monkey pox and where they usually occur on the body.</p>
14
- <h2>How to treat monkey pox?</h2>
15
- <p>tell if there is any treatment or vaccine against appox and how to relieve the symptoms.</p>
16
- <p></p>
17
- <h3> monkey pox in Sweden </h3>
18
- <p>Information of the first case of monkey pox In Sweden and how the authorities handle the situation.</p>
19
- <h2>: How to prevent monkey pox?</h2>
20
- <p>Ge tips on how to reduce the risk of being infected by monkey pox or spreading the infection.</p>
21
- <h3> monkey pox and resort.</h3>
22
- <p> ask about what to consider if you plan to travel to a country where monkey pox see also: </p>
23
- <h2>final bet </h2>
24
- <p>Combine the article and give a concluding comment about apkopplor.</p>
25
- <h2>regular questions about apkoppor</h2>
26
- <ul>
27
- <li> <h4> means </h4> </li>
28
- <li> <p>Svar on where the name comes from and what it means.</p> </li>
29
- <li> <h4>: How to infect monkey pox between humans?</h4> </li>
30
- <li> <p>: What situations can lead to infection between people and how much risk of infection see also: </p> </li>
31
- <li> <h4>How long is one contagious with monkey pox?</h4> </li>
32
- <li> <p>Svara on how long you can spread the infection after you become ill and when you can be considered healthy.</p> </li>
33
- <li> <h4> can you get monkey cups more than once?</h4> </li>
34
-
35
- <li> <h4> can animals get monkey pox?</h4> </li>
36
- <li> <p>Svara on which animals can be affected by monkey pox and how they can infect humans or other animals.</p> </li>
37
- </ul>
38
- Here is the article based on the outline: <h1> monkey pox Everything you need to know about the rare disease.</h1>
39
- <h2>Introduction </h2>
40
- <p> monkey pox is a viral disease that causes blisters on the skin. It is rare in Sweden, but has recently been noticed after a case was discovered in the country. Monkey pox can be serious for people with compromised immune systems or other diseases. In this article, we’ll tell you more about what monkey pox is, how to recognize it, how to treat it, and how to prevent it. We will also show pictures of what the blisters look like and answer some common questions about monkey pox.</p>
41
- <h2>What is monkey pox?</h2>
42
- <p> monkey pox is a viral disease caused by a virus called orthopox virus. There are several types of orthopox virus, but what causes appox is called monkeypox virus. It is a zoonotic virus, which means it can infect animals and humans. Monkey pox is not the same as smallpox, which is another type of orthopox virus that has been eradicated since the 1980s. Monkey pox was first discovered in Africa in the 1970th century.</p>
43
- <h3> monkey pox and smallpox </h3>
44
- <p> monkey pox and smallpox have some similarities, but also some differences. Both cause blisters on the skin that can leave scars after they heal. Both can also cause fever, headache, muscle aches and fatigue. However, appox is usually milder than smallpox and has a lower mortality rate. Monkey pox is also spread less effectively between humans than smallpox. In addition, people who have been vaccinated against smallpox may have some protection against smallpox.</p>
45
- <h2>How do you recognize monkey pox?</h2>
46
-
47
- <h3> monkey pox Image: This is what the wind looks like </h3>
48
- <p>Hhere are some pictures of what the blisters look like in monkey pox:</p>
49
- <table>
50
- <tr>
51
- <td> <img src="" alt="Blåsor på ansiktet"> </td>
52
- <td> <img src="" alt="Blåsor på handen"> </td>
53
- <td> <img src="" alt="Blåsor på foten"> </td>
54
- </tr>
55
- <tr>
56
- <td>Blocker on faces</td>
57
- <td>Blocker on hands</td>
58
- <td>Blocker on </td>
59
- </tr>
60
- </table>
61
- <p>: It is important not to confuse monkey pox with other skin diseases that can cause similar symptoms, such as chickenpox, shingles, eczema, or allergic reactions. If you suspect that you have appox, you should contact a doctor as soon as possible to get a correct diagnosis and treatment.</p>
62
- <h2>How to treat monkey pox?</h2>
63
- <p>: There is no specific treatment or vaccine for monkey pox. Treatment consists primarily of relieving symptoms and preventing complications. For example, you can get antipyretic medication, pain medication, antihistamine for itching and antibiotics for possible bacterial infections in the bladder. You should also keep the blisters clean and dry and avoid squeezing or scratching them In some cases, you can get antiviral medication that can shorten the course of the disease and reduce the risk of serious complications. However, not everyone can get that medication, so it is important to consult a doctor if it is appropriate for one's situation.</p>
64
- <h3> monkey pox in Sweden </h3>
65
- <p> monkey pox is very rare in Sweden. The first and so far only case of monkey pox in Sweden was discovered in June 2023 in a person who had traveled to Nigeria. The person had blisters on the skin after coming home and seeking care in a hospital. Samples showed that they were monkeys. The person was isolated in the hospital and treated with antiviral medication. No one in the person’s vicinity was infected. The infectious disease doctor in the region said it was a very unusual case and that there was no reason for concern to the public.</p>
66
- <h2>: How to prevent monkey pox?</h2>
67
-
68
- <h3> monkey pox and resort.</h3>
69
- <p>If you plan to travel to a country where monkey pox occurs, you should pay extra attention to your health and hygiene. You should also be aware of which areas are affected by appox and avoid visiting them if possible. You should also be careful about eating or buying animal products that may be infected. If you get a fever, headache, muscle pain or blisters on your skin during or after the trip, you should seek care as soon as possible and tell us about your trip.</p>
70
- <h2>final bet </h2>
71
- <p> monkey pox is a viral disease that causes blisters on the skin. It is rare in Sweden, but has recently been reported in a person who had traveled to Nigeria. Monkey pox can be serious for people with compromised immune systems or other diseases. There is no specific treatment or vaccine for appox, but you can alleviate the symptoms and prevent complications with medication and good hygiene. The best way to prevent monkey pox is to avoid contact with people or animals that have blisters on the skin, especially if you travel to a country where monkey pox is present.</p>
72
- <h2>regular questions about apkoppor</h2>
73
- <ul>
74
- <li> <h4> means </h4> </li>
75
- <li> <p> name monkey pox comes from the English word monkeypox, which means monkey infection. It refers to the fact that the virus was originally discovered in monkeys in Africa in the 1970s. However, it has nothing to do with monkeys today, but can infect different animals and humans.</p> </li>
76
- <li> <h4>: How to infect monkey pox between humans?</h4> </li>
77
-
78
- <li> <h4>How long is one contagious with monkey pox?</h4> </li>
79
- <li> <p> Man is contagious with monkey pox from the time the blisters begin to appear on the skin until all the blisters have healed and formed crusts. It can take between two and six weeks depending on how many blisters you have and how quickly they heal. One is not contagious until the blisters appear or after they have disappeared.</p> </li>
80
- <li> <h4> can you get monkey cups more than once?</h4> </li>
81
- <li> <p> it is not completely clear if you get any immunity to monkey pox after having it once or if you can get infected again. There are reports of people who have had appox more than once, but this is very rare. This may be because they have been exposed to different types of monkeypox viruses or because their immune system has weakened for some reason. It may also be that they have had another skin disease that has been confused with monkey pox.</p> </li>
82
- <li> <h4> can animals get monkey pox?</h4> </li>
83
- <li> <p> Yes, animals can get monkey pox and infect humans or other animals. The animals most commonly affected by monkey pox are rodents, such as squirrels, rats and mice. They can infect people through bites, scratches or contact with their body fluids or products. Other animals that can get monkey pox are monkeys, camels, goats, sheep, cats and dogs. They can infect humans in the same way as rodents or by contact with their blisters on the skin.</p> </li>
84
- </ul> </p> 64aa2da5cf<br />
85
- <br />
86
- <br />
 
spaces/Benson/text-generation/Examples/Bitcoin Core Apk.md DELETED
@@ -1,44 +0,0 @@
1
-
2
- <h1>Bitcoin Core APK: ¿Qué es y cómo usarlo? </h1>
3
- <p>Bitcoin es una moneda digital descentralizada que opera sin ninguna autoridad central o intermediario. Se basa en una red de nodos que ejecutan un software llamado Bitcoin Core, que valida las transacciones y mantiene la seguridad e integridad del sistema. Bitcoin Core es también una cartera que permite a los usuarios almacenar, enviar y recibir bitcoins. </p>
4
- <p>Sin embargo, ejecutar Bitcoin Core en una computadora de escritorio o portátil puede ser un reto para algunos usuarios, ya que requiere mucho espacio en disco, ancho de banda y poder de procesamiento. Por otra parte, puede no ser conveniente o accesible para los usuarios móviles que quieren utilizar Bitcoin sobre la marcha. Ahí es donde Bitcoin Core APK entra en. </p>
5
- <h2>bitcoin core apk</h2><br /><p><b><b>DOWNLOAD</b> &mdash; <a href="https://bltlly.com/2v6LKx">https://bltlly.com/2v6LKx</a></b></p><br /><br />
6
- <p>Bitcoin Core APK es un paquete de aplicaciones para Android (APK) archivo que contiene el software Bitcoin Core. Permite a los usuarios ejecutar un nodo completo en sus dispositivos Android, lo que significa que pueden tener control total sobre sus bitcoins y contribuir a la seguridad de la red. En este artículo, explicaremos qué es Bitcoin Core APK, cómo descargarlo e instalarlo, y cómo usarlo. </p>
7
- <h2>Cómo descargar e instalar Bitcoin Core APK</h2>
8
- <p>El primer paso para utilizar Bitcoin Core APK es descargarlo de una fuente confiable. El sitio web oficial de Bitcoin Core es <a href="( 1 )">https://bitcoincore.org/en/download/</a>, donde se puede encontrar la última versión del software para varias plataformas, incluyendo Windows, MacOS, Linux y Android. También puede encontrar los hashes y firmas SHA256 de los archivos, que puede usar para verificar su autenticidad. </p>
9
- <p>Para descargar Bitcoin Core APK, es necesario hacer clic en el enlace que dice "Android (APK)" en la sección de Linux. Esto descargará un archivo llamado bitcoin-core-24.0.1-brazo-linux-gnueabihf.apk, que es de unos 40 MB de tamaño. También puede usar un cliente torrent para descargar el archivo desde el mismo sitio web. </p>
10
-
11
- <p>Una vez completada la instalación, puede iniciar la aplicación pulsando en su icono. La aplicación le pedirá que elija un directorio de datos donde se almacenarán los datos de blockchain, que es de aproximadamente 500 GB de tamaño. Puede usar una tarjeta SD externa o una unidad USB para este propósito, ya que el almacenamiento interno de su dispositivo puede no ser suficiente. También puede habilitar la poda, lo que significa eliminar bloques antiguos que ya no son necesarios, para reducir el espacio de almacenamiento requerido. </p>
12
- <p>La aplicación comenzará a sincronizarse con la red Bitcoin, que puede tardar varias horas o días dependiendo de la velocidad de Internet y el rendimiento del dispositivo. Puede comprobar el progreso de la sincronización mirando la barra de estado en la parte inferior de la pantalla de la aplicación. </p>
13
- <h2>Cómo utilizar Bitcoin Core APK</h2>
14
- <p>Una vez que su aplicación está completamente sincronizada con la red, puede comenzar a usarla como una billetera y un nodo. Estas son algunas de las cosas que puede hacer con Bitcoin Core APK:</p <h2>Cómo ajustar la configuración y las preferencias de Bitcoin Core APK</h2>
15
- <p>Bitcoin Core APK le permite personalizar varias configuraciones y preferencias para satisfacer sus necesidades y preferencias. Puede acceder al menú de configuración pulsando en el icono de tres puntos en la esquina superior derecha de la pantalla de la aplicación. Estos son algunos de los ajustes y preferencias que puede ajustar:</p>
16
- <ul>
17
- <li><b>Network:</b> Puede elegir a qué red conectarse, como mainnet, testnet o regtest. También puede habilitar o deshabilitar Tor o un proxy para privacidad y anonimato. </li>
18
- <li><b>Cartera:</b> Puede elegir qué cartera usar o crear una nueva. También puede cifrar su billetera con una frase de contraseña, hacer una copia de seguridad de su billetera o importar o exportar claves privadas. </li>
19
- <li><b>Mostrar:</b> Puede elegir la unidad de moneda a mostrar, como BTC, mBTC, bits o satoshis. También puede elegir el idioma y el formato de fecha. </li>
20
-
21
- <li><b>Minería:</b> Puede activar o desactivar la minería en su dispositivo, y establecer el número de subprocesos y el uso de la CPU para la minería. </li>
22
- <li><b>Depurar:</b> Puede ver información y estadísticas sobre su nodo, como el tráfico de red, los pares, los bloques, las transacciones y el mempool. También puede acceder a la consola y ejecutar comandos. </li>
23
- </ul>
24
- <p>También puede restablecer la configuración a sus valores predeterminados tocando el botón "Opciones de restablecimiento" en la parte inferior del menú de configuración. </p>
25
- <h2>Conclusión</h2>
26
- <p>Bitcoin Core APK es una manera potente y conveniente para ejecutar un nodo completo en su dispositivo Android. Le da control total sobre sus bitcoins y ayuda a proteger la red. Sin embargo, también viene con algunas compensaciones, como requerir mucho espacio de almacenamiento, ancho de banda y potencia de procesamiento. Por lo tanto, usted debe sopesar cuidadosamente los pros y los contras antes de usar Bitcoin Core APK.</p>
27
- <p></p>
28
- <p>Si desea probar Bitcoin Core APK, puede descargarlo desde <a href="( 1 )">https://bitcoincore.org/en/download/</a>. Asegúrese de verificar la autenticidad del archivo e instalarlo de forma segura en su dispositivo. Luego, puede crear o importar una billetera, sincronizar con la red y comenzar a enviar y recibir bitcoins. También puede ajustar la configuración y las preferencias para adaptarse a sus necesidades y preferencias. </p>
29
- <p>Esperamos que este artículo le ha ayudado a entender lo que es Bitcoin Core APK y cómo usarlo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. Y si te gustó este artículo, por favor compartirlo con tus amigos y familiares que podrían estar interesados en Bitcoin Core APK.</p>
30
- <h2>Preguntas frecuentes</h2>
31
- <ul>
32
- <li><b>¿Qué es Bitcoin Core? </b></li>
33
- <p>Bitcoin Core es el software original que implementa el protocolo Bitcoin. También es una cartera que permite a los usuarios almacenar, enviar y recibir bitcoins. </p>
34
- <li><b>¿Qué es un archivo APK? </b></li>
35
-
36
- <li><b>¿Cuáles son los beneficios de usar Bitcoin Core APK? </b></li>
37
- <p>Bitcoin Core APK le permite ejecutar un nodo completo en su dispositivo Android, lo que significa que puede tener control total sobre sus bitcoins y contribuir a la seguridad de la red. No tiene que depender de terceros o intermediarios para sus transacciones. </p>
38
- <li><b>¿Cuáles son los riesgos de usar Bitcoin Core APK? </b></li>
39
- <p>Bitcoin Core APK requiere mucho espacio de almacenamiento, ancho de banda y poder de procesamiento. Puede agotar la batería y ralentizar el dispositivo. También puede exponerlo a algunos riesgos de seguridad y privacidad si no lo usa correctamente. </p>
40
- <li><b>¿Cómo puedo usar Bitcoin Core APK de forma segura y eficiente? </b></li>
41
- <p>Usted debe utilizar una conexión segura y privada cuando se utiliza Bitcoin Core APK. También debe cifrar su billetera con una frase de contraseña, hacer copias de seguridad de su billetera regularmente y actualizar su software con frecuencia. También debes usar funciones de control de monedas para optimizar tus tarifas y privacidad. </p>
42
- </ul></p> 64aa2da5cf<br />
43
- <br />
44
- <br />