# <u>Data Preprocessing Pipeline</u> by *AvatarArtist*
This repo describes how to process your own data for using our model.
## Overview
<div align=center>
<img src="data_process_pipe.png" alt="Data preprocessing pipeline overview">
</div>
## Requirements and Installation
We recommend the following setup.
### Environment
```bash
git clone --depth=1 https://github.com/ant-research/AvatarArtist
cd AvatarArtist
conda create -n avatarartist python=3.9.0
conda activate avatarartist
pip install -r requirements.txt
```
### Download Weights
The weights are available at [🤗 HuggingFace](https://huggingface.co/KumaPower/AvatarArtist); you can download them with the commands below. Move the required files into the `pretrained_model` directory:
```bash
# if you are in mainland China, run this first: export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --repo-type model \
KumaPower/AvatarArtist \
--local-dir pretrained_model
```
## Usage
Process the target video to obtain the target pose motion and mesh.
```bash
python3 input_img_align_extract_ldm.py --input_dir ./demo_data/hongyu_2.mp4 --is_video --save_dir ./demo_data/data_process_out
```
Process the images to extract the source images.
```bash
python3 input_img_align_extract_ldm.py --input_dir ./demo_data/ip_imgs --is_img --save_dir ./demo_data/data_process_out
```
Our code supports step-by-step data processing. For example, if your images are already aligned, you can proceed directly to the next step.
```bash
python3 input_img_align_extract_ldm.py --input_dir ./demo_data/ip_imgs --is_img --save_dir ./demo_data/data_process_out --already_align
```
Once processing completes, the data will be organized in this format:
```
📦 datasets/
├── 📂 dataset/
│   ├── 📂 coeffs/
│   ├── 📂 images512x512/
│   ├── 📂 uvRender256x256/
│   ├── 📂 orthRender256x256_face_eye/
│   └── 📂 motions/
├── 📂 crop_fv_tracking/
├── 📂 realign_detections/
├── 📂 realign/
├── 📂 raw_detection/
├── 📂 align_3d_landmark/
└── 📂 raw_frames/
```
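After processing, you can sanity-check the output layout with a short script. This is a hypothetical helper, not part of the repo: the folder names are taken from the layout above, and it assumes the same subfolders appear under the `--save_dir` you passed to `input_img_align_extract_ldm.py`.

```python
from pathlib import Path

# Expected output folders, taken from the layout above (assumption:
# they are created directly under the --save_dir you specified).
EXPECTED_DIRS = [
    "dataset/coeffs",
    "dataset/images512x512",
    "dataset/uvRender256x256",
    "dataset/orthRender256x256_face_eye",
    "dataset/motions",
    "crop_fv_tracking",
    "realign_detections",
    "realign",
    "raw_detection",
    "align_3d_landmark",
    "raw_frames",
]

def missing_outputs(save_dir):
    """Return the expected output folders that are absent under save_dir."""
    root = Path(save_dir)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = missing_outputs("./demo_data/data_process_out")
    print("all outputs present" if not missing else f"missing: {missing}")
```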
## Credits
- This code builds on [Portrait4D](https://github.com/YuDeng/Portrait-4D) and [InvertAvatar](https://github.com/XChenZ/invertAvatar). We have integrated and organized their data processing code. Thanks for open-sourcing!