|
# Real-World Evaluation Images for Articulated Objects Interaction Generation |
|
|
|
This dataset contains the real-world images used in evaluating [DragAPart](https://dragapart.github.io/), a conditional image generator that models interaction with articulated objects. |
|
|
|
## 📦 How to Use It? |
|
|
|
Each sample consists of: |
|
- `original_image_XXX.png`: The base image showing an articulated object. |
|
- `arrow_locations_XXX.npy`: A NumPy file containing the arrow coordinates for interaction. |
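The two files of a sample share a zero-padded index, so images can be paired with their arrow files programmatically. Below is a minimal sketch of that pairing; it creates dummy files in a temporary directory as a stand-in for the dataset (substitute the directory where you downloaded the real files):

```python
import glob
import os
import re
import tempfile

import numpy as np

# Dummy files standing in for the dataset (replace `root` with your
# actual download directory).
root = tempfile.mkdtemp()
for i in range(3):
    open(os.path.join(root, f"original_image_{i:03d}.png"), "wb").close()
    np.save(os.path.join(root, f"arrow_locations_{i:03d}.npy"), np.zeros((1, 4)))

# Pair each image with the arrow file that shares its index.
pairs = []
for image_path in sorted(glob.glob(os.path.join(root, "original_image_*.png"))):
    idx = re.search(r"(\d+)", os.path.basename(image_path)).group(1)
    arrow_path = os.path.join(root, f"arrow_locations_{idx}.npy")
    if os.path.exists(arrow_path):
        pairs.append((image_path, arrow_path))

print(len(pairs))  # one (image, arrows) pair per sample
```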
|
|
|
The `.npy` file stores an array of arrows, each encoded as:
|
|
|
```python
[x0, y0, x1, y1]  # Normalized coordinates in [0, 1]
```
|
|
|
Where: |
|
- `(x0, y0)` is the **starting point** of the interaction (e.g., where the user clicks), |
|
- `(x1, y1)` is the **end point** indicating the direction or extent of the manipulation. |
|
|
|
These coordinates are normalized relative to the image size. |
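For example, converting one normalized arrow to pixel coordinates (a minimal sketch; the arrow values and image size below are illustrative, not taken from the dataset):

```python
import numpy as np

# Illustrative values only: one normalized arrow and a hypothetical 512x512 image.
arrow = np.array([0.25, 0.50, 0.75, 0.50])  # [x0, y0, x1, y1] in [0, 1]
width, height = 512, 512

# Multiply by the image dimensions to recover pixel coordinates.
x0, y0 = arrow[0] * width, arrow[1] * height
x1, y1 = arrow[2] * width, arrow[3] * height
# The drag starts at pixel (128, 256) and ends at pixel (384, 256).
```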
|
|
|
--- |
|
|
|
## 🖼️ Visualization |
|
|
|
You can visualize the interaction using the following Python script: |
|
|
|
```python
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# Load image and arrow data
image_path = "original_image_000.png"
arrow_path = "arrow_locations_000.npy"

image = Image.open(image_path)
arrow = np.load(arrow_path)[0]  # [x0, y0, x1, y1]

# Convert normalized coordinates to pixel values
width, height = image.size
x0, y0 = int(arrow[0] * width), int(arrow[1] * height)
x1, y1 = int(arrow[2] * width), int(arrow[3] * height)

# Plot the image and overlay the interaction arrow
plt.figure(figsize=(6, 6))
plt.imshow(image)
plt.arrow(x0, y0, x1 - x0, y1 - y0,
          color='red', width=2, head_width=10, length_includes_head=True)
plt.axis('off')
plt.title("Interactive Manipulation Arrow")
plt.show()
```
|
|
|
This will display the original image overlaid with a red arrow showing the suggested user interaction, as shown below:
|
|
|
|
|
 |
|
|
|
 |
|
|