
ValueError: dataset_index must be provided when using multiple experts

#1 by yangshuerDr - opened

Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.
/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/utils/cpp_extension.py:1965: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Traceback (most recent call last):
  File "/home/yang/MyWorkFolder/V_check/Op/tests.py", line 60, in <module>
    outputs = model(**inputs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/transformers/models/vitpose/modeling_vitpose.py", line 306, in forward
    outputs = self.backbone.forward_with_filtered_kwargs(
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/transformers/utils/backbone_utils.py", line 235, in forward_with_filtered_kwargs
    return self(*args, **filtered_kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 515, in forward
    outputs = self.encoder(
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 365, in forward
    layer_outputs = layer_module(hidden_states, dataset_index, layer_head_mask, output_attentions)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/yang/miniconda3/envs/Learn/lib/python3.10/site-packages/transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 299, in forward
    raise ValueError(
ValueError: dataset_index must be provided when using multiple experts (num_experts=6). Please provide dataset_index to the forward pass.
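
The error comes from the ViTPose++ mixture-of-experts backbone: with num_experts=6, every image in the batch has to be routed to one of the six per-dataset experts, so the forward pass needs a dataset_index tensor with one index per image (in the released ViTPose++ checkpoints, index 0 is the COCO expert). A minimal sketch of the fix is below; the checkpoint name, image path, and bounding box are placeholders, not taken from the original script:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, VitPoseForPoseEstimation

# Assumption: any ViTPose++ checkpoint with an MoE backbone reproduces the error.
checkpoint = "usyd-community/vitpose-plus-base"
processor = AutoProcessor.from_pretrained(checkpoint)
model = VitPoseForPoseEstimation.from_pretrained(checkpoint)

image = Image.open("person.jpg")  # placeholder image
# One list of boxes per image, each box in COCO (x, y, w, h) format; a single
# box covering the whole image stands in here for a real person detection.
boxes = [[[0.0, 0.0, float(image.width), float(image.height)]]]
inputs = processor(image, boxes=boxes, return_tensors="pt")

# The missing piece: one expert index per image in the batch
# (0 = COCO in the released ViTPose++ checkpoints).
dataset_index = torch.tensor([0], dtype=torch.long)

with torch.no_grad():
    outputs = model(**inputs, dataset_index=dataset_index)
```

Single-dataset ViTPose checkpoints have num_experts=1 and don't need this argument, which is why the same script can run unchanged against them.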

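The two warnings at the top of the log are unrelated to the crash. If they are noisy, both can be addressed up front; a minimal sketch, assuming the same placeholder checkpoint as above and that a fast image processor is available for it:

```python
import os

# Pin the CUDA architectures before torch builds any extension; the value is a
# placeholder, use your GPU's compute capability (e.g. "8.6" for an RTX 30xx).
os.environ["TORCH_CUDA_ARCH_LIST"] = "8.6"

from transformers import AutoImageProcessor

# Opt into the fast image processor explicitly, as the first warning suggests
# (or pass use_fast=False to keep the slow one and silence the warning).
processor = AutoImageProcessor.from_pretrained(
    "usyd-community/vitpose-plus-base", use_fast=True
)
```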