
Welcome to 🤗 Diffusers Benchmarks!
This is a dataset where we keep track of the inference latency and memory information of the core pipelines in the diffusers library.
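The numbers are stored as CSV files in this repository, and different benchmark suites use different column layouts, so the files are easiest to read one at a time. Below is a minimal sketch of how you could inspect them; the repo id `diffusers/benchmarks` is an assumption, and per-file loading with pandas is just one convenient option.

```python
# Minimal sketch, not an official loader: discover the benchmark CSVs and
# read one with pandas. The repo id "diffusers/benchmarks" is an assumption.
import pandas as pd
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "diffusers/benchmarks"  # assumption: adjust to the actual dataset repo id
csv_files = [f for f in list_repo_files(repo_id, repo_type="dataset") if f.endswith(".csv")]

# Each CSV follows the schema of its own benchmark suite, so load them individually.
path = hf_hub_download(repo_id, csv_files[0], repo_type="dataset")
df = pd.read_csv(path)
print(df.columns.tolist())
print(df.head())
```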
Currently, the core pipelines are the following:
- Stable Diffusion and its derivatives such as ControlNet, T2I-Adapter, Image-to-Image, and Inpainting
- Stable Diffusion XL and its derivatives
- SSD-1B
- Kandinsky
- Würstchen
- LCM
Note that we will continue to extend the list of core pipelines based on their API usage.
We use this GitHub Actions workflow to report the above numbers automatically. This workflow runs on a biweekly cadence.
The benchmarks are run on an A10G GPU.
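For reference, a single latency/memory entry can be measured along these lines. This is only a minimal sketch, not the actual workflow: the checkpoint id, prompt, and step count below are illustrative, and the reported numbers come from the GitHub Actions run on the A10G.

```python
# Minimal sketch of measuring one latency/memory data point; the actual
# benchmarking code lives in the diffusers GitHub Actions workflow.
import time

import torch
from diffusers import DiffusionPipeline

ckpt_id = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint id
pipe = DiffusionPipeline.from_pretrained(ckpt_id, torch_dtype=torch.float16).to("cuda")
pipe.set_progress_bar_config(disable=True)

prompt = "a photo of an astronaut riding a horse"

# Warmup so one-time CUDA/initialization overhead does not skew the timing.
_ = pipe(prompt, num_inference_steps=5)

torch.cuda.reset_peak_memory_stats()
start = time.perf_counter()
_ = pipe(prompt, num_inference_steps=50)
time_secs = time.perf_counter() - start
memory_gbs = torch.cuda.max_memory_allocated() / (1024 ** 3)

print(f"time (secs): {time_secs:.3f}, memory (gbs): {memory_gbs:.3f}")
```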