( new_arr: list ) → list
Get the original order of the data.
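This undoes the length-based sort applied when the dataset is built. Below is a minimal sketch of the idea, assuming the sorted permutation is kept in an attribute; the names `SortedDatasetSketch` and `sorted_data_indices` are illustrative, not the library's actual internals:

```python
class SortedDatasetSketch:
    """Illustrative only: keeps the permutation used to sort items by length."""

    def __init__(self, items: list) -> None:
        # Remember, for each position in the sorted order, where the item came from.
        self.sorted_data_indices = sorted(range(len(items)), key=lambda i: len(items[i]), reverse=True)
        self.sorted_data = [items[i] for i in self.sorted_data_indices]

    def get_original_order(self, new_arr: list) -> list:
        # new_arr is aligned with the sorted order; scatter it back to the input order.
        original = [None] * len(new_arr)
        for sorted_pos, original_pos in enumerate(self.sorted_data_indices):
            original[original_pos] = new_arr[sorted_pos]
        return original


# Results computed in sorted order come back in the caller's original order.
ds = SortedDatasetSketch(["aaa", "a", "aa"])
assert ds.get_original_order(["R(aaa)", "R(aa)", "R(a)"]) == ["R(aaa)", "R(a)", "R(aa)"]
```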
( split_id: int ) → tuple
Get the start and end indices of a dataset split.
Iterator that yields the start and end indices of each dataset split. It also updates the starting batch size for each split (trying to double it every time we move to a new split).
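A hedged sketch of such an iterator; the `splits`, `starting_batch_size` and `batch_size` attributes below are illustrative assumptions rather than the library's real attribute names:

```python
from typing import Iterator, Tuple


class SplitsIteratorSketch:
    """Illustrative only: walks over precomputed (start, end) split limits."""

    def __init__(self, splits: list, starting_batch_size: int) -> None:
        self.splits = splits                          # e.g. [(0, 120), (120, 250), ...]
        self.starting_batch_size = starting_batch_size
        self.batch_size = starting_batch_size

    def splits_iterator(self) -> Iterator[Tuple[int, int]]:
        self.batch_size = self.starting_batch_size
        for split_start, split_end in self.splits:
            yield split_start, split_end
            # Try a larger batch for the next split; the caller is expected
            # to shrink it again if it does not fit in memory.
            self.batch_size *= 2


# Typical use (hypothetical caller):
# for start, end in dataset.splits_iterator():
#     run_inference(dataset.sorted_data[start:end], batch_size=dataset.batch_size)
```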
( requests: list, num_dataset_splits: int )
( num_dataset_splits ) → type
Initialises the split limits based on generation parameters. The splits are used to estimate time remaining when evaluating, and in the case of generative evaluations, to group similar samples together.
For generative tasks, self._sorting_criteria outputs whether the task uses logits, its end-of-sentence (eos) stop sequences, and the sample length.

In the current function, we create evaluation groups by generation parameters (logits and eos), so that samples with similar properties get batched together afterwards. The samples are then further organised by length within each split.
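A sketch of how that grouping could work; the `use_logits`, `stop_sequence` and `context` attributes on requests are hypothetical names used only for illustration:

```python
from collections import defaultdict


def init_split_limits_sketch(requests: list) -> tuple:
    """Illustrative only: group requests by generation parameters, sort each
    group by length, and return the (start, end) limits of every group."""
    groups = defaultdict(list)
    for request in requests:
        # Group key: does the task need logits, and which stop sequences it uses.
        key = (request.use_logits, tuple(request.stop_sequence or ()))
        groups[key].append(request)

    sorted_data, splits, cursor = [], [], 0
    for group in groups.values():
        # Within a group, order by context length so batches carry little padding.
        group.sort(key=lambda r: len(r.context), reverse=True)
        sorted_data.extend(group)
        splits.append((cursor, cursor + len(group)))
        cursor += len(group)
    return splits, sorted_data
```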
( requests: list, num_dataset_splits: int )
( dataset: Dataset, num_replicas: typing.Optional[int] = None, rank: typing.Optional[int] = None, shuffle: bool = True, seed: int = 0, drop_last: bool = False )
A distributed sampler that copies the last element only when drop_last is False, so that we keep only a small amount of padding in the batches, since our samples are sorted by length.
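A sketch of this padding strategy, assuming the sampler subclasses torch.utils.data.DistributedSampler and only overrides __iter__: the missing slots are filled by repeating the last index instead of wrapping around to the start, so the length-sorted order of the samples is preserved.

```python
import torch
from torch.utils.data import DistributedSampler


class LastElementPaddingSampler(DistributedSampler):
    """Illustrative only: pads with the last index instead of wrapping around."""

    def __iter__(self):
        if self.shuffle:
            # Deterministic shuffle based on seed and epoch.
            g = torch.Generator()
            g.manual_seed(self.seed + self.epoch)
            indices = torch.randperm(len(self.dataset), generator=g).tolist()
        else:
            indices = list(range(len(self.dataset)))

        if not self.drop_last:
            # Pad by repeating the last (shortest, if length-sorted) index
            # until the index list divides evenly across replicas.
            padding_size = self.total_size - len(indices)
            indices += [indices[-1]] * padding_size
        else:
            indices = indices[: self.total_size]
        assert len(indices) == self.total_size

        # Subsample the shard for this rank.
        indices = indices[self.rank : self.total_size : self.num_replicas]
        assert len(indices) == self.num_samples

        return iter(indices)
```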