Methods for using the Hugging Face Hub:

push_to_hub
( model_id: str, task_type: str, dataset_type: str, dataset_name: str, metric_type: str, metric_name: str, metric_value: float, task_name: str = None, dataset_config: str = None, dataset_split: str = None, dataset_revision: str = None, dataset_args: typing.Dict[str, int] = None, metric_config: str = None, metric_args: typing.Dict[str, int] = None, overwrite: bool = False )
Parameters

model_id (str) — Model id from https://hf.co/models.
task_type (str) — Task id; refer to https://github.com/huggingface/evaluate/blob/main/src/evaluate/config.py#L154 for allowed values.
dataset_type (str) — Dataset id from https://hf.co/datasets.
dataset_name (str) — Pretty name for the dataset.
metric_type (str) — Metric id from https://hf.co/metrics.
metric_name (str) — Pretty name for the metric.
metric_value (float) — Computed metric value.
task_name (str, optional) — Pretty name for the task.
dataset_config (str, optional) — Dataset configuration used in datasets.load_dataset(). See the huggingface/datasets docs for more info: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_dataset.name
dataset_split (str, optional) — Name of the split used for metric computation.
dataset_revision (str, optional) — Git hash for the specific version of the dataset.
dataset_args (dict[str, int], optional) — Additional arguments passed to datasets.load_dataset().
metric_config (str, optional) — Configuration for the metric (e.g. the GLUE metric has a configuration for each subset).
metric_args (dict[str, int], optional) — Arguments passed during Metric.compute().
overwrite (bool, optional, defaults to False) — If set to True, an existing metric field can be overwritten; otherwise, attempting to overwrite an existing field raises an error.
Pushes the result of a metric to the metadata of a model repository in the Hub.
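A minimal sketch of how the keyword arguments might be assembled and passed. The model and dataset ids below are hypothetical placeholders, and the actual call (commented out) additionally requires the `evaluate` library and an authenticated Hugging Face token (e.g. via `huggingface-cli login`):

```python
# Hypothetical example: report an accuracy score for a fine-tuned model.
# "username/my-finetuned-bert" is a placeholder, not a real repository.
kwargs = dict(
    model_id="username/my-finetuned-bert",   # target model repo on the Hub
    task_type="text-classification",         # must be an allowed task id
    dataset_type="glue",                     # dataset id from hf.co/datasets
    dataset_name="GLUE",                     # pretty name shown in metadata
    dataset_config="sst2",                   # config passed to load_dataset()
    dataset_split="validation",              # split the metric was computed on
    metric_type="accuracy",                  # metric id from hf.co/metrics
    metric_name="Accuracy",                  # pretty name shown in metadata
    metric_value=0.917,                      # the computed score
)

# Pushing the result updates the model card metadata on the Hub:
# import evaluate
# evaluate.push_to_hub(**kwargs, overwrite=True)
```

Setting overwrite=True replaces a previously pushed value for the same metric field; with the default of False, pushing to an existing field raises an error instead.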