Demo
Please try this Colab Notebook Demo (click me!)
Context | Response | updown score |
---|---|---|
I love NLP! | Here's a free textbook (URL) in case anyone needs it. | 0.613 |
I love NLP! | Me too! | 0.111 |
The updown score predicts how likely the response is to get upvoted.
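The snippet below is a minimal sketch of how such a score could be computed with the Transformers library; the checkpoint id microsoft/DialogRPT-updown and the use of <|endoftext|> to join context and response are assumptions made for illustration, not details stated on this page.

```python
# Hedged sketch: score how likely a response is to be upvoted.
# Assumed details: the checkpoint id "microsoft/DialogRPT-updown" and the
# "<|endoftext|>" separator between context and response.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

def updown_score(context: str, response: str) -> float:
    """Return a score in (0, 1); higher means more likely to be upvoted."""
    ids = tokenizer.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids, return_dict=True).logits
    return torch.sigmoid(logits).item()

print(updown_score("I love NLP!", "Me too!"))
```

Under these assumptions, the free-textbook response from the table above should receive a noticeably higher score than "Me too!".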
DialogRPT-updown
Dialog Ranking Pretrained Transformers
How likely is a dialog response to be upvoted 👍 and/or replied to 💬?
This is what DialogRPT is trained to predict. DialogRPT is a set of dialog response ranking models proposed by the Microsoft Research NLP Group and trained on more than 100 million human feedback data points. It can be used to improve existing dialog generation models (e.g., DialoGPT) by re-ranking the generated response candidates.
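As a rough illustration of that re-ranking use, the sketch below samples a few candidates from a DialoGPT checkpoint and keeps the one the updown ranker scores highest. The checkpoint names, sampling settings, and scoring function are illustrative assumptions, not a pipeline prescribed by this card.

```python
# Hedged re-ranking sketch: generate candidates with DialoGPT (assumed
# checkpoint "microsoft/DialoGPT-medium"), then pick the candidate that
# the ranker (assumed checkpoint "microsoft/DialogRPT-updown") prefers.
import torch
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

gen_tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
gen_model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
rank_tok = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
rank_model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

def updown_score(context: str, response: str) -> float:
    ids = rank_tok.encode(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        return torch.sigmoid(rank_model(ids, return_dict=True).logits).item()

context = "I love NLP!"
input_ids = gen_tok.encode(context + gen_tok.eos_token, return_tensors="pt")

# Sample several candidate responses from the generator.
outputs = gen_model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 30,
    do_sample=True,
    top_p=0.9,
    num_return_sequences=5,
    pad_token_id=gen_tok.eos_token_id,
)
candidates = [gen_tok.decode(o[input_ids.shape[-1]:], skip_special_tokens=True)
              for o in outputs]

# Re-rank: keep the candidate predicted most likely to be upvoted.
best = max(candidates, key=lambda c: updown_score(context, c))
print(best)
```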
Quick Links:
We considered the following tasks and provide corresponding pretrained models. This page is for the updown task; the other model cards can be found in the table below.
Task | Description | Pretrained model |
---|---|---|
Human feedback | given a context and its two human responses, predict... | |
updown | ... which gets more upvotes? | this model |
width | ... which gets more direct replies? | model card |
depth | ... which gets a longer follow-up thread? | model card |
Human-like (human vs fake) | given a context and one human response, distinguish it from... | |
human_vs_rand | ... a random human response | model card |
human_vs_machine | ... a machine-generated response | model card |
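All of these rankers share the same sequence-classification interface, so switching tasks is, under the same assumptions as the scoring sketch above, just a matter of changing the checkpoint id. The ids below follow a microsoft/DialogRPT-<task> naming pattern and are assumptions for illustration; the model cards listed in the table are the authoritative source.

```python
# Hypothetical checkpoint ids, assumed to follow a common naming pattern;
# each ranker is assumed to load via the same AutoModelForSequenceClassification API.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

TASK_CHECKPOINTS = {
    "updown": "microsoft/DialogRPT-updown",
    "width": "microsoft/DialogRPT-width",
    "depth": "microsoft/DialogRPT-depth",
    "human_vs_rand": "microsoft/DialogRPT-human-vs-rand",
    "human_vs_machine": "microsoft/DialogRPT-human-vs-machine",
}

# Load one of the human-like discriminators exactly like the updown model.
name = TASK_CHECKPOINTS["human_vs_machine"]
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
```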
Contact:
Please create an issue on our repo
Citation:
@inproceedings{gao2020dialogrpt,
title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data},
author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan},
year={2020},
booktitle={EMNLP}
}