Columns: title (string), link (string), replies (int64), views (int64), initial_post (string), initial_post_date (string), responses (list)
Why are segment and position embeddings so large?
https://discuss.huggingface.co/t/why-are-segment-and-position-embeddings-so-large/254
2
1,530
Cross-post from: Size of feature embeddings (and some digression about casing methods) - Development - OpenNMT. These days I am part-time doing work on improving translation models. We are working with regular transformer seq2seq networks using OpenNMT. This question is not about OpenNMT but it was triggered by going through its documentation. In onmt one can add features to each word. These features are then used to train their own embedding. For example, if you want to train a lowercase model but still want to give importance to casing, you can add a casing feature that indicates whether the word was lowercase or not: i│C like│l cookies│l from│l new│C york│C. This will create two embedding layers under the hood: one for the tokens, and one for the case features. In their documentation, they state that the default size for features is "… set to N^feat_vec_exponent where N is the number of values the feature takes", where the default feat_vec_exponent value is 0.7. However, that means that for a feature with two values, they would only get a size of 1 or 2 (1.6). The embeddings (token and casing) are then concatenated. This contrasts sharply with the language models that I know. Take, for instance, BERT, which has token (30k values), segment (two values), and position (512 values) embeddings which all have 512 dimensions, even the segment embeddings. These embeddings are summed. My question thus ends up being: I always thought that the number of items in the embedding should more or less dictate the hidden size of that embedding (as onmt suggests), but BERT and siblings do not do this. So what is the best way, and why? How come that only two features in a 512-dimension space make sense?
2020-07-13T15:40:03Z
[ { "date": "2020-07-29T09:29:33Z", "reply": "It's actually more a question of projecting in a high-dimensionality dense vector space versus a sparse space rather than the dimensionality itself. A lot of the recent developments in NLP are about projecting labels and tabular data in a high-dim vector space (assigning learned vectors to sparse categorical features) prior to computation. One striking demonstration of the efficiency of casting in high dimension is in the work of John Wieting and Douwe Kiela: https://openreview.net/forum?id=BkgPajAcY7, but there is also a much older history of work on random projections and the Johnson-Lindenstrauss lemma: https://scikit-learn.org/stable/modules/random_projection.html. A related discussion on the JL lemma you may want to join is here: https://github.com/huggingface/awesome-papers/discussions/7. Note, however, that there is a limit in the optimal dimension for the input embedding, and recent models like ALBERT (https://openreview.net/forum?id=H1eA7AEtvS) or approaches like adaptive inputs (http://arxiv.org/abs/1809.10853) keep the input dimension smaller than the model's hidden size to reach a more optimal ratio between both of these dimensions." }, { "date": "2020-08-02T11:59:57Z", "reply": "Thanks for your reply! I read through the reading group's thread as well as the Linformer. From what I understand, the biggest problem with projections in large spaces is speed. On the other hand, large, random initialisations perform well out-of-the-box. One would guess, then, that the middle ground is finding trained, smaller-dimension feature spaces, leading to a balanced trade-off between speed and performance. However, there is still a big difference in size with respect to the input between the two examples that I mention. So let's assume we have a feature with two possible values (e.g. segment IDs, 0 or 1). In onmt this would be encoded (by default) in a space of two values, and one dimension. In BERT, though, it is much larger: two values, but 512 dimensions. What I am interested in is not only the difference between having 1 dimension vs 512, but also how this is motivated in BERT. In BERT (and siblings) there is no constraint between the input size of the embedding and its dimensions. 30k vocabulary, 512 positions, 2 segments: all get the same dimensions so they can be summed. I still have not seen any evaluation of this research question, which comes down to: is the quality of a vector space determined (or should it be determined) by the size of its keys? The problem with evaluating this, I think, is that in language models these spaces are not trained separately but as part of the whole model. Therefore it is hard to make statements about the embeddings themselves. As an update about my own research: we found that having a 4-value, 6-dimension feature concatenated to a 506-dimension token embedding performs better than summing a 4-value, 512-dimension feature with a 512-dimension token representation." } ]
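As an illustration of the two layouts discussed in this thread, here is a minimal PyTorch sketch (not from the thread itself; the sizes simply mirror the numbers mentioned in the post and the update) contrasting BERT-style summed embeddings with OpenNMT-style concatenated feature embeddings:

import torch
import torch.nn as nn

vocab_size, n_positions, n_segments, hidden = 30000, 512, 2, 512

# BERT-style: every embedding table has the same width, and the vectors are summed
tok_emb = nn.Embedding(vocab_size, hidden)
pos_emb = nn.Embedding(n_positions, hidden)
seg_emb = nn.Embedding(n_segments, hidden)

input_ids = torch.randint(0, vocab_size, (1, 16))
positions = torch.arange(16).unsqueeze(0)
segments = torch.zeros(1, 16, dtype=torch.long)
summed = tok_emb(input_ids) + pos_emb(positions) + seg_emb(segments)   # (1, 16, 512)

# OpenNMT-style: a tiny feature embedding concatenated to the token embedding
tok_emb_small = nn.Embedding(vocab_size, 506)
case_emb = nn.Embedding(4, 6)          # e.g. a 4-value casing feature in 6 dimensions
case_ids = torch.zeros(1, 16, dtype=torch.long)
concatenated = torch.cat([tok_emb_small(input_ids), case_emb(case_ids)], dim=-1)  # (1, 16, 512)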
Understanding what went wrong in attention
https://discuss.huggingface.co/t/understanding-what-went-wrong-in-attention/386
5
1,615
I am working on attention analysis. I want to learn more about where self-attention made mistakes while attending to the context and query. Given two sentences, I am interested in learning where self-attention should have paid more attention (and not to irrelevant tokens) to provide correct answers; in general, what went wrong in processing a given sample even when a fine-tuned transformer is employed. While there are visualization projects like BertViz and ExBERT, I am not sure it's straightforward to extract the information I'm looking for. Do you know of any good projects, or workarounds in Transformers, to answer my query?
2020-07-20T05:27:08Z
[ { "date": "2020-07-21T03:30:49Z", "reply": "Can anyone point me to a method for visualizing attention in matrix form between query and context sentences? Is there any other alternative? Any pointers will be appreciated." }, { "date": "2020-07-24T18:48:54Z", "reply": "The two you mentioned are the only ones I know of off the top of my head in terms of visualization tools. What are you trying to do that BertViz and ExBERT don't provide? (disclaimer: not an expert in this area) One tricky thing is that the notion of where the model should or should not have paid more attention is not well defined. There's been debate about whether attention weights can or should be used for interpretation, for example see [1], [2]. Coming up with a convincing argument that a given attention matrix should be one way or the other would probably not be trivial." }, { "date": "2020-07-25T04:52:15Z", "reply": "Thanks for your helpful reply. I had a look at their abstracts, and do not have a firm opinion on whether attention can fully help us understand what I'm looking for. Both are good tools for interactive visualization, but I want something that provides some quantifiable-ness. For now, I'm using srush's way of visualizing attention heatmaps, like he did in the Annotated Transformer. Since I need to report in the paper, I am looking for static visualizations. Based on my little interaction with the exBERT live demo, it can be hard for the reader to distinguish between what both models are looking at (for comparison purposes). For my use case, I want the reader to be able to distinguish what two networks look at and how one is better than the other. I hope it makes some sense." }, { "date": "2020-07-29T07:34:23Z", "reply": "@joeddav Could you please suggest the recommended way to do what ExBERT does with our own weights (seeing which token in the sentence the model pays attention to)? The HF ExBERT works for default pretrained LMs; I want my trained weights to be used for the inference task. I'm running experiments on a server, and building npm and other stuff seems like a lot of work, but I think things may have changed a bit after the introduction of the HF inference API. I'm using bert-base-uncased (pretty standard) and want to load weights from the HF model hub instead." }, { "date": "2020-07-31T14:19:04Z", "reply": "Got it working by using exBERT locally." } ]
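For the first question in this thread (getting the raw attention matrix and plotting it as a static heatmap), a minimal sketch along these lines should work, assuming a transformers version whose model outputs expose an attentions field; the sentence pair and the choice of averaging the last layer's heads are arbitrary:

import torch
import matplotlib.pyplot as plt
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("The cat sat on the mat.", "Where did the cat sit?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one (batch, heads, seq, seq) tensor per layer
attn = outputs.attentions[-1][0].mean(dim=0).numpy()   # average the heads of the last layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

plt.imshow(attn, cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.colorbar()
plt.savefig("attention_heatmap.png", bbox_inches="tight")

Pointing from_pretrained at a fine-tuned checkpoint instead of bert-base-uncased covers the later question in the thread about using custom weights.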
ACL 2020 highlights – Joe
https://discuss.huggingface.co/t/acl-2020-highlights-joe/188
3
1,577
I had a great time at ACL this week. There are many great papers and I'm still going through them. Here's a summary of just a few that I wanted to highlight. I'd love to get thoughts and retorts from anyone reading! "To Test Machine Comprehension, Start by Defining Comprehension" by Jesse Dunietz, Gregory Burnham, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, and David Ferrucci. Like most great ideas, the framework presented here is simple – seemingly obvious, even. They take a specific look at Machine Reading Comprehension (MRC) and argue that current evaluation metrics don't really inspire much confidence in the system's comprehension of the relevant information in the passage, not enough to make us trust it in any real-world setting. They argue that rather than making questions harder, we should explicitly define so-called "Templates of Understanding" (ToU) to measure the different dimensions of comprehension within a particular context. For example, in the context of a story, they lay out the following ToU: [image: Templates of Understanding for stories]. The authors do a great job thinking with clarity and simplicity about how we should approach evaluating MRC systems. "Intermediate-Task Transfer Learning with Pretrained Language Models" by Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman. Recently the pre-train/fine-tune paradigm has become ubiquitous. This paper explores whether we can take advantage of labeled data during an intermediate training step. The authors do really extensive analysis on what kinds of datasets are useful for intermediate training and what downstream tasks they have a positive (or negative) effect on. [image omitted] A really interesting insight for me is that commonsense tasks don't ever seem to have a negative effect. They either help on the downstream task, or don't have much of an effect at all. I wonder if this is because we do have labeled commonsense data that is used, or if we could build some kind of unsupervised commonsense objective into the pre-training procedure that would work just as well. "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data" by Emily M. Bender and Alexander Koller. This paper is not focused around any one method or technique, but rather makes a general and pretty bold argument: meaning cannot be learned from form. In other words, just giving a model access to a whole bunch of text will never be enough to learn meaningfully about the real world. Whether you buy their argument or not, I found it to be an intellectually stimulating presentation. I suspect the hyperintelligent octopus argument will be one that sticks around for a long time. [image omitted] I also appreciated their word of caution about the way we use different words when communicating about a model's capabilities. At the very end of the presentation, Alexander warned: As a community, let's remember that we're scientists and not marketing people. Let's be a little bit careful when we use terms like understanding, meaning, and comprehension.
2020-07-10T16:08:17Z
[ { "date": "2020-07-11T03:12:41Z", "reply": "Intermediate Task Transfer is a very practical one. It exhaustively provides many results that can help engineers save much time." }, { "date": "2020-07-12T02:14:53Z", "reply": "Thanks so much Joe @joeddav, I was trying to catch up on all the tutorials and workshops within my limited time, and I almost missed these extremely interesting papers. I just finished watching the first paper (To Test Machine Comprehension, Start by Defining Comprehension – in a sense it has the same spirit as the CheckList best paper), and found that their slides and Rocket Chat discussions are very valuable! Sadly, these materials will be deleted soon, so I took quick screen captures of some slides and would like to post supplementary materials mentioned in Rocket Chat here. Hopefully it can be useful for other people. Temporary access to the dataset used in the paper: https://drive.google.com/file/d/1jXU__4BDDbofWKhZiYKfoIJOVbTv7AfQ/view?usp=sharing. Related papers suggested by Sowmya Vajjala: Developing reading comprehension questions, https://nflrc.hawaii.edu/rfl/April2005/day/day.html, commented on by Jesse (the author): My initial reaction is that the progression of “types of comprehension” listed there lays out a massive challenge for scaffolding up MRC to richer abilities. I don't think people have been explicit about generating questions according to these categories, but many of them do appear in MRC datasets. Mostly people seem to focus on literal comprehension, throwing in reorganization/inference when they want to make the test harder. Prediction is sometimes tested as part of commonsense reasoning (e.g., Story Cloze). As for how these categories relate to ToUs, I think it would mostly be as forms of error analysis. You'd establish in advance that you want your system to figure out from this text that Maria died at age 55, and then when it succeeds/fails, you'd want to count that in the “reorganization” bucket. I'm not sure how important the categories would be for generating questions, though—our argument is that questions should be generated in accordance with what content downstream applications need, not what mode of reasoning would be needed to get there. Reut Tsarfaty asked a great question on the ‘motivational’ perspective: I am particularly interested in the “motivational”. It seems you conflate it with “what if”, but this is a very small fragment of motivation sources. Motivation can come from goals (“teleological”): “We are set to achieve our financial goals at Q2”, personal prefs (“buletic”): “I prefer to sit outside”, morals (“deontic”): “you should not drink and drive”, and more. Did you have thoughts on structuring this space of (sources of) motivations for the prescribed events? And the author replied with some valuable thoughts: Thanks, Reut, and great question! You've put your finger on a point our exposition glossed over. We do actually allow for all of the types of motivation you listed, though there are probably others we haven't yet encountered and will have to figure out how to handle.
In our scheme, any given explanation, whether mechanistic or motivational, has three main structural elements: (1) the “root cause”; (2) a series of “causal links” connecting the root cause to the outcome (as shown in Fig. 2 of the paper); (3) the recursive explanations for the root cause and for each causal link, each of which consists of a) a general causal rule (“most dogs prefer not getting rained on to getting rained on”) and b) supporting facts that establish the causal rule applies (“Rover is a dog”). In motivational explanations—i.e., explanations where an agent is portrayed as taking a deliberate action—the root cause is always some form of preference over states expected to follow or happen concurrently with the action. In that sense, it does indeed have to be some sort of “what if”—e.g., if Timmy doesn't take this action, he won't get to sit outside. But the preference can be any form of desirability/undesirability. Here's how we might handle the cases you listed: Joanna would prefer that the organization achieve its Q2 financial goals than that it fall short of them. Timmy would prefer sitting outside rather than inside. Alice driving drunk would violate her moral standards, whereas driving in a normal state of mind would not. …and each would be recursively explained in terms of some general rule about what makes people consider such things desirable/undesirable. In the final case, that would probably mean stating that people generally think driving drunk is immoral. Now, theoretically each statement of preference should be connected to the corresponding action by a general rule—e.g.: Joanna cancels the event, rather than leaving it scheduled, because: Joanna would prefer that the organization achieve its Q2 financial goals than that it fall short of them; Joanna expects that <imagined causal chain connecting canceling/not canceling to meeting/falling short of goals>; when an agent prefers outcome X to outcome X', and they believe action A will lead to outcome X whereas action A' will lead to outcome X', they often take action A instead of action A'. But it's unwieldy to include such a foundational piece of agentive behavior in every motivational explanation, so we allow annotators to assume it. Currently we have a small list of such general rules that annotators can assume: • Agents act to realize their preferences. • Agents act to fulfill their obligations. • Agents act to conform to their moral standards. (These are shorthand versions of the more unwieldy contrastive rules.) I believe it's that list that you were correctly pointing out we need; is that right? And more: The possible-worlds notion is definitely underlying our whole approach to describing causality and motivation: we're assuming a Lewis-like notion of a nearby possible world where everything is the same except for one little tweak. (Important differences include that we don't care whether possible worlds are metaphysically “real” and that we sometimes consider multiple nearby worlds if there are multiple salient contrasts.) So far we've been sticking with plain English as the annotation format, so that we can work out all the content and conceptual structures intuitively without first committing to a formalism. That makes explicit formal semantics hard to incorporate. But in other corners of Elemental Cognition—particularly the ones working on systems that can actually _produce_ answers like this—we are indeed doing some formal representation, and we've discussed the need to represent various kinds of irrealis contexts, including the alternative possible worlds evoked by causal chains.
Lastly, Emily Bender (the author of the last Octopus-argument paper that @joeddav mentioned) also joined the discussions, but I am not sure I should post them here since they are extremely long (50+ replies)." }, { "date": "2020-07-30T05:57:51Z", "reply": "@joeddav Stunningly, regarding the Octopus paper (Bender & Koller 2020), which contains a challenge about advice on bear chasing, Gwern has tested this example with GPT-3 and found that GPT-3 can make many valid suggestions for dealing with a bear: gwern.net, GPT-3 Creative Fiction – creative writing by OpenAI's GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling, plus advice on effective GPT-3 prompt programming & avoiding common errors." } ]
Debiasing models by HEX projection
https://discuss.huggingface.co/t/debiasing-models-by-hex-projection/473
1
519
I am interested in implementing the orthogonality portion of Towards Robustifying NLI Models Against Lexical Dataset Biases from ACL 2020 in PyTorch. The overall idea seems simple: have a primary model and a sub-model (like BoW) to detect superficial features, and then use HEX projection (Wang et al., 2019a) to project the representation of the original primary model to the orthogonal space of the representation of the BoW model. In this case, I would use a transformer as the primary model. I'm not sure about the implementation of the HEX projection. If someone is familiar with it, it would be really helpful if they could share the snippet responsible for projecting the representation orthogonally. Additionally, adding a debiasing example to the Transformers repo would be a good addition, which I'm happy to do once I implement it myself.
2020-07-25T16:47:25Z
[ { "date": "2020-07-28T05:36:55Z", "reply": "I figured it out myself from the equations in the paper (Wang et al.). My implementation seems to be working. I will share the link to my repo once I open-source the code." } ]
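Since the thread closes without the snippet, here is a minimal sketch of the orthogonal projection that, as far as I understand the equations in Wang et al. (2019a), sits at the core of HEX: the primary representation is projected onto the orthogonal complement of the column space of the superficial (e.g. BoW) representation. Treat it as an illustration of the idea rather than the authors' exact implementation; f_a and f_g are placeholder names.

import torch

def hex_orthogonal_projection(f_a: torch.Tensor, f_g: torch.Tensor) -> torch.Tensor:
    """Project f_a (batch, d) onto the orthogonal complement of the column space of f_g (batch, d').

    Computes F_L = (I - F_G (F_G^T F_G)^{-1} F_G^T) F_A, using a pseudo-inverse for stability.
    """
    proj_onto_g = f_g @ torch.linalg.pinv(f_g)   # (batch, batch) projector onto span(F_G)
    return f_a - proj_onto_g @ f_a               # component of F_A orthogonal to F_G

# Toy usage: primary transformer outputs vs. bag-of-words sub-model outputs for one batch
f_a = torch.randn(8, 3)
f_g = torch.randn(8, 3)
f_l = hex_orthogonal_projection(f_a, f_g)
print(f_l.shape)  # torch.Size([8, 3])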
What does it mean to prime a GPT model?
https://discuss.huggingface.co/t/what-does-it-mean-to-prime-a-gpt-model/446
5
4,134
I am not sure I understand what it means to prime an LM. I came across this concept in several blog posts and papers (sometimes also referred to as exploring the meta-learning capabilities of the model, or as in-context learning). From the OpenAI GPT-2 paper, section 3.7, Translation: "We test whether GPT-2 has begun to learn how to translate from one language to another. In order to help it infer that this is the desired task, we condition the language model on a context of example pairs of the format english sentence = french sentence and then after a final prompt of english sentence =". This, I believe, is an example of priming? Since with transformers there is no concept of a hidden state being passed from one step to another, we provide the model with an input sequence of tokens of up to 1024 length and the model will output up to 1024 x vocab size softmax activations, where each will encode the probability of the subsequent word (following the word at a given position). So priming would be just constructing the input sequence in a specific manner? If I am reading this correctly, priming would refer to the act of passing a sequence into the model expecting that the model's meta-learning capability would affect its output? In this sense, for priming, we are always limited to a sequence of < 1024 tokens (where 1024 needs to suffice for the priming sequence and the output)? Passing the past parameter just saves on compute; it provides the model with the key/value pairs calculated at earlier steps of text generation, but there is nothing else magical happening there? And last but not least - are such questions okay to ask? Meaning, this would certainly qualify as a beginner question, but it doesn't directly relate to the library I suppose. I really appreciate the amazing resource you put out there, the transformers library along with the wonderful documentation; in fact I am blown away by how awesome it is. I just would like to make sure I am not bothering you with my questions and am using the forums in a way that they were intended to be used. Thank you very much!
2020-07-23T16:17:32Z
[ { "date": "2020-07-24T22:07:41Z", "reply": "If I am reading this correctly, priming would refer to the act of passing a sequence into the model expecting that the model's meta-learning capability would affect its output? You've nailed it on the head. When talking about a left-to-right model like GPT-N, priming is just prepending text that is similar in some way to the text you are predicting, which often helps the model to predict it correctly. Incidentally, this is the thing that GPT-3 seems to be especially good at. There seems to be something about language models that we don't completely understand that can make priming a surprisingly effective meta-learning technique, especially when the models get really big. See this Twitter thread for some examples. And yes, this kind of question is perfect for the forums. However, I'd say Research is probably a better category fit since this is more about general NLP/research talk rather than the HF libraries." }, { "date": "2020-07-25T07:32:55Z", "reply": "Thank you very much for your answer Joe, really appreciate it! And thank you for linking to the Twitter thread - super interesting. Will keep note of the Research category going forward!" }, { "date": "2020-07-25T20:11:19Z", "reply": "Just as an informative comment: priming is actually a term from psychology and perhaps peculiarly psycholinguistics. I am doing some research into this. An example of priming is: if you show participants a whole number of sentences, and most of those use a passive construction (“The apple was eaten by the man.”), and then show them a picture and ask them to describe it, and they describe what they see with a passive, then they were (unconsciously) primed by the earlier texts." }, { "date": "2020-07-27T07:40:24Z", "reply": "They used the term 'condition', but it's of course not truly conditional compared to methods like CTRL and PPLM. So referring to it as 'priming' might be a great choice." }, { "date": "2020-07-27T13:17:40Z", "reply": "Personally I use them interchangeably in this context. I have a slight preference for “priming” because IMO it's more evocative in communicating what you're trying to accomplish with this particular kind of conditioning, but I think either works (conditioning is probably more common?)." } ]
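To make the mechanics concrete, here is a small sketch of the kind of priming described above: a few English = French pairs are simply prepended to the prompt and the model continues the pattern. The example pairs are made up, and GPT-2 small will not translate well; the point is only that priming is nothing more than constructing the input sequence this way.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# "Priming" = prepending examples in the format of the task we want the model to continue
prompt = (
    "English: I like cookies. French: J'aime les biscuits.\n"
    "English: The weather is nice. French: Il fait beau.\n"
    "English: Where is the train station? French:"
)
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

output_ids = model.generate(
    **inputs,
    max_length=prompt_len + 20,          # prompt and continuation share the 1024-token context
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0][prompt_len:]))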
Attaching TF models to CNN features
https://discuss.huggingface.co/t/attaching-tf-models-to-cnn-features/391
1
444
This may not be entirely about NLP. I am working on image captioning and learning textual representations from CNN features. The idea is to train the CNN using captioning, so I tried to use the GPT-2 tokenizer, but I had to create the captioning model from scratch. Is there any way to attach TF Transformer models to other CV applications for better learning? My VirTex implementation in Keras
2020-07-20T08:43:42Z
[ { "date": "2020-07-24T18:59:52Z", "reply": "Hey @surajp. Sorry, I'm not familiar enough with VirTex to give a concrete response here. But our TF models are compatible with TF2/Keras, so you should be able to include them in your TF graph. If you're having trouble with this, please post some more specifics and I'll see if we can be of any more help." } ]
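Building on the reply above, here is a rough sketch (not VirTex, just an illustration that the HF TF models behave like Keras layers, assuming a recent transformers release) of fusing CNN image features with a pretrained text encoder inside a tf.keras model; the pooling, projection sizes, and scoring head are arbitrary choices:

import tensorflow as tf
from transformers import TFAutoModel

class ImageTextScorer(tf.keras.Model):
    """Toy image-caption matching model: CNN features + transformer [CLS] vector."""

    def __init__(self):
        super().__init__()
        self.cnn = tf.keras.applications.ResNet50(include_top=False, pooling="avg")
        self.text_encoder = TFAutoModel.from_pretrained("bert-base-uncased")
        self.img_proj = tf.keras.layers.Dense(256)
        self.txt_proj = tf.keras.layers.Dense(256)
        self.head = tf.keras.layers.Dense(1)

    def call(self, inputs):
        images, input_ids, attention_mask = inputs
        img_feat = self.img_proj(self.cnn(images))                          # (batch, 256)
        txt_out = self.text_encoder(input_ids, attention_mask=attention_mask)
        txt_feat = self.txt_proj(txt_out.last_hidden_state[:, 0])           # [CLS] token
        return self.head(tf.concat([img_feat, txt_feat], axis=-1))          # match score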
Is it reasonable to pretrain by masking certain dimensions of each vector, rather than the individual token?
https://discuss.huggingface.co/t/is-it-reasonableto-pretrain-by-masking-certain-dimensions-of-each-vector-rather-than-the-individual-token/290
3
459
Let's say I want to adapt Transformers to a non-NLP task, like financial data or a multiplayer online video game. You can imagine that the high-dimensional vector of each input will contain information that pertains to different events. For example, the first 10 dimensions might describe player 1, and the next 10 dimensions might describe player 2. If I were to extend the pre-training exercise to these non-NLP tasks, I think it could be reasonable to mask the actions of certain players in order to predict back their actions. This would essentially involve masking certain dimensions of a vector rather than masking the entire "input". My question is: is this reasonable to do, and is this even the right approach?
2020-07-15T03:11:56Z
[ { "date": "2020-07-15T14:55:13Z", "reply": "I don't know what kind of input embeddings you'd be working with in that case, but the problem you'll probably run into is that latent embeddings are usually not as nicely disentangled as you've described here. We sometimes talk about them as if they were for illustrative purposes, but in reality your description of “player 1” is probably distributed across the entire vector rather than existing entirely in some subset of vector positions." }, { "date": "2020-07-16T00:25:38Z", "reply": "Hi Joeddav, naive question here, but rather than learned embeddings like in the case of words, if I directly create the input vector such that player 1's actions can be described via dimensions 1-5, player 2's actions are described via dimensions 6-10, etc., then does that mean that each player's information is disentangled by design? If so, would my question become a reasonable one, or is there another way to encode multi-player information in a Transformers model? Thanks so much" }, { "date": "2020-07-21T19:52:50Z", "reply": "Sure, that sounds like a reasonable thing to try. Let us know how it goes – I'm sure we'd all learn something" } ]
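A minimal sketch of the masking scheme being discussed (pure PyTorch, with made-up shapes and a hypothetical model): zero out one player's block of dimensions at random timesteps and train the network to reconstruct exactly those entries, an MSE analogue of masked-token prediction.

import torch
import torch.nn.functional as F

batch, seq_len, n_players, feat_per_player = 8, 32, 2, 10
x = torch.randn(batch, seq_len, n_players * feat_per_player)

# Choose ~15% of timesteps and hide player 1's block of dimensions there
mask = torch.zeros_like(x, dtype=torch.bool)
masked_steps = torch.rand(batch, seq_len) < 0.15
mask[..., :feat_per_player] = masked_steps.unsqueeze(-1)

x_corrupted = x.masked_fill(mask, 0.0)

# model = SomeTransformerEncoder(...)        # hypothetical encoder over the corrupted input
# recon = model(x_corrupted)                 # same shape as x
# loss = F.mse_loss(recon[mask], x[mask])    # reconstruct only the masked entries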
Print All Tokens Over a Certain Probability Threshold
https://discuss.huggingface.co/t/print-all-tokens-over-a-certain-probability-threshold/329
3
1,091
I am curious to know how I would do this using GPT-2. Thank you for your time!
2020-07-16T20:12:49Z
[ { "date": "2020-07-20T14:19:41Z", "reply": "Hi there, here is a quick way to do this for the last token on a given sentence in PyTorch:from transformers import GPT2LMHeadModel, GPT2Tokenizer\nimport torch\nimport torch.nn.functional as F\n\n# Load model and tokenizer\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\n\n# Input example\ninput_txt = \"Hello, my name is Sylvain.\"\ninputs = tokenizer(input_txt, return_tensors='pt')\noutputs = model(**inputs)\n\n# If you are not on a source install, replace outputs.logits by outputs[0]\npredictions = F.softmax(outputs.logits, dim=-1)\n\nthresh = 1e-2\nvocab_size = predictions.shape[-1]\n\n# Predictions has one sentence (index 0) and we look at the last token predicted (-1)\nidxs = torch.arange(0, vocab_size)[predictions[0][-1] >= thresh]\nprint(tokenizer.convert_ids_to_tokens(idxs))" }, { "date": "2020-07-21T00:57:53Z", "reply": "I can’t thank you enough for your detailed response! I apologize if I am asking too much of this forum, but given I have this question I am sure others would benefit from an answer as well.While on this topic, I wonder what steps would need to be taken to expand this function for the output to include phrases in addition to words." }, { "date": "2020-07-21T16:44:52Z", "reply": "I don’t think the generate method can return the probabilities, so you might have to tweak the generate function to return them." } ]
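For the follow-up in this thread about phrases: newer versions of transformers let generate return per-step scores, so the probability of a generated phrase can be accumulated step by step. A hedged sketch (the return_dict_in_generate/output_scores flags did not exist in mid-2020 releases, so this assumes a more recent version):

import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my name is Sylvain.", return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

out = model.generate(
    **inputs,
    max_length=prompt_len + 5,
    do_sample=False,
    return_dict_in_generate=True,
    output_scores=True,
    pad_token_id=tokenizer.eos_token_id,
)

# out.scores holds one logits tensor per generated step
phrase_ids = out.sequences[0, prompt_len:]
step_probs = [F.softmax(s, dim=-1)[0, tok].item() for s, tok in zip(out.scores, phrase_ids)]
print(tokenizer.decode(phrase_ids), torch.tensor(step_probs).prod().item())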
Building a custom Squad 2.0 style dataset, is it worth it?
https://discuss.huggingface.co/t/building-a-custom-squad-2-0-style-dataset-is-it-worth-it/398
3
987
Was wondering what the experts think and whether this is a sensible approach. The pre-trained SQuAD 2.0 models perform well in a custom domain, but could be greatly improved, given that the target domain is rather narrow and the vocabulary is different but overlapping. Do you think it is worth obtaining a custom dataset, say 1000 observations, using the same methodology as SQuAD v2.0 but derived from data of the target domain? Are 1000 observations enough for fine-tuning?
2020-07-20T15:04:28Z
[ { "date": "2020-07-20T15:30:50Z", "reply": "Hi @swayson, not an expert here, but fine-tuning on your domain should give better results. I can't comment on whether 1000 examples will be enough or not; you'll probably need to experiment. Also have a look at these question generation models. You can try to create synthetic QA corpora using these models. Synthetic QA corpora have been shown to improve results for QA." }, { "date": "2020-07-20T15:50:54Z", "reply": "Thank you @valhalla; I am going to give the synthetic QA models a shot and see if I can get some improvements." }, { "date": "2020-07-20T16:06:43Z", "reply": "Here's a relevant paper. See Table 2 for the synthetic QA results." } ]
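For anyone building such a dataset, the SQuAD 2.0 JSON layout is straightforward to reproduce for a custom domain; a minimal sketch (the context, questions, and file name are invented):

import json

custom_squad = {
    "version": "v2.0",
    "data": [{
        "title": "warranty_document_001",
        "paragraphs": [{
            "context": "The warranty covers manufacturing defects for 24 months.",
            "qas": [
                {"id": "q1", "question": "How long does the warranty last?",
                 "is_impossible": False,
                 "answers": [{"text": "24 months", "answer_start": 46}]},
                {"id": "q2", "question": "Does the warranty cover water damage?",
                 "is_impossible": True, "answers": []},
            ],
        }],
    }],
}

with open("custom_squad_v2.json", "w") as f:
    json.dump(custom_squad, f)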
State of the art technique for initializing Embedding Matrix?
https://discuss.huggingface.co/t/state-of-the-art-technique-for-initializing-embedding-matrix/326
3
4,774
What are your thoughts on the state-of-the-art technique for initializing embedding weight matrices? Currently, PyTorch uses a normal distribution to initialize these. Does using Kaiming init make more sense?
2020-07-16T17:55:52Z
[ { "date": "2020-07-16T18:40:21Z", "reply": "From what I remember, Transformer modules should use Xavier init by default. I don't remember the reason why, though, nor whether Kaiming is a better choice." }, { "date": "2020-07-17T09:46:34Z", "reply": "Transformer uses Xavier. So using Kaiming init for the embedding matrix is preferred for RNNs, and in the case of a Transformer, Xavier is preferred? Am I correct to say this?" }, { "date": "2020-07-19T12:18:52Z", "reply": "Based on BERT's init_weights, BERT initializes linear and embedding layers from a normal distribution with mean 0 and std 0.02. BTW, I tried Kaiming (the PyTorch default initialization) on the linear and embedding layers on my toy task with a 2-layer transformer, and it gives slightly better performance. I won't say it is surely better than Xavier, but it is definitely worth trying." } ]
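For reference, the three initializations discussed in this thread can be compared directly in PyTorch; pick one of the alternatives below (they are applied to the same table here only for illustration):

import torch.nn as nn

emb = nn.Embedding(30522, 768)   # PyTorch's default embedding init is N(0, 1)

# BERT-style: normal with a small standard deviation
nn.init.normal_(emb.weight, mean=0.0, std=0.02)

# Original-Transformer-style: Xavier
nn.init.xavier_uniform_(emb.weight)

# Kaiming, as asked about above
nn.init.kaiming_normal_(emb.weight, nonlinearity="relu")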
Modern NLP for "Economics of Innovation" (Open Research Project using Patent Data)
https://discuss.huggingface.co/t/modern-nlp-for-economics-of-innovation-open-research-project-using-patent-data/235
4
752
Hi all, Suraj and I started discussing a potential research project and he suggested I make a thread here to discuss. As a quick intro, I am an NLP hobbyist and consumer of NLP research, and Suraj is a software developer with a keen interest in NLP. From my perspective, here are a few goals of the project: (1) upgrade our NLP skills in general, (2) make an immediate contribution to an applied field by introducing modern NLP methods, and (3) dig deeper into NLP research and potentially make a minor advancement. My idea was to introduce the Innovation Studies (or Economics of Innovation) field to modern NLP methods. I suggested this for a few reasons. First, it is generally accepted that the long-run economic growth rate, and standard of living, is driven by innovation. And second, there are about 8 million published US patents - that are freely available - that we can use as data. I am open to any directions to take, but here are a few starting points. Patent classification: I can see two reasons for improving patent classifications. One is for innovation researchers to use the improved patent classes for their research - rather than relying on officially listed patent classes. And two would be for actual innovation policy. One consensus in the field is that basic research is drastically under-invested in, since companies do not directly benefit from the large spillovers of basic research. So the rate of return on basic research is much higher for society than for any single company. However, when governments try to encourage basic research through incentivizing these types of patents, inventors can try to "cheat the system" by re-labeling their patents. Economists Ufuk Akcigit and Stefanie Stantcheva [1] say "Going forward, finding a feasible way to differentiate between basic and applied research is essential to better innovation tax policies." Estimating the "impact" of a patent: As far as I know, the vast majority of innovation studies that use patent data use the number of citations as a proxy for the impact of a patent. So improving the "impact score" of a patent might help many innovation researchers. Professor Bryan Kelly et al. [2] use a very clever modification of TF-IDF to find similarity scores between patents. A patent's impact is then estimated by finding the difference in similarity scores between the target patent and all previous patents, and the target patent and all future patents. This makes sense to me, and is well explained in their paper. However, I do think that using other methods of finding patent embeddings may be worth investigating - like using AllenAI's SPECTER document embedding approach. I'd also like to look into deep graph networks to see if they can help produce an estimate of the impact of a patent, without using citations. Patent idea generation: I think it would be cool to generate a patent abstract (or idea) either unconditionally, or conditioned on a sentence that would guide the generation. There are lots of directions we could pursue with this. Anyway, sorry for the long post. Please let us know if you have ideas, suggestions, would like to participate, etc. [1] https://www.nber.org/chapters/c14428.pdf [2] https://www.nber.org/papers/w25266.pdf
2020-07-12T15:40:52Z
[ { "date": "2020-07-12T15:50:04Z", "reply": "@VictorSanh, @joeddav, @yjernite we would love to hear your thoughts on this" }, { "date": "2020-07-13T20:25:53Z", "reply": "Fun idea – thanks for sharing! I think any of these directions would make for a fun and educational project. Some thoughts/questions on each: 1. Do you have a good dataset with applied vs. basic annotations? If so, this should be pretty easy. If not, one direction would be to explore semi-supervised learning (Seb Ruder has a good blog post on it). 2. This seems more interesting than #1 to me, but try to be careful with fairness and bias here. The model could easily learn to associate the race or gender of the patent holders or the prestige of the organizations that they come from with the patent's impact, since (I assume) these factors will correlate with citations. Removing bias completely won't be possible, but it will add legitimacy to your project if you are careful & transparent about it. You wouldn't want a scenario where companies use your tool to determine the value of their employees' patents, which would likely end up disproportionately rewarding men over women, for example. 3. After a quick google I found this paper, so I'd use that as a starting point and see what you could do that would be fun or interesting on top of that." }, { "date": "2020-07-14T03:28:48Z", "reply": "Thanks for the feedback! 1. I believe there are a few small datasets that clearly label applied vs basic research for patents. There are also the official patent classes, which could help inform classification - but they do not contain clear applied vs basic research distinctions. However, ideally, one would create a classifier which would distinguish between many hundreds of classes. This could allow policy makers to take advantage of the fact that within applied or basic research, some areas would have higher social returns, or relate to a specific mission, like climate change. And thanks for that link! 2. My original plan was to only use publication dates, and patent abstract and description text, for estimating impact. This makes the task more challenging but I believe it would remove as much bias as possible. I appreciate the recommendation, I will keep that in mind. Edit: To clarify, as far as I understand, the typical approach to analyzing the patent/innovation space is to create a network of individual inventors, institutions, and patent IDs, then link these nodes via citations, authorship and affiliations. Whereas I propose to ignore all of the above and only focus on the content of patents. This could help decrease the influence of the biases associated with citations, and increase the information associated with each patent. This latter point assumes that there is more information about a patent in the language embedding space than in the citation network space. To me, it's a fair assumption, but I have no evidence yet. 3. Yup! That's a cool paper - and I agree, a great starting point. Again, thanks for the feedback. Once Suraj and I decide on a starting point we can update this thread" }, { "date": "2020-07-14T14:39:11Z", "reply": "Hi @joeddav, thanks for the feedback. This also seems important to me. Fairness will be of utmost concern; no private info (race, gender) will be visible to the model. And I think the embeddings should also help in discoverability, i.e. finding concepts/patents/papers which are similar to a particular paper. Generation is always fun so will definitely start from there" } ]
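As a toy illustration of the Kelly et al. idea sketched in the post (impact as the gap between similarity to later filings and similarity to earlier ones), here is a crude TF-IDF version; the patent texts and the scoring are invented and far simpler than the paper's actual measure:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Patents ordered by filing date (made-up abstracts)
patents = [
    "method for encoding digital audio signals using subband filtering",
    "apparatus for compressing audio with psychoacoustic masking models",
    "neural network based audio codec trained end to end",
]

tfidf = TfidfVectorizer(stop_words="english")
sims = cosine_similarity(tfidf.fit_transform(patents))

# Crude impact proxy for the middle patent: similar to what came after,
# dissimilar from what came before
impact = sims[1, 2] - sims[1, 0]
print(round(impact, 3))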
ACL 2020 - Some personal highlights - Victor
https://discuss.huggingface.co/t/acl-2020-some-personal-highlights-victor/202
4
1,356
Hey! I had such a blast at ACL 2020 this week! So many cool works, and lots of very interesting discussions both in the chat and in the Zoom Q&A sessions! Here's a pick of 3 of my highlights (they are extremely biased towards what I'm currently interested in): (1) Inherent Disagreements in Human Textual Inferences by Ellie Pavlick, Tom Kwiatkowski. Natural Language Inference (sometimes referred to as textual entailment) has become fundamental in evaluating language understanding and semantics. The central question of this paper is "what should we use as ground truth labels for textual inference?" The authors show that the apparent "annotation noise" often results from a multi-modality among the annotators' labels. They discuss the implication of this uncertainty and argue for a refined evaluation that better captures the diversity of human judgments. (2) Unsupervised Domain Clusters in Pretrained Language Models by Roee Aharoni, Yoav Goldberg. The authors propose a "data-driven" approach to define what a domain is in NLP and to select in-domain data. They show that large pre-trained language models are able to capture these domains in an unsupervised way and leverage this insight to select in-domain data to train neural machine translation models. (3) Syntactic Data Augmentation Increases Robustness to Inference Heuristics by Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen. Natural Language Inference models fine-tuned on top of models like BERT show high accuracy on standard test datasets but fail on challenge sets. The authors propose a simple syntactic data augmentation procedure to augment the standard training set with up to a thousand examples. Results show great improvement (and generalization) by just exposing the model to these controlled syntactic examples, supporting the hypothesis that BERT contains knowledge that simply needs to be "activated". Failure cases (like passives) support the idea that there is also knowledge pre-trained BERT is not aware of. How about you? Did any work change your perspective?
2020-07-10T18:46:20Z
[ { "date": "2020-07-11T23:39:11Z", "reply": "Hi @VictorSanh, thanks so much for your list. As the conference is overwhelming with content, I did not see these papers at all. In paper (3), syntactic augmentation is very interesting since (a) augmentation is very successful in Computer Vision (CV), but in NLP augmentation is much more non-obvious (regarding how to do it) and maybe sensitive to downstream tasks (it is more robust in CV), and (b) in Section 3 of the paper, the authors state that the augmented examples are noisy: We did not attempt to ensure the naturalness of the generated examples; e.g., in the INVERSION transformation, The carriage made a lot of noise was transformed into A lot of noise made the carriage. In addition, the labels of the augmentation dataset were somewhat noisy; e.g., we assumed that INVERSION changed the correct label from entailment to neutral, but this is not necessarily the case (if The buyer met the seller, it is likely that The seller met the buyer). As we show below, this noise did not hurt accuracy on MNLI. This is very interesting to me (in CV it's often intuitively clear which augmentation is noiseless / noisy), so I assume that the noise ratio is minimal, since too much noise should degrade the overall performance … Further, in CV we also have soft-label augmentations like MixUp and CutMix, so maybe this similar area in NLP also has more potential. (On Kaggle we also tried our own (non-published) augmentations for NLP with similar ideas, e.g. in the recent Jigsaw toxic classification competition, where a paragraph of comment text is given as an example, we can combine two paragraphs together (with the Toxic + Neutral = Toxic label formula), or dynamically shuffle sentences within the given paragraph, where the toxicity degree should be invariant under this operation.)" }, { "date": "2020-07-13T15:18:51Z", "reply": "That's very interesting! I agree, automatic data augmentation is still something somewhat mysterious to me in NLP since it is way less controllable than in vision. It seems fine to me that the resulting examples are extremely noisy (I saw some works in vision with perturbed images where the original label becomes quite ambiguous). There might be a balance to find: you want the model to learn through the noise but also not to be over-confident when you have ambiguous examples… Do you have guidelines you can share on data augmentation in NLP? In which cases it works? Why it works? Or a survey?" }, { "date": "2020-07-14T05:34:43Z", "reply": "Hi Victor, I haven't seen a guideline on NLP augmentation before. Just want to note two potential augmentation libraries. As you may know, in vision we have a lot of augmentation libraries, but one which really stands out is albumentations due to its speed and variety (the de facto choice for Kaggle competitors). Recently, a creative Kaggler applied the basic Albumentations class to the NLP task of Jigsaw's multilingual toxic classification (of course with a HuggingFace model): https://www.kaggle.com/shonenkov/nlp-albumentations. I believe we can extend this class in the future. Another one worth mentioning is nlpaug (https://github.com/makcedward/nlpaug), where we can augment with simpler ideas like synonym/antonym word swapping via word suggestions from NLTK and BERT. BTW, does your team also attend ICML this week?" }, { "date": "2020-07-14T14:24:44Z", "reply": "Interesting! Thanks for the pointer, I'll definitely check this out! No, unfortunately, no one in the team is at ICML this week." } ]
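The Kaggle-style tricks mentioned above (label-preserving sentence shuffling, and concatenating a toxic comment with a neutral one so the pair stays toxic) are easy to sketch in plain Python; this is only an illustration of those two ideas, not a published recipe:

import random
from typing import Tuple

def shuffle_sentences(comment: str) -> str:
    """Toxicity should be invariant to sentence order within a comment."""
    sentences = [s.strip() for s in comment.split(".") if s.strip()]
    random.shuffle(sentences)
    return ". ".join(sentences) + "."

def concat_toxic_neutral(toxic: str, neutral: str) -> Tuple[str, int]:
    """Toxic + neutral = still toxic, so the combined example keeps label 1."""
    parts = [toxic, neutral]
    random.shuffle(parts)
    return " ".join(parts), 1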
ICLR 2020 highlights - Yacine
https://discuss.huggingface.co/t/iclr-2020-highlights-yacine/37
1
1,737
I took some notes on some ICLR 2020 papers that seemed most relevant to my research topics: information retrieval for QA, model architectures and analysis, and text generation. You can find them here: docs.google.com, "ICLR papers". Preview: "Transformer architectures / pretraining losses. Lite Transformer with Long-Short Range Attention: Long-Short Range Attention uses smaller-dimension global attention in parallel with convolutions to capture local context. The approach is more..."
2020-07-07T17:44:03Z
[ { "date": "2020-07-11T23:59:28Z", "reply": "Thanks for the great summary notes @yjernite. It's a pity that there are no ICLR video presentations on SlidesLive now (maybe they deleted them 1-2 weeks after the conference ended?) … Some of them can still be found on YouTube though" } ]
About the Research category
https://discuss.huggingface.co/t/about-the-research-category/26
2
441
Use this category for any research question or to coordinate on a project with other users.
2020-07-07T16:00:19Z
[ { "date": "2020-07-11T23:09:45Z", "reply": "Thanks so much for having this category! Love it." } ]
ACL 2020 highlights – Canwen
https://discuss.huggingface.co/t/acl-2020-highlights-canwen/183
1
911
The original Twitter thread is here. The selection criterion here is being interesting; not an exhaustive list. Let Me Choose: From Verbal Context to Font Selection (2020.acl-main.762.pdf): Bridging text with its font! Very interesting application paper from Adobe. They even have emojis play a role in it! Even fonts have their semantics and sentiments. Contextualized Weak Supervision for Text Classification (2020.acl-main.30.pdf): This paper cleverly introduces word disambiguation into weakly supervised text classification, and the method for data augmentation is also great! Human Attention Maps for Text Classification: Do Humans and Neural Networks Focus on the Same Words? (2020.acl-main.419.pdf): This paper answers the interesting question of whether machines read text just like us humans. Though the conclusion may not be surprising, it opens a new path to understanding attention. Pre-train and Plug-in: Flexible Conditional Text Generation with Variational Auto-Encoders (2020.acl-main.23.pdf): Sorry for self-promoting, but this paper is actually very interesting. The flexible framework can be extended to many more fields including text style transfer, image generation, voice conversion, etc.
2020-07-10T14:25:10Z
[ { "date": "2020-07-10T17:01:08Z", "reply": "I like the human attention maps one. It's interesting that humans have much more peaked distributions, focusing on a few key words, whereas the ML system attends to a larger set of words with varying weights." } ]
ACL 2020 highlights - Yacine
https://discuss.huggingface.co/t/acl-2020-highlights-yacine/186
0
1,398
These are some of the papers I discovered at this year's ACL conference. I focused on three main themes: Model Analysis, (Conditional) Text Generation, and Society & Ethics and NLP. I tried to provide a short summary for each of the papers outlining the methods and contributions: please refer to the papers themselves for more details, they are all well worth the read! I was particularly impressed by the depth of thinking in a lot of the papers accepted to the Ethics & NLP track, and would love to have further conversations about them here! Link to the Google Docs version.
Model Analysis
Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? (https://virtual.acl2020.org/paper_main.491.html) This work proposes an experimental setup based on asking humans to simulate model behaviour to evaluate how much insight various visualization and explainability methods actually give users. In the first experiments they proposed, users are asked to predict model outputs, then shown explanations for these outputs provided by automated tools. They are then asked to predict outputs for a new set of examples, and the usefulness of the automatic explanation tools is measured by how much their accuracy improves in this second stage. Another experiment shows users model outputs and explanations, and asks them to predict the model behavior on counterfactual examples where the input is perturbed in a targeted fashion. The authors show that the measured accuracy improvements give more interpretable and reliable information about the quality of the explanation tool than subjective Likert-scale judgments. Replicating this study at a larger scale seems a promising way to evaluate explanation tools.
Beyond Accuracy: Behavioral Testing of NLP Models with CheckList (https://virtual.acl2020.org/paper_main.442.html) This paper proposes a framework to develop checklists: suites of tests that can be applied to various NLP models to check for "bugs". One significant difference between the proposed checklist approach and the benchmarks that have been guiding the progress of the field is that the former is more targeted: instead of reporting the average performance of a model across a large test set created through crawling or crowd-sourcing, it proposes to come up with a set of simple unit tests corresponding to use cases we want to ensure our systems succeed at before they can be deployed and used. In order to make this process systematic and affordable, one important contribution of this work is a set of tools which allow practitioners to easily and efficiently design such testing suites by providing an intuitive UI and leveraging models to suggest likely test examples. Allowing people to easily develop, share and aggregate these test suites has the potential to significantly increase user trust in NLP models.
Conditional Generation
Asking and Answering Questions to Evaluate the Factual Consistency of Summaries (https://virtual.acl2020.org/paper_main.450.html) and FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization (https://virtual.acl2020.org/paper_main.454.html) These two concurrent papers take a similar approach to evaluating the factuality of generated abstractive summaries of news text, but are complementary in their implementation and analysis. The basic idea is that we can check whether a summary conveys information that is faithful to the source material by checking that a question answering system will give similar answers when using either as the supporting document. The questions are generated through a two-step process: first, use a heuristic to identify spans in the summary we want to check for accuracy; then, use an automatic question generation system to obtain questions whose answers should be those spans. If a machine reading comprehension system finds the same answer to the question when reading the article as when reading the summary, the information is probably correct. FEQA and QAGS differ in how they filter the candidate spans and how they compare agreement, but both find that question-based metrics correlate better with human judgments of factuality than other metrics. One caveat, however, is that both methods work better on CNN/DM than on XSum, which is more abstractive in nature. Finally, QAGS note that in addition to being used as an aggregated automatic metric, these methods can be useful for visualizing specific examples in human-in-the-loop settings.
On Faithfulness and Factuality in Abstractive Summarization (https://virtual.acl2020.org/paper_main.173.html) This paper further investigates the state of the art for the factuality/faithfulness of abstractive summarization by providing a large-scale human evaluation of the hallucinations produced by recently published systems. This work classifies the hallucinations into an intrinsic (model misunderstands the input) and extrinsic (model invents completely new facts) category. Note that in this setting, factual information is still considered to be a hallucination if it's not in the input. The paper focuses on XSum (one-sentence summaries, abstractive in nature), and provides annotations for the output of models published up to 2019. As a result, large-scale pre-trained seq2seq models (T5, BART) are missing. NLI can be used for summary selection to improve faithfulness at the cost of ROUGE. The annotations are available at: https://github.com/google-research-datasets/xsum_hallucination_annotations
Exploring Content Selection in Summarization of Novel Chapters (https://virtual.acl2020.org/paper_main.453.html) The authors take some steps towards training a book chapter summarization model: namely, they gather summaries of public domain book chapters from study guide websites, use these to align book sentences to summary sentences using IDF-weighted ROUGE (which seems to work better than plain ROUGE, METEOR, or BERT; it would be interesting to see BLEURT/BERTScore results), and train an RNN-based extractive summarization system using these noisy labels. The authors still have to release their pre-processed data and (hopefully) noisy labels, but this is a nice foray into long-input summarization outside of the news/scientific article domain. Dataset information: about 8,000 chapters from Project Gutenberg books with 2-5 summaries per chapter gathered from study guide websites (licensing!). Chapters are ~5,200 words, summaries are ~370 words. Script to re-construct the dataset at https://github.com/manestay/novel-chapter-dataset
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks (https://www.mitpressjournals.org/doi/pdf/10.1162/tacl_a_00313, https://virtual.acl2020.org/paper_tacl.1849.html) The paper explores how we can use pre-trained encoder-only and decoder-only models to warm-start an encoder-decoder model. While their method still lags behind fully pre-trained encoder-decoder models (R1 on XSum for their method: 41.45 vs BART: 45.14), they show some improvements over the baselines by initializing encoder and decoder weights with RoBERTa checkpoints and randomly initializing the cross-attention. The model can even be made more memory-efficient by sharing encoder and decoder weights.
Improved Natural Language Generation via Loss Truncation (https://virtual.acl2020.org/paper_main.66.html) Not specific to conditional generation. The authors argue that log-likelihood as a loss is not robust to noise since it needs to put probability mass on every seen example (including outliers). Instead, our primary aim should be to ensure that generations from the model are indistinguishable from natural language data. The authors show that a truncated log-likelihood loss can serve as an upper bound for a measure of distinguishability. Generations from the full output distribution of a model trained with the truncated loss are rated better than top-k or top-p sampling from a model trained with the full loss when evaluated with HUSE.
Society & Ethics and NLP
Social Biases in NLP Models as Barriers for Persons with Disabilities (https://virtual.acl2020.org/paper_main.487.html) The authors consider the effect of the mention of disability on sentiment and toxicity classifiers, and the subsequent impact on the life and discourse of people with disabilities. They show that commonly used classifiers consistently associate mentions of disability with higher toxicity scores and more negative sentiment scores, which would among other things expose people to a heavier burden of content moderation false positives when talking about their own disability. The authors trace these biases in part to BERT model behavior and to dynamics of the training data. The authors also discuss the necessity of involving the affected communities in work about ableism in ML and NLP, and describe which resources from advocacy groups they relied on for their experimental design.
Social Bias Frames: Reasoning about Social and Power Implications of Language (https://virtual.acl2020.org/paper_main.486.html) The authors propose a new annotation scheme for offensive language which goes beyond binary hate speech classification and focuses on the intent of the utterance: the annotators are asked to identify the target group, whether the utterance is an instance of in-group speech, and to explicitly write out the offensive implication. The authors created a fairly large dataset of 45k posts from a variety of sources using these guidelines and fine-tuned a GPT-2 model to predict the frames. The model has some initial success but still leaves room for improvement, especially to generate better explanations. Dataset information (https://homes.cs.washington.edu/~msap/social-bias-frames/): the dataset consists of 45k utterances collected from Twitter, Reddit, as well as known hate sites. 42% are classified as offensive, and only about 5% have the in-group annotation. The total data is made up of 150k training pairs since several posts target multiple groups. The paper provides a section on the ethical considerations of making and using the dataset and describes the demographic makeup of the annotators.
Language (Technology) is Power: A Critical Survey of "Bias" in NLP (https://virtual.acl2020.org/paper_main.485.html) The authors start by reviewing a large number of papers on bias in NLP systems, and find that there is a common lack of rigorous definition or motivation of the problem they aim to address, inconsistencies in the way bias is defined across the field, and a general lack of engagement with relevant sociolinguistic work. As a result, the authors propose a set of recommendations for future work which include: grounding work in the relevant literature outside of NLP that explores the relationships between language and social hierarchies, explicitly stating why the system behaviors described are harmful, in what ways, and to whom, and examining language use in practice by engaging with the lived experiences of members of communities affected by NLP systems. To illustrate how these recommendations can be interpreted in practice, the authors present a case study of African American English. The whole paper is packed with citations to relevant recent work that make up a necessary reading list for NLP practitioners aiming to think more deeply about the societal impact of their work.
Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis? (https://virtual.acl2020.org/paper_main.261.html) The authors analyse an EMNLP 2019 paper on automatic legal sentencing as a case study for learning how to work toward an ethical assessment of works in the field. Specifically, the work relies on previously published recommendations for data statements (Bender and Friedman, 2018) and dataset sheets (Gebru et al., 2018) to ask and answer a number of fundamental questions about the creation and use of the dataset. The paper then describes the concept of dual use, encouraging dataset and algorithm creators to consider whether alternative uses of their work may have nefarious effects. Overall, this paper can be a good introduction to the above-cited works specifically and to ethical considerations about work in NLP more broadly.
2020-07-10T15:55:38Z
[]
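The QAGS/FEQA idea summarized above (ask the same question against the source and the summary and check whether the answers agree) can be prototyped with off-the-shelf pipelines. A rough sketch: the question here is hand-written rather than generated, and exact-match agreement is cruder than the token-level comparison used in the papers.

from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def answers_agree(question: str, source: str, summary: str):
    # Answer the same question from the source article and from the summary
    a_source = qa(question=question, context=source)["answer"]
    a_summary = qa(question=question, context=summary)["answer"]
    return a_source, a_summary, a_source.strip().lower() == a_summary.strip().lower()

source = "The company reported a 12% rise in quarterly profit on Tuesday, driven by cloud sales."
summary = "Quarterly profit rose 12%, driven by cloud sales."
print(answers_agree("By how much did quarterly profit rise?", source, summary))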
Paper Discussion: Weight Poisoning Attacks on Pre-trained Models
https://discuss.huggingface.co/t/paper-discussion-weight-poisoning-attacks-on-pre-trained-models/129
0
1,019
Copied over from GitHub discussions. See the original discussion here. Hi everyone, for this Science Tuesday I wrote up a quick discussion of a great paper from Kurita et al. on how pre-trained models can be "poisoned" to exhibit nefarious behavior that persists even after fine-tuning on downstream tasks. Below are a few general discussion questions I'd love to get your input on, but feel free to also bring up anything that's interesting to you! Paper: Weight Poisoning Attacks on Pre-trained Models. Authors: Keita Kurita, Paul Michel, Graham Neubig. Presenter: Joe Davison. Presentation: Colab notebook/post. Discussion questions: 1. The authors give a brute-force method for identifying trigger words by simply evaluating the LFR (label flip rate) for every word in a corpus. Words with very high LFRs can then be inspected to see if they make sense, or if they might be engineered triggers. Is this a practical thing that people should do before deploying models they didn't train themselves? Is there another way that words with anomalous effects on a model could be identified? How else could poisoned weights be identified? 2. Is it safe for companies with features like spam and toxicity detection to use pre-trained models from the community in deployed applications? 3. When does it make sense for an attacker to try to disseminate a poisoned model, and when is it smarter to attack an existing model by creating adversarial examples? 4. Do you buy the authors' explanation of why the method doesn't do as well on spam classification? If not, why do you think it is? 5. The authors say that ignoring second-order information in "preliminary experiments" did not degrade performance (end of section 3.1). For the people who are better at math than me, do you buy this? Should they have tried to do some Hessian approximation to more extensively test whether first-order information is sufficient?
2020-07-08T20:19:26Z
[]
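For discussion question 1, the brute-force trigger scan is simple to express; a sketch with a hypothetical classify callable (returning 0/1 labels for a list of texts) standing in for whatever fine-tuned model is being audited, and hypothetical vocab/dev_texts inputs:

def label_flip_rate(candidate_word, texts, classify):
    """Fraction of originally-negative examples that flip to positive
    when candidate_word is injected at the front of the text."""
    base = classify(texts)
    flipped = classify([f"{candidate_word} {t}" for t in texts])
    negatives = [i for i, y in enumerate(base) if y == 0]
    if not negatives:
        return 0.0
    return sum(flipped[i] == 1 for i in negatives) / len(negatives)

# Scan a vocabulary and inspect the most suspicious words by hand (vocab, dev_texts
# and classify are placeholders for your own data and model):
# ranked = sorted(vocab, key=lambda w: label_flip_rate(w, dev_texts, classify), reverse=True)
# print(ranked[:50])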