title: string (length 15–185)
link: string (length 53–219)
replies: int64 (0–43)
views: int64 (18–25.9k)
initial_post: string (length 4–20.5k)
initial_post_date: string (length 20)
responses: list (length 0–20)
Domain-specific word similarity problem
https://discuss.huggingface.co/t/domain-specific-word-similarity-problem/29071
2
838
I am trying to create a chatbot-like application (inspired by ChatGPT). The bot should be able to answer questions about our software on the basis of our help documents. I have tried to fine-tune question-answering models like distilbert_base_uncased on fewer than 100 annotated samples, but model performance is not great. Can anyone suggest alternative approaches?
2023-01-06T12:56:33Z
[ { "date": "2023-01-06T19:13:42Z", "reply": "Hi Vikassss,Are you talking about the performance of the Q&A engine applied on a test dataset or more generally after deployment?In the second case, the low performance could be originated in different parts of the pipeline, not only the model. For example:1- what are you using as the retriever?2- what is your ranking strategy for the context?3- same question about the reader?If your fine-tuned model is “forced” to find answers in non-optimal ranked contexts, it will fail.Could you please tell us more about your evaluation methodology?ThanksBest RegardsJerome" }, { "date": "2023-07-19T00:59:55Z", "reply": "The most concrete suggestion I have would be to fine-tune the embeddings model on larger samples. For domain-specific use cases it’ll be really important to give as much of the domain-specific context as possible. Also, for my learning, what service are you using to fine-tune distilbert_base_uncased?" } ]
Question about loss calculation on LLM finetuning
https://discuss.huggingface.co/t/question-about-loss-calculation-on-llm-finetuning/46825
0
6,758
When fine-tuning dialogue models (Alpaca, Vicuna), the common loss calculation is to sum the cross-entropy loss of all tokens in each sequence and divide by the sequence length (similar to the per-token perplexity calculation); the final total loss is then the average of the per-sequence losses. Is it necessary to divide by the sequence length here? Under maximum-likelihood estimation, I would expect the token losses to be summed directly without dividing by the sequence length (equal to the sequence log-probability), with the total loss then obtained by averaging over sequences. A second question: fine-tuning a dialogue model really models the conditional probability of the answer given the instruction. Does conditional maximum likelihood need any special treatment here?
2023-07-14T13:06:10Z
[]
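A small sketch contrasting the normalizations the post asks about, assuming PyTorch and a padding mask; which variant a given training script uses depends on that script.

import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 100)                 # (batch, seq_len, vocab)
labels = torch.randint(0, 100, (2, 5))          # target token ids
mask = torch.tensor([[1., 1., 1., 1., 0.],      # 1 = answer token, 0 = padded / masked-out position
                     [1., 1., 0., 0., 0.]])

tok_loss = F.cross_entropy(logits.transpose(1, 2), labels, reduction="none") * mask

per_seq_mean = (tok_loss.sum(1) / mask.sum(1)).mean()   # divide by sequence length, then average sequences
per_seq_sum  = tok_loss.sum(1).mean()                   # summed log-prob per sequence (pure MLE view)
per_token    = tok_loss.sum() / mask.sum()              # global per-token average (a common default)
print(per_seq_mean, per_seq_sum, per_token)

For the conditional-probability question, the usual treatment is simply to zero the mask over the instruction tokens so that only answer tokens contribute to the loss, which is already what the mask above expresses.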
Abstractive Opinion Summarization with different level of sentiment
https://discuss.huggingface.co/t/abstractive-opinion-summarization-with-different-level-of-sentiment/46179
0
194
Hello, my name is Bhargav. I have a dataset of opinions at different sentiment levels (-1 to +1 in intervals of 0.2), in the form of text from several users on a specific topic, collected from a discussion board. I would like to summarize the opinions of all users at each level using an abstractive summarization technique. I am new to the field of NLP; please suggest a starting point and sample models for the task. Thanks in advance.
2023-07-09T16:02:05Z
[]
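As a starting point, here is a short sketch of the kind the post asks for: group the opinions by sentiment level and summarize each group with an off-the-shelf abstractive model. The checkpoint is one common choice, not the only option, and the data below is hypothetical.

from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

opinions_by_level = {                      # hypothetical data keyed by sentiment level
    0.8: ["I strongly support this because ...", "Great idea, since ..."],
    -0.6: ["This worries me because ...", "I disagree, mainly due to ..."],
}

for level, texts in opinions_by_level.items():
    joined = " ".join(texts)               # very long groups would need chunking to fit the model's input limit
    summary = summarizer(joined, max_length=80, min_length=20, do_sample=False)[0]["summary_text"]
    print(level, summary)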
The Verification of Reasoning by Humans and Artificial Intelligence Systems
https://discuss.huggingface.co/t/the-verification-of-reasoning-by-humans-and-artificial-intelligence-systems/46053
0
323
Verifying Human Reasoning: Hello. If you haven’t already seen it, I would like to call your attention to the LurchMath project, which has a quick explanatory video. Could AI systems be of use for verifying human reasoning? Could AI systems process documents and, for example, issue informational messages, warnings, or errors with respect to any reasoning steps occurring in the documents? Might this processing encompass mathematical reasoning and other forms of reasoning, e.g., natural-language argumentation? One can also envision the benefits of such tools when authoring or co-authoring documents. AI systems could simultaneously interact as both co-authors in word-processing software and as chatbots in auxiliary chat channels and apps. These AI systems would be useful “bots” for multi-user word-processing scenarios. Verifying reasoning might be but one type of such a useful “bot”. Beyond processing and co-authoring documents, verifying human reasoning processes could also be useful for enabling and enhancing man-machine Socratic dialogue. Verifying Artificial Reasoning: Here are some publications about verifying the reasoning, e.g., chain-of-thought reasoning, of AI systems [1][2][3]. Conclusion: Thank you. I look forward to discussing any of these ideas with you. References: [1] Lightman, Hunter, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. “Let’s Verify Step by Step.” arXiv preprint arXiv:2305.20050 (2023). [2] Poesia, Gabriel, Kanishk Gandhi, Eric Zelikman, and Noah D. Goodman. “Certified Reasoning with Language Models.” arXiv preprint arXiv:2306.04031 (2023). [3] Ling, Zhan, Yunhao Fang, Xuanlin Li, Zhiao Huang, Mingu Lee, Roland Memisevic, and Hao Su. “Deductive Verification of Chain-of-Thought Reasoning.” arXiv preprint arXiv:2306.03872 (2023). P.S.: Please also check out the following PhD or postdoc opportunity, which pertains to ChatGPT for Mathematics: Ph.D./Postdoc position: ChatGPT for Mathematics. Please feel free to share this excellent opportunity with any interested others.
2023-07-07T22:40:01Z
[]
Handle number on ASR
https://discuss.huggingface.co/t/handle-number-on-asr/44181
1
385
Hi there, can anyone help me find a way to handle numbers in a speech recognition model? I’m working on a low-resource language, but sometimes the audio contains numbers that are spoken in French, like 2000, 6000, etc. I’m trying to fine-tune MMS or Wav2Vec2 on Wolof, where the audio may sometimes contain numbers. cc @patrickvonplaten
2023-06-22T12:21:56Z
[ { "date": "2023-07-06T08:35:55Z", "reply": "I usedGitHub - savoirfairelinux/num2words: Modules to convert numbers to words. 42 --> forty-twofor this. Supports many languages, as well as ordinal numbers and years" } ]
Open API standard for open-source LLMs
https://discuss.huggingface.co/t/open-api-standard-for-open-source-llms/45241
0
819
Does anyone have experience/interest in creating API standards? I think we need an API standard for open-source models, like OpenAI Completions/ChatCompletions. This would greatly simplify running benchmarks and evaluations on open-source models, since we wouldn’t need to implement inference code/dialogue templates for each model. Personally, I use a custom-written local OpenAI-compatible API server to run standard benchmarks. This allows me to easily run benchmark code from different sources just by modifying api_base, since almost all benchmarks support OpenAI models.
2023-07-01T09:47:41Z
[]
Have you submitted feedback about ChatGPT?
https://discuss.huggingface.co/t/have-you-submitted-feedback-about-chatgpt/36421
4
609
Hi everyone, I am a PhD candidate from the Australian National University. I am interested in understanding how and why users provide feedback on the outputs of generative AI systems, such as ChatGPT. If you have used ChatGPT AND submitted feedback through the interface (by clicking the up/down arrows, submitting additional open-text feedback, etc.), I would really appreciate it if you could complete this 5–10 minute questionnaire based on your experience. NB: the ethical aspects of this research have been approved by the ANU Human Research Ethics Committee (Protocol 2022/833). If you have any questions or comments for me, you can reply here or email me at edward.cooper [at] anu.edu.au. Thank you! Ned
2023-04-13T07:00:14Z
[ { "date": "2023-04-17T22:17:47Z", "reply": "oui je me sert très régulièrement de chatgpt clair et facile d’utilisation je reçois les rapports d’incidents et envois mes remarques .c’est un bel outil . ses synthèses sont utiles et bien faites" }, { "date": "2023-05-16T06:13:52Z", "reply": "Hi everyone! Thank you very much for the survey responses. If you would like to respond, please do so soon. I will be closing this survey tomorrow!" }, { "date": "2023-06-27T07:05:26Z", "reply": "Hey This survey is not active right now." }, { "date": "2023-06-27T07:17:49Z", "reply": "Thanks for your interest@emma532. Unfortunately I closed the survey last month." } ]
Working on Low Resource Machine Translation
https://discuss.huggingface.co/t/working-on-low-resource-machine-translation/41526
2
533
I’m working on a machine translation system for low-resource languages, and I am able to train a tokenizer and do POS tagging through to NER using Trankit for multilingual NLP. However, since my project is transfer-based translation, I need some guidance on the next steps, i.e. lexical transfer, syntactic transfer, and morphological transfer. Are there any Python packages that I can use? I know this might not be the exact place to ask, but I could really use some guidance. Thanks a lot in advance!
2023-05-30T17:47:05Z
[ { "date": "2023-06-27T06:24:53Z", "reply": "Trankit provides a great foundation for multilingual NLP tasks, but for more specific tasks like Lexical Transfer, Syntactic Transfer, and Morphological Transfer, you may need to explore additional Python packages. you can check NLTK (Natural Language Toolkit), spacy library, Syntax Net, Morfessor library.Good luck with your project!" }, { "date": "2023-06-27T07:14:06Z", "reply": "Thanks a lot Emma for answering !! I knew most of these, but Syntax Net and Morfessor are something new. Although I figured out the other parts(not stuck at completely different problem), I hope these will help in some way or other !!" } ]
Using Transformers(?) for Tibetan-English Translation
https://discuss.huggingface.co/t/using-transformers-for-tibetan-english-translation/44078
0
501
Hi! I’m a computer science student/robotics research assistant at a research-oriented American university, interested in AI and NLP. I recently read this paper (paper, news about paper) about researchers using Markov models and bidirectional LSTMs to translate Akkadian cuneiform. I have contacted a Tibetologist who has extensive access to digitized (in XML) but yet-untranslated Tibetan texts. I am interested in working on a machine translation project. My intuition is that a model like BART would offer improvements over the HMM and BiLSTMs. I do not have extensive experience with NLP, but I have done a text classification project and enjoy learning about NLP and different neural architectures in general, especially since the introduction of GPT. I’m looking for collaborators and for advice - at this stage, mostly about model selection and high-level design rather than granular implementation details. Please reply with your thoughts or DM if you’d like to get involved! Thanks for reading!
2023-06-21T16:21:50Z
[]
Medical NER based on Bert in Norwegian
https://discuss.huggingface.co/t/medical-ner-based-on-bert-in-norwegian/44037
0
271
Hi, community. I am trying to build an app that extracts patient-sensitive data, diagnoses, procedures, and treatments from a patient note. The patient notes are in Norwegian, and my goal is to reach 90%+ accuracy. What do you recommend to achieve this? Should I fine-tune on English data first and then translate to Norwegian, or should I fine-tune directly on Norwegian data? Please consider that I can access a high volume of already anonymized, good-quality patient data in English and only a minimal volume of non-anonymized data in Norwegian. Looking forward to your feedback.
2023-06-21T12:17:21Z
[]
A criticism of instruction fine-tuning datasets
https://discuss.huggingface.co/t/a-criticism-of-instruction-fine-tuning-datasets/43757
2
2,030
ChatGPT has taken the world by storm and will go down in history as one of the most important showpieces in the development of AI. However, it has created an unhealthy obsession with chatbots that is hindering the true potential of open-source language models. Allow me to clarify. A fun demonstration of the abilities of chatbots is to ask them questions about their opinions. Within many instruction fine-tuning datasets there are questions that rely on the LLM’s general knowledge. An example from Databricks Dolly-15k is “Why can camels survive for long without water?” In the context of fine-tuning, what does this teach the language model? What value does this kind of instruction provide? For business applications you need instructions like “generate a title based on [keywords, extracted phrases, full text]” or “given this data, [summarise, write something, convert to some form]”. We really need to distinguish between chatbot behaviour (requiring broad general knowledge) and language models for business applications (practical tasks based on the information provided). Both are useful in their own context, but businesses do not need to ask a chatbot for opinions; they need their workloads reduced.
2023-06-19T09:01:47Z
[ { "date": "2023-06-20T05:02:05Z", "reply": "Strongly agree. For what it’s worth, I’ve been using the Dolly-15k dataset in a heavily filtered manner (mixed with other datasets). If you filter by task type the examples become less about opinion and more about performing a task. But still, the quality is mediocre at best.I would love to see more high quality instruction datasets where all the questions were answerable using strictly the context and common sense." }, { "date": "2023-06-20T07:46:40Z", "reply": "I use some old BART summarisation models during development because inference is very fast and the quality is good enough for proof of concepts. I bring this up because it is based on open datasets (xsum and CNN, example sets of articles and their human-created summaries).If I may have one more criticism of instruction fine-tuning datasets is that they are all reinventing the wheel. There are old school datasets from a time where transformers were being trained for single purposes. As far as I know, no one has ever pulled these together because the original idea was to distil knowledge from ChatGPT. Dolly, bless its creators’ souls, is literally reinventing the wheel with some of their tasks, and the dataset is small as a result.I don’t have time for it myself, so I’m putting the idea out there. Include old school datasets in your instruction fine-tuning data. The state of the summarisation capacity of most recent models is shocking (3B parameter and below): the old school BART (around 1B parameters) outperforms all of the LaMini models and all the Evol-Instruct models on summarisation, for example. These deficiencies have to have an expression on larger models tuned with the same datasets too.Another advantage to this approach is that many of the single-task datasets were created with business implementations in mind - before the chat bot craze. So the dataset you get by adapting them into a single instruction-based dataset is certain to have relevant functionality, and then you can add synthetic data on top for flavour and balance." } ]
Forward-Forward algorithm by Geoffrey Hinton
https://discuss.huggingface.co/t/forward-forward-algorithm-by-geoffrey-hinton/30656
10
4,748
I would like to initiate a discussion on the recent publication by Geoffrey Hinton proposing an alternative to the traditional backpropagation algorithm - The Forward-Forward Algorithm: Some Preliminary Investigations - and the paper by Alexander Ororbia and Ankur Mali - The Predictive Forward-Forward Algorithm - which suggests incorporating a generative circuit into the original FF network. I am interested in hearing the thoughts and insights of the community on these papers. I am particularly interested in discussing the potential benefits of layer-level weight updates in the Forward-Forward algorithm, as they could allow training a network layer by layer without the need for a huge amount of VRAM.
2023-01-29T01:21:34Z
[ { "date": "2023-01-29T07:30:05Z", "reply": "Implementations found so far:Tensorflow ImplementationPyTorch Implementation" }, { "date": "2023-01-29T07:30:38Z", "reply": "More:Another PyTorch ImplementationDRD2 activity prediction using the Forward-Forward Algorithm" }, { "date": "2023-01-29T07:31:11Z", "reply": "Another one:Tensorflow Implementation" }, { "date": "2023-02-22T19:32:46Z", "reply": "I am attempting to build a mini-GPT version using the Forward-forward idea.I cant find much of anything using it in generative language models, or any example of the NLP benchmark referenced in the Hinton paper.if anyone has any thoughts or repos to provide that type of Implementing of the Forward-Forward Algorithm it would be very helpful.best so far is a few not working repos:nebuly-ai:nebullvm/apps/accelerate/forward_forward at 5fb48f6cda4d2ab756f20a91eea7b482f38ca50f · nebuly-ai/nebullvm · GitHuband kyleliang919:GitHub - kyleliang919/forward_forward_gpt: Using the forward forward algorithm to train large language model" }, { "date": "2023-04-02T06:45:44Z", "reply": "The implementation of the predictive forward-forward algorithm has been released publicly:https://github.com/ago109/predictive-forward-forward" }, { "date": "2023-04-17T07:40:58Z", "reply": "Hi,has anyone tried to train the famousdeep spiking neural networksusing forward-forward ?" }, { "date": "2023-04-27T14:12:36Z", "reply": "Hello,Yes, there was work that came out about a month or so ago that proposed a generalization of forward-forward (and predictive forward-forward) for (deep) spiking networks - this was called theevent-driven forward-forward algorithm(as they had to craft a formulation that worked with spikes themselves):https://arxiv.org/abs/2303.18187" }, { "date": "2023-05-17T02:30:27Z", "reply": "An implementation which is morenative to pytorch" }, { "date": "2023-06-10T17:27:25Z", "reply": "I think the idea of high layer-activations only for the positive data, interesting. The network essentially isn’t giving anOutputlike in backpropagation, but it’s now thePropertyof the network to “light up” for correct labels, and therefore indicating whether it’s a positive data or not. I enjoyed thisinterviewgiven by Hinton about his paper.Find mynotebookimplementation based on the work of Mohammad Pezeshki. It’s modular so you can experiment with different candidates for goodness functions, layerwise loss functions and negative data generation." }, { "date": "2023-06-17T15:23:57Z", "reply": "I am finding it difficult to implement FF algorithm to convnets. I suspect that it might be due to the label information overlayed on the input getting diffused so much. Could someone guide me on this? My attempt is uploaded to my repo in the previous response. Thanks!" } ]
Language model gradients sensitive to target value/length
https://discuss.huggingface.co/t/language-model-gradients-sensitive-to-target-value-length/43543
0
335
I’m trying out a method to identify important training samples for a given test-time prediction. What it essentially boils down to is calculating the gradient of a test-time prediction and ordering the training samples by their gradient similarity to the test-time gradient. My interpretation is that it attempts to answer the question of which training samples have nudged/influenced the model's parameters as similarly as the given test-time prediction would have, had it been a training sample. It's not all that important for the question, but I hope it makes sense. The model I'm using is T5, and here's where I run into trouble. What I observe is that very similar (input, target) pairs produce vastly different gradients in terms of cosine similarity. Let me provide an example, starting with a sanity check on a dummy example which should be easily reproducible (helper functions are found below):

MODEL_PATH = "t5-small"
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)
tokenizer = T5TokenizerFast.from_pretrained(MODEL_PATH)

sentence_1 = get_grads(model, tokenizer, inputs="I like to eat <extra_id_0>", targets="<extra_id_0> pizza")
sentence_2 = get_grads(model, tokenizer, inputs="I like to eat <extra_id_0>", targets="<extra_id_0> pizza")
cos_sim(sentence_1, sentence_2)
>>> 1.0

which is totally expected, as the same sample would affect the model's parameters in exactly the same way. Now, changing sentence_2's target slightly to "<extra_id_0> pizza.", i.e. with a period at the end, I get a cosine similarity of 0.46. What I don't quite understand is how the introduction of a seemingly insignificant token can change the gradients that much. Any help, hints and guidance in understanding this is greatly appreciated! My helper functions:

import numpy as np
import torch
from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast

def get_grads(model, tokenizer, inputs, targets):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # tokenize input/target pair, move to device, and run a forward pass to get the loss
    outputs = model(**{k: v.to(device) for k, v in tokenizer(text=inputs, text_target=targets, truncation=True, return_tensors="pt").items()})
    # gradient of the loss w.r.t. all model parameters, flattened into one vector
    grads = torch.autograd.grad(outputs.loss, model.parameters())
    return torch.cat([grad.flatten() for grad in grads])

def cos_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
2023-06-16T17:00:42Z
[]
Masked Language Model Scoring
https://discuss.huggingface.co/t/masked-language-model-scoring/5541
5
2,565
Is there an implementation of the pseudo-log-likelihood for bidirectional language models (i.e. Salazar et al., Masked Language Model Scoring) in transformers? The GitHub repo in the linked paper uses transformers 3.3 and I’ve been unable to get it to work with 4.5.
2021-04-16T03:56:41Z
[ { "date": "2021-04-16T07:54:57Z", "reply": "what kind of problems are you running into? presumably it’s due to a change in the API, so sharing what steps you’re taking and the error messages will help with the debugging" }, { "date": "2021-04-16T08:59:49Z", "reply": "Do you mean with theGitHub - awslabs/mlm-scoring: Python library & examples for Masked Language Model Scoring (ACL 2020)implementation? I’m assuming there’s not much I can do to try and get a 3rd party library which is specifically designed for transformers 3.3 to work with a transformer / tokeniser trained with version 4.5. Specifically my tokeniser is in the new single json file format and as far as I can see the 3.3 library is trying to load from the legacy format. The main issue is the setup.py of the mlm-scoring library requires ==3.3 rather than >=3.3 so installing it downgrades. I suppose I could try removing the version requirement and see what happens.But ideally the metric would be available via a library which is more up to date. I’ll probably code it up myself altouhg it wont be overly efficient, you need to compute the MLM objective masking each token in order and then sum the log likelyhoods to compute PLL for a single sentance." }, { "date": "2021-04-16T09:58:36Z", "reply": "david-waterworth:Do you mean with theGitHub - awslabs/mlm-scoring: Python library & examples for Masked Language Model Scoring (ACL 2020)implementation?yes, i was wondering whether you could adapt their code to match the currenttransformersAPI.david-waterworth:Specifically my tokeniser is in the new single json file format and as far as I can see the 3.3 library is trying to load from the legacy formatcan you point me to the line of code where this is done? i might be able to suggest a workaround this way" }, { "date": "2023-05-16T10:16:13Z", "reply": "Was this implemented in transformers or was there some solution for this? I am attempting to use this scoring technique in my project. Could you please share some details?" }, { "date": "2023-06-15T21:33:58Z", "reply": "Hi, do you have some solutions? Could you share some experience?" } ]
Modification of self attention in BERT without pretraining
https://discuss.huggingface.co/t/modification-of-self-attention-in-bert-without-pretraining/40357
1
357
Hello! I need to turn the bidirectional self-attention layer into a unidirectional one in BERT - from what I understand, I just need to apply a so-called triangular attention mask to the matrix of attention scores in the source code. However, in this case, before using the model I would need to pretrain it, and that is a problem due to limited resources. Do you have any idea how to modify the attention without changing the source code? Thank you in advance.
2023-05-19T08:58:16Z
[ { "date": "2023-06-15T21:21:37Z", "reply": "Interested in the question too:)" } ]
Fine tuning gpt-neo via ppo
https://discuss.huggingface.co/t/fine-tuning-gpt-neo-via-ppo/7938
1
1,346
I have a wild idea to improve smaller GPT-3-esque models by tuning their output with PPO, a reinforcement learning algorithm. Originally, this was done to adjust GPT-2’s output to human preference: https://arxiv.org/pdf/1909.08593.pdf. I propose to fine-tune GPT-Neo directly on “prompt-driven” data. Most obviously, higher-performing models could teach lower-performing models by providing examples from which the smaller models could learn. However, I wonder whether it is possible to fine-tune the model in a narrower domain, i.e. code completion like Copilot. Would proof writing not be the ideal test? With many proofs accessible, it would make for easily accessible data with more definitive evaluation than conversational quality, i.e. we might compare a naive proof to a fine-tuned proof of the same problem. I am aware that human evaluation is still required. Other prompt-driven data likely exists, like essays etc. However, the technical dream is to compress model performance by fine-tuning with PPO on examples that are sourced from larger/higher-performance models. Perhaps then we might be able to pull robust narrow capacities from larger models into smaller models without distilling the entire teacher model’s knowledge. Is this a good idea to try? And is the model simply too big to consider this, i.e. are there DeepSpeed questions? Best, Aidan
2021-07-02T20:49:07Z
[ { "date": "2023-06-11T11:16:37Z", "reply": "Hi@arcco96,I have the same issue, have you been successful to fine-tune the gpt neo with ppo and get goo results? can I know your resource?many many thanks" } ]
Muti-Task Model - OCR + Object Detection
https://discuss.huggingface.co/t/muti-task-model-ocr-object-detection/42554
0
882
Hello everyone, I’m new to Transformers and the Hugging Face ecosystem in general. I need some guidance with a project, as part of my studies, that consists of creating a single model that can handle two tasks related to document processing. It takes as input an image containing handwritten text, signatures, and stamps. The objective is to 1) detect the existence of a signature and a stamp in the image (and then extract them by defining bounding boxes around them) and 2) extract the handwritten text. I thought model architectures like TrOCR and LayoutLM might help. Any suggestions on how to build such a model, or any scientific papers/blogs that might point me in the right direction? Many thanks, cheers!
2023-06-08T14:51:37Z
[]
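A sketch of one way to split the two tasks while keeping a single pipeline: an object detector proposes signature/stamp boxes, and TrOCR reads the handwritten text. The detector checkpoint below is a generic one; in practice it would be fine-tuned on images annotated with signature and stamp boxes, and the input file name is hypothetical.

from PIL import Image
from transformers import pipeline, TrOCRProcessor, VisionEncoderDecoderModel

image = Image.open("scanned_document.png").convert("RGB")   # hypothetical input scan

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
boxes = detector(image)      # after fine-tuning, the labels would be "signature" / "stamp"

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
ocr = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")
pixel_values = processor(images=image, return_tensors="pt").pixel_values
text = processor.batch_decode(ocr.generate(pixel_values), skip_special_tokens=True)[0]

print(boxes)
print(text)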
How to use T5 for sentence embedding?
https://discuss.huggingface.co/t/how-to-use-t5-for-sentence-embedding/1097
6
15,579
Is there any way to use the encoder part of the T5 model for representation learning?
2020-09-12T12:11:23Z
[ { "date": "2020-09-12T13:11:28Z", "reply": "Hi@banucoolYou can initialize theT5Modelclass and only forward pass through it’s encoder. The first element of the returned tuple is the final hidden states.model = T5Model.from_pretrained(\"t5-small\")\ntok = T5Tokenizer.from_pretrained(\"t5-small\")\n\nenc = tok(\"some text\", return_tensors=\"pt\")\n\n# forward pass through encoder only\noutput = model.encoder(\n input_ids=enc[\"input_ids\"], \n attention_mask=enc[\"attention_mask\"], \n return_dict=True\n)\n# get the final hidden states\nemb = output.last_hidden_stateThe shape ofembwill be(batch_size, seq_len, hidden_size)" }, { "date": "2020-09-12T14:06:41Z", "reply": "thanks a lot@valhalla" }, { "date": "2020-09-12T15:33:12Z", "reply": "can we use pruned version of bert for feature extraction?does it make sense?" }, { "date": "2020-09-12T18:08:55Z", "reply": "To clarify, the above code just returns the final hidden state of each token and not whole sentence embedding.for sentence embedding you can trysentence-bert.https://huggingface.co/sentence-transformers" }, { "date": "2022-05-31T17:01:27Z", "reply": "valhalla:model = T5Model.from_pretrained(\"t5-small\")\ntok = T5Tokenizer.from_pretrained(\"t5-small\")\n\nenc = tok(\"some text\", return_tensors=\"pt\")\n\n# forward pass through encoder only\noutput = model.encoder(\n input_ids=enc[\"input_ids\"], \n attention_mask=enc[\"attention_mask\"], \n return_dict=True\n)\n# get the final hidden states\nemb = output.last_hidden_stateHi, I’m interested in using T5 to generate word embeddings. I tried the code supplied above. Unfortunately, got this error message:---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n<ipython-input-40-5f6e22d1ad1e> in <module>()\n 1 model = T5Model.from_pretrained(\"t5-small\")\n----> 2 tok = T5Tokenizer.from_pretrained(\"t5-small\")\n 3 \n 4 enc = tok(\"some text\", return_tensors=\"pt\")\n 5 \n\nTypeError: 'NoneType' object is not callableDo you have any thoughts on resolving this error message?Thank you in advance for your help." }, { "date": "2023-05-27T02:25:02Z", "reply": "arXiv.orgSentence-T5: Scalable Sentence Encoders from Pre-trained Text-to-Text ModelsWe provide the first exploration of sentence embeddings from text-to-text\ntransformers (T5). Sentence embeddings are broadly useful for language\nprocessing tasks. While T5 achieves impressive performance on language tasks\ncast as sequence-to-sequence..." } ]
My QUESTION is how run a very big model like bloom on a cluster of machines?
https://discuss.huggingface.co/t/my-question-is-how-run-a-very-big-model-like-bloom-on-a-cluster-of-machines/41086
0
280
Hello, I can run OPT-66B on one server with 6 GPUs of 24 GB by following your Hugging Face page on how to load big models: I pass a device_map. I can also run BLOOM on one server with 8 GPUs of 24 GB by passing a device_map, but it offloads to CPU and takes a long time to answer. My QUESTION is how to run a very big model like BLOOM on a cluster of machines. Indeed, BLOOM would need about 20 GPUs of 24 GB, which means a cluster of 3 machines with 8 GPUs each to deploy. With Accelerate it is not possible, as we are limited to a single machine. With DP and DDP it is not possible either, as the model spans more than one machine. I have tried everything: DeepSpeed inference, the RPC framework, etc. Thanks for your help. Regards, Pat
2023-05-26T14:34:20Z
[]
Few-shot learning vs Fine-Tuning
https://discuss.huggingface.co/t/few-shot-learning-vs-fine-tuning/41024
0
1,727
I am trying to define a comparison metric that compares few-shot learning techniques with normal fine-tuning for any NLP downstream task, for example text classification. I am using SetFit for few-shot learning with BERT as the sentence transformer, and the same BERT for sequence classification. My current thinking is: since the few-shot method requires very few examples per class to achieve performance similar to a fine-tuned model, if we follow the same rule in normal fine-tuning, will that model give any sensible accuracy score or not? (Currently I am getting random accuracy on a fixed evaluation set with normal fine-tuning.) I have used samples per class in the range 2, 4, 8, 16, 32. The results for SetFit make sense, but not in the case of normal fine-tuning. I would appreciate hearing about flaws in the above approach and new directions to search; any papers along this line would be very helpful.
2023-05-26T00:40:11Z
[]
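A sketch of a like-for-like comparison: sample N examples per class once, then train SetFit on exactly that subset and evaluate on the same fixed split used for the vanilla baseline. The dataset, checkpoint, and exact SetFit arguments are illustrative and vary a bit between library versions.

from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer, sample_dataset

dataset = load_dataset("sst2")
train_small = sample_dataset(dataset["train"], label_column="label", num_samples=8)  # 8 examples per class

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")  # stand-in for a BERT body
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_small,
    eval_dataset=dataset["validation"],
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
print(trainer.evaluate())

The vanilla baseline would reuse exactly train_small with AutoModelForSequenceClassification and the standard Trainer; with 8-32 examples per class it typically needs many more epochs and a carefully tuned learning rate before the comparison is meaningful, which may explain the near-random accuracy observed.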
Finetuning on a recent topic/domain
https://discuss.huggingface.co/t/finetuning-on-a-recent-topic-domain/40067
2
531
Hi, I’m trying to learn and understand language models as much as possible, but something remains unclear to me. Assume I want my LLM, such as BLOOM, to be aware of recent events, let’s say the FIFA World Cup 2022. As far as I know, BLOOM was trained with data up to July 2022, so its knowledge about how the cup went is very limited. I can do prompt engineering, such as providing some context, but it’s not as good as I want and the context window is restricting. The solution would be fine-tuning the model, but it’s hard for me to clearly understand how to collect the data. If I scrape the Wikipedia page about the World Cup and fine-tune the model on it, would that be sufficient? And then, if I need a chatbot, I could fine-tune again with the Alpaca or Vicuna dataset. A lot of tutorials and blog posts deal with instruction datasets, but in my case why would I need such a format? Thanks for your hints.
2023-05-16T12:51:32Z
[ { "date": "2023-05-22T22:46:06Z", "reply": "Hi@Alex21j!I can think of a few naive ways to test this. The first, with respect to data collection, I think it might depend on what your end task is. If a dataset doesn’t already exist, you may have to create one. You could scrape several websites (like wikipedia or FIFA) and collect all the text related to the 2022 world cup. One would then need to format that data appropriately for the task.Let’s say you were interested in being able to ask questions to your finetuned model. One would need to format the collected data for thequestion answering task, then finetune BLOOM. Unfortunately I cannot think of good process for evaluating the goodness of the finetuning. Maybe others on here have something they can share. A naive approach would be to ask the finetuned model “Who won the 2022 FIFA world cup” and see what the response is. As this is more anecdotal, it’s not a very quantitative means for evaluating how well the finetuned model responds to questions about the 2022 world cup.With respect to your second question, what I understand this to be is a dataset format that provides the model with data that is formatted in a more conversational tone. Taking the example above, you could prompt the model with “Summarize the 2022 FIFA world cup”. Ideally it would give you a summary of the game, the participants, who won, and what the score was. I don’t know this to be the case, but it’s what I could infer from reading thecleaned alpaca dataset github.Lastly, I should mention that I don’t have any experience with BLOOM. Most of what I have dealt with in language modeling comes from finetuning GPT2. I also found theTaskspage on the HF site to be very insightful. Maybe there is something better there that suits your needs.Apologies I don’t have better insight, but I hope the above is useful." }, { "date": "2023-05-25T11:49:48Z", "reply": "Hi@aclifton314,That’s a lot of insights, it makes much more sense, thanks !So now I’m trying to understand if it’s worth building a QA dataset.If I specialize a LLM at a low cost just by finetuning it with some articles or wikipedia pages in raw text and then use few-shot QA, would it be sufficient ?I’m also wondering what could be the effect of finetuning an chatbot such as vicuna with raw text. Any chances than the conversational mode will be lost after finetuning ?It’s kinda hard to evaluate the benefit of building a QA/conversational dataset instead of “simply” finetuning the model with domain-specific raw texts." } ]
Opcodeo Tokenizer
https://discuss.huggingface.co/t/opcodeo-tokenizer/40129
0
259
Is there a tokenizer for opcode sources? (A model would be even better.)
2023-05-17T06:24:00Z
[]
Importance of sentinel token placement in T5?
https://discuss.huggingface.co/t/importance-of-sentinel-token-placement-in-t5/40061
0
662
Hi there! There is a paper that I have been trying to reproduce (https://arxiv.org/pdf/2205.11482.pdf) as part of my master’s thesis. It uses T5 to learn facts from a training set where either the object or the subject is masked with a sentinel token. An example of a training sample (called an abstract) can be seen here: Input: “Animal Farm is an allegorical and dystopian novella by <extra_id_0>, first published in England on 17 August 1945.” Target: “<extra_id_0> George Orwell”. The entire dataset can be found here: ekinakyurek/ftrace · Datasets at Hugging Face. The thing I’m wondering about is that in the docs, the use of sentinel tokens is specified as: Input: “The <extra_id_0> walks in <extra_id_1> park”, Target: “<extra_id_0> cute dog <extra_id_1> the <extra_id_2>”, i.e. a sort of inverse of each other’s masking. You will notice that this is not the case for the example from the dataset that I’m working on. If I’m right, the target should be “<extra_id_0> George Orwell <extra_id_1>”, since the input mask is in the middle of the abstract. It is far from the only case, as you will see if you explore the dataset. This has left me wondering how this “not-so-perfect” placement and formatting of sentinel tokens might affect the training of T5. Should it be considered a serious data-quality issue, or do its implications sort of go away with training on a lot of data? Thanks for reading through my question! I hope someone will be able to clarify my doubts :)
2023-05-16T11:59:43Z
[]
Integration with Public-sector Data Portals
https://discuss.huggingface.co/t/integration-with-public-sector-data-portals/40079
0
334
Hello. I am pleased to share some information with the community about integrating AI systems with public-sector data portals. If you are interested in developing multimodal dialogue systems, chatbots, for contexts like https://www.ms.gov, https://data.gov, and https://www.usaspending.gov/, then you should explore CKAN and DKAN, if you haven’t already. CKAN (Comprehensive Knowledge Archive Network) is used by national and regional government organizations throughout the European Union, the Americas, Asia, and Oceania to power a variety of official and community data portals. Documentation is available here. Documentation about developing extensions is available here. Source code is available here. DKAN (Drupal-based Knowledge Archive Network) is a community-driven, free and open-source open data platform that gives organizations and individuals ultimate freedom to publish and consume structured information. DKAN is inspired by CKAN and is built on top of the very popular Drupal CMS. Documentation is available here. Source code is available here. There are tremendous opportunities with respect to AI, civic technology, and open government, and I wanted to share this information with the community. Thank you.
2023-05-16T16:03:24Z
[]
Multi-GPU Machine Setup Guide and QnA
https://discuss.huggingface.co/t/multi-gpu-machine-setup-guide-and-qna/5891
6
6,012
This is a WIKI post - so if you feel you can contribute, please answer a few questions, improve upon existing answers, or add an alternative answer or new questions. This thread is to discuss multi-GPU machine setup for ML.

Basic Recommendations
Q. What are basic recommendations on how to design a multi-GPU machine? It would be great to factor in price vs. performance (so we can know how much we save vs. pre-built).
A. See the links to the guides in the Resources section below.

Critical decisions to make
Q. What are the smartest decisions to make it future-proof (mine is already obsolete)?
A. Computers are black holes that suck everything in and give little out (other than some RGB colors). There is no such thing as future-proofing in modern computers, other than mechanical parts like your PC tower.
Q. Can we do it at all, or is it necessary to redesign it every 1-2 years?
A. Ideally you just upgrade parts as they need upgrading, rather than replacing the whole PC. I still use a 10-year-old tower.

In-house vs. cloud
Q. Is it worth building a good local machine, or should you just learn how to leverage the cloud?
A. Typically, for small setups - up to several consumer GPUs - it’s almost always worth having a local setup rather than the cloud, unless you find some upstart cloud provider that for a while underprices their cost per hour.
Pros:
Of course, it depends on your usage patterns. If you are going to use it once in a blue moon, cloud it is. If you use it a lot, then local will be cheaper. You can calculate your costs to purchase the machine vs. renting it.
Not needing to worry about forgetting to turn the instance off and having the $$ counter running might be another plus.
Heat is good. Heat is bad. In cold countries a home-based ML server is a great adjunct to keeping your working space warm. Not so much if you live in the tropics.
Cons:
If you want a lot of large GPUs, you might not be able to build it on consumer-level hardware, or the cost might be prohibitively expensive.
Electricity cost is another factor. Some cities have very expensive electricity, especially if you go over the “normal” usage quota that some electric companies have.
Hardware gets outdated fast, so your needs may quickly become larger than what you have. You may or may not be able to recover some of the investment when trying to sell your old hardware.

Key components
Q. What are the main components to look for? Sample setups would be great too (and why they are great).
A.
Make sure your CPU has enough PCIe lanes to support all the cards you plan to use.
Make sure your MB has enough PCIe slots and that they are at the right distance to support modern GPUs that take up 2 slots.
Research your PSU - so that it has enough extra power to handle those power-hungry GPUs.
Plan to have a lot of RAM, so ideally buy as large a single RAM stick as possible, i.e. try not to fill all RAM slots from the get-go unless you buy something like 256GB from the start.
An NVMe slot or a few are going to be super-important. Try to have your OS on a different drive (e.g. SSD) - you don’t want to share your data NVMe with your OS operations.
Does the box have enough space for cooling? Be it water cooling or lots of fans.
Definitely don’t buy those pre-packaged PCs from large retailers; you can’t mod those. Buy your own components and plan for expansion.

Purchase Timing
Q. Is it a good time to buy a GPU, and how do we know when there are good deals (prices seem a bit high right now)?
A. Black Friday in North America gives you by far the best deals.
But don’t just buy because it’s BF; do your research, since some companies raise their prices instead of lowering them.

Resources
Lecture 6 from Full Stack Deep Learning
A 15000$ Machine Learning Rig: 2x3090 + 1xA6000 Build
Blogs focusing on ML hardware:
The Best 4-GPU Deep Learning Rig only costs $7000 not $11,000
Tim Dettmers’ great posts about choosing GPUs for deep learning and his Hardware Guide to Deep Learning. The guides do not focus on distributed setups, but there are suggestions on multi-GPU machines and how to select a GPU for your task and budget.
2021-04-30T19:51:53Z
[ { "date": "2021-04-30T20:23:27Z", "reply": "I would recommend to check out Tim Dettmers’ great posts aboutchoosing GPUs for deep learningandHardware Guide to Deep Learning. The guides do not focus on distributed setup, but there are suggestions on multiGPU machines and how to select a GPU for your task and budget." }, { "date": "2021-04-30T20:26:20Z", "reply": "Thank you! merged it into the OP.Please feel free to put your notes directly in there and we will progressively massage it into a readable/organized doc." }, { "date": "2021-05-01T04:41:45Z", "reply": "I’ve answered all of these Qs along with some tips on how to best air cool these in my recent video:" }, { "date": "2021-05-01T08:01:46Z", "reply": "thanks@Sanyam! i’ve added your video to the OP" }, { "date": "2021-05-01T12:22:33Z", "reply": "I really likedthis blog postby Emil Wallner, lots of good information there including some good insights on current hw options (will probably change in a couple of months)Emil makes a very good point why a home rig is the way to go:The main reason to own hardware is workflow. To not waste time on cloud savings and encourage robust experimentation.I would also recommendthis hardware guideby Tim Dettmers. It is the definitive resource with timeless answers to many questionsTwo observations from Tim Dettmers’ guide worth highlighting:the number of PCI lanes is not as important as it seemsRAM timings are not importantBoth of these points above can save you a lot of money." }, { "date": "2021-05-01T12:23:10Z", "reply": "(had to split the post in two as new users can post max 2 links)Other than that, the quality of PSUs really differs - it is importantwhat PSU you go for(watts given by the manufacturer is next to meaningless). I did a bit of an investigation on thishere." } ]
Help me with my PhD research on voice dataset documentation by completing this survey
https://discuss.huggingface.co/t/help-me-with-my-phd-research-on-voice-dataset-documentation-by-completing-this-survey/37751
1
447
Do you work with voice or speech data? You might contribute data, write data specifications for collection, perform filtering or pre-processing, train ASR or TTS models, or design or perform evaluations of ML speech models. If so, I’d love your help to understand current dataset documentation practices, and what we can do to make them better, as part of my PhD research at Australian National University’s School of Cybernetics. The survey takes 10-20 minutes to complete, and you can opt in to win one of 3 gift cards valued at AUD 50 each. Research Protocol 2021/427, approved by the ANU Human Research Ethics Committee. https://anu.au1.qualtrics.com/jfe/form/SV_cSFODa5osYtm96e
2023-04-26T04:07:07Z
[ { "date": "2023-05-13T04:05:13Z", "reply": "Firstly, a huge thank you to everyone who filled in the survey - hugely appreciated. If you haven’t, and you would like to, it’s closing in just under a week" } ]
Feeding a Knowledge Base into Transformer model
https://discuss.huggingface.co/t/feeding-a-knowledge-base-into-transformer-model/13150
1
1,304
Hey Hugging Face family, I’m an undergrad in CS working in NLP. I’m really fascinated by the idea of incorporating everyday commonsense reasoning into existing language models. There are some commonsense knowledge bases like ConceptNet, ATOMIC, Open Mind Common Sense (MIT), Cyc, etc., but they exist in the form of knowledge graphs and ontologies. My question is: how can I go about feeding these knowledge bases into current transformer LMs like BERT and GPT-2? Is there a way I can fine-tune them such that they retain their language modelling capabilities but also learn new commonsense understanding of our physical world?
2021-12-27T09:31:41Z
[ { "date": "2023-05-02T02:48:37Z", "reply": "Hello@ShivamArya, did you ever figure out how to do this?" } ]
Model that generates comments for the AITA subreddit
https://discuss.huggingface.co/t/model-that-generates-comments-for-the-aita-subreddit/38156
0
402
Hey everyone! My friend and I are in our final year of university studying Computer Science, and we built a model using BART and T5 to generate comments for the AITA subreddit, trained on all the posts in AITA from 2013. As part of the model evaluation, we created a survey to help us determine whether the AI-generated responses are distinguishable from human-written comments. Here is the link to the survey: https://forms.gle/zx7ShNyNDFSCaHXS9. The survey should take no longer than 10 minutes to complete. It contains 5 posts from the AITA subreddit, each with 3 comments; for each post you are asked to rank the comments from best to worst, after which you are asked to guess which comment is the human comment. At the end of the survey, you will get feedback on your ability to guess human responses. Your feedback is valuable and will contribute to our research. A huge thank you in advance for your time and support! We are planning to release the dataset of Reddit posts scraped from the subreddit, and the models, in the future, after we submit the model and it is assessed by the university.
2023-04-29T22:58:20Z
[]
Cost Effective LLM - For Small Guys
https://discuss.huggingface.co/t/cost-effective-llm-for-small-guys/37963
0
1,036
We at Assemble Teams are building a new LLM that addresses the challenges of bias, accuracy, explainability, security, and safety. We believe that LLMs have the potential to be powerful tools for a variety of tasks, but we also recognize that they come with some challenges. Our goal is to build an LLM that is both powerful and safe. Here are some of the challenges that we are addressing. Bias: we are using a dataset that is carefully curated to minimize bias, and we are also using techniques to debias the output of our model. Accuracy: we are using a state-of-the-art training algorithm and a large dataset to train our model, and we are also using techniques to improve the accuracy of our model. Explainability: we are developing techniques to explain how our model generates its output; this will make it easier to trust the output of our model and to debug it when it generates incorrect or misleading information. Security: we are using security techniques to make our model more resistant to attack, and we are also working to develop security best practices for using LLMs. Safety: we are developing techniques to make our model safer to use, and we are also working to develop safety best practices for using LLMs. We are inviting developers and followers to engage in building cost-effective LLMs. We believe that building cost-effective LLMs is important for making these tools accessible to a wider range of people. We are open to collaborating with developers and followers to build cost-effective LLMs. If you are interested in collaborating with us, please contact us via Twitter and join our Discord.
2023-04-27T22:17:26Z
[]
Civic Technology Community Group
https://discuss.huggingface.co/t/civic-technology-community-group/37472
1
397
Introduction
Artificial intelligence is already having a big impact across domains, including government services. Users will soon be able to ask natural-language questions and engage in multimodal dialogues about large-scale, public-sector financial, accounting, and budgetary data, receiving responses comprised of language, mathematics, charts, diagrams, figures, graphs, infographics, and tables. Recent advancements to artificial intelligence technology can equip: (1) accountants, auditors, analysts, comptrollers, public officials, legislators, oversight committees, and members of their staffs, and (2) the public, journalists, and government watchdog organizations, to better make sense of and interact with public-sector data.
Civic Technology and Open Government
According to Wikipedia, “civic technology enhances the relationship between the people and government with software for communications, decision-making, service delivery, and political process. It includes information and communications technology supporting government with software built by community-led teams of volunteers, nonprofits, consultants, and private companies as well as embedded tech teams working within government.” “Open government is the governing doctrine which maintains that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight.”
Award-winning Government Websites
Award-winning government websites include those of Mississippi (https://www.ms.gov), which provides a dialogue system on its front page, and Utah (https://www.utah.gov/), which provides live chat support.
Modernizing Government Websites and Services
There are opportunities to contribute to the modernization of other government websites and services, e.g., data.gov, performance.gov, and usaspending.gov.
Decision-support Scenarios
Important scenarios include, but are not limited to, providing decision support for users preparing to vote and for users preparing to select a city to relocate to. In the first scenario, decision support for voting preparation, users preparing to vote could review the public data of their cities, counties, states, and federal government. In the second scenario, decision support for selecting a city to relocate to, users preparing to relocate to a city could interact with data from multiple cities while comparing analytics and performance indicators of interest to them in their decision-making processes. Multimodal conversational AI can enhance both of these scenarios.
Human-computer Interaction Concepts
Mobile and desktop computing scenarios involving both written and spoken conversational interaction with AI systems are of interest to the new group. Scenarios involving the Web are of interest to the new group. Multiple users could, together, speak with remote AI systems using smartphones or smart speaker devices while viewing the AI systems’ responses, in the form of streaming video content and visual analytics dashboards, displayed on connected smart televisions.
Conclusion
The new Civic Technology Community Group will bring together those interested in civic technology, open government, and artificial intelligence to share information, to discuss these topics, to advance the state of the art, and to ensure that the Web is well-suited for these applications. In order to join the group, you will need a W3C account. Please note, however, that W3C Membership is not required to join a Community Group.
Joining is fast, free, and easy to do. Interested group participants are also invited to consider entering the group’s election processes to serve as Chairs. Thank you. Please consider forwarding this information to any others interested in these topics.
2023-04-23T08:25:23Z
[ { "date": "2023-04-25T22:25:36Z", "reply": "Opinion PollingI am also pleased to share with the community that AI, LLMs, natural-language processing, and text embeddings can be of use for enhancing opinion polling technologies [1][2].I recently shared with the Civic Technology Community Groupmailing list:Artificial intelligence systems, virtual opinion pollsters, can perform structured, semi-structured, and unstructured surveys, questionnaires, and interviews across a number of communication channels (e.g., Web-based chatbots, email, telephone, Microsoft Teams, Skype, Facebook, Slack, Kik, Telegram, Line, GroupMe, Twilio, WebEx, WhatsApp, Zoom, RingCentral, etc.).Recent advancements to artificial intelligence and natural-language processing, e.g., text embeddings, are interesting to consider with respect to the advancement of opinion polling technologies. With natural-language processing, virtual opinion pollsters can perform open-ended questions [1], e.g., follow-up questions which might explore rationales, justifications, and argumentation of respondents’ previous answers.In addition to being able to perform predefined lists, or sequences, of questions, virtual opinion pollsters can traverse larger trees or graphs of questions, with paths branching, or varying, based upon respondents’ answers.Thank you. I hope that these ideas are interesting to you. Any thoughts?[1]https://news.gallup.com/opinion/methodology/406922/natural-language-processing-aids-open-ended-questions.aspx[2]https://news.gallup.com/opinion/methodology/233291/why-phone-web-survey-results-aren.aspx" } ]
Fine-tuned MLM based RoBERTa not improving performance
https://discuss.huggingface.co/t/fine-tuned-mlm-based-roberta-not-improving-performance/36913
2
932
We have lots of domain-specific data (200M+ data points, each document having ~100 to ~500 words) and wanted a domain-specific LM. We took a sample of data points (2M+) and fine-tuned RoBERTa-base using the masked language modelling (MLM) task. So far we have done 4-5 epochs (512 sequence length, batch size 48), used a cosine learning rate scheduler (2-3 cycles per epoch), and used dynamic masking (masked 15% of tokens). Since the RoBERTa model is fine-tuned on domain-specific data, we expected this model to perform better than the pre-trained RoBERTa, which is trained on general texts (wiki data, books, etc.). We evaluated on tasks like Named Entity Recognition (NER), text classification, and embedding generation for cosine similarity, using both the fine-tuned domain-specific RoBERTa and the pre-trained RoBERTa. Surprisingly, the results are the same (very small difference) for both models. We tried spaCy models too, but the results are the same. Perplexity scores indicate that the fine-tuned MLM-based RoBERTa has minimal loss. Can anyone please help us understand why the MLM-based model is NOT performing better? Should we go for more data, more epochs, or both, to see some effect? Are we doing anything wrong here? Let me know if any required details are missing and I will update. Any suggestions or valuable links addressing these concerns would be really helpful.
2023-04-18T04:27:54Z
[ { "date": "2023-04-20T05:07:30Z", "reply": "I’m not sure why they perform the same, but maybe by looking at the FP samples for both models in the test set you might see a noticeable trade-off between the generalization and overfitting." }, { "date": "2023-04-20T16:17:48Z", "reply": "@phosseini: Could you offer some assistance here, please? Do you have any ideas or suggestions?" } ]
A complete survey on ChatGPT: One Small Step for Generative AI, One Giant Leap for AGI
https://discuss.huggingface.co/t/a-complete-survey-on-chatgpt-one-small-step-for-generative-ai-one-giant-leap-for-agi/35607
0
1,179
We recently conducted a comprehensive research on ChatGPT, hoping it would be helpful to you!Link to survey:One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC EraOpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is demonstrated to be seen as one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google Scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI.rooafsojtzra11084×1666 77.2 KBLink to survey:One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era
2023-04-05T05:04:02Z
[]
Continue pre-training GPT2
https://discuss.huggingface.co/t/continue-pre-training-gpt2/34692
0
518
Hi guys, since 2019, when OpenAI introduced GPT2, a lot has changed and new methods/optimization schemes have emerged. I believe GPT2 is sub-optimal considering the jump NLP has made since then. Therefore, I'm trying to continue pre-training GPT2 (small, medium, large), and would love to hear from your experience! I'm using the openwebtext dataset; do any of you recommend a better/richer one? Did any of you try distillation to continue pre-training GPT2? Any other SOTA trick/optimization method you recommend?
2023-03-26T07:29:53Z
[]
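As a rough illustration for the thread above, this is one common way continued causal-LM pretraining of GPT-2 is set up: tokenize, pack the corpus into fixed-length blocks, and train with the standard collator. The dataset slice and hyperparameters here are illustrative only, not a recommendation.

```python
from itertools import chain
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

raw = load_dataset("openwebtext", split="train[:1%]")   # small slice just for illustration
block_size = 1024

def tokenize(batch):
    return tokenizer(batch["text"])

def group_texts(examples):
    # Concatenate everything, then cut into fixed-size blocks so no tokens are wasted on padding.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total = (len(concatenated["input_ids"]) // block_size) * block_size
    return {k: [v[i:i + block_size] for i in range(0, total, block_size)]
            for k, v in concatenated.items()}

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
lm_data = tokenized.map(group_texts, batched=True)

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)   # causal LM, labels = inputs
args = TrainingArguments("gpt2-continued", per_device_train_batch_size=4,
                         gradient_accumulation_steps=8, learning_rate=1e-4,
                         lr_scheduler_type="cosine", num_train_epochs=1, fp16=True)
Trainer(model=model, args=args, train_dataset=lm_data, data_collator=collator).train()
```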
NLP: Infer intent of finalising a transaction in a dialogue/chat system
https://discuss.huggingface.co/t/nlp-infer-intent-of-finalising-a-transaction-in-a-dialogue-chat-system/34422
0
251
Hi all, I have been tasked with tackling the following problem and I wanted to ask for different approaches on how best to approach it. Problem: I am looking to infer the intent of finalising the transaction during a chat conversation. For example: the buyer messages "are there any scratches on the table?" and gets the response "no, there are no scratches, the table is brand new"; the probability of finalizing the transaction is 89%. Data available: chat data is available for the last month, all in Polish, with a flag indicating whether a transaction was completed or not. The feedback was acquired by sending a custom binary closed question 48h after the conversation ended, probing both sides, buyer and seller. My approach: I was looking to preprocess the whole dialogue (remove stopwords, lemmatisation) as one text and pass it through TF-IDF (using n-grams as well). Then, based on the frequency of words, determine how relevant those words are to a transaction, and fit a classifier (naive Bayes) to estimate the probability of a transaction. An open question is whether to use the whole dialogue up to a given point or just the last 2, 4, ... messages exchanged between the buyer and the seller. Looking forward to your thoughts on the topic. Thanks a lot in advance for your help.
2023-03-22T18:20:34Z
[]
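A minimal scikit-learn sketch of the approach outlined in the post above (TF-IDF with n-grams plus naive Bayes); the Polish snippets and labels are made-up placeholders. The same pipeline can be fit on whole dialogues or only on the last few messages to compare the two variants.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Hypothetical data: each dialogue flattened to one preprocessed string, label 1 = transaction completed.
dialogues = [
    "czy stol ma rysy nie stol byc nowy",
    "ile kosztowac 200 zl za drogo",
    "czy byc dostepny tak zapraszac po odbior",
    "czy wysylka mozliwa nie tylko odbior osobisty rezygnowac",
]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),  # unigrams + bigrams
    ("clf", MultinomialNB()),
])
pipeline.fit(dialogues, labels)

new_dialogue = "czy stol byc nowy tak zapraszac"
print(pipeline.predict_proba([new_dialogue])[0, 1])  # probability of finalizing the transaction
```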
Conversational Budget Analytics
https://discuss.huggingface.co/t/conversational-budget-analytics/33730
1
516
I recently thought of an idea which seems like it might be useful and so I would like to share it with the Hugging Face R&D Community.Last year, I did some volunteering pertaining to catalyzing and spurring AI-enhanced budget navigation and analytics. A thought was that the general public, accountants, and auditors could each navigate public sector budgetary data conversationally using dialogue systems or chatbots. Furthermore, these dialogue systems could be multimodal, producing data visualizations and analytics alongside their natural-language responses.More recently, large language models and chatbots are quite popular. Contemporary dialogue systems can answer natural-language questions while indicating their document-based sources. What about dialogue systems which could answer questions about large-scale budgets, spreadsheets, tables, and other database data?Approaches to connecting dialogue systems to budgetary data include, but are not limited to:Software data adapters.Automatically generating abundant “virtual documents” with “pass-through data provenance” which can be used to trace back to data resources utilized to generate the documents.Expanding on point 2, the idea that I would like to share, today, is that, for large-scale budgetary datasets, software tools could generate a very large number of “virtual documents” which each utilize natural language (and, perhaps, multimodal data visualizations) to answer automatically-generated questions.Large language models could, then, be trained on large-scale corpora of “virtual documents”. Large language models could, with respect to providing sources, dereference or redirect through these “virtual documents” back to the actual data (spreadsheets, tables, budget-related files). In this way, provided answers accompanied by hyperlinks would be able to refer end-users to actual data through the “virtual documents”.That is, accompanying hyperlinks provided to end-users would “pass through” or “redirect through” the “virtual documents” (which needn’t be, but could be, stored after training) to allow end-users to conversationally interact with budgetary data and to navigate into (views of) backing data.I wanted to broach these topics with the Hugging Face R&D Community and would be very much interested in discussing these and any other ideas towards delivering conversational budget analytics to end-users. Thank you.
2023-03-13T23:51:08Z
[ { "date": "2023-03-19T06:45:37Z", "reply": "Clarifying, pertinent technologies include: (1) AI-enhanced business intelligence for public-sector accountants, auditors, analysts, and comptrollers, and (2) AI-enhanced Web-based UI/UX for the public, journalists, and government watchdog organizations to be able to better access and interact with this same data.Today, in the United States of America, relevant websites include, but are not limited to:data.gov,usaspending.gov, andperformance.gov.Also, there was an exciting development since I wrote the earlier post. Here is an example of the new state of the art,Copilot for Excel:https://www.youtube.com/watch?v=I-waFp6rLc0." } ]
TRL loss blowing up
https://discuss.huggingface.co/t/trl-loss-blowing-up/33821
2
537
Hello @lvwerra, @natolambert, I am trying to use a Pegasus model and improve it in certain aspects using the TRL library. My reward function is based on ROUGE. While training on a subset of the CNN dataset, the model loss seems to explode and the model outputs gibberish. Since I am new to this area, I need some help understanding the problem. You can view the Wandb logs here. Best, Raj
2023-03-15T01:37:06Z
[ { "date": "2023-03-15T14:03:18Z", "reply": "Hi@RajSangcould you please share a Colab notebook or a minimal example that reproduces your problem? That will help us better understand what’s going wrong" }, { "date": "2023-03-16T00:30:51Z", "reply": "Thanks for responding@lewtun,hereis the colab notebook!" } ]
Diffusion models for environmental sound generation
https://discuss.huggingface.co/t/diffusion-models-for-environmental-sound-generation/33708
0
341
I have in mind to generate environmental sounds from text or even simpler numerical values, based on stable diffusion. Does anyone have any research suggestions for me? The idea is to generate a sound scene like “rain with a very strong wind”. Or just modulate the intensity of the rain for example.Thanks in advance for the ideas/advice.
2023-03-13T17:18:27Z
[]
Does anyone fine-tune the bloom7b model with PEFT?
https://discuss.huggingface.co/t/dose-any-one-fine-tune-bloom7b-model-with-peft/33690
0
414
I want to fine-tune bloom7b with PEFT, but it doesn't work. It gives me the following error: RuntimeError: self and mat2 must have the same dtype
2023-03-13T11:23:14Z
[]
Minimize number of transformers checkpoints for serving muliple client
https://discuss.huggingface.co/t/minimize-number-of-transformers-checkpoints-for-serving-muliple-client/29733
3
380
Hi all, my objective is to build a platform where every customer can send its own text classification corpus and get back its own model, trained and served. Training a separate transformer for every customer is straightforward but intractable in terms of disk usage as the number of customers increases. I could use a single BERT backbone to get embeddings from each corpus and train a custom two-layer neural net for each customer; that is a first strategy that makes disk usage more reasonable. My question is: does there exist a white paper, blog post, or similar that addresses this problem and proposes possible strategies while maintaining the highest performance? I'm sure it is a common issue every AI-based company could face. Thanks for your help. Regards
2023-01-16T07:28:17Z
[ { "date": "2023-02-15T10:26:55Z", "reply": "Hey@ykacer– have you looked at our newest library,peft? If your problem can be solved through fine-tuning of a few base models, the total disk usage is very reasonable" }, { "date": "2023-03-01T07:28:51Z", "reply": "Hi@joaogante, thanks a lot for the suggestion i’m gonna have a look at it." }, { "date": "2023-03-09T15:14:42Z", "reply": "Dear@joaogante, thanks again for your information, i was able to succesfully run a Lora based roberta with my own data using one of your examples notebook. Just a question: I was wondering how PEFT is different from Adapter framework?" } ]
How to approach an NLG problem, mainly generating summaries from a table/chart using transformers-based models
https://discuss.huggingface.co/t/how-to-approach-nlg-problem-mainly-generating-summaries-from-a-table-chart-using-trasnformers-based-models/33201
0
287
I am trying to explore training a model on a table/chart (aggregated data), chart title, axis labels, and a target text summary. Any suggestions on how to proceed?
2023-03-06T23:37:30Z
[]
Carrying Gradients Through Generate
https://discuss.huggingface.co/t/carrying-gradients-through-generate/301
5
2,593
Hi folks,How would you best recommend that I pass gradients through generate? below is a rough code snippet explaining the objective.I am thinking that I could take the hypo_ids directly from the model output (instead of from generate), but this is no longer natural because teacher-forcing is used to generate these.Thoughts?Context from Pytorch Lightning Implementation:# self.model = BartForConditionalGeneration("facebook/bart-base") def forward(self, batch, batch_id): return self.model(input_ids = batch["x"], decoder_inputs=["decoder_inputs"], decoder_labels = ["decoder_labels"] ) def training_step(self, batch, batch_id) """Want two losses, language modelling loss and semantic similarity loss""" # language modelling loss outputs = self(batch)[0] language_modelling_loss = outputs[0] # semantic similarity loss target_ids = batch["target_ids"] hypo_ids = self.model.generate(batch["x"]) # no gradients passed of course semsim_loss = 1 - nn.CosineSimilarity(dim=0)(target_ids, hypo_ids) return {"loss": language_modelling_loss + semsim_loss}
2020-07-15T11:45:52Z
[ { "date": "2020-07-16T11:16:02Z", "reply": "EDIT: The only method seems to be to use RL to simulate the sampling that occurs.seehttps://papers.nips.cc/paper/8682-training-language-gans-from-scratch.pdf" }, { "date": "2020-07-16T17:14:36Z", "reply": "@yjerniteis also interested in this line of work.I would write a method similar to parlai’sdecode_forcedthat forces the model to decode the tgt sequence and estimates its probability, then backprob the sum of the GT sequence. I’m not sure if that will lead to super similar results to the current teacher-forcing training approach, but it would be interesting to test!" }, { "date": "2020-08-06T17:06:36Z", "reply": "I just tried a simple ffnn to replicate argmax, but found that the gradients are almost always zero which makes sense I guess - changing other vector values will almost never change the maximum value." }, { "date": "2020-11-02T12:55:18Z", "reply": "This should also be interesting:Big `generate()` refactor" }, { "date": "2023-01-29T22:57:54Z", "reply": "Hello,I’m trying to do something similar. Did you manage to implement something working?" } ]
Model Adaptation
https://discuss.huggingface.co/t/model-adaptation/30295
0
320
Hello, the aim of this discussion is to share ideas. I would like to adapt a model for different tasks (summarization, translation, ...) while trying to find out how it works (the evaluation part) by digging through the model or even the tooling (e.g., which layer makes which decision that affects the model's output). Does anyone have ideas to share about this? Which models are suitable, and how can this be done?
2023-01-24T13:11:26Z
[]
Swapping out self-attention layer in BERT
https://discuss.huggingface.co/t/swapping-out-self-attention-layer-in-bert/29398
0
546
Hi team, I am looking to swap out the self-attention layer in the BERT architecture and just retrain the embeddings with all other parts as is. I basically want to swap out these 20 lines. Is it possible for me to write my own self-attention module, keep everything else the same, and retrain the BERT embeddings? (I have high confidence that it is, but I'm hopefully looking for instant gratification rather than sifting through 1000s of lines of code :D. Ideally, I think I would write my own module like this one and just wire it into the current pipeline.) Just scoping out the effort for this.
2023-01-11T19:44:12Z
[]
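A hedged sketch of where the self-attention modules live in the transformers BERT implementation and how one might swap them in place. MySelfAttention is a deliberately trivial stand-in (a single linear map) just to show the wiring; a real replacement must keep the same call signature and return a tuple whose first element is the new hidden states.

```python
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class MySelfAttention(nn.Module):
    """Hypothetical drop-in; must accept the same arguments as BertSelfAttention
    and return a tuple whose first element is the new hidden states."""
    def __init__(self, config):
        super().__init__()
        self.proj = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states, attention_mask=None, head_mask=None,
                encoder_hidden_states=None, encoder_attention_mask=None,
                past_key_value=None, output_attentions=False):
        return (self.proj(hidden_states),)        # toy "attention": just a linear map

config = BertConfig()
model = BertModel(config)                         # randomly initialised, as in the question

for layer in model.encoder.layer:                 # each layer.attention.self is BertSelfAttention
    layer.attention.self = MySelfAttention(config)

ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])
print(model(input_ids=ids).last_hidden_state.shape)   # forward still runs end to end
```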
Why are huge batch sizes used for pretraining and small ones for finetuning?
https://discuss.huggingface.co/t/why-are-huge-batch-sizes-used-for-pretraining-and-small-ones-for-finetuning/10836
3
9,520
In most, if not all papers on language models, I find that they often use very large batch sizes for pretraining on a language modeling task. But when they then finetune their model to show its performance on downstream tasks, the batch sizes are suddenly very small.For instance, theRoBERTa papershows that its batch size during pretraining was 8k sentences (Table 9 in the appendix), however for finetuning the batches are considerably smaller (Table 10, appendix): 16 (RACE), 48 (SQuAD), 16, 32 (GLUE).This has puzzled me since forever and I have never discovered the rationale behind this. Is it a matter of scale? Something like: while pretraining you have so much different data, that you just want as much in one go as you can - it does not matter as much that the loss is smoothed out (averaged) over such huge batches. But when finetuning over a smaller dataset you do not want to average the loss over too much of the dataset at once because you then lose peculiarities of samples quickly.Or is there another reason? All ideas are welcome.
2021-10-17T00:10:59Z
[ { "date": "2021-10-18T01:00:39Z", "reply": "I don’t think they use the same hardware for pretraining and fine-tuning. E.g. multiple TPU pods or a GPU cluster for pretraining allows a big batch size but that’s maybe something the research team can only do once. Fine-tuning, and something more accessible (just one GPU for instance) then requires a smaller batch size to avoid the OOM.This is just a guess however." }, { "date": "2022-04-12T10:58:00Z", "reply": "So apparently I never sent this reply, but it was typed already:That’s actually a very good point that I had never considered.I wonder whether my argument about batch sizes still holds. 16 is still a quite small batch size, and gradient accumulation is quite cheap." }, { "date": "2023-01-10T15:11:10Z", "reply": "I’ve noticed a huge increase in performance of my model when I fine tuned T5 with a smaller batch size (16 or 32) than even 128. I think it simply boils down to the model getting to see a more diverse set of samples during fine tuning." } ]
How to load only a few parameters
https://discuss.huggingface.co/t/how-to-load-only-a-few-parameters/29117
0
412
I want to modify some of a model's config parameters, e.g. "hidden_size": 256 or "pooler_fc_size": 256. If I do so, I will not be able to load the pre-trained model's parameters completely; I want to load only part of the parameters because the network structure is modified. Right now my plan is to load the last_hidden_states. How do I write the code for this, or where is the documentation?
2023-01-07T15:22:29Z
[]
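A hedged sketch of partial loading: build the modified config, then copy over only the pretrained tensors whose names and shapes still match, loading with strict=False. Note that changing hidden_size (or pooler_fc_size) changes the shape of almost every tensor, so very few pretrained weights remain compatible, which is a large part of the answer to the question above.

```python
from transformers import BertConfig, BertModel

# Modified architecture (smaller hidden size), randomly initialised.
config = BertConfig(hidden_size=256, num_attention_heads=4, intermediate_size=1024)
model = BertModel(config)

# Weights of the original pretrained model.
pretrained_sd = BertModel.from_pretrained("bert-base-uncased").state_dict()

# Keep only tensors whose name and shape still match the new model.
own_sd = model.state_dict()
compatible = {k: v for k, v in pretrained_sd.items()
              if k in own_sd and v.shape == own_sd[k].shape}

missing, unexpected = model.load_state_dict(compatible, strict=False)
print(f"loaded {len(compatible)} tensors; {len(missing)} stay randomly initialised")
```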
Encoder-Decoder vs Decoder Only Architecture Models
https://discuss.huggingface.co/t/encoder-decoder-vs-decoder-only-architecture-models/28075
0
1,492
Transformers originally started with encoder-decoder models for solving machine translation tasks. Since then, decoder-only transformer models have emerged as strong contenders for 1) translation, 2) better generalization on downstream tasks, and 3) a host of applications from classification to translation to generation. When should we consider an encoder-decoder style architecture vs. a decoder-only architecture? In what cases can an encoder-decoder architecture outperform a decoder-only architecture? Thanks
2022-12-18T19:27:17Z
[]
Train BERT with sentence embeddings
https://discuss.huggingface.co/t/train-bert-with-sentence-embeddings/27785
0
411
Hi, I'm trying to use sentence embeddings, calculated by average pooling over chunks of a long text, as input to train a model based on the AutoModelForSequenceClassification class. I used the "inputs_embeds" parameter to pass the embeddings to the model, but something strange is happening: the metrics do not change over time. These are the values that essentially stay the same across all 30 epochs: {'eval_loss': 0.48057085275650024, 'eval_f1': 0.3008849557522124, 'eval_roc_auc': 0.5, 'eval_accuracy': 0.0, 'eval_precision': 0.17708333333333334, 'eval_recall': 1.0, 'eval_hammingLoss': 0.8229166666666666, 'eval_runtime': 0.7474, 'eval_samples_per_second': 149.856, 'eval_steps_per_second': 149.856, 'epoch': 30.0}. Does anyone have any tips on how to train BERT using embeddings as input?
2022-12-14T01:39:56Z
[]
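A minimal sketch of feeding precomputed chunk embeddings through inputs_embeds for multi-label classification (which the reported metrics suggest). The shapes, label count, and model name are assumptions; each averaged chunk vector plays the role of one "token", and its size must equal the model's hidden size.

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4, problem_type="multi_label_classification"
)

batch_size, n_chunks, hidden = 2, 8, 768                       # hidden must equal config.hidden_size
chunk_embeddings = torch.randn(batch_size, n_chunks, hidden)   # your averaged chunk vectors
attention_mask = torch.ones(batch_size, n_chunks, dtype=torch.long)
labels = torch.zeros(batch_size, 4)
labels[0, 1] = 1.0                                             # multi-label targets must be floats

out = model(inputs_embeds=chunk_embeddings, attention_mask=attention_mask, labels=labels)
print(out.loss, out.logits.shape)
out.loss.backward()                                            # gradients reach the classifier and encoder
```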
Is the evaluate-metric/accuracy the same as macro-accuracy?
https://discuss.huggingface.co/t/is-the-evaluate-metric-accuracy-the-same-as-macro-accuracy/27770
0
472
I am running tests on BERT transformers and using the evaluate Python library. On the site, it says: “computed with Accuracy = (TP + TN) / (TP + TN + FP + FN) Where: TP: True positive TN: True negative FP: False positive FN: False negative”, which seems to indicate that it is the macro-accuracy.
2022-12-13T18:53:14Z
[]
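The quoted formula is plain overall accuracy (the fraction of correct predictions), not macro accuracy; a macro-style variant such as balanced accuracy averages per-class recall instead. A small sketch of the difference, assuming scikit-learn and evaluate are installed:

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score
import evaluate

y_true = [0, 0, 0, 0, 1]
y_pred = [0, 0, 0, 0, 0]

print(accuracy_score(y_true, y_pred))            # 0.8  -> overall accuracy, as in the quoted formula
print(balanced_accuracy_score(y_true, y_pred))   # 0.5  -> macro-style, averages per-class recall

acc = evaluate.load("accuracy")
print(acc.compute(predictions=y_pred, references=y_true))   # {'accuracy': 0.8}
```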
ConformerCTC for streaming
https://discuss.huggingface.co/t/conformerctc-for-streaming/27480
1
554
Is there a way to train a Conformer model with CTC loss function, such that when inferring live using blocked buffered data, you get the same output as if passing the whole data in one go. Also, could this be resilient to sample offsets?I would like to use a Conformer model trained with CTC loss live using buffered data coming of a sensor.
2022-12-08T19:03:58Z
[ { "date": "2022-12-12T10:19:02Z", "reply": "There are a few papers on this already, such ashttps://arxiv.org/pdf/2203.05736.pdf.How about using memories? Such astransformer recurrence" } ]
Sequence classification
https://discuss.huggingface.co/t/sequence-classification/27664
0
397
Hello. I am working on my graduation project, and it is my first project in ML. I am asked to highlight the sentences whose tag or label will be predicted. I have now run predictions and have [id, tag, predictionstring]. Is there any way to highlight using the prediction string, or do I have to get the start and end character of each predicted string? Another question: the model predicts only the long sentences (for example, the concluding sentence) and does not predict any closing or salutation. I don't know what is wrong or how I can fix it. Thanks in advance.
2022-12-11T21:34:51Z
[]
Individually Logging All The Layer/Neuron Outputs
https://discuss.huggingface.co/t/individually-logging-all-the-layer-neuron-outputs/27034
0
445
I’m interested in exploring the outputs of different layers/attention heads in models like BERT and BART and was wondering if there is any way to log all the individual outputs from different layers and components within those layers so feedforward networks etc. for a piece of input.Any leads or suggestions on how to do this? The only way I can think of right now is to modify the code myself and add logging everywhere but that is not generalizable across models and I’ll need to do it individually on a case to case basis.
2022-12-01T07:15:35Z
[]
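Two generic options that avoid modifying model code, sketched below with placeholder inputs: the built-in output_hidden_states / output_attentions flags, and PyTorch forward hooks registered on arbitrary submodules (here, the feed-forward "intermediate" blocks). The hook approach works across architectures as long as the module names are picked per model.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("hello world", return_tensors="pt")

# Option 1: built-in flags return every layer's hidden states and attention maps.
out = model(**inputs, output_hidden_states=True, output_attentions=True)
print(len(out.hidden_states), out.hidden_states[0].shape)   # embeddings + one per layer
print(len(out.attentions), out.attentions[0].shape)         # one per layer, per head

# Option 2: forward hooks log outputs of arbitrary submodules by name.
logged = {}
def make_hook(name):
    def hook(module, hook_inputs, output):
        logged[name] = output
    return hook

handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules() if n.endswith("intermediate")]
model(**inputs)
print(sorted(logged)[:3])
for h in handles:
    h.remove()
```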
Incremental decoding with T5
https://discuss.huggingface.co/t/incremental-decoding-with-t5/26930
0
805
Recently, we have seen evidence that in a variety of tasks, it may be helpful for a model to attend over intermediate computation steps when solving a task. An example is ReAct: Synergizing Reasoning and Acting in Language Models – Google AI Blog (googleblog.com). The authors cite some work from the neural program synthesis community where this approach was found beneficial. Let's assume we are processing conversations, where the context is progressively longer as the user and agent interact. Typically, we would re-encode the dialogue history and generate the answer from scratch for every interaction. Schematically, this could be represented as follows: step 1: [usr] sent_1 → answer_1; step 2: [usr] sent_1 [agent] sent_1 [usr] sent_2 → answer_2; ...; step k: [usr] sent_1 [agent] sent_1 [usr] sent_2 ... [agent] sent_k [user] sent_k → answer_k. Above, sent is just an abbreviation for "sentence". The LHS of "→" is the encoder input, the RHS is the decoder output. However, the answers are highly correlated, so arguably the model could predict more consistently if it was asked to show all the reasoning steps as the conversation progresses, instead of producing a single answer for the task. Schematically: step 1: [usr] sent_1 → answer_1; step 2: [usr] sent_1 [agent] sent_1 [usr] sent_2 → answer_1<sep>answer_2; ...; step k: [usr] sent_1 [agent] sent_1 [usr] sent_2 ... [agent] sent_k [user] sent_k → answer_1<sep>answer_2<sep>...<sep>answer_k. In inference, this is problematic because concatenating the answers can lead to very long sequences if everything is generated from scratch. However, I was wondering if the use_cache feature together with past_key_value could be used to effectively implement a memory on the decoder side? In the above, after we decode answer_1, we feed back the keys and values generated during decoding as past_key_values to decode answer_2. Then we would feed back the outputs to generate answer_3, and so on. So the model could attend over an updated conversational context and its past answers, but would not "revise" all its previous answers. @patrickvonplaten, am I naive to think that the caching during inference could be implemented with huggingface as is?
2022-11-29T12:36:32Z
[]
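use_cache / past_key_values do implement exactly this kind of decoder-side memory within one generation: each forward returns the cached keys/values and only the newest token has to be fed back. Below is a hedged sketch with T5 (model and prompt are placeholders). One caveat for the dialogue setting described above: the cache also stores cross-attention keys/values computed from the current encoder output, so reusing it across turns after re-encoding a longer dialogue history is not supported out of the box.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small").eval()

enc = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
encoder_outputs = model.get_encoder()(**enc)

decoder_input = torch.tensor([[model.config.decoder_start_token_id]])
past, generated = None, []
with torch.no_grad():
    for _ in range(30):                                   # greedy decoding, one token at a time
        out = model(encoder_outputs=encoder_outputs,
                    attention_mask=enc["attention_mask"],
                    decoder_input_ids=decoder_input,
                    past_key_values=past,
                    use_cache=True)
        past = out.past_key_values                        # keys/values for everything decoded so far
        next_token = out.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated.append(next_token.item())
        if next_token.item() == model.config.eos_token_id:
            break
        decoder_input = next_token                        # only the newest token is fed once a cache exists

print(tokenizer.decode(generated, skip_special_tokens=True))
```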
Is it possible to split a Bert-alike model's output into different task?
https://discuss.huggingface.co/t/is-it-possible-to-split-a-bert-alike-models-output-into-different-task/26820
0
463
Given a sequence output with 256 tokens, is it logical or reasonable to split it into two equal-length sub-sequences which are used for two independent downstream tasks?
2022-11-28T04:12:31Z
[]
Privacy enhancing technologies in model development
https://discuss.huggingface.co/t/privacy-enhancing-technologies-in-model-development/26521
0
485
Dear Community! I am a PhD researcher at the London School of Economics and am exploring the field of privacy enhancing technology (PET) usage in ML model development (e.g., approaches like differential privacy) and its societal and organisational implications. However, I am struggling to find data sources that allow me to analyse the diffusion of PETs (ideally across companies or locations). One thought was that model cards on Hugging Face would capture the usage of PETs - is that the case? Do you have other data sources in mind that indicate PET usage? Thanks so much for your support!
2022-11-22T16:09:41Z
[]
Conversational QA pretrained model?
https://discuss.huggingface.co/t/conversational-qa-pretrained-model/26441
0
729
I was wondering if we have a pretrained model for conversational QA. We have conversational AI, and we have QA which needs context; do we have anything which, when given a context, answers technical questions but also acts as a chat bot? Any help is appreciated.
2022-11-21T10:52:30Z
[]
Composition Training/Validation Split of AutoTrain
https://discuss.huggingface.co/t/composition-training-validation-split-of-autotrain/26328
0
955
Hey everyone,is there any documentation about how AutoTrain splits your data into training and validation data? I selected the option that it should automatically do so. I conducted binary text classification with BERT.It would be great to know in order to report a percentage split in a research project.Thanks!Bestrob
2022-11-18T19:04:20Z
[]
Do the common tricks in transformers help with RNNs?
https://discuss.huggingface.co/t/do-the-common-tricks-in-transformers-help-with-rnns/25879
0
485
Does anybody know of research or work that applies tricks commonly used with transformers (layer norm, masked language training, etc.) to RNNs? Do these things still help improve RNNs? If not, are there reasons you think these techniques would or would not translate to RNNs?
2022-11-10T17:48:33Z
[]
Metadata of NLP datasets
https://discuss.huggingface.co/t/metadata-of-nlp-datasets/25603
0
589
Hi, I'm new to the NLP domain and the Hugging Face ecosystem. I wanted some suggestions on where to read about the metadata of datasets used for NLP. I have worked mostly with vision data so far, where simple meta-features shared by image datasets in general were: image resolution, number of training samples, number of classification labels, and number of channels. Would the text data used in NLP tasks have some such features in common, aside from the number of training samples and the number of classification labels? Any thoughts are welcome. Thanks!
2022-11-05T19:51:51Z
[]
I'd like to understand on how to train a neural net with agents and evolution
https://discuss.huggingface.co/t/id-like-to-understand-on-how-to-train-a-neural-net-with-agents-and-evolution/25334
0
542
I'd like to understand how to train a neural net with agents and evolution. It might be easier to think of a game world, though I don't create games; the training will happen inside a Jupyter notebook. Say I have 10 inputs, and the agent has some value (a reward store). The 10 values are unknown and have to be interpreted by a NN. The agent needs to improve its reward, though its output is just 3 options, like left/forward/right. Not every move results in a reward, so training likely takes some time. Depending on their reactions, agents might end up in different scenarios, and at some point one selects the best (n) agents (highest reward) and then trains again until a very good agent is able to interpret the input values. How does one create and train such a network? Normally one trains a neural network toward a certain goal using backprop, e.g., several inputs resolving to something like a DNN or an LSTM, but the rules here are so different. Does anyone know of a Jupyter sample for training a DNN like that?
2022-11-01T14:04:11Z
[]
How to annotate these type of data for custom tr-ocr training
https://discuss.huggingface.co/t/how-to-annotate-these-type-of-data-for-custom-tr-ocr-training/25227
0
503
Help
2022-10-30T14:50:15Z
[]
Online/streaming speech recognition
https://discuss.huggingface.co/t/online-streaming-speech-recognition/4456
2
2,989
Are there plans to implement online decoding for the speech recognition models such as wav2vec2 and XLSR? More specifically, to be able to receive audio in short chunks, and output partial transcripts as they become available.MotivationMany use cases are covered by the current wav2vec2 model in the library, involving batch recognition of pre-recorded text. However for an online application that wanted to continuously recognize speech on a live input stream, this may not be sufficient.
2021-03-17T00:22:36Z
[ { "date": "2021-09-11T18:50:39Z", "reply": "I would very much like to know whether this is possible too! Have you gotten any further on this,@arkadyark?" }, { "date": "2022-10-26T08:19:28Z", "reply": "please check this oneUse wav2vec2 models with a microphone easilyBeginnersHello folks, \nI wrote a little lib to be able to use any wav2vec2 model from the model hub with a microphone. Since wav2vec2 does not support streaming mode, I used voice activity detection to create audio chunks that I can feed into the model. \nHere is a little example, you canfind the code on github. \nfrom live_asr import LiveWav2Vec2\n\ngerman_model = \"maxidl/wav2vec2-large-xlsr-german\"\nasr = LiveWav2Vec2(german_model,device_name=\"default\")\nasr.start()\n\ntry: \n while True:\n tex…" } ]
Exploring contexts of occurrence of particular words in large datasets
https://discuss.huggingface.co/t/exploring-contexts-of-occurrence-of-particular-words-in-large-datasets/22119
2
803
Hi everybody, how are you? I am currently working on a project where we would like to explore and obtain the contexts of occurrence of particular words or n-grams in large datasets used to train language models, such as GitHub - josecannete/spanish-corpora: Unannotated Spanish 3 Billion Words Corpora. As you can imagine, the problem is that when dealing with such large datasets, conventional strategies, like using libraries such as pandas, require a lot of RAM and computing power, so here are my questions: Does the platform have any tools already available to carry out different types of searches on large datasets, which would facilitate this task? Is there some kind of server/service within the platform with enough RAM and computing power that we can access to load the full datasets and use an API to interact with from our Space? Thank you very much! Hernán
2022-08-25T23:55:27Z
[ { "date": "2022-08-29T22:10:11Z", "reply": "@nanomI’ll try and take a shot at providing some assistance. I am still a beginner at the huggingface suite but I’ve been using various aspects of it recently.Does the platform have any tools already available to carry out different types of searches on large datasets, which facilitates this task?Perhaps one thing to consider is thedatasetslibrary (here). From what I gather, it utilizes Apache Arrow under the hood to efficiently build a memory map of the data for efficient loading and processing. Withindatasetsthere is amap()function that I have used extensively with great success. If your dataset is some what customized, it might be worthwhile to build a loading script for thedatasetsobject and then runmap()over the data to perform your searches/calculations. I have done both of these recently and am happy to help and share my experience if you think it will benefit you.Is there some kind of server/service within the platform with enough RAM and computing power that we can access to load the full datasets and use an API to interact from our Space?This one I’m not super certain of. If I read your question correctly, you’re asking about the possibly to load some data, model, and training routine onto a set of compute hardware on hugginface’s end that has a lot of RAM (and possibly GPUs) available to run the training pipeline. If this is the case, then perhaps thehardware solutionand/or theHF servicesmight be of interest." }, { "date": "2022-10-19T17:27:37Z", "reply": "Thank you very much for your response!@nanommanaged to implement an inverted index to address the first problem but we are still struggling with hardware limitations. Do you know who we should contact to ask some questions about which is the best pricing option for a particular project regarding hardware?HF services" } ]
Explaining medical diagnosis
https://discuss.huggingface.co/t/explaining-medical-diagnosis/24664
0
517
Hi there, have any of you come across a model (or set of models) for explaining clinical diagnoses? I think it may be similar to text summarisation, but with a few additions: it needs to use different vocabulary (for patients), and some parts may not be necessary or even shouldn't be explained (disturbing content that a real doctor should explain directly). Best, Michal
2022-10-19T11:59:29Z
[]
Attention mask and token ids
https://discuss.huggingface.co/t/attention-mask-and-token-ids/15243
1
2,224
Hi, I am taking the following wonderful course: Transformers. When we do padding, we pad the sequence with 0 and ask the model not to consider the padding. I was wondering: is there some token with id = 0? Because in that case we would be avoiding a token with id = 0, which is not good. Could anybody please help me here? Thank you very much.
2022-03-01T23:23:57Z
[ { "date": "2022-10-18T14:37:46Z", "reply": "First, you’re right, we wouldn’t want to avoid real input.That’s why we use a padding token.There are different special tokens, such as the padding token, begin of sentence (BOS) token, end of sentence (EOS), unknown (unk) and more.Eventually, since we’re working with vectors of numbers (tensors) every token has a token id corresponding to the token. Meaning, the special tokens are also embedded as numbers.Usually the padding id correspond to 0, so when you pad with 0, you actually use the padding token, which is great" } ]
BERT from scratch without self-supervised learning
https://discuss.huggingface.co/t/bert-from-scratch-without-self-supervised-learning/24397
0
599
Suppose one copies or creates the bert-base architecture, meaning the model layers themselves and not the training curriculum (MLM and NSP). Next, suppose that one adds a classifier head to the copied bert-base architecture that consists of a single linear layer to make predictions over the set of classes associated with a dataset. One then randomizes the model's parameters and begins training this model on a labeled dataset using supervised learning only. Namely, with the preprocessed data (which includes positional embeddings), the data is passed all the way through the bert-base architecture and the linear classifier layer to produce a prediction over the class set, a loss is calculated, and the weights are updated via backpropagation and stochastic gradient descent. My question is: would this be a good idea? Is there anything about this approach (compared to the self-supervised-then-task-specific training curriculum) that would prevent one from obtaining decent metrics on a test set? As I understand, the BERT authors had a lot of unlabelled data, but suppose one had an equivalent amount of labelled data for a particular domain (let's say sentiment about movie reviews). Is there any reason why the above approach would produce poor results?
2022-10-13T17:12:22Z
[]
Cross Lingual Transfer Learning ( XNLI )
https://discuss.huggingface.co/t/cross-lingual-transfer-learning-xnli/24383
0
802
I was reading this paper on XNLI.And I wanted to understand what does TRANSLATE-TRAIN and TRANSLATE-TEST entail.I will write down what I understood.TRANSLATE-TRAIN: In this, we train N models. N stands for 15 languages. So we train 15 separate models for each language. How do we test this model? Should we run each of these 15 models per language and jot down the average accuracy under each language? For eg: We train 15 language models, then we test each of these 15 models on the English test set and then calculate the average accuracy. Does this sound right?I have been struggling with this baseline for so long.https://arxiv.org/abs/1911.02116image1671×994 262 KB
2022-10-13T11:58:02Z
[]
XLSR-Wav2Vec2 with punctuation
https://discuss.huggingface.co/t/xlsr-wav2vec2-with-punctuation/5775
1
1,362
Hi,I’ve been trying to train XLSR-Wav2Vec2 to predict transcription + “relevant” punctuation (typically we don’t keep the punctuation).The idea was to get punctuation in an end-to-end manner as the audio sample gives us additional hints to differentiate between statements, questions and exclamations vs doing an additional post-processing.The goal is to be able to speak without saying “period”, “question mark”, etc… which is unnatural.Here are my main steps:I started from the transformers examplerun_common_voiceI use the CommonVoice English dataset as it’s easier to preprocess than other languagesI useunidecodeto preprocess the text which does a lot of smart changes → Málaga becomes Malaga, François becomes Francois, etcmy regex of chars to remove is"()[\]_+/=%|` (was tricky to create, the order here matters)I have a dict of resamplers (since they’re not all 16,000)I filter by durationNot sure if the wer metric should be adapted. Maybe I should add a separator between the punctuation but based on the way it’s calculated, I feel like it should decrease regardless.So far my training loss reduces (when using the full dataset it gets to nan probably due to some corrupted examples) but I keep a wer of 1. When testing a long run, I just get an empty output.To reproduce:clonethis repopython run_common_voice.py --dataset_config_name en --output_dir ./model --overwrite_output_dir --model_name_or_path facebook/wav2vec2-large-xlsr-53 --num_train_epochs 3 --per_device_train_batch_size 16 --evaluation_strategy epoch --fp16 --freeze_feature_extractor --group_by_length --gradient_checkpointing --do_train --do_eval --save_total_limit 1 --logging_steps 100 --warmup_steps 500 --load_best_model_at_end --metric_for_best_model wer --greater_is_better False --gradient_accumulation 2 --activation_dropout 0.055 --attention_dropout 0.094 --feat_proj_dropout 0.04 --hidden_dropout 0.047 --layerdrop 0.041 --learning_rate 0.000234 --mask_time_prob 0.082 --per_device_eval_batch_size 8Feel free to give any suggestions. I’ll update if I get more interesting results.
2021-04-26T14:07:49Z
[ { "date": "2022-10-12T17:24:42Z", "reply": "Hi, how did you preprocess the punctuation?" } ]
How to train relation extraction?
https://discuss.huggingface.co/t/how-to-train-relation-extraction/24280
0
1,285
I am a little bit confused. For example, I want to fine-tune a NER model on English BERT, and I see that John Snow Labs has an NLP task for relation extraction. My question is: how can we train the relation extraction after fine-tuning the NER? Can we do it in Hugging Face, and which transformer models are used for relation extraction?
2022-10-11T15:33:14Z
[]
Problem in understanding the test phase in Few-shot learning
https://discuss.huggingface.co/t/problem-in-understanding-the-test-phase-in-few-shot-learning/24231
0
581
I am studying few-shot learning ([1703.05175] Prototypical Networks for Few-shot Learning) and its source code, but the query set and the support sets in the test phase do not make sense to me. I am trying to understand why the test phase or validation phase uses labeled data for the query set. It is very different from classification or semi-supervised learning. When we train the encoder, we use the L2 distance, a support set, and a query set to train the network. The samples in both sets are chosen based on their labels (k-way, n-shot setting). In the test phase, we have labels different from the train set. We choose the support and query sets the same way as in the training phase, without updating the encoder weights, but the query set is still chosen based on labels. I do not understand why, in the test phase, we use the query set and try to predict the label based on distance to just the x ways. Shouldn't we calculate the prototype of each label and calculate the distance of query images to all labels in the test phase? That makes more sense than calculating the distance of query samples (without considering them based on their labels). The same goes for the training phase: no distance among all labels' prototypes is computed. Again, testing based on a support and query set does not make sense to me in the test phase.
2022-10-10T15:20:05Z
[]
`nan` training loss but eval loss does improve over time
https://discuss.huggingface.co/t/nan-training-loss-but-eval-loss-does-improve-over-time/4521
5
3,896
I’ve been playing around with the XLSR-53 fine-tuning functionality but I keep gettingnantraining loss.Audio files I’m using are:Down-sampled to 16kHzSet to one channel onlyVary in length between 4 to 10sI’ve set the following hyper-params:attention_dropout=0.1hidden_dropout=0.1feat_proj_dropout=0.0mask_time_prob=0.05layerdrop=0.1learning rate:on a warmup schedule to3e-4for 3 epochsat5e-4for 3 epochsback to3e-4Sadly, I’m fine-tuning the model on an unpublished corpus, so I am probably not at liberty to upload it here which might hinder reproducibility efforts greatly.Here’s what the loss and WER progression looks like:image497×815 75.2 KBAnyone know what could be happening here? The model seems to be training just fine and some testing proves that the model performs well on the language I’m training it on. So what’s up with the training loss?Pinging@patrickvonplatenand@valhallaas this might be relevant to them.
2021-03-17T19:53:43Z
[ { "date": "2021-03-18T06:59:14Z", "reply": "Hey@jjdv,I’m sorry without a google colab it will be difficult to debug this for us. Given that your WER seems to decrease nicely - there might just be a problem at displaying the values…let’s see whether other people encounter the same problem" }, { "date": "2021-03-18T16:41:32Z", "reply": "hey@patrickvonplaten!I forgot to attach the notebook to my post. (I’m not fine-tuning on colab so feel free to just import the notebook there).Again, not sure how useful it would be since the data isn’t available publicly (yet!)Here’s the notebook!" }, { "date": "2021-03-21T21:04:36Z", "reply": "I looked a bit into it and the problem is the following:If one loss becomesnanorinfall the following displayed losses also becomenanorinfsince the shown loss is the average of all losses seen so far, see:transformers/trainer.py at 82b8d8c7b02562695f88be81cf0993972e324874 · huggingface/transformers · GitHubHowever this doesn’t mean that the losses afternanis displayed are actually useless → the model can very well train. So it’s more of a display error than an actual error often times. All in all my best suggestion here is to just take a look at the validation loss and if it goes down smoothly continue training" }, { "date": "2021-03-23T19:43:03Z", "reply": "Someone suggested adding this parameter in hopes of getting rid of this problem:ctc_zero_infinity=TrueLoss is gonna be gigantic and it does hold that every time I faced this issue, the first training loss wasInfso this is probably a good fix for the issue!" }, { "date": "2022-10-10T10:43:56Z", "reply": "i have same problem but also i have eval_wer is 1.0, at the beginning of training eval_wer is 0.6 and 0.5 and after 19 ephocs the eval_wer is 1.0 and still 1.0 in ephoc 33" } ]
LayoutLM for extraction of information from tables
https://discuss.huggingface.co/t/layoutlm-for-extraction-of-information-from-tables/7464
1
1,483
Can the LayoutLM model be used or tuned for table detection and extraction?The paper says that it works on forms, receipts and for document classification tasks.
2021-06-27T15:43:27Z
[ { "date": "2022-09-29T07:23:58Z", "reply": "Hi@ujjayants, were you able to find the answer. I too have the same question in mind. Just want to know your findings.Thanks" } ]
Is there a way to split a news article into subtopic
https://discuss.huggingface.co/t/is-there-a-way-to-split-a-news-article-into-subtopic/23436
4
1,239
Hello, is there a way I can perform text segmentation on news articles?For example, a news article usually contains the main topic, but when reading through, there might probably be some subtopics present in the article. Is there a way I can divide those articles into those subsections/subtopics so that a news article can contain 2,3 or more sections depending on the subtopics discussed in that particular article.In case you are curious about what I need this for, I’m performing summarization on news articles, so instead of summarizing or parsing the whole article into the model at once, I want to divide them into sections based on what is discussed in the article and then summarize each section. Basically I’m trying to imitate what is done atsummari.comI will appreciate it if someone has done something like this before, or if anybody knows a way I can work through it.
2022-09-21T10:53:57Z
[ { "date": "2022-09-21T15:55:51Z", "reply": "I’d recommend looking intoBERTopic" }, { "date": "2022-09-22T11:04:33Z", "reply": "Thanks for your response, I checked it out and it is not addressing what I’m trying to do.BertTopic is kind of grouping multiple articles into various topics based on how frequently some words appear there.But what I’m trying to do is that given a single article I want to be able to divide that article into sections/subtopics if any is present." }, { "date": "2022-09-22T13:04:12Z", "reply": "You could break the article into paragraphs and run it through BERTopic" }, { "date": "2022-09-22T14:23:00Z", "reply": "Wow, I will try this out. Thank you." } ]
Best practices for estimating FLOPs-per-token with real datasets?
https://discuss.huggingface.co/t/best-practices-for-estimating-flops-per-token-with-real-datasets/23394
1
1,693
Hi folks,I’m currently reading the T-Few paper on few-shot learning and in section 4.2 they provide a table and estimate of the 11B parameter model’s inference costs as follows:We summarize the costs in table 1 and discuss them below. For all estimates, we use the median number of shots (41) across the datasets we consider. Rank evaluation and our unlikelihood loss both require processing every possible output choice to attain a prediction for an unlabeled example.The median combined tokenized sequence length for the input and all possible targets is 103 for the datasets we consider.…Processing a single input and all target choices with T-Few requires 11e9×103 = 1.1e12 FLOPs, whereas few-shot ICL with GPT-3 175B requires 2×175e9×(41 × 98 + 103) = 1.4e15 FLOPs – more than 3 orders of magnitude more.My question is: why is themedianinput sequence length used for the FLOPs estimate instead of themean?I understand that a dataset can have outliers in length, but I’m curious whether using the median is common practice.Thanks!
2022-09-20T12:40:53Z
[ { "date": "2022-09-20T13:44:17Z", "reply": "From Colin Raffel internally:Yeah, the mean can be a bit weird for sequence length since it’s a heavy-tailed distribution with lots of outliers (not normally distributed). I think in this case the median and mean were similar and we just used the median since it’s an int." } ]
Resources on interpretability of wav2vec-style speech models
https://discuss.huggingface.co/t/resources-on-interpretability-of-wav2vec-style-speech-models/23050
0
629
Hello everyoneBig thanks to HuggingFace for creating this amazing framework, and the active community as well! I’ve been using huggingface for a while now and been reading this forum as well.I am working on multi-lingual speech models and am interested in understanding how the pre-trained wav2vec-style models represent input utterances (from a phonetics perspective if possible). For example, I would like to know how Language Identification Model like “VoxLingua107 Wav2Vec Spoken Language Identification Model” goes about representing a collection of short utterances in English vs. say Thai.The most straight-forward method I know is to take final layer output embeddings (in inference mode) and to use t-SNE to cluster. But this doesn’t seem to help as much.I am looking for literature, codes, frameworks (like Captum) and tutorials which use wav2vec-style models and focus on interpretability. Please help. Thank you
2022-09-12T23:53:24Z
[]
Keypoint Detection Accuracy is Very Low
https://discuss.huggingface.co/t/keypoint-detection-accuracy-is-very-low/22483
0
867
Unfortunately, I cannot say too much about my data set but I am trying to predict hundreds of keypoints/landmarks on a given image/video feed.I’m having great difficulty with my model architecture, I have not been able to get an accuracy greater than 20%.My model is based on similar ones I found via GitHub and a few academic papers, however they were all predicting dozens of points vs my hundreds. I only saw two model architectures across these sources:model = tf.keras.models.Sequential([ layers.Conv2D(32, (3,3), padding='same', input_shape=(512,512,1)), layers.LeakyReLU(), layers.MaxPool2D((2,2)), layers.Conv2D(64, (3,3), padding='same'), layers.LeakyReLU(), layers.MaxPool2D((2,2)), layers.Flatten(), layers.BatchNormalization(), layers.Dense(128), layers.ReLU(), layers.Dropout(0.5), layers.Dense(64), layers.ReLU(), layers.Dropout(0.5), layers.Dense(501) ])model = tf.keras.models.Sequential([ layers.Conv2D(32, (5,5), input_shape=(512,512,1), strides=1), layers.Conv2D(32, (3,3), strides=1), layers.MaxPool2D((2,2), padding="valid"), layers.BatchNormalization(), layers.Dropout(0.2), layers.Conv2D(64, (5,5), strides=2), layers.Conv2D(64, (5,5), strides=2), layers.AveragePooling2D((2,2), padding="valid"), layers.Flatten(), layers.Dense(128), layers.ReLU(), layers.Dropout(0.5), layers.Dense(501), layers.Softmax() ])My dataset is quite large, roughly 15K; this includes augmented data. Just looking for feedback.I’ve put this in the research section because I noticed that there’s very few models out there for keypoint detection; object detection seems to be much more popular.
2022-09-03T12:16:41Z
[]
Abstractive summarization ensemble
https://discuss.huggingface.co/t/abstractive-summarization-ensemble/19987
1
947
Hi! I was wondering if anyone could point me to papers, blog posts, etc that explain how to ensemble previously trained models for abstractive text summarization (if possible).Moreover, is anything like this already implemented in Huggingface?
2022-07-04T21:13:13Z
[ { "date": "2022-08-31T12:26:18Z", "reply": "What do you mean exactly by “ensembling” models?There are some models available on HuggingFace, for example the BART summarization model fine-tuned on the CNN/DailyMail dataset. You can take a look at the model card and how to use ithere. The implementation is very straightforward." } ]
Using OPTForSequenceClassification
https://discuss.huggingface.co/t/using-optforsequenceclassification/22340
0
683
Hi.I’m getting an error when trying to import the OPTForSequenceClassification class:ImportError: cannot import name ‘OPTForSequenceClassification’ from ‘transformers’ (/opt/conda/lib/python3.7/site-packages/transformers/init.py)Any heads up on why this might be the case? I saw the huggingface github already added this class.
2022-08-31T09:54:05Z
[]
Zero shot classification for automated electrocardiogram reports
https://discuss.huggingface.co/t/zero-shot-classification-for-automated-electrocardiogram-reports/21594
3
1,187
Hi, I am a person from healthcare and new to this forum. I am doing research related to classifying automated electrocardiogram (ECG) reports with pre-defined labels. After reading about zero-shot classification, I feel this could support my research, and I really want to know what type of transformers would support this work, and also the steps to be taken. Thank you
2022-08-13T17:53:25Z
[ { "date": "2022-08-26T14:55:48Z", "reply": "Hello,While I do not have a specific answer to your question, I would like to make a couple of observations that might help in getting you more replies.Your question is a little bit too open-ended for the forum perhaps, but regardless, it would greatly help other readers if you could specify:Why you feel zero shot learning could support your research (if you have access to labels as you seem to say, you may not need zero-shot learning but you could do some actual training!)What exactly your ECG data look like. Are they pictures, numbers in a sequence, Excel tables, free text? This makes a huge difference to model choice and right now your question is really not specific enough to be able to help you. Most HuggingFace models are meant to be used with sequential data (such as free text), although there are exceptions.What output you would expect from the model. Is this a binary output (such as “healthy”/“unhealthy”), a multi-class output, or a fully fledged worded report written by a machine? This will also influence your model choice quite a lot.A more precise indication of what you aim to get out of this question. Regarding the “steps to be taken” (and not knowing your level of knowledge in machine learning) this request could require a full book / course to be written, or maybe just a few high level bullet points (however these are unlikely to help if you don’t have coding experience and prior machine learning understanding). Again I’m having to make too many assumptions, which is why probably this question hasn’t received many replies, together with being too broad / vague. Adding the info I mentioned in the bullet points above might help others understand your use case better.Hope this helps." }, { "date": "2022-08-26T17:41:24Z", "reply": "HiThank you for your time and replyMy ECG data is a free text like belowimage943×563 80 KBI want to classify the free text into 4 defined labelsSince i have few training data with pre-defined labels, i thought of using zero shot classification or few shot classificationThe output is to say Eg : Normal ECG , Abnormal ECG, Myocardial infarctionThe aim is to classify the free text into 4 common labelsAnd I don’t have coding experience, trying to find the way to do and get help from experts in the forumThank you" }, { "date": "2022-08-26T18:31:50Z", "reply": "Hello,so here are my initial thoughts:Firstly, your “free text” is actually part of an image (what you posted is an image to a computer, not text), so before we even consider training a model, you’d need to find a way to convert the image into text. Something like OCR (thisis a free online software but there is much better around) would do the job, however you’d need to check the quality of the results, as there are things in your image which you could add noise such as |V1 |V4 etc., as well as the ECG itself, so this is the very first step.Once you’re confident that the images are correctly (within a reasonable margin, as there will be some mistakes) converted into text by your software of choice, and assuming you do not need the actual ECG trace for your prediction but just the text, I’d suggest starting from a simple multi-class classification model. Considering your 3 classes are very specific, I wouldn’t think that zero shot would work particularly well, so even if you have just a few training data I’d suggest using them all for fine-tuning. 
However, large models require large amounts of data to perform well, unless you’re using more niche techniques like Bayesian learning which are way outside the scope of this answer.Hereis an example of what your code could look like if you want to use transformers (which is what this forum is about) for multi class classification, however there are also other simpler NLP models available (such as SVM, random forest, Bayes classifier, logistic regression etc.), for example available from the scikit-learn python library. However, any coding activity will involve taking inspiration from others code, and making your own modifications to suit your use case, and without any coding experience I believe it would be extremely hard to “go blind” and copy others code without understanding it and getting it to work. That’s why I believe this project, for a beginner who has never coded before, would require an amount of supervision which is well outside the scope of a single question on this forum in my view. Depending on your time and dedication, I believe that a good point to start to get more familiar with these things would be to do a crash course on Python (learning the basics of the python language)andalso a machine learning course. There are plenty of free resources available online but there will be a learning curve.Regarding machine learning for beginners (but with some coding and maths understanding) I would recommendAndrew Ng’s courseson Coursera (online attendance is free). Course 1 will explain all the basics of machine learning, and Course 5 will discuss language models, which are relevant to NLP and will enable you to start writing your own sequence to sequence models.Let’s see if others have further useful suggestions, but to manage expectation, in my opinion this is a task which requires a non negligible amount of learning before it can be attempted by someone with zero prior coding experience." } ]
Can you make Q&A language model stay on topic?
https://discuss.huggingface.co/t/can-you-make-q-a-language-model-stay-on-topic/21828
0
487
I'm thinking of fine-tuning a pre-trained language model for a Q&A task. More specifically, I'd like to fine-tune the model on a single chapter of a classic college textbook. Afterward, the reader of the chapter should be able to engage in a Q&A session with the model about the content of the chapter. But how do I make sure that the model stays on topic and doesn't go off on a tangent? I know it is possible, looking at what https://play.aidungeon.io/ has achieved, but I don't know if it will require me to build a model from the ground up for each chapter. Can anyone tell me if I'm out of my mind or if it's feasible? Best,
2022-08-19T12:04:58Z
[]
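One hedged way to keep answers grounded in the chapter, as an alternative or complement to fine-tuning, is extractive QA over the chapter text itself; the model name below is just a common default and the chapter file is a stand-in.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

chapter_text = open("chapter_3.txt").read()  # hypothetical file holding the chapter

answer = qa(question="What does the author mean by marginal utility?", context=chapter_text)
print(answer["answer"], answer["score"])
```

Because the answer must be a span of the chapter, the model cannot wander off topic, and a score threshold can be used to decline questions the chapter does not cover.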
How to get embedding to each n-grams from a sentence using BERT?
https://discuss.huggingface.co/t/how-to-get-embedding-to-each-n-grams-from-a-sentence-using-bert/21562
0
743
Given a set of labels with different numbers of words, such as labels=["computer accessories", "baby", "beauty and personal care"], is there an approach to computing label embeddings in a single BERT forward pass (treating the list of labels as a single sentence)? Or does it have the same computational cost as a separate forward pass for each label?
2022-08-12T14:44:01Z
[]
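A sketch of computing all label embeddings in one batched forward pass (rather than concatenating them into a single sentence, which would let the labels attend to each other); the mean-pooling choice is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

labels = ["computer accessories", "baby", "beauty and personal care"]
enc = tokenizer(labels, padding=True, return_tensors="pt")  # one padded batch, one forward call

with torch.no_grad():
    hidden = model(**enc).last_hidden_state                 # (num_labels, seq_len, hidden)

mask = enc["attention_mask"].unsqueeze(-1)                  # ignore padding when pooling
label_embeddings = (hidden * mask).sum(1) / mask.sum(1)     # one vector per label
print(label_embeddings.shape)                               # torch.Size([3, 768])
```

Batching is a single forward pass in the programming sense, but the compute still scales with the number of labels; packing all labels into one sequence would only be cheaper if cross-label attention were acceptable.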
Public Research Survey
https://discuss.huggingface.co/t/public-research-survey/20979
0
602
Hello, my name is Christian Flores. I am a recent UCSD graduate gathering public opinion on the impact of AI in day-to-day life for a research project. I would appreciate it if you could spare a few minutes to share your thoughts on the topic. The survey is estimated to take 5-10 minutes to complete. Your answers are anonymous. Thank you! https://ucsd.co1.qualtrics.com/jfe/form/SV_0B8t3X8WfSZep1A
2022-07-28T22:26:22Z
[]
DeBerta Paper Explained and Dissected
https://discuss.huggingface.co/t/deberta-paper-explained-and-dissected/20158
0
718
Hello everyone, DeBERTa has been ruling Kaggle competitions as well as global benchmarks recently. If you have ever used it and wonder how it works internally, I have put together a blog post for it: “DeBerta is the new King!” on jarvislabs.ai (5 Jul 22). In this blog post I explain all the novel components that DeBERTa introduces and how they work together to create a performance boost. You will love it if you really love transformers.
2022-07-08T17:29:32Z
[]
Summary Decoding params
https://discuss.huggingface.co/t/summary-decoding-params/19711
0
590
Hi, when applying decoding it is common to provide decoding params such as min_length, max_length, beam_size, length_penalty, etc. I wonder if anyone is aware of a methodology or research for determining these params, and whether they could be dynamic rather than hard-coded. I have found this paper on multi-document summarization: https://aclanthology.org/D13-1069.pdf. If anyone knows any additional resources, it would be highly appreciated. Thanks!
2022-06-28T07:45:13Z
[]
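In the absence of a principled methodology, the usual practice is to tune these generation parameters on a validation set; a small grid sketch with transformers' generate() is shown below, where the model and parameter values are only placeholders.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

article = "..."  # a validation document
inputs = tokenizer(article, return_tensors="pt", truncation=True)

for num_beams in (4, 8):
    for length_penalty in (0.8, 1.0, 1.2):
        ids = model.generate(
            **inputs,
            num_beams=num_beams,
            min_length=30,
            max_length=128,
            length_penalty=length_penalty,
        )
        summary = tokenizer.decode(ids[0], skip_special_tokens=True)
        print(num_beams, length_penalty, summary[:80])  # score candidates with ROUGE on a dev set
```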
Pre-Train BERT (from scratch)
https://discuss.huggingface.co/t/pre-train-bert-from-scratch/1245
43
18,721
BERT has been trained on the MLM and NSP objectives. I want to train BERT with/without the NSP objective (with NSP, in case the suggested approach differs). I haven’t performed pre-training in the full sense before. Can you please share how to obtain the data (crawl and tokenization details) on which BERT was trained? Since it takes a lot of time, I am looking for well-tested code that can yield BERT with/without NSP in one go. Any suggestions will be helpful. I know about some projects like these, but I guess they won’t integrate well with transformers, which is a must-have condition in my case.
2020-09-24T13:01:31Z
[ { "date": "2020-09-25T06:44:43Z", "reply": "BERT was trained onbook corpusandenglish wikipediaboth of which are available indatasetlibraryhuggingface.cowikipedia · Datasets at Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science.huggingface.cobookcorpus · Datasets at Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science.Transformers has recently included dataset for for next sent prediction which you could usegithub.comhuggingface/transformers/blob/main/src/transformers/data/datasets/language_modeling.py#L258# We *usually* want to fill up the entire sequence since we are padding# to `block_size` anyways, so short sequences are generally wasted# computation. However, we *sometimes*# (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter# sequences to minimize the mismatch between pretraining and fine-tuning.# The `target_seq_length` is just a rough target however, whereas# `block_size` is a hard limit.target_seq_length = max_num_tokensif random.random() < short_seq_prob:target_seq_length = random.randint(2, max_num_tokens)# We DON'T just concatenate all of the tokens from a document into a long# sequence and choose an arbitrary split point because this would make the# next sentence prediction task too easy. Instead, we split the input into# segments \"A\" and \"B\" based on the actual \"sentences\" provided by the user# input.examples = []current_chunk = [] # a buffer stored current working segmentscurrent_length = 0i = 0while i < len(document):and there’s also NSP head for BERThttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L560EDIT:BertForPreTrainingclass can be used for bothMLMandNSPwith the currentexample/languae-modelingI guess it’s only possible to either useMLMorNSP, you might need to write your own script to combine these." }, { "date": "2020-09-25T07:39:48Z", "reply": "For training on MLM objective, is it recommended to usecollate_fnfromhere? Didn’t seeTextDatasetfor MLM objective." }, { "date": "2020-09-25T07:42:53Z", "reply": "Masking is done usingDataCollatorForLanguageModelingso you can use any dataset and just pass the collator toDataLoader.One thing to note:DataCollatorForLanguageModelingdoes dynamic masking but BERT was trained using static masking ." }, { "date": "2020-09-25T07:52:20Z", "reply": "It seems that usingBertForNextSentencePredictionwithTextDatasetForNextSentencePredictionandDataCollatorForLanguageModelingwould be equivalent to the BERT objective (except static masking part). And for dataset, we can usedatasets.concatenate_datasets()method for BookCorpus and Wikipedia. This might be close right ? Any additional details ?" }, { "date": "2020-09-25T09:10:05Z", "reply": "datasets.concatenate_datasets()does not seem to work for this since features do not match. AlsoBertForNextSentencePredictionexpects afile_path. Initially I thought it was a wrapper which can takedatasetsobjects." }, { "date": "2020-09-25T10:25:43Z", "reply": "It shouldn’t be hard to convertBertForNextSentencePredictionto use datasets. I played with wikipedia dataset for english just now. Each dataset entry is an article/document and it needs to be sentence tokenized inBertForNextSentencePrediction. Book corpus dataset entries seem to be sentences already. Let me know about your progress." }, { "date": "2020-09-25T10:27:59Z", "reply": "How are you measuring the metric ?" 
}, { "date": "2020-09-25T10:39:47Z", "reply": "I don’t yet. I am still setting up these training pipelines. I asked about metrics atEvaluation metrics for BERT-like LMsbut no response yet. I read athttps://huggingface.co/transformers/perplexity.htmland elsewhere that perplexity is not appropriate for BERT and MLMs. Can’t we use fill-mask pipeline and some version of masking accuracy?OTOH, I’ve already setup GLUE benchmarks withhttps://jiant.info/v2 Alpha. Excellent integration with transformers and can easily plugin any model and run benchmarks in parallel. Seehttps://github.com/jiant-dev/jiant/tree/master/examplesfor more details" }, { "date": "2020-09-25T10:44:05Z", "reply": "Did you try using Cross Entropy for pre-training ? We usually use that for MLM. It can be easily used for NSP I guess." }, { "date": "2020-09-25T13:29:11Z", "reply": "Indeed wikipedia has columns “text” and “title” while bookcorpus only has “text”.You can concatenate them by removing the “title” column from wikipedia:from datasets import load_dataset, concatenate_datasets\n\nwiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\nbookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\nprint(wiki.column_names, bookcorpus.column_names)\n# ['title', 'text'] ['text']\n\nwiki.remove_columns_(\"title\")\nbert_dataset = concatenate_datasets([wiki, bookcorpus])" }, { "date": "2020-09-25T13:33:32Z", "reply": "Let me know if you find an appropriate way to cut wikipedia articles into sentences !Also don’t hesitate if you have any questions about dataset processing, I’d be happy to help" }, { "date": "2020-09-25T14:34:09Z", "reply": "You can use spaCy or stanza for sentence segmentation. spaCy is quite a bit faster but might be less correct. If you want to I can post a segmentation function here." }, { "date": "2020-09-25T14:36:58Z", "reply": "So after concatenation of wikipedia and book_corpus, next things to do is NSP. Can you suggest how that is to be done on object after concatenation happens?I do not want to diverge from the actual method which was used to pre-train BERT." }, { "date": "2020-09-25T14:39:24Z", "reply": "You can have a look here:github.comhuggingface/transformers/blob/master/src/transformers/modeling_bert.py#L1196)input_ids = torch.cat([input_ids, dummy_token], dim=1)return {\"input_ids\": input_ids, \"attention_mask\": attention_mask}@add_start_docstrings(\"\"\"Bert Model with a `next sentence prediction (classification)` head on top. \"\"\",BERT_START_DOCSTRING,)class BertForNextSentencePrediction(BertPreTrainedModel):def __init__(self, config):super().__init__(config)self.bert = BertModel(config)self.cls = BertOnlyNSPHead(config)self.init_weights()@add_start_docstrings_to_callable(BERT_INPUTS_DOCSTRING.format(\"batch_size, sequence_length\"))@replace_return_docstrings(output_type=NextSentencePredictorOutput, config_class=_CONFIG_FOR_DOC)" }, { "date": "2020-09-25T14:39:55Z", "reply": "Has anyone replicated BERT pre-training from scratch ? It would be good to hear what exactly did they do." }, { "date": "2020-09-25T14:40:51Z", "reply": "I already saw it. I tried using it, but got stuck with other things such as metric, preprocessing etc. Given that training will last for a week, there is not much scope to make errors." }, { "date": "2020-09-25T14:43:24Z", "reply": "Also, is there some study or has anyone experimented what happens if we solely rely on MLM and no NSP. How much difference will that make ? RoBERTa showed that NSP didn’t prove to be useful. 
In this case, does involving NSP help with MLM ?" }, { "date": "2020-09-25T14:51:37Z", "reply": "Well as you found, RoBERTa showed that leaving out NSP yields better results on downstream tasks. Albert then re-added a similar (yet very different) task, namely sentenceorderprediction, which improved performance on downstream tasks.PS: please don’t post multiple consecutive posts but rather edit your posts to add more information. It’s a bit annoying with the notifications." }, { "date": "2020-09-25T15:39:26Z", "reply": "Quentin, I am not sure dataset itself should cut articles into sentences (unless there is an option for both articles/sentences). Perhaps other models might need entire articles as input. If needed, users can sentence tokenize articles using nltk/spacy and such. I’ll play with the wikipedia dataset in the coming days and I’ll report back to you my experiences. Also, while looking at the dataset I found references to Categories and such. Perhaps equally important objective for wikipedia dateset is to keep it as clean as possible." }, { "date": "No date available", "reply": "No reply text available" } ]
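For the MLM-only variant discussed above (the RoBERTa-style setup without NSP), a compressed sketch with Trainer and dynamic masking could look as follows; the from-scratch config, block size, and training arguments are illustrative only, and the full BERT recipe (static masking plus NSP) would instead need the TextDatasetForNextSentencePrediction path mentioned in the replies.

```python
from datasets import load_dataset, concatenate_datasets
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

wiki = load_dataset("wikipedia", "20200501.en", split="train").remove_columns("title")
books = load_dataset("bookcorpus", split="train")
raw = concatenate_datasets([wiki, books])

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

model = BertForMaskedLM(BertConfig())                                        # train from scratch
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)  # dynamic masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-mlm",
                           per_device_train_batch_size=32,
                           max_steps=100_000),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```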
How to fine-tune GitHub Copilot?
https://discuss.huggingface.co/t/how-to-fine-tune-fine-tune-github-copilot/18889
3
3,603
We can fine-tune language models like BERT and GPT-3. Can I fine-tune the GitHub Copilot model? I have already looked into https://copilot.github.com/ but can’t find the details. I would really appreciate hearing from anyone who has fine-tuned GitHub Copilot.
2022-06-09T04:26:50Z
[ { "date": "2022-06-09T12:34:10Z", "reply": "Hi@neo-benjaminThe Codex model that’s powering the Copilot product is not open sourced. However, there are a few models similar to Codex available on the Hugging Face Hub such as Incoder or CodeGen:huggingface.cofacebook/incoder-6B · Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science.huggingface.coSalesforce/codegen-16B-multi · Hugging FaceWe’re on a journey to advance and democratize artificial intelligence through open source and open science." }, { "date": "2022-06-09T20:19:57Z", "reply": "How to fine tune Codegen? Are the steps documented?" }, { "date": "2022-06-24T13:40:30Z", "reply": "You can have a look at the language modeling examples. The should work for any auto regressive model such as GPT-2 or CodeGen:transformers/examples/pytorch/language-modeling at main · huggingface/transformers · GitHub" } ]
Similarity search with combined image and text?
https://discuss.huggingface.co/t/similarity-search-with-combined-image-and-text/19168
6
2,983
How can I do similarity matching by combining both image and text? Let’s say: Product1 = (Image1, Text1), Product2 = (Image2, Text2). I want to do contrastive learning that combines both the image and the text. Is there such a model? Can anyone please suggest one?
2022-06-14T23:36:21Z
[ { "date": "2022-06-20T06:46:37Z", "reply": "TheSentenceTransformercan encode images and text into a single vector space. You could combine both to create a new vector space for products, and then implement contrastive learning for this vector space.Seesentence-transformers/Image_Search.ipynb at master · UKPLab/sentence-transformers · GitHub" }, { "date": "2022-06-20T08:17:38Z", "reply": "Like in the notebook referenced by@raphaelmerx, I also used a pre-trained CLIP model to embed images and text in the same vector space, so you can perform semantic search:Weights & Biases." }, { "date": "2022-06-21T19:17:36Z", "reply": "@raphaelmerxDo you have a sample code for contrastive learning using SentenceTransformer?" }, { "date": "2022-06-21T19:46:12Z", "reply": "@raphaelmerxI understand the idea of combining the text and image into a single vector space and then implement contrastive learning.But wondering are you aware of an open source implementation for doing contrastive learning? Or code that I could adapt for this purpose." }, { "date": "2022-06-24T00:34:04Z", "reply": "@raphaelmerxin the given example, you have shownmodel.encodeto encode images and text. Do you have any example how to apply that for contrastive learning?" }, { "date": "2022-06-24T04:35:53Z", "reply": "I don’t have any code sample of contrastive learning no" } ]
LayoutLMv3 paper review and fine tuning code
https://discuss.huggingface.co/t/layoutlmv3-paper-review-and-fine-tuning-code/19495
0
1,202
LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. Hi guys, I made a short video going through the LayoutLMv3 paper. Feel free to check it out.
2022-06-23T09:13:40Z
[]
Grouphug: multi-task, multi-dataset training with 🤗 transformers/datasets
https://discuss.huggingface.co/t/grouphug-multi-task-multi-dataset-training-with-transformers-datasets/19177
0
2,411
I recently released grouphug, a package optimized for training on multiple datasets/dataframes at once, with each containing an arbitrary subset of tasks, built on transformers/datasets. The need for this came from wanting a single model to predict many closely related things like message topic, sentiment, toxicity, etc., with the inference speed of a single model and better accuracy. I have also found that co-training on a masked language modelling task results in models which generalize very well and do not start overfitting. Even for single-task modelling, the classification head is a good deal more powerful than the usual default, and the dataset formatter may be useful for quickly turning your dataframes into the required format. I would love to hear if this is useful for anyone else, and any suggestions you have!
2022-06-15T07:26:19Z
[]
LSTM Encoder-Decoder not working
https://discuss.huggingface.co/t/lstm-encoder-decoder-not-working/18697
0
791
I am trying to train an LSTM Encoder-Decoder model for paraphrase generation. My model is as follows:StackedResidualLSTM( (encoder): RecurrentEncoder( (embed_tokens): Embedding(30522, 256) (dropout): Dropout(p=0.5, inplace=False) (rnn): LSTM(256, 256, num_layers=2, batch_first=True, dropout=0.5) ) (decoder): RecurrentDecoder( (embed_tokens): Embedding(30522, 128) (dropout_in_module): Dropout(p=0.5, inplace=False) (dropout_out_module): Dropout(p=0.1, inplace=False) (layers): ModuleList( (0): LSTMCell(384, 256) (1): LSTMCell(256, 256) ) (fc_out): Linear(in_features=256, out_features=30522, bias=True) ) )Following is a print of the source sentence, the sentence fed to the decoder (shifted right), the predictions, and the true sentence (labels). Everything is tokenized with BERT tokenizer:Source: [CLS] where can i get quality services in brisbane for plasterand drywall repair? [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD]Decoder Input: [CLS] [CLS] where can i getquality services for plaster and drywall repairs in brisbane? [SEP][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD]Preds:[CLS] the? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]? [SEP]?[SEP]Target: [CLS] where can i get quality services for plaster anddrywall repairs in brisbane? [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD][PAD] [PAD] [PAD] [PAD] [PAD]My loss function is a CrossEntropy between the output and labels (the padding token is switched with -100 to ignore). Something like:loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))There are two problems occurring:the loss does not go downthe generations are all the same for every entry of the same epoch (after weight updating the generations might be different than the ones from the previous epoch, but remain the same for every entry of the new epoch)Do you have any idea what might I try to fix the issue? Thanks in advance for any help you can provide.
2022-06-03T17:15:48Z
[]
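As a sanity check on the loss setup described above, the sketch below shows CrossEntropyLoss with ignore_index=-100 on dummy tensors, so padded positions contribute nothing to the loss; the shapes are arbitrary.

```python
import torch
from torch.nn import CrossEntropyLoss

batch, seq_len, vocab = 2, 7, 30522
logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
labels = torch.randint(0, vocab, (batch, seq_len))
labels[:, 5:] = -100                            # positions that were padding

loss_fct = CrossEntropyLoss(ignore_index=-100)  # -100 is also the default ignore_index
loss = loss_fct(logits.view(-1, vocab), labels.view(-1))
loss.backward()
print(loss.item())
```

If the loss stays flat and every sequence decodes to the same few tokens, it is also worth double-checking that the decoder inputs are shifted by exactly one position relative to the labels and that teacher forcing is actually feeding ground-truth tokens at training time.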
Graph2graph network for geometric shapes
https://discuss.huggingface.co/t/graph2graph-network-for-geometric-shapes/18609
0
794
Similar to Seq2Seq models, are there graph2graph models available? Context: I am working on a dimension-reduction problem on shapes, where shapes are represented as graphs, vertices as nodes, and connecting curves as edges. The dimension-reduction operation is called midcurve generation. The input is a 2D profile, say a closed polygon (e.g., the thick ‘L’ profile on the left in the figure); the output is a 1D curve in the middle of the profile (e.g., the thin ‘L’ curve on the right). [Figure: encoder-decoder illustration omitted.] I wish to build an encoder-decoder network which accepts graphs as both input and output. I have a supervised training set of such input and output graphs. As I could not find a ready graph2graph network, I converted the problem to image2image (pix2pix-like) and solved it that way, but I wish to investigate whether a graph2graph network is available. More info: short paper: MidcurveNN: Encoder-Decoder Neural Network for Computing Midcurve of a Thin Polygon, viXra:1904.0429; GitHub repo, source code: GitHub - yogeshhk/MidcurveNN: Computation of Midcurve of Thin Polygons using Neural Networks. How would one build such an encoder-decoder network? Please note that since the input and output differ, this cannot be an autoencoder. Any ideas?
2022-06-01T10:27:33Z
[]
Steps to train T5 on collections of tags
https://discuss.huggingface.co/t/steps-to-train-t5-on-collections-of-tags/18600
0
672
Hiya! I’m working on my own model of Imagen; Instead of sentence prompts, my image-pair dataset uses a series of tags to describe an image. An example would be (without quotes): “sunny_day park dog parked_motorcycle female_walking” - and there can be anywhere from a few tags to 30+ tags per image. Because Imagen uses T5 to generate embeddings, I’d need to train a T5 model from scratch based on these collections of tags instead of using transfer learning, correct? Would these tags need to be presented as an array of strings, or one large string? What else would I need to do? And if it’s possible to answer: If my dataset was about 250K, how long would it take to train a T5 large on this dataset on either a P100 or latest generation TPU? Thanks for the help!
2022-06-01T07:58:21Z
[]
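On the question of feeding tags as an array versus one large string: T5 consumes a single token sequence, so the usual approach is to join the tags into one string before tokenization. A small sketch of producing encoder embeddings this way is shown below; the joining convention and checkpoint are assumptions, and Imagen-style training typically keeps the T5 encoder frozen rather than retraining it from scratch.

```python
import torch
from transformers import T5Tokenizer, T5EncoderModel

tokenizer = T5Tokenizer.from_pretrained("t5-small")
encoder = T5EncoderModel.from_pretrained("t5-small")

tags = ["sunny_day", "park", "dog", "parked_motorcycle", "female_walking"]
prompt = ", ".join(tag.replace("_", " ") for tag in tags)   # one string: "sunny day, park, dog, ..."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state        # (1, seq_len, d_model) conditioning vectors

print(embeddings.shape)
```

Training time on a P100 or TPU depends heavily on model size and sequence length, so it is hard to give a number without profiling a few batches first.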
Pegasus Paraphrase Fine Tuning dataset
https://discuss.huggingface.co/t/pegasus-paraphrase-fine-tuning-dataset/18531
0
694
Hi @tuner007, I was wondering if you could update the model card for pegasus_paraphrase to include the dataset that you used to fine-tune Google’s checkpoint? Alex
2022-05-30T16:01:51Z
[]
Technical Skill classification model
https://discuss.huggingface.co/t/technical-skill-classification-model/18487
0
696
A raw dataset of over 30k data points is given, which contains technical skills with a lot of jargon mixed in. We need to develop code that can clean this dataset and extract technical (hard) skills. Some 900 random examples of technical skills are also given, to be studied to understand the pattern and sequence. How should we go about this problem?
2022-05-29T17:34:15Z
[]
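One hedged starting point, given the ~900 example skills, is to embed candidate phrases and the known skills with a sentence-embedding model and keep candidates that sit close to a known skill; everything below (model choice, threshold, candidate lists) is an assumption, not a prescribed solution.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

known_skills = ["python", "kubernetes", "react", "sql"]               # stand-in for the 900 given examples
candidates = ["docker compose", "team player", "pytorch", "synergy"]  # phrases mined from the raw data

skill_emb = model.encode(known_skills, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

scores = util.cos_sim(cand_emb, skill_emb).max(dim=1).values          # closest known skill per candidate
extracted = [c for c, s in zip(candidates, scores) if s > 0.5]        # threshold chosen by inspection
print(extracted)
```

A token-classification (NER-style) model fine-tuned on annotated spans would be the heavier-weight alternative once enough labelled sentences exist.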
Optuna with a fine-tuned model
https://discuss.huggingface.co/t/optuna-with-a-fine-tuned-model/18113
1
717
How can I use Optuna to optimize a fine-tuned model? Is there any example?
2022-05-18T17:58:01Z
[ { "date": "2022-05-19T11:35:30Z", "reply": "Is that a fine-tuned model like a frozen model and I can not make it better?" } ]
Video Classification
https://discuss.huggingface.co/t/video-classification/17995
0
807
Hi everyone,I am starting to look into the task ofclassifying videos, trying to understand what approaches are currently available.Naively speaking, I guess one could randomly (maybe better, uniformly) sample N frames from a video, perform classification on each of them, and then aggregate predictions (most frequent prediction, most confident prediction, etc.). This may be reasonable for simple classification tasks (e.g. is there a cat in this video? Is the video set indoors or outdoors?).On the other hand, this approach would lose any temporal information conveyed by the frame sequence and the sound/speech information, for which a multi-modal model that can process sequences would be required.So I was wondering if any of you can point out examples of models that have been proposed/used for video classification in any of these directions.I tried browsing the HuggingFace directory but could not find a “video classification” task category, and I have the feeling (after some web searching) that this topic is generally less covered than image or text classification.Any pointer/suggestion is very much appreciated
2022-05-16T10:43:24Z
[]
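The naive frame-sampling baseline described in the post could be prototyped roughly as below, assuming frames have already been extracted to disk; the ViT checkpoint and sampling stride are placeholders, and temporal and audio information is indeed discarded.

```python
from collections import Counter
from pathlib import Path
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

frame_paths = sorted(Path("frames/").glob("*.jpg"))[::10]   # uniform-ish sampling of extracted frames
predictions = [classifier(str(p), top_k=1)[0]["label"] for p in frame_paths]

video_label = Counter(predictions).most_common(1)[0][0]     # majority vote across frames
print(video_label)
```

Models that consume frame sequences (and optionally audio) would be the next step when temporal order matters.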
Ideas for scoring coding assignments
https://discuss.huggingface.co/t/ideas-for-scoring-coding-assignments/17862
0
725
Hey guys, I am searching for ways to use NLP to score coding/programming assignments, the way a teacher would in an exam or test. What ideas come to mind? Any papers or solutions to similar problems would be much appreciated!
2022-05-12T10:18:21Z
[]
Dynamic Programming for Byte-level BPE
https://discuss.huggingface.co/t/dynamic-programming-for-byte-level-bpe/17376
0
870
Could anyone explain the rationale behind equation (1) in “Neural Machine Translation with Byte-Level Subwords”? Besides, what exactly is meant by “The design of UTF-8 encoding ensures the uniqueness of this recovery process: for a character UTF-8 encoded with multiple bytes, its trailing bytes will not make a valid UTF-8 encoded character”? And how exactly are the hexadecimal digits derived in Figure 1?
2022-05-01T02:58:55Z
[]
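On the second question (uniqueness of the recovery process), the quoted sentence describes a property of UTF-8 itself: continuation bytes always have the bit pattern 10xxxxxx (0x80-0xBF), so a trailing byte can never be mistaken for the start of a character. A small illustration, with a made-up example character:

```python
ch = "é"                                   # U+00E9, a 2-byte character in UTF-8
encoded = ch.encode("utf-8")
print([hex(b) for b in encoded])           # ['0xc3', '0xa9']

# 0xC3 (11000011) is a valid 2-byte leading byte; 0xA9 (10101001) is a continuation byte.
# A continuation byte on its own is not a valid character, so a byte stream produced by
# byte-level BPE can be segmented back into characters unambiguously.
print(bytes([0xa9]).decode("utf-8", errors="replace"))   # '\ufffd' replacement, not a standalone character
```

The hexadecimal digits in Figure 1 are presumably just such UTF-8 bytes of each character written in hex, though the paper itself is the authority on that.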
Own AI deploy webapp
https://discuss.huggingface.co/t/own-ai-deploy-webapp/17247
0
796
Hello, I am a student at the higher technical college Leonding in Austria. At our college we have our own servers with Kubernetes that we are allowed to use as students. Three colleagues and I have a two-year project in which we have to program a web app where you can easily deploy and train an AI with a few clicks. Basically, we should be able to do the same as AutoTrain / AutoNLP from Hugging Face, only on our school’s servers. I have already looked through Hugging Face’s GitHub for open-source repos but didn’t really find anything. Now the question is: can I run AutoTrain and AutoNLP on the school server, or are there alternatives for creating such a web app? Thank you for your answers!
2022-04-27T11:40:47Z
[]
Bert for audio classification
https://discuss.huggingface.co/t/bert-for-audio-classification/17179
0
1,115
I have been thinking at a very high abstract level about using Bert for something like audio classification. Suppose I have a time series data set of sampled sounds and their labels, something like an short audio clip of a dog barking that has the label “dog_bark”. I’m wondering if it’s possible to use the Bert architecture to perform this classification?Naively, I would say that one would have to pre-train Bert from scratch since the input data is time series data represented by floats. That would also lead me to think that one would have to also reconsider how they perform the token embeddings. I don’t have any super concrete ideas, but that was where I was starting. Curious if others had similar ideas or thoughts on the matter?EDIT: I am aware that there are other models out there better suited for this that perhaps fall into ASR or audio classification like wav2vec. However, in this instance I was specifically curious about adapting bert to the task.
2022-04-25T22:30:57Z
[]
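One way to prototype the idea without a full from-scratch pre-training run is to bypass the token embedding layer entirely and feed framed audio through inputs_embeds; everything here (frame size, projection, number of classes) is a made-up illustration of the architecture, not a trained recipe.

```python
import torch
from torch import nn
from transformers import BertConfig, BertModel

config = BertConfig(vocab_size=1, hidden_size=256, num_hidden_layers=4,
                    num_attention_heads=4, intermediate_size=512)
bert = BertModel(config)

frame_proj = nn.Linear(400, config.hidden_size)   # project 25 ms raw-audio frames (400 samples at 16 kHz)
classifier = nn.Linear(config.hidden_size, 10)    # 10 hypothetical sound classes (e.g. dog_bark)

audio = torch.randn(2, 50, 400)                   # (batch, num_frames, samples_per_frame)
embeds = frame_proj(audio)                        # frames become "token" embeddings

hidden = bert(inputs_embeds=embeds).last_hidden_state
logits = classifier(hidden.mean(dim=1))           # mean-pool over time, then classify
print(logits.shape)                               # torch.Size([2, 10])
```

In practice this is close to what wav2vec 2.0 does with a convolutional feature extractor in front of a transformer, so pre-training the whole stack (or reusing wav2vec 2.0) is usually the more practical route, but the sketch shows that the BERT encoder itself can be adapted.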
Confidence Scores / Self-Training for Wav2Vec2 / CTC models With LM (PyCTCDecode)
https://discuss.huggingface.co/t/confidence-scores-self-training-for-wav2vec2-ctc-models-with-lm-pyctcdecode/17052
1
2,827
I started looking a bit into Confidence Scores / Self-Training for Speech Recognition for models like Wav2Vec2 that make use a language model usingpyctcdecode'slibraryPyCTCDecode returns alm_scorewhich can be seen as the fused score between the acoustic model (Wav2Vec2) and a language model (kenLM). This score is the sum of all per-word fusedlm_scores, so it seems reasonable to normalize the output by the number of words. Also see some questions here:confidence scores output from the LM · Issue #57 · kensho-technologies/pyctcdecode · GitHubQuestion about naming of `lm_score` parameter in `decode_logits` · Issue #63 · kensho-technologies/pyctcdecode · GitHubFirst, let’s create some Wav2Vec2 + ngram models. We’ll simply add the official 4-gram of Librispeech to the new data2vec models to create the following models:patrickvonplaten/data2vec-audio-base-10m-4-grampatrickvonplaten/data2vec-audio-base-100h-4-grampatrickvonplaten/data2vec-audio-base-960h-4-gramNow, it’s quite easy to retrieve thoselm_scoresand to compute a confidence level this way:Import all necessary libraries and load model and tokenizerfrom transformers import AutoModelForCTC, AutoProcessor from datasets import load_dataset import datasets import torch import sys model_id = "TODO: fill in" model = AutoModelForCTC.from_pretrained(model_id) processor = AutoProcessor.from_pretrained(model_id)Load Librispeech dummy data:num_samples = 4 dataset = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") samples = dataset[:num_samples] audio_samples = [s["array"] for s in samples["audio"]] sampling_rate = set([s["sampling_rate"] for s in samples["audio"]]).pop() text_samples = samples["text"]Predict transcription with model:# process to input_values inputs = processor(audio_samples, return_tensors="pt", sampling_rate=sampling_rate, padding=True) # forward inputs to model with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logitsRetrieve the per word probability normalized over number of wordsoutput = processor.batch_decode(logits.numpy(), output_word_offsets=True) confidence_scores = [score / len(t.split(" ")) for score, t in zip(output.lm_score, output.text)]Define confidence score the length normalizedlm_scoreof the predictionfor i in range(num_samples): print(20 * "=" + f"Output {i}" + 20 * "=") print(text_samples[i]) print(f"{output.text[i]}: {confidence_scores[i]}") print("\n")Cool let’s run this on the new data2vec audio models:patrickvonplaten/data2vec-audio-base-10m-4-grampatrickvonplaten/data2vec-audio-base-100h-4-grampatrickvonplaten/data2vec-audio-base-960h-4-grampatrickvonplaten/data2vec-audio-base-10m-4-gram====================Output 0==================== MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL MISTER QUILTER IS THE APPOSELE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -2.9550299660242825 ====================Output 1==================== NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER NOR IS MISTER QUILTR'S MANNER LESS INTERESTING THAN HIS MATTER: -3.8471058156146243 ====================Output 2==================== HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND HE TELLS IS THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CRISMIIS AND ROST BEEF LOOMING BEFORE HIS SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND: 
-3.115683062281252 ====================Output 3==================== HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA HE HAS GRAVED DOUBTS WHETHER SIR FREDERICK LATEN'S WORK IS RELY GREEK AFTER ALL AND CAN DESCOVER IN IT BUT LITTLE OF ROCKY ETHICA: -4.292775884726897patrickvonplaten/data2vec-audio-base-100h-4-gram====================Output 0==================== MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -1.0723093529710663 ====================Output 1==================== NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER: -2.6140757339617786 ====================Output 2==================== HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND: -1.1805021799946347 ====================Output 3==================== HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LAYTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA EH: -2.069009737832042patrickvonplaten/data2vec-audio-base-960h-4-gram====================Output 0==================== MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL: -1.0610139720694658 ====================Output 1==================== NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER NOR IS MISTER QUILTER'S MANNER LESS INTERESTING THAN HIS MATTER R: -3.11299682252419 ====================Output 2==================== HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND HE TELLS US THAT AT THIS FESTIVE SEASON OF THE YEAR WITH CHRISTMAS AND ROAST BEEF LOOMING BEFORE US SIMILES DRAWN FROM EATING AND ITS RESULTS OCCUR MOST READILY TO THE MIND: -1.147767963941466 ====================Output 3==================== HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA HE HAS GRAVE DOUBTS WHETHER SIR FREDERICK LEIGHTON'S WORK IS REALLY GREEK AFTER ALL AND CAN DISCOVER IN IT BUT LITTLE OF ROCKY ITHACA: -1.870571726475313Alright, this actually seems to make some sense here! The 10m has consistently the lowest score and one can usually say that the “correcter” the sentence the better the score. The 960h model has the best scores for all butOutput 1for which the 100h also gives a better prediction.This already seems to work quite well, but would need some more experiments.There are a couple of questions, I’m not sure about:Right now the average probability per word is taken, isminormaxmaybe better? Also see:confidence scores output from the LM · Issue #57 · kensho-technologies/pyctcdecode · GitHub
2022-04-21T11:13:34Z
[ { "date": "2022-04-21T13:25:38Z", "reply": "Also tried it out on a “out-of-distribution” dataset - the English version of Common Voice and it still seems to work quite well.So changing the above 2th point “Load librispeech dummy data” to the following code that loads common voice data:dataset = load_dataset(\"common_voice\", \"en\", split=\"test\", streaming=True)\ndataset = dataset.cast_column(\"audio\", datasets.Audio(sampling_rate=16_000))\n\n# iterate over dataset\ndataset_iter = iter(dataset)\nsamples = [next(dataset_iter) for _ in range(num_samples)]\n\naudio_samples = [s[\"audio\"][\"array\"] for s in samples]\nsampling_rate = set([s[\"audio\"][\"sampling_rate\"] for s in samples]).pop()\ntext_samples = [s[\"sentence\"] for s in samples]And then running the script again gives the following results:patrickvonplaten/data2vec-audio-base-10m-4-gram====================Output 0====================\nIt was the time of day when all of Spain slept during the summer.\nIT WAS THE TIME OF DAY LEVERS BEN SLEPT DURING THE SUMMER:\n-3.5796514559110606\n\n\n====================Output 1====================\nSame way you did.\nTHE SAME POINT: \n-6.560971691113143\n\n\n====================Output 2====================\nSarah told him that she was there to see her brother.\nBUT I TOLD HIM THAT SHE WAS IN TO SEE HER BROTHER: \n-1.249188184327079\n\n\n====================Output 3====================\nGalileo Galilei was the first man who observed the planet Neptune through his telescope.\nCALLILI GALLI WAS A FRESHMAN WHO ABSORVES TO PLANT NAPS THOUGH HIS TELICSCOP: \n-7.170448685148719patrickvonplaten/data2vec-audio-base-100h-4-gram====================Output 0====================\nIt was the time of day when all of Spain slept during the summer.\nIT WAS THE TIME OF DAY WHEN OLIVE'S PEN SLEPT DURING THE SUMMER: \n-1.724733290751429\n\n\n====================Output 1====================\nSame way you did.\nTHE SAME DIN YOU TIED: \n-11.673662061158192\n\n\n====================Output 2====================\nSarah told him that she was there to see her brother.\nTHERE I TOLD HIM THAT SHE WAS HERE TO SEE HER BROTHER: \n-1.3407323223953858\n\n\n====================Output 3====================\nGalileo Galilei was the first man who observed the planet Neptune through his telescope.\nGALILEO GALILEI WAS A FRESHMAN WHO OBSERVES THE PLANT NUPKINS THROUGH HIS TELECSCOPE: \n-5.179441703647934patrickvonplaten/data2vec-audio-base-960h-4-gram====================Output 0====================\nIt was the time of day when all of Spain slept during the summer.\nIT WAS THE TIME OF DAY WHEN OLIVER BEN SLEPT DURING THE SUMMER: \n-1.4758548315739513\n\n\n====================Output 1====================\nSame way you did.\nTHE BLIND YOU IN IT: \n-8.845217131011449\n\n\n====================Output 2====================\nSarah told him that she was there to see her brother.\nBUT I TOLD HIM THAT SHE WAS HERE TO SEE HER BROTHER: \n-1.3983698052694178\n\n\n====================Output 3====================\nGalileo Galilei was the first man who observed the planet Neptune through his telescope.\nGALILEO GALIDI WAS THE FIRST MAN WHO OBSERVES TO PLAN NAPTHA THROUGH HIS TELECOSCOPE: \n-4.983984955432581So the numbers here still seem to be very reasonable. Everything over -3, is quite wrong indeed and things are starting to look better below -2" } ]