
Use Cases

Information Extraction from Invoices

You can extract entities of interest from invoices automatically using Named Entity Recognition (NER) models. Invoices can be read with Optical Character Recognition (OCR) models, and the resulting text can then be passed to a NER model so that important information such as dates, company names, and other named entities can be extracted, as sketched below.
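
A minimal sketch of this two-step pipeline, assuming pytesseract is used for the OCR step and the default NER pipeline for extraction (the invoice.png file name is illustrative):

from PIL import Image
import pytesseract
from transformers import pipeline

# Step 1: read the raw text out of a scanned invoice with OCR
text = pytesseract.image_to_string(Image.open("invoice.png"))

# Step 2: run a NER model over the OCR output
# (a model fine-tuned on invoice data would work best for fields like dates and amounts)
ner = pipeline("ner", aggregation_strategy="simple")
entities = ner(text)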

Task Variants

Named Entity Recognition (NER)

NER is the task of recognizing named entities in a text. These entities can be the names of people, locations, or organizations. The task is formulated as labeling each token either with a class for the named entity it belongs to or with a class named "O" for tokens that do not belong to any entity. The input for this task is text and the output is the annotated text with named entities.
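
For example, under the widely used IOB labeling scheme (the sentence and tag set here are illustrative), each token gets either an entity tag or the "O" tag:

tokens = ["Omar", "lives", "in", "Zürich", "."]
labels = ["B-PER", "O",     "O",  "B-LOC",  "O"]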

Inference

You can use the ner pipeline from the 🤗 Transformers library to run inference with NER models.

from transformers import pipeline

classifier = pipeline("ner")
classifier("Hello I'm Omar and I live in Zürich.")

Part-of-Speech (PoS) Tagging

In PoS tagging, the model recognizes parts of speech, such as nouns, pronouns, adjectives, or verbs, in a given text. The task is formulated as labeling each word with its part of speech.

Inference

You can use the 🤗 Transformers library token-classification pipeline with a PoS tagging model of your choice. The model will return a list of JSON objects with a PoS tag for each token.

from transformers import pipeline

classifier = pipeline("token-classification", model="vblagoje/bert-english-uncased-finetuned-pos")
classifier("Hello I'm Omar and I live in Zürich.")

This is not limited to 🤗 Transformers! You can also use other libraries such as Stanza, spaCy, and Flair to run inference. Here is an example using the canonical spaCy model en_core_web_sm.

!pip install https://huggingface.co/spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl

import en_core_web_sm

nlp = en_core_web_sm.load()
doc = nlp("I'm Omar and I live in Zürich.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.ent_type_)

## I PRON nsubj
## 'm AUX ROOT
## Omar PROPN attr PERSON
## ...

Useful Resources

Would you like to learn more about token classification? Great! Here you can find some curated resources that you may find helpful!

Notebooks

Scripts for training

Documentation