---
license: mit
language:
- fr
---
|
|
|
This is an 8-bit quantized version of [distilcamembert-base-ner](https://huggingface.co/cmarkea/distilcamembert-base-ner) obtained with
[Intel® Neural Compressor](https://github.com/intel/neural-compressor) on the [wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr)
dataset.
|
|
|
### Get Started |
|
|
|
First, install the required libraries:

```bash
pip install --upgrade-strategy eager "optimum[neural-compressor]"
```
|
|
|
Second, use `INCModelForTokenClassification` from `optimum.intel`. It can be used in the same way as
an ordinary `DistilBertForTokenClassification`:
|
|
|
```python
from transformers import AutoTokenizer
from optimum.intel import INCModelForTokenClassification

# Load the quantized model and its tokenizer from the Hub
model = INCModelForTokenClassification.from_pretrained('konverner/8bit-distilcamembert-base-ner')
tokenizer = AutoTokenizer.from_pretrained('konverner/8bit-distilcamembert-base-ner')

text = "Meta Platforms ou Meta, anciennement connue sous le nom de Facebook, est une multinationale américaine fondée en 2004 par Mark Zuckerberg."

# Tokenize and run inference, then take the most likely label id per token
model_input = tokenizer(text, return_tensors='pt')
model_output = model(**model_input)
print(model_output.logits.argmax(2))
# tensor([[0, 4, 4, 4, 4, 4, 0, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0,
#          0, 0, 0, 2, 2, 2, 2, 2, 0, 0]])
```
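
The integer predictions can be mapped back to entity tags through the model's `id2label` config. A minimal sketch, assuming the hard-coded mapping below matches `model.config.id2label` of the checkpoint (under that assumption, `4` corresponds to `I-ORG` and `2` to `I-PER`):

```python
# Assumed id2label mapping for illustration; in practice, read it
# directly from model.config.id2label of the loaded checkpoint.
id2label = {0: "O", 1: "I-LOC", 2: "I-PER", 3: "I-MISC", 4: "I-ORG"}

# First few predicted label ids from the example output above
predicted_ids = [0, 4, 4, 4, 4, 4, 0, 4, 4]

# Convert each id to its entity tag
tags = [id2label[i] for i in predicted_ids]
print(tags)
# ['O', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'I-ORG', 'I-ORG']
```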