modernbert-base-japanese-wikipedia-luw-upos

Model Description

This is a ModernBERT model pre-trained on Japanese Wikipedia and 青空文庫 (Aozora Bunko) texts for POS-tagging and dependency-parsing, derived from modernbert-base-japanese-wikipedia. Every long-unit word is tagged with its UPOS (Universal Part-Of-Speech) tag and FEATS (morphological features).

How to Use

from transformers import pipeline
nlp = pipeline("upos", "KoichiYasuoka/modernbert-base-japanese-wikipedia-luw-upos", trust_remote_code=True, aggregation_strategy="simple")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
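With aggregation_strategy="simple", a transformers token-classification pipeline returns a list of dicts, one per aggregated span (here, one per long-unit word). A minimal sketch of pulling out (word, UPOS) pairs from that structure; the dicts below are hand-written for illustration, not actual model output:

```python
# Illustrative shape of the pipeline output (hand-written, not real model output).
result = [
    {"entity_group": "NOUN", "word": "国境", "score": 0.99, "start": 0, "end": 2},
    {"entity_group": "ADP",  "word": "の",   "score": 0.99, "start": 2, "end": 3},
    {"entity_group": "ADJ",  "word": "長い", "score": 0.99, "start": 3, "end": 5},
]

# Extract (word, UPOS) pairs from the token-classification dicts.
pairs = [(t["word"], t["entity_group"]) for t in result]
print(pairs)  # [('国境', 'NOUN'), ('の', 'ADP'), ('長い', 'ADJ')]
```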

or

import esupar
nlp=esupar.load("KoichiYasuoka/modernbert-base-japanese-wikipedia-luw-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
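esupar's parse prints as CoNLL-U, the ten-column tab-separated format used by Universal Dependencies. A minimal sketch of reading (FORM, UPOS, HEAD, DEPREL) back out of a CoNLL-U string with plain Python; the two-token fragment is hand-written for illustration, not actual parser output:

```python
# Hand-written CoNLL-U fragment (not actual parser output).
conllu = (
    "1\t国境\t国境\tNOUN\t_\t_\t4\tnmod\t_\t_\n"
    "2\tの\tの\tADP\t_\t_\t1\tcase\t_\t_\n"
)

def read_conllu(text):
    """Yield (FORM, UPOS, HEAD, DEPREL) from non-comment CoNLL-U lines."""
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")  # ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
        yield cols[1], cols[3], int(cols[6]), cols[7]

print(list(read_conllu(conllu)))
# [('国境', 'NOUN', 4, 'nmod'), ('の', 'ADP', 1, 'case')]
```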

See Also

esupar: a tokenizer, POS-tagger, and dependency-parser using BERT/RoBERTa/DeBERTa/GPT models
