import streamlit as st
import pandas as pd
# Main Title
st.markdown("# Arabic Named Entity Recognition - BERT-based Model")
# Introduction
st.markdown("""
Named Entity Recognition (NER) models identify and categorize important entities in a text. This page details a BERT-based NER model for Arabic texts, including Modern Standard Arabic (MSA), Dialectal Arabic (DA), and Classical Arabic (CA). The model is pretrained and available on Hugging Face, then imported into Spark NLP.
""", unsafe_allow_html=True)
# Model Description
st.markdown("## Description")
st.markdown("""
The `bert_ner_bert_base_arabic_camelbert_mix_ner` model is pretrained for Arabic named entity recognition, originally trained by CAMeL-Lab. It can identify the following types of entities:
- ORG (Organization)
- LOC (Location)
- PERS (Person)
- MISC (Miscellaneous)
""", unsafe_allow_html=True)
# Setup Instructions
st.markdown("## Setup")
st.markdown("To use the model, you need Spark NLP installed. You can install it, along with PySpark, using pip:")
st.code("""
pip install spark-nlp
pip install pyspark
""", language="bash")
st.markdown("Then, import Spark NLP and start a Spark session:
", unsafe_allow_html=True)
st.code("""
import sparknlp
# Start Spark Session
spark = sparknlp.start()
""", language='python')
# Example Usage
st.markdown("## Example Usage with Arabic NER Model")
st.markdown("""
Below is an example of how to set up and use the `bert_ner_bert_base_arabic_camelbert_mix_ner` model for named entity recognition in Arabic:
""", unsafe_allow_html=True)
st.code('''
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline
from pyspark.sql.functions import col, expr

# Define the components of the pipeline
documentAssembler = DocumentAssembler() \\
    .setInputCol("text") \\
    .setOutputCol("document")

sentenceDetector = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \\
    .setInputCols(["document"]) \\
    .setOutputCol("sentence")

tokenizer = Tokenizer() \\
    .setInputCols(["sentence"]) \\
    .setOutputCol("token")

tokenClassifier = BertForTokenClassification.pretrained("bert_ner_bert_base_arabic_camelbert_mix_ner", "ar") \\
    .setInputCols(["sentence", "token"]) \\
    .setOutputCol("ner")

ner_converter = NerConverter() \\
    .setInputCols(["sentence", "token", "ner"]) \\
    .setOutputCol("ner_chunk")

# Create the pipeline
pipeline = Pipeline(stages=[documentAssembler, sentenceDetector, tokenizer, tokenClassifier, ner_converter])

# Create sample data
example = """
كانت مدينة بغداد، العاصمة الحالية للعراق، مركزاً ثقافياً وحضارياً عظيماً في العصور الوسطى. تأسست في القرن الثامن الميلادي على يد الخليفة العباسي أبو جعفر المنصور.
كانت بغداد مدينة المعرفة والعلم، حيث توافد إليها العلماء والفلاسفة من كل أنحاء العالم الإسلامي للدراسة في بيت الحكمة. كانت مكتباتها تحتوي على آلاف المخطوطات النادرة،
وكانت تشتهر بمدارسها العلمية والطبية والفلكية. في عام 1258، سقطت بغداد في يد المغول بقيادة هولاكو خان، مما أدى إلى تدمير جزء كبير من المدينة وخسارة العديد من النفائس.
"""
data = spark.createDataFrame([[example]]).toDF("text")

# Fit and transform the data with the pipeline
result = pipeline.fit(data).transform(data)

# Explode the NER chunks and show each chunk with its entity label
result.select(
    expr("explode(ner_chunk) as ner_chunk")
).select(
    col("ner_chunk.result").alias("chunk"),
    col("ner_chunk.metadata").getItem("entity").alias("ner_label")
).show(truncate=False)
''', language="python")
# Data for the DataFrame
data = {
"chunk": ["جعفر المنصور", "بغداد", "بغداد", "هولاكو"],
"ner_label": ["PERS", "LOC", "LOC", "PERS"]
}
# Creating the DataFrame
df = pd.DataFrame(data)
df.index += 1
st.dataframe(df)
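A typical next step is aggregating the extracted chunks, for example counting how often each entity label occurs. The sketch below recreates the sample output rows above as a plain pandas DataFrame and tallies the labels (illustrative only; in practice you would aggregate the real pipeline output):

```python
import pandas as pd

# Sample NER output rows, copied from the table above
sample_output = pd.DataFrame({
    "chunk": ["جعفر المنصور", "بغداد", "بغداد", "هولاكو"],
    "ner_label": ["PERS", "LOC", "LOC", "PERS"],
})

# Count occurrences of each entity label
label_counts = (
    sample_output["ner_label"]
    .value_counts()
    .rename_axis("ner_label")
    .reset_index(name="count")
)
print(label_counts)  # PERS and LOC each appear twice
```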
# Model Information
st.markdown("## Model Information")
st.markdown("""
The `bert_ner_bert_base_arabic_camelbert_mix_ner` model details are as follows:
- Model Name: bert_ner_bert_base_arabic_camelbert_mix_ner
- Compatibility: Spark NLP 3.4.2+
- License: Open Source
- Edition: Official
- Input Labels: [document, token]
- Output Labels: [ner]
- Language: ar
- Size: 407.3 MB
- Case sensitive: true
- Max sentence length: 128
""", unsafe_allow_html=True)
# Summary
st.markdown("## Summary")
st.markdown("""
This page provided an overview of the `bert_ner_bert_base_arabic_camelbert_mix_ner` model for Arabic NER. We discussed how to set up and use the model with Spark NLP, including example code and results, and provided details on the model's specifications.
""", unsafe_allow_html=True)
# References
st.markdown("## Model References")
st.markdown("""
""", unsafe_allow_html=True)
# Community & Support
st.markdown("## Community & Support")
st.markdown("""
""", unsafe_allow_html=True)