---
title: Multi-Model Indian Address NER
emoji: 🏠
colorFrom: blue
colorTo: green
sdk: gradio
sdk_version: 5.35.0
app_file: app.py
pinned: false
---
# Multi-Model Indian Address NER Demo
This Gradio-based demo lets you compare three Indian Address NER models:
- [TinyBERT](https://huggingface.co/shiprocket-ai/open-tinybert-indian-address-ner) - Lightweight and fast
- [ModernBERT](https://huggingface.co/shiprocket-ai/open-modernbert-indian-address-ner) - Modern architecture
- [IndicBERT](https://huggingface.co/shiprocket-ai/open-indicbert-indian-address-ner) - Indic language optimized
## What it does
This application allows you to:
1. **Single Model Analysis**: Choose one model and extract entities from Indian addresses
2. **Model Comparison**: Compare how all three models perform on the same address
3. **Interactive Testing**: Use sample addresses or input your own
The models can identify:
- Building names
- Floor numbers
- House details
- Roads
- Sub-localities
- Localities
- Landmarks
- Cities
- States
- Countries
- Pincodes
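Token-classification checkpoints typically encode these entity types as BIO-style tags (e.g. `B-city`, `I-city`) in their config. A minimal sketch, assuming the models follow the standard `id2label` convention, that collapses those tags back into the plain entity types listed above:

```python
def entity_types(id2label):
    """Collapse BIO-style tags such as 'B-city'/'I-city' into plain
    entity types, skipping the 'O' (outside) tag."""
    return sorted({tag.split("-", 1)[-1] for tag in id2label.values() if tag != "O"})

# To inspect a real checkpoint (downloads its config from the Hub):
#   from transformers import AutoConfig
#   cfg = AutoConfig.from_pretrained("shiprocket-ai/open-tinybert-indian-address-ner")
#   print(entity_types(cfg.id2label))
```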
## How to use
### Single Model Analysis
1. Select a model from the dropdown (TinyBERT, ModernBERT, or IndicBERT)
2. Enter an Indian address in the text box
3. Click "Extract Entities" or press Enter
4. View the extracted entities with confidence scores
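The same flow can be sketched programmatically with the Transformers `token-classification` pipeline (the model IDs are the ones linked above; `aggregation_strategy="simple"` merges sub-word tokens into whole entity spans):

```python
from collections import defaultdict

MODEL_ID = "shiprocket-ai/open-tinybert-indian-address-ner"

def group_entities(predictions):
    """Group aggregated pipeline output into {label: [(text, score), ...]}."""
    grouped = defaultdict(list)
    for p in predictions:
        grouped[p["entity_group"]].append((p["word"], round(float(p["score"]), 3)))
    return dict(grouped)

def extract_entities(address, model_id=MODEL_ID):
    # transformers is imported lazily so group_entities() stays usable
    # without the model weights downloaded.
    from transformers import pipeline
    ner = pipeline("token-classification", model=model_id,
                   aggregation_strategy="simple")
    return group_entities(ner(address))
```

For example, `extract_entities("Flat 201, MG Road, Bangalore, Karnataka, 560001")` returns a dict keyed by entity type with confidence scores, mirroring what the UI displays.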
### Model Comparison
1. Go to the "Model Comparison" tab
2. Enter an address
3. Click "Compare All Models"
4. See how each model performs on the same input
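The comparison tab's core loop amounts to running one address through every checkpoint. A sketch of that loop, where `run_model` is a hypothetical callable standing in for actual inference (any `(model_id, address) -> predictions` function works):

```python
MODEL_IDS = {
    "TinyBERT": "shiprocket-ai/open-tinybert-indian-address-ner",
    "ModernBERT": "shiprocket-ai/open-modernbert-indian-address-ner",
    "IndicBERT": "shiprocket-ai/open-indicbert-indian-address-ner",
}

def compare_models(address, run_model):
    """Run the same address through every model.

    `run_model(model_id, address)` is any callable that performs NER
    inference and returns that model's predictions.
    """
    return {name: run_model(model_id, address)
            for name, model_id in MODEL_IDS.items()}
```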
## Example addresses
- Shop No 123, Sunshine Apartments, Andheri West, Mumbai, 400058
- DLF Cyber City, Sector 25, Gurgaon, Haryana
- Flat 201, MG Road, Bangalore, Karnataka, 560001
## Model Information
### TinyBERT
- **Parameters**: ~66.4M
- **Advantages**: Fastest inference, lowest memory
- **Best for**: Real-time applications, mobile deployment
### ModernBERT
- **Model size**: ~599MB
- **Advantages**: Modern architecture, balanced performance
- **Best for**: High accuracy with reasonable speed
### IndicBERT
- **Model size**: ~131MB
- **Advantages**: Optimized for Indian languages/contexts
- **Best for**: Mixed language addresses, regional contexts
**Framework**: PyTorch + Transformers