---
license: mit
datasets:
- unsloth/LaTeX_OCR
language:
- en
base_model:
- meta-llama/Llama-3.2-1B
- google/siglip-so400m-patch14-384
tags:
- vlm
- vision
- multimodal
- AnyModal
---
# AnyModal/LaTeX-OCR-Llama-3.2-1B
**AnyModal/LaTeX-OCR-Llama-3.2-1B** is an experimental model designed to convert images of handwritten and printed mathematical equations into LaTeX representations. Developed within the [AnyModal](https://github.com/ritabratamaiti/AnyModal) framework, this model combines a `google/siglip-so400m-patch14-384` image encoder with the Llama 3.2-1B language model. It has been trained on 20% of the [unsloth/LaTeX_OCR dataset](https://huggingface.co/datasets/unsloth/LaTeX_OCR), which itself is a subset of the [linxy/LaTeX_OCR dataset](https://huggingface.co/datasets/linxy/LaTeX_OCR).
---
## Trained On
This model was trained on 20% of the [unsloth/LaTeX_OCR](https://huggingface.co/datasets/unsloth/LaTeX_OCR) dataset, which itself contains 1% of the samples from the larger [linxy/LaTeX_OCR](https://huggingface.co/datasets/linxy/LaTeX_OCR) dataset of mathematical-equation images annotated with their corresponding LaTeX expressions.
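For reference, a comparable 20% slice can be loaded with the `datasets` library. This is a sketch: the exact sampling used for training is not documented here beyond the 20% figure, and the `image`/`text` column names are assumed from the dataset card.

```python
from datasets import load_dataset

# First 20% of the training split; the card only says "20% of the
# dataset", so this exact slice is an illustrative assumption.
dataset = load_dataset("unsloth/LaTeX_OCR", split="train[:20%]")

sample = dataset[0]
print(sample["text"])        # LaTeX annotation (column name assumed)
print(sample["image"].size)  # PIL image of the equation (column name assumed)
```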
---
## How to Use
### Installation
Clone the AnyModal repository:
```bash
git clone https://github.com/ritabratamaiti/AnyModal.git
```
Navigate to the [LaTeX OCR project directory](https://github.com/ritabratamaiti/AnyModal/tree/main/LaTeX%20OCR):
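```bash
cd "AnyModal/LaTeX OCR"
```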
Install the required dependencies:
```bash
pip install torch transformers torchvision huggingface_hub tqdm matplotlib Pillow
```
### Inference
Below is an example of generating LaTeX code from an image:
```python
import os

import torch
from PIL import Image
from huggingface_hub import snapshot_download

import anymodal
import llm
import vision

# Load the language model and tokenizer
llm_tokenizer, llm_model = llm.get_llm(
    "meta-llama/Llama-3.2-1B",
    access_token="GET_YOUR_OWN_TOKEN_FROM_HUGGINGFACE",
    quantized=False,
    use_peft=False,
)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
llm_model.to(device)
llm_hidden_size = llm.get_hidden_size(llm_tokenizer, llm_model)

# Load the vision model components
image_processor, vision_model, vision_hidden_size = vision.get_image_encoder(
    "google/siglip-so400m-patch14-384", use_peft=False
)

# Initialize the vision encoder and the projector that maps visual
# features into the language model's embedding space
vision_encoder = vision.VisionEncoder(vision_model)
vision_tokenizer = vision.Projector(vision_hidden_size, llm_hidden_size, num_hidden=1)

# Assemble the multimodal model
multimodal_model = anymodal.MultiModalModel(
    input_processor=None,
    input_encoder=vision_encoder,
    input_tokenizer=vision_tokenizer,
    language_tokenizer=llm_tokenizer,
    language_model=llm_model,
    prompt_text="The latex expression of the equation in the image is: ",
)

# Download and load the pre-trained weights
os.makedirs("latex_ocr", exist_ok=True)
snapshot_download("AnyModal/latex-ocr-Llama-3.2-1B", local_dir="latex_ocr")
multimodal_model._load_model("latex_ocr")

# Preprocess the input image
image_path = "example_equation.jpg"  # Path to your image
image = Image.open(image_path).convert("RGB")
processed_image = image_processor(image, return_tensors="pt")
processed_image = {key: val.squeeze(0) for key, val in processed_image.items()}

# Generate the LaTeX expression
generated_caption = multimodal_model.generate(processed_image, max_new_tokens=120)
print("Generated LaTeX:", generated_caption)
```
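To sanity-check the output, you can render the prediction with matplotlib's built-in mathtext (matplotlib is already among the dependencies above). This is an optional sketch that assumes `generated_caption` is a plain LaTeX string; note that mathtext supports only a subset of LaTeX, so some valid predictions may fail to render.

```python
import matplotlib.pyplot as plt

# Show the input image and the rendered prediction side by side for a
# quick visual comparison.
fig, (ax_img, ax_tex) = plt.subplots(2, 1, figsize=(6, 4))
ax_img.imshow(image)
ax_img.set_title("Input image")
ax_img.axis("off")
ax_tex.text(0.5, 0.5, f"${generated_caption}$", ha="center", va="center")
ax_tex.set_title("Predicted LaTeX (rendered)")
ax_tex.axis("off")
plt.tight_layout()
plt.show()
```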
---
## Project and Training Scripts
This model is part of the [AnyModal LaTeX OCR Project](https://github.com/ritabratamaiti/AnyModal/tree/main/LaTeX%20OCR).
- **Training Script**: [train.py](https://github.com/ritabratamaiti/AnyModal/blob/main/LaTeX%20OCR/train.py)
- **Inference Script**: [inference.py](https://github.com/ritabratamaiti/AnyModal/blob/main/LaTeX%20OCR/inference.py)
Refer to the project repository for further implementation details.
---
## Project Details
- **Vision Encoder**: The `google/siglip-so400m-patch14-384` model, pre-trained for visual feature extraction, was used as the image encoder.
- **Projector Network**: A dense projection network aligns visual features with the input embedding space of the Llama 3.2-1B language model (a minimal sketch follows this list).
- **Language Model**: Llama 3.2-1B, a small causal language model, generates the LaTeX expression.
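The actual projector lives in the project's `vision.Projector` class; the following is a minimal, hypothetical sketch of the kind of one-hidden-layer dense projector described above, not the project's implementation.

```python
import torch
import torch.nn as nn

class DenseProjector(nn.Module):
    """Maps vision-encoder features into the LLM's embedding space.

    Hypothetical sketch; the real implementation is vision.Projector
    in the AnyModal repository.
    """

    def __init__(self, vision_hidden_size: int, llm_hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vision_hidden_size, llm_hidden_size),
            nn.GELU(),
            nn.Linear(llm_hidden_size, llm_hidden_size),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, num_patches, vision_hidden_size)
        # returns:  (batch, num_patches, llm_hidden_size)
        return self.net(features)

# Example shapes: SigLIP so400m has a 1152-dim hidden size, and
# Llama 3.2-1B uses 2048-dim embeddings.
projector = DenseProjector(vision_hidden_size=1152, llm_hidden_size=2048)
out = projector(torch.randn(1, 729, 1152))  # 729 = 27 x 27 patches
print(out.shape)  # torch.Size([1, 729, 2048])
```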
This implementation is a proof of concept trained on a limited subset of the data. Better performance can likely be achieved by training on more samples and by incorporating a text-conditioned image encoder.