
Model Card for ChronoBERT

Model Details

Model Description

ChronoBERT is a series of high-performance, chronologically consistent large language models (LLMs) designed to eliminate lookahead bias and training leakage while maintaining strong language understanding in time-sensitive applications. The models are pretrained on diverse, high-quality, open-source, and timestamped text to maintain chronological consistency.

All models in the series achieve GLUE benchmark scores that surpass standard BERT. This approach preserves the integrity of historical analysis and enables more reliable economic and financial modeling.

  • Developed by: Songrun He, Linying Lv, Asaf Manela, Jimmy Wu
  • Model type: Transformer-based bidirectional encoder (ModernBERT architecture)
  • Language(s) (NLP): English
  • License: MIT License

Model Sources

  • Paper: "Chronologically Consistent Large Language Models" (He, Lv, Manela, Wu, 2025)

How to Get Started with the Model

from transformers import AutoTokenizer, AutoModel

# Load the 1999 vintage, trained only on pre-2000 text.
tokenizer = AutoTokenizer.from_pretrained("manelalab/chronobert-v1-19991231")
model = AutoModel.from_pretrained("manelalab/chronobert-v1-19991231")

text = "You've gotta be very careful not to mess with the space-time continuum. -- Dr. Brown, Back to the Future"

# Tokenize and run a forward pass to obtain contextual token embeddings.
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
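
The encoder returns token-level hidden states. If a single vector per document is needed (for example, as a feature in downstream forecasting), one common choice is attention-mask-weighted mean pooling; the sketch below is illustrative and not a pooling strategy prescribed by this model card.

import torch

# outputs.last_hidden_state has shape (batch, seq_len, hidden_size).
# Mean-pool over non-padding tokens to obtain one embedding per input text.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embedding = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # (batch_size, hidden_size)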

Training Details

Training Data

  • Pretraining corpus: Our initial model $\text{ChronoBERT}_{1999}$ is pretrained on 460 billion tokens of diverse, high-quality, open-source, pre-2000 text data to ensure no leakage of data from later periods.
  • Incremental updates: Yearly updates from 2000 to 2024 with an additional 65 billion tokens of timestamped text; a checkpoint-selection sketch follows this list.
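
Because each vintage is trained only on text available up to its cutoff date, chronologically consistent inference means pairing every document with the latest checkpoint whose cutoff precedes the document's timestamp. A minimal sketch of that lookup, assuming the yearly checkpoints follow the same naming pattern as the 1999 release (manelalab/chronobert-v1-YYYYMMDD; this pattern is an assumption for years other than 1999):

from datetime import date

# Hypothetical helper: return the newest vintage whose training cutoff is strictly
# before the document date, so no post-dated text can leak into the features.
# Checkpoint names beyond the 1999 release are assumed, not confirmed by the card.
def chronologically_consistent_checkpoint(doc_date: date) -> str:
    vintage_year = min(max(doc_date.year - 1, 1999), 2024)
    return f"manelalab/chronobert-v1-{vintage_year}1231"

print(chronologically_consistent_checkpoint(date(2010, 3, 15)))  # manelalab/chronobert-v1-20091231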

Training Procedure

  • Architecture: ModernBERT-based model with rotary embeddings and flash attention.
  • Objective: Masked token prediction; a fill-mask usage sketch follows this list.
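
Since pretraining uses a masked-token objective, the checkpoint can in principle be loaded with a fill-mask head for quick sanity checks. A minimal sketch, assuming the repository ships weights compatible with AutoModelForMaskedLM (not confirmed by the card):

from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("manelalab/chronobert-v1-19991231")
mlm_model = AutoModelForMaskedLM.from_pretrained("manelalab/chronobert-v1-19991231")

# Predict the masked token in a simple test sentence.
text = f"The stock market {tokenizer.mask_token} sharply after the announcement."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = mlm_model(**inputs).logits

mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_id))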

Evaluation

Testing Data, Factors & Metrics

  • Language understanding: Evaluated on GLUE benchmark tasks.
  • Financial forecasting: Evaluated using a return prediction task based on Dow Jones Newswire data.
  • Comparison models: ChronoBERT was benchmarked against BERT, FinBERT, StoriesLM-v1-1963, and Llama 3.1.

Results

  • GLUE score: $\text{ChronoBERT}_{1999}$ and $\text{ChronoBERT}_{2024}$ achieved GLUE scores of 84.71 and 85.54, respectively, outperforming BERT (84.52).
  • Stock return predictions: Over the sample period from January 2008 to July 2023, $\text{ChronoBERT}_{\text{Realtime}}$ achieves a long-short portfolio Sharpe ratio of 4.80, outperforming BERT, FinBERT, and StoriesLM-v1-1963, and is comparable to Llama 3.1 8B (4.90); a schematic Sharpe-ratio calculation follows this list.
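
For context, the long-short Sharpe ratio annualizes the mean return of a portfolio that is long the highest-predicted and short the lowest-predicted stocks, divided by the portfolio's annualized volatility. A schematic calculation on synthetic daily returns (illustrative only; the paper's exact portfolio construction may differ):

import numpy as np

# Illustrative only: synthetic daily returns of a long-short portfolio.
rng = np.random.default_rng(0)
long_short_daily_returns = rng.normal(loc=0.001, scale=0.005, size=252)

annualized_mean = long_short_daily_returns.mean() * 252
annualized_vol = long_short_daily_returns.std(ddof=1) * np.sqrt(252)
sharpe_ratio = annualized_mean / annualized_vol
print(round(sharpe_ratio, 2))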

Citation

@article{He2025ChronoBERT,
  title={Chronologically Consistent Large Language Models},
  author={He, Songrun and Lv, Linying and Manela, Asaf and Wu, Jimmy},
  journal={Working Paper},
  year={2025}
}

Model Card Authors