---
library_name: transformers
license: apache-2.0
language:
- en
---
# Test of ModernBERT2Olmo-large_1b
An experimental seq2seq model built with `EncoderDecoderModel`. You will need to patch `modeling_llama.py` with [this code](https://gist.github.com/pszemraj/a15219f33d94dc53a6e270c0c81360ec) for it to work (one way to apply the patch is sketched below).
> [!WARNING]
> Work in progress: the output of this model is gibberish because the cross-attention layers have not been trained yet.
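One way to apply the patch is to overwrite the `modeling_llama.py` shipped with your installed `transformers`; a minimal sketch, assuming you fetch the raw file from the gist yourself (the raw URL below is a placeholder, not the real link):

```py
# minimal sketch: replace the installed modeling_llama.py with the patched one
# from the gist; the raw URL is a placeholder - copy the real "Raw" link yourself
import pathlib
import urllib.request

import transformers

target = pathlib.Path(transformers.__file__).parent / "models" / "llama" / "modeling_llama.py"
raw_url = "https://gist.githubusercontent.com/pszemraj/<raw-link-from-the-gist>"  # placeholder
urllib.request.urlretrieve(raw_url, target)
```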
This variant uses different token IDs in its configuration than the [first one](https://huggingface.co/pszemraj/ModernBERT2Olmo-large_1b-test) and uses [OLMo-1B-0724](https://huggingface.co/allenai/OLMo-1B-0724-hf) as the decoder.
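To see exactly which IDs differ, you can compare the two configs directly; a minimal sketch using `AutoConfig` (the attributes shown are the standard `PretrainedConfig` token-ID fields, not verified against these checkpoints):

```py
from transformers import AutoConfig

# compare the token-id related fields of the two experiment configs
for repo in (
    "pszemraj/ModernBERT2Olmo-large_1b-test",
    "pszemraj/ModernBERT2Olmo-large_1b-cfg2",
):
    cfg = AutoConfig.from_pretrained(repo)
    print(repo, cfg.decoder_start_token_id, cfg.pad_token_id, cfg.eos_token_id)
```

Basic usage: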
```py
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/ModernBERT2Olmo-large_1b-cfg2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

ARTICLE_TO_SUMMARIZE = (
    "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
    "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
    "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
)
prompt = f"summarize dis botmon: {ARTICLE_TO_SUMMARIZE}"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# autoregressively generate a summary (greedy decoding by default)
generated_ids = model.generate(
    **inputs,
    min_new_tokens=10,
    max_new_tokens=100,
)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
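The snippet above uses greedy decoding; if you want to eyeball more varied (still untrained, still gibberish) output, sampling works with the standard `generate` arguments:

```py
# sampling instead of greedy decoding (standard generate() kwargs)
generated_ids = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    min_new_tokens=10,
    max_new_tokens=100,
)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```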