# Model Name
This is a chatbot model trained with a transformer sequence-to-sequence architecture. It answers questions based on the training data it was provided.
## Usage
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("your_username/your_model_name")
model = TFAutoModelForSeq2SeqLM.from_pretrained("your_username/your_model_name")

def predict(sentence):
    # Tokenize the input and return TensorFlow tensors
    inputs = tokenizer(sentence, return_tensors="tf")
    # Generate a response from the encoded input IDs
    outputs = model.generate(inputs["input_ids"])
    # Decode the generated token IDs back into text
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

sentence = "Can exposure to pesticides in food impact cognitive health?"
print(predict(sentence))
```
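With no extra arguments, `generate` uses greedy decoding with default length limits. The snippet below sketches how standard `transformers` generation parameters could be passed to tune output length and search strategy; the specific values are illustrative, not tuned for this model.

```python
# Illustrative generation settings (standard transformers generate() kwargs);
# the values here are example choices, not recommendations for this model.
gen_kwargs = {
    "max_new_tokens": 64,        # cap on tokens generated beyond the input
    "num_beams": 4,              # beam search instead of greedy decoding
    "no_repeat_ngram_size": 3,   # discourage repeated phrases
    "early_stopping": True,      # stop once all beams have finished
}

# Usage with the model and inputs from the example above:
# outputs = model.generate(inputs["input_ids"], **gen_kwargs)
```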