---
base_model: unsloth/Llama-3.2-1B-Instruct
library_name: peft
license: apache-2.0
datasets:
  - huzaifa525/Medical_Intelligence_Dataset_40k_Rows_of_Disease_Info_Treatments_and_Medical_QA
language:
  - en
---

Model Card for PhysioMindAI-Llama3-Medical

Model Details

Model Description

PhysioMindAI-Llama3-Medical is a fine-tuned version of the unsloth/Llama-3.2-1B-Instruct model, adapted for medical applications. It is trained to understand and generate medical text, assisting with tasks such as symptom analysis, treatment suggestions, and responses to patient queries.

  • Developed by: Satish Soni
  • Organization: Globalspace Technologies Ltd
  • Shared by: sonisatish119
  • Model type: Medical NLP, LLM
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: unsloth/Llama-3.2-1B-Instruct

Uses

Direct Use

PhysioMindAI-Llama3-Medical can be used for:

  • ✅ Medical question answering
  • ✅ Clinical note summarization
  • ✅ Symptom checking and risk assessment
  • ✅ Generating patient-friendly explanations

Downstream Use

  • 🏥 Can be integrated into healthcare chatbots and virtual assistants
  • 🛠️ Can be fine-tuned further for specific medical domains

Out-of-Scope Use

⚠️ Not intended for real-time clinical decision-making without human oversight
⚠️ Should not be used for emergency medical advice

Bias, Risks, and Limitations

Recommendations

⚠️ Users should be aware of potential biases in training data and limitations in accuracy.
✅ Always verify critical medical information with professionals.

How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sonisatish119/PhysioMindAI-Llama3-Medical"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

input_text = "What are the symptoms of anxiety?"
# Send the inputs to whichever device device_map placed the model on,
# rather than hard-coding "cuda" (which fails on CPU-only machines).
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```
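Since the repository ships a PEFT adapter and is tagged for 4-bit bitsandbytes use, the weights can also be loaded quantized to reduce memory. The following is a minimal sketch, not a tested recipe: it assumes a CUDA GPU with the `bitsandbytes` and `peft` packages installed, and the NF4/float16 settings shown are common defaults, not values confirmed by the model authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "sonisatish119/PhysioMindAI-Llama3-Medical"

# 4-bit quantization via bitsandbytes (assumed settings: NF4 weights,
# float16 compute dtype; adjust to your hardware)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Generation then works exactly as in the snippet above; only the loading step changes.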