
Model Card for SFT_FineTuned_Llama2-7B_hf-v0.3

This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf. It has been trained using TRL.

Quick start

from transformers import pipeline

# The pipeline task must be "text-generation"; the Llama-2 chat template
# expects the "user" role for the input message.
input_judgement = "___Input Legal Judgement___"
generator = pipeline("text-generation", model="AjayMukundS/Llama2_7B_fine_tuned", device="cuda")
output = generator([{"role": "user", "content": input_judgement}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
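The model can also be loaded without the pipeline helper. The sketch below assumes the checkpoint ships a tokenizer with the Llama-2 chat template and that you have access to the gated base-model weights; the dtype and generation settings are illustrative, not the values used for this card.

```python
# Hedged sketch: direct loading as an alternative to pipeline().
# Assumes the repo's tokenizer carries the Llama-2 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AjayMukundS/Llama2_7B_fine_tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)

messages = [{"role": "user", "content": "___Input Legal Judgement___"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```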

Training procedure


This model was trained with supervised fine-tuning (SFT).
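A run like this can be sketched with TRL's SFTTrainer. The dataset, output directory, and hyperparameters below are illustrative assumptions, not the values used to produce this model; the base model is the one named in this card.

```python
# Hedged sketch of supervised fine-tuning (SFT) with TRL's SFTTrainer.
# Dataset, output_dir, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; any dataset with a "text" or "messages" column works.
dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(
    output_dir="Llama2_7B_fine_tuned",  # assumed output directory
    per_device_train_batch_size=2,      # assumed; tune to your GPU memory
    num_train_epochs=1,                 # assumed
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-chat-hf",  # base model named in this card (gated)
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

Note that the base checkpoint is gated, so training requires accepting the Llama 2 license and authenticating with `huggingface-cli login` first.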

Framework versions

  • TRL: 0.14.0
  • Transformers: 4.48.3
  • PyTorch: 2.5.1+cu124
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}