---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-135M-FT-SCP-Wiki
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---

# Model Card for SmolLM2-135M-FT-SCP-Wiki

This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "PhilSad/SmolLM2-135M-FT-SCP-Wiki"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=model_name
).to(device)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)

prompt = "SCP-10214 is a god who loves making pasta."

# Format the prompt with the chat template and append the generation prompt
messages = [{"role": "user", "content": prompt}]
formatted_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/kollai/huggingface/runs/1fc5p6cb)

This model was trained with SFT. A hedged sketch of a comparable training script is given at the end of this card.

### Framework versions

- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.20.3

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
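
## SFT training sketch

The training script itself is not included in this card. The block below is a minimal sketch of what the SFT run may have looked like with TRL 0.12's `SFTTrainer`; the dataset identifier and every hyperparameter are illustrative assumptions, not values recorded in this card.

```python
# Minimal SFT sketch with TRL 0.12. Everything marked "assumed" is a
# placeholder for illustration; it does not come from this card.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical dataset id; SFTTrainer expects a "text" column by default.
dataset = load_dataset("your-username/scp-wiki-text", split="train")

training_args = SFTConfig(
    output_dir="SmolLM2-135M-FT-SCP-Wiki",
    max_seq_length=2048,            # assumed, not stated in the card
    per_device_train_batch_size=4,  # assumed
    num_train_epochs=1,             # assumed
    report_to="wandb",              # the card links a W&B run
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # base model named in the card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```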