---
library_name: transformers
license: mit
datasets:
- DrDrek/crewai_finetuning_dataset
language:
- en
---

# Finetuned Gemma 2B model for the CrewAI library
## CrewAI Finetuned Model

This is a LoRA finetuned model of gemma-2B for the CrewAI library. Given only an agent's Role as user input, it generates the Goal and Backstory descriptions for the `Agent()` constructor automatically, so these parameters can be produced by an LLM instead of written by hand.

You can run the model on a GPU using the following code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DrDrek/crewai-finetuned-model"
input_text = "junior software developer"

# Load the model in half precision; device_map="auto" places it on the GPU when available
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Tokenize the role prompt and move it to the model's device
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_length=128)
output = tokenizer.decode(outputs[0])
#print("llm output:", output)

# Parse the backstory and goal out of the generated text
backstory = output.split("\n\n")[1]
goal = (output.split(backstory)[1].replace("