In-context learning in Llama 2, thanks!

#32
by yanmengxiang666 - opened
Here is an example of in-context learning in another model. **history is a two-dimensional list**; each inner list in history is one prepared conversation turn between the user and the assistant:
    # history is a list of [user, assistant] pairs
    history = [["Summarize this Ethereum transaction in one sentence: {......}",
                "DwarfPool sent 4.79 Ether to 0x555be1...0702b5Ed with a transaction fee of 0.001 Ether at Oct-14-2015 07:22:38 AM +UTC."]]
    prompt = "Summarize this Ethereum transaction in one sentence: {......}"
    response, history = model.chat(tokenizer, prompt, history=history, max_length=1024)
I feel it's important to be able to steer the model's output this way without using LoRA. Do you know how to insert some code to achieve this function in Llama 2? Thanks!
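One way this might be done, since Llama 2 has no `model.chat(..., history=...)` method: pack the [user, assistant] pairs into Llama 2's documented chat template (`<s>[INST] ... [/INST] ... </s>`) and feed the resulting string to `model.generate`. The sketch below only builds the prompt string; `build_llama2_prompt` is a hypothetical helper name, not part of any library:

```python
def build_llama2_prompt(history, prompt, system=None):
    """Pack ChatGLM-style history (list of [user, assistant] pairs) plus a
    new user prompt into Llama 2's chat format, so earlier turns act as
    in-context examples."""
    text = ""
    for i, (user, assistant) in enumerate(history):
        if i == 0 and system:
            # Llama 2 places the system prompt inside the first [INST] block
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        text += f"<s>[INST] {user} [/INST] {assistant} </s>"
    # The new turn is left open: the model's completion is the answer
    text += f"<s>[INST] {prompt} [/INST]"
    return text
```

You would then tokenize this string and call `model.generate` on it; the model's continuation plus the new pair appended to `history` gives you the equivalent of the `model.chat` loop above.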