---
library_name: transformers
language:
- en
license: gemma
tags:
- gemma
- pytorch
- instruct
- finetune
base_model: google/gemma-1.1-7b-it
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
---


# Gemma 7B OpenHermes v0.80

- Eval loss: `0.4544`
- Train loss: `0.3129`
- Learning rate: `5e-5`
- Optimizer: AdamW
- LR scheduler: cosine
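The cosine scheduler decays the learning rate smoothly from its peak toward zero over the course of training. As a rough illustration only (ignoring any warmup phase, which the card does not specify), the schedule can be sketched as:

```python
import math

def cosine_lr(step, total_steps, peak_lr=5e-5):
    # Cosine decay from peak_lr at step 0 down to 0 at total_steps (no warmup).
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

print(cosine_lr(0, 1000))     # peak: 5e-05
print(cosine_lr(500, 1000))   # midpoint: half the peak
print(cosine_lr(1000, 1000))  # ~0.0
```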

## Model Details

This is an instruction-following model finetuned from Gemma 1.1 7B on the OpenHermes-2.5 dataset. The finetuning improves its ability to follow user instructions and respond helpfully: the model can hold open-ended dialogue, answer questions, and assist with a variety of tasks.
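The base model formats conversations with Gemma's `<start_of_turn>`/`<end_of_turn>` markers; assuming this finetune keeps the base model's chat template, a prompt can be built like so (in practice, `tokenizer.apply_chat_template` does this for you):

```python
def build_gemma_prompt(messages):
    # Format a list of {"role": ..., "content": ...} dicts using Gemma's
    # turn markers. Gemma uses the role name "model" for assistant turns.
    prompt = ""
    for m in messages:
        role = "model" if m["role"] == "assistant" else "user"
        prompt += f"<start_of_turn>{role}\n{m['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # cue the model to generate its reply
    return prompt

print(build_gemma_prompt([{"role": "user", "content": "Hi"}]))
```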

### Model Description

- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it)

## Limitations and Ethical Considerations

Because the base model was trained on extensive web data, biases present in that data may be reflected in Gemma 7B OpenHermes. The model may also generate statements that are erroneous or factually incorrect. Its output should therefore be verified with care rather than trusted blindly.