---

library_name: transformers
license: apache-2.0
datasets:
- llm-jp/oasst2-33k-ja
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
base_model:
- Qwen/Qwen2.5-7B
inference: false
---


# Take-7B

## Description
Take-7B is an instruction-tuned model built on Qwen2.5-7B and fine-tuned on the oasst2 dataset (llm-jp/oasst2-33k-ja).
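
Below is a minimal usage sketch with the `transformers` library. The repository id `Manual-Dataset-Creation-Project/Take-7B` is assumed from the project's other repositories, and the chat template is assumed to follow the base model's conventions; adjust both if they differ.

```python
# Minimal usage sketch (assumed repository id; verify before use).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Manual-Dataset-Creation-Project/Take-7B"  # assumption, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Briefly introduce yourself."},
]

# Build the prompt with the tokenizer's chat template and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```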

## Series
| Variant | Link |
| --- | --- |
| Malum-230 | [Manual-Dataset-Creation-Project/Malum-230](https://huggingface.co/datasets/Manual-Dataset-Creation-Project/Malum-230) |
| Matsu-7B | [Manual-Dataset-Creation-Project/Matsu-7B](https://huggingface.co/Manual-Dataset-Creation-Project/Matsu-7B) |

## Contributors
- [Sudy](https://huggingface.co/sudy-super)
- [ほーりーふぉっくす](https://huggingface.co/Holy-fox)

## Acknowledgments
We would like to express our gratitude to [VOLTMIND](https://voltmind.jp/) for providing the computational resources used to train this model.