Commit b0b1c22 by MexIvanov (1 parent: 9892705): Update README.md
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
license: mit
language:
- ru
- en
tags:
- python
- code
pipeline_tag: conversational
---

# Model Card for MexIvanov/zephyr-python-ru-gguf

GGUF conversion and quantizations of MexIvanov/zephyr-python-ru-merged, a Zephyr-7b-beta finetune for Python coding instructions written in Russian and English.

## Model Details

### Model Description

- **Developed by:** C.B. Pronin, A.V. Volosova, A.V. Ostroukh, Yu.N. Strogov, V.V. Kurbatov, A.S. Umarova.
- **Model type:** GGUF conversion and quantizations of MexIvanov/zephyr-python-ru-merged for easier inference.
- **Language(s) (NLP):** Russian, English, Python
- **License:** MIT
- **Finetuned from model:** HuggingFaceH4/zephyr-7b-beta

### Model Sources

- **Repository:** Coming soon...
- **Paper:** Coming soon...

## Uses

An experimental finetune of Zephyr-7b-beta, aimed at improving coding performance and support for coding-related instructions written in Russian.

### Direct Use

Instruction-based coding in Python, driven by instructions written in natural language (English or Russian).

Prompt template (Zephyr):
```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

<!-- README_GGUF.md-provided-files start -->
## Provided files (quantization info taken from TheBloke/zephyr-7B-beta-GGUF)

| Name | Quant method | Bits | Use case |
| ---- | ---- | ---- | ---- |
| [zephyr-python-ru-q4_K_M.gguf](https://huggingface.co/MexIvanov/zephyr-python-ru-gguf/blob/main/zephyr-python-ru-q4_K_M.gguf) | Q4_K_M | 4 | medium, balanced quality - recommended |
| [zephyr-python-ru-q6_K.gguf](https://huggingface.co/MexIvanov/zephyr-python-ru-gguf/blob/main/zephyr-python-ru-q6_K.gguf) | Q6_K | 6 | very large, extremely low quality loss |
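
One way to fetch a quantization from the table, assuming the `huggingface_hub` package (not a dependency declared by this repository):

```python
# Map quant method -> filename, taken from the table above.
QUANT_FILES = {
    "Q4_K_M": "zephyr-python-ru-q4_K_M.gguf",
    "Q6_K": "zephyr-python-ru-q6_K.gguf",
}


def gguf_filename(quant_method: str) -> str:
    """Return the GGUF filename for a quant method listed in the table."""
    return QUANT_FILES[quant_method]


if __name__ == "__main__":
    # Downloading requires network access and several GB of disk space.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="MexIvanov/zephyr-python-ru-gguf",
        filename=gguf_filename("Q4_K_M"),
    )
    print(path)
```

The Q4_K_M file is the recommended starting point per the table; switch to Q6_K when quality matters more than memory.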

## Bias, Risks, and Limitations

This adapter model is intended primarily (but not exclusively) for research use. It was trained on a code-based instruction set and has no moderation mechanisms. Use at your own risk; we are not responsible for any usage or output of this model.

Quote from the Zephyr (base model) repository: "Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (mistralai/Mistral-7B-v0.1), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this."

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.