---
license: apache-2.0
datasets:
- microsoft/orca-math-word-problems-200k
- ise-uiuc/Magicoder-Evol-Instruct-110K
- Vezora/Tested-22k-Python-Alpaca
---
This is a 5 bpw (bits per weight) exl2 quant of Kukedlc/NeuralExperiment-7b-MagicCoder-v7.5.
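
Since exl2 quants are typically run with the exllamav2 backend rather than loaded through `transformers` directly, here is a minimal sketch of using this quant with the exllamav2 library. The local model path is a placeholder, the sampling settings simply mirror the transformers example further down, and API details can shift between exllamav2 versions, so treat this as a starting point rather than a verified recipe.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point at a local download of this repository (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "/path/to/NeuralExperiment-7b-MagicCoder-v7.5-exl2-5bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Sampling settings chosen to mirror the transformers example below.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_k = 50
settings.top_p = 0.95

print(generator.generate_simple("What is a large language model?", settings, num_tokens=256))
```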

# Datacard for Custom Trained Model
- Base Model: [Kukedlc/NeuralExperiment-7b-dare-ties](https://huggingface.co/Kukedlc/NeuralExperiment-7b-dare-ties)

## Model Description
This model is an experimental AI trained on three distinct datasets focusing on logical reasoning, mathematics, and programming. Training fine-tuned the network from the last layer (31) backward with a gradually decreasing learning rate; a rough sketch of this schedule follows below. The primary goal is to address the common 'INSTINST' bug observed in leaderboard models through targeted training of the final layers.
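
The layer-wise schedule described above is not published as code, so the snippet below is only a sketch of the idea using per-layer optimizer parameter groups on the base model; the base learning rate, decay factor, and dtype are assumptions rather than values reported for this model.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Kukedlc/NeuralExperiment-7b-dare-ties", torch_dtype=torch.bfloat16
)

base_lr = 2e-5  # assumed peak learning rate for the last layer
decay = 0.5     # assumed per-layer decay moving backward from layer 31
num_layers = model.config.num_hidden_layers  # 32 decoder layers, indexed 0..31

# Give layer 31 the full learning rate and each earlier layer a progressively
# smaller one, so the earliest layers are effectively frozen.
param_groups = []
for idx, layer in enumerate(model.model.layers):
    lr = base_lr * (decay ** (num_layers - 1 - idx))
    param_groups.append({"params": layer.parameters(), "lr": lr})

optimizer = torch.optim.AdamW(param_groups)
```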

## Datasets Used for Training
- `microsoft/orca-math-word-problems-200k`: A large-scale dataset of mathematical word problems aimed at enhancing the model's numerical reasoning and problem-solving capabilities.
- `ise-uiuc/Magicoder-Evol-Instruct-110K`: A dataset designed to improve code generation and understanding, contributing to the model's programming language proficiency.
- `sahil2801/CodeAlpaca-20k`: A dataset focused on programming challenges to further refine the model's coding and logical reasoning skills.

Each dataset contributed 20,000 data points to the training process, ensuring a balanced representation of logic, mathematics, and programming tasks.
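
As a rough illustration of assembling that 20,000-examples-per-dataset mix with the `datasets` library, the sketch below samples 20,000 examples from each dataset listed above; the `train` split name and the shuffle seed are assumptions, and prompt formatting is dataset-specific and omitted here.

```python
from datasets import load_dataset

dataset_ids = [
    "microsoft/orca-math-word-problems-200k",
    "ise-uiuc/Magicoder-Evol-Instruct-110K",
    "sahil2801/CodeAlpaca-20k",
]

# Sample 20,000 examples from each dataset.
samples = {}
for dataset_id in dataset_ids:
    ds = load_dataset(dataset_id, split="train")
    samples[dataset_id] = ds.shuffle(seed=42).select(range(20_000))

for dataset_id, subset in samples.items():
    print(dataset_id, len(subset), subset.column_names)
```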

## Training Environment
- The model was trained on Kaggle's free GPU environment, allowing for cost-effective fine-tuning and experimentation.
- Users interested in replicating or extending this training can find the Kaggle notebook on my profile or request it directly for collaborative purposes.

## Preliminary Results
- The model shows promising results in solving logical puzzles and mathematical problems, especially those with misleading or non-obvious solutions that it initially struggled with.
- Ongoing experiments aim to quantify the impact of targeted training on the model's reasoning capabilities across different domains.

## Invitation for Collaboration
- Feedback, suggestions, and collaborative efforts are highly encouraged to further refine and evaluate the model.
- If you are interested in contributing to or experimenting with this model, please feel free to reach out or access the code directly from my Kaggle profile.

## Contact Information
- For any inquiries, suggestions, or collaboration proposals, please contact me!

```python
# Install dependencies first (in a notebook): !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

# Note: this loads the full-precision source model, not the exl2 quant in this repo.
model = "Kukedlc/NeuralExperiment-7b-MagicCoder-v7"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt using the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Text-generation pipeline in float16, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

![Kukedlc/NeuralExperiment-7b-dare-ties](https://raw.githubusercontent.com/kukedlc87/imagenes/main/DALL%C2%B7E%202024-03-05%2000.28.41%20-%20Imagine%20a%20visual%20representation%20of%20a%20language%20model%20inspired%20by%20the%20Mandelbrot%20fractal.%20The%20scene%20should%20depict%20an%20abstract%2C%20intricate%20network%20resembl.webp)