Leon-Leee committed
Commit f096bbb · verified · 1 Parent(s): 7adce78

Update README.md

Files changed (1): README.md (+22 -11)
README.md CHANGED
@@ -17,8 +17,7 @@ tags:
  ## AIGCodeGeek-DS-6.7B
  
  ### Introduction
- AIGCodeGeek-DS-6.7B is the first released version of our Code-LLM family with competitive performance on benchmarks such as HumanEval(+) and MBPP(+).
- We are preparing for a tech report; stay tuned for more details:)
+ AIGCodeGeek-DS-6.7B is the first released version of our Code-LLM family, with competitive performance on public and private benchmarks.
  
  ### Model Details
  #### Model Description
@@ -27,11 +26,11 @@ We are preparing for a tech report; stay tuned for more details:)
  - Fine-tuned from [deepseek-ai/deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base) with full parameters
  
  ### Training data
- A mixture of samples from high-quality open-source datasets (read *Acknowledgements*).
+ A mixture of samples from high-quality open-source datasets (see *Acknowledgements*) and our own private datasets.
+ We performed contamination detection against the benchmarks, as Magicoder/BigCode did.
  
  ### Evaluation
- 
- To check out our evaluation results: [EvalPlus](https://evalplus.github.io/leaderboard.html)
+ Results to be added.
  
  ### Requirements
  It should work with the same requirements as DeepSeek-Coder-6.7B or the following packages:
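
The training-data line added above mentions contamination detection "as Magicoder/BigCode did" without spelling out the procedure. As a rough illustration only, one common recipe is n-gram overlap against benchmark prompts and solutions; the 10-gram window, whitespace tokenization, and function names below are our assumptions, not the authors' pipeline:

```python
# Hypothetical decontamination sketch (not the authors' code): drop any training
# sample that shares a long word n-gram with a benchmark prompt or solution.

def ngrams(text: str, n: int = 10) -> set[tuple[str, ...]]:
    """Word-level n-grams of a string; n=10 is an assumed window size."""
    toks = text.split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def decontaminate(train_samples: list[str], benchmark_texts: list[str], n: int = 10) -> list[str]:
    """Keep only samples with zero n-gram overlap with any benchmark text."""
    bench = set()
    for t in benchmark_texts:
        bench |= ngrams(t, n)
    return [s for s in train_samples if not (ngrams(s, n) & bench)]
```

Stricter variants (e.g., the StarCoder/BigCode pipeline) also match exact benchmark docstrings and solutions as substrings.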
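
The previous revision linked the EvalPlus leaderboard, and "Results to be added" replaces it for now. In the meantime, HumanEval(+) scores can be produced with the `evalplus` package; this is a minimal sketch of its documented sample format, where `generate_one` is a hypothetical placeholder to be filled with the QuickStart generation code:

```python
# Sketch: write EvalPlus-compatible samples, then score them with the evalplus CLI.
from evalplus.data import get_human_eval_plus, write_jsonl

def generate_one(prompt: str) -> str:
    # Hypothetical placeholder: run AIGCodeGeek-DS-6.7B on `prompt`
    # (see the QuickStart snippet) and return the generated code.
    raise NotImplementedError

samples = [
    {"task_id": task_id, "solution": generate_one(problem["prompt"])}
    for task_id, problem in get_human_eval_plus().items()
]
write_jsonl("samples.jsonl", samples)
# Then, from a shell: python -m evalplus.evaluate --dataset humaneval --samples samples.jsonl
```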
@@ -48,8 +47,19 @@ attrdict
  
  
  ### QuickStart
- TBD
+ 
  ```
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("aigcode/AIGCodeGeek-DS-6.7B", trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained("aigcode/AIGCodeGeek-DS-6.7B", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
+ messages = [
+     {'role': 'user', 'content': "write a quick sort algorithm in python."}
+ ]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ # tokenizer.eos_token_id is the id of the <|EOT|> token
+ outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+ print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
  ```
  
  ### Limits
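
A note on the QuickStart added above: it loads the full bf16 weights onto a single GPU via `.cuda()`. When memory is tight, 4-bit quantized loading is a common alternative; the snippet below is our sketch, not part of the model card, and assumes `bitsandbytes` and `accelerate` are installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization roughly quarters the weight memory of the 6.7B model.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained("aigcode/AIGCodeGeek-DS-6.7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "aigcode/AIGCodeGeek-DS-6.7B",
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers across available devices
)
```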
@@ -57,9 +67,11 @@ TBD
  
  ### Acknowledgements
  We gain a lot of knowledge and resources from the open-source community:
- - [DeepSeekCoder](https://huggingface.co/deepseek-ai): impressive performance and insightful tech reports
- - [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol Instruct method and datasets
+ - [DeepSeekCoder](https://huggingface.co/deepseek-ai): impressive model series and insightful tech reports
+ - [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): the Evol-Instruct method and public datasets
  - We used a backup ([Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped](https://huggingface.co/datasets/Leon-Leee/wizardlm_evol_instruct_v2_196K_backuped)) since the original has been deleted.
- - [Magicoder](https://github.com/ise-uiuc/magicoder/): OSS-Instruct method and datasets, [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K) from theblackcat102/evol-codealpaca-v1(https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
- - [Eurus](https://github.com/OpenBMB/Eurus): creative methods and datasets for reasoning, [openbmb/UltraInteract_sft](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
- - [OpenCoderInterpreter](https://opencodeinterpreter.github.io/): well-designed experiments and [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback)
+ - [Magicoder](https://github.com/ise-uiuc/magicoder/): OSS-Instruct, plus [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K), derived from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
+ - [Eurus](https://github.com/OpenBMB/Eurus): creative datasets for reasoning, [openbmb/UltraInteract_sft](https://huggingface.co/datasets/openbmb/UltraInteract_sft)
+ - [OpenCodeInterpreter](https://opencodeinterpreter.github.io/): a well-designed system and the [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback) dataset
+ - [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k): diversity
+ - [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory): easy to use for fine-tuning base models (see the sketch below)
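
The last bullet credits LLaMA-Factory, and the Model Details section says the model was fine-tuned from deepseek-coder-6.7b-base with full parameters. Purely as an illustration of that workflow, here is a hypothetical LLaMA-Factory config; the dataset name, template, and every hyperparameter are placeholders, not the authors' recipe:

```python
# Hypothetical full-parameter SFT config for LLaMA-Factory (not the authors' recipe).
# LLaMA-Factory is typically driven by a YAML file: llamafactory-cli train full_sft.yaml
import yaml

config = {
    "model_name_or_path": "deepseek-ai/deepseek-coder-6.7b-base",
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "full",        # full-parameter tuning, per Model Details
    "dataset": "my_mixed_code_sft",   # placeholder for the mixed SFT data
    "template": "deepseekcoder",      # assumed template name in LLaMA-Factory
    "cutoff_len": 4096,
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 16,
    "learning_rate": 2.0e-5,
    "num_train_epochs": 2.0,
    "bf16": True,
    "output_dir": "outputs/AIGCodeGeek-DS-6.7B",
}

with open("full_sft.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```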