---
license: cc
datasets:
- laion/OIG
- Hello-SimpleAI/HC3
- databricks/databricks-dolly-15k
language:
- en
- th
- ja
- vi
pipeline_tag: text-generation
---

# Model Card for WangChanGLM 🐘 - The Thai-Turned-Multilingual Instruction-Following Model

## Model Details

### Model Description

WangChanGLM is a Thai-turned-multilingual, instruction-finetuned Facebook XGLM-7.5B trained on open-source, commercially permissible datasets (LAION OIG chip2 and infill_dbpedia, DataBricks Dolly v2, OpenAI TL;DR, and Hello-SimpleAI HC3; about 400k examples), released under CC-BY SA 4.0. The models are trained to perform the subset of instruction-following tasks we found most relevant, namely reading comprehension, brainstorming, and creative writing. We provide the weights for a model finetuned on an English-only dataset ([wangchanglm-7.5B-sft-en](https://huggingface.co/pythainlp/wangchanglm-7.5B-sft-en)) and another checkpoint further finetuned on a Google-Translated Thai dataset ([wangchanglm-7.5B-sft-enth](https://huggingface.co/pythainlp/wangchanglm-7.5B-sft-enth)). We perform Vicuna-style evaluation using both humans and ChatGPT (in our case, `gpt-3.5-turbo`, since we are still on the waitlist for `gpt-4`) and observe some discrepancies between the two types of annotators. All training and evaluation code is shared under the [Apache-2.0 license](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE) on our GitHub, as well as datasets and model weights on [Hugging Face](https://huggingface.co/pythainlp). See our live demo [here]().

- **Developed by:** [PyThaiNLP](https://www.github.com/pythainlp) and [VISTEC-depa AI Research Institute of Thailand](https://huggingface.co/airesearch)
- **Model type:** Finetuned [XGLM-7.5B](https://huggingface.co/facebook/xglm-7.5B)
- **Language(s) (NLP)**: `en`, `th`, `ja`, `vi` capabilities evaluated; theoretically all 30 languages of [XGLM-7.5B](https://huggingface.co/facebook/xglm-7.5B)
- **License:** [CC-BY SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

### Model Sources

- **Repository:** [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm)
- **Blog:** [Medium]()
- **Demo:** [Colab notebook]()

## Uses

### Direct Use

Intended to be used as an instruction-following model for reading comprehension, brainstorming, and creative writing.

### Downstream Use

The model can be finetuned for any typical instruction-following use case; an illustrative LoRA finetuning sketch is included at the end of this card.

### Out-of-Scope Use

We do not expect the models to perform well on math problems, reasoning, and factuality. We intentionally filter out training examples from these use cases.

## Bias, Risks, and Limitations

We noticed limitations similar to those of other finetuned instruction followers, namely in math problems, reasoning, and factuality. Even though the models do not perform at a level where we expect them to be widely abused, they do contain undesirable biases and toxicity and should be further optimized for your particular use cases.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
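Loading with `load_in_8bit=True` and `device_map="auto"` requires the `accelerate` and `bitsandbytes` packages (in addition to `transformers` and `torch`) and a CUDA-capable GPU; without 8-bit loading, those two arguments and the `torch.cuda.amp.autocast()` context can be dropped.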
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "pythainlp/wangchanglm-7.5B-sft-en"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    return_dict=True,
    load_in_8bit=True,
    device_map="auto",
    torch_dtype=torch.float16,
    offload_folder="./",
    low_cpu_mem_usage=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "เล่นหุ้นยังไงให้รวย"  # "How do I trade stocks to get rich?"
batch = tokenizer(text, return_tensors="pt")

# Generation hyperparameters
max_gen_len = 512
top_p = 0.95
temperature = 0.9
exclude_ids = []  # token IDs to suppress at the start of generation; see the repository for the list we use

with torch.cuda.amp.autocast():
    output_tokens = model.generate(
        input_ids=batch["input_ids"],
        max_new_tokens=max_gen_len,
        begin_suppress_tokens=exclude_ids,
        no_repeat_ngram_size=2,
        # oasst k50
        top_k=50,
        top_p=top_p,
        typical_p=1.,
        temperature=temperature,
        # oasst typical3
        # typical_p=0.3,
        # temperature=0.8,
        # repetition_penalty=1.2,
    )
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))
```

## Training Details

### Training Data

Finetuning datasets are sourced from [LAION OIG chip2 and infill_dbpedia](https://huggingface.co/datasets/laion/OIG) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [DataBricks Dolly v2](https://github.com/databrickslabs/dolly) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [OpenAI TL;DR](https://github.com/openai/summarize-from-feedback) ([MIT](https://opensource.org/license/mit/)), and [Hello-SimpleAI HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) ([CC-BY SA](https://creativecommons.org/licenses/by-sa/4.0/)).

### Training Procedure

#### Preprocessing

See [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm).

#### Training Hyperparameters

- **Training regime:** LoRA with 4 GPUs. See more details at [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm).

## Evaluation

We performed automatic evaluation in the style of [Vicuna](https://vicuna.lmsys.org/) as well as human evaluation. See more details in our [blog]().

#### Summary

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation

**BibTeX:**

[More Information Needed]

## Model Card Contact

[More Information Needed]
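As mentioned under Downstream Use, the released checkpoints can be further finetuned for other instruction-following use cases. The sketch below is a minimal, hypothetical example of LoRA finetuning with the `peft` library; the dataset, prompt format, `target_modules`, and hyperparameters are illustrative assumptions, not the exact recipe used for WangChanGLM (see [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm) for the actual training scripts).

```python
# Hypothetical LoRA finetuning sketch; not the official WangChanGLM recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "facebook/xglm-7.5B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
# bfloat16 assumes a GPU with bf16 support; adjust for your hardware.
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Attach LoRA adapters; target_modules are assumed names of XGLM's
# attention projections and may need adjustment for other backbones.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Example instruction dataset; any dataset with instruction/response pairs works.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def format_example(example):
    # Simple prompt formatting for illustration; the project may use a different template.
    prompt = f"{example['instruction']}\n{example['response']}"
    return tokenizer(prompt, truncation=True, max_length=512)

tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./wangchanglm-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard causal language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

For multi-GPU training (the card mentions 4 GPUs), this script would typically be launched with `torchrun` or `accelerate launch` rather than plain `python`.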