Stevross committed · Commit fdb5868 · 1 parent: 22257b0

Update README.md

Files changed (1): README.md +47 -43
README.md CHANGED
@@ -12,12 +12,23 @@ inference: true
 thumbnail: https://static.wixstatic.com/media/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png/v1/fill/w_192%2Ch_192%2Clg_1%2Cusm_0.66_1.00_0.01/bdee4e_8aa5cefc86024bc88f7e20e3e19d9ff3~mv2.png
 ---
 # Model Card
-## Summary
-
-This model, Astrid-Llama-7B, is a Llama model for causal language modeling, designed to generate human-like text.
-It's part of our mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance.
-Trained in English, it's a versatile tool for a variety of applications.
-This model is one of the many models available on our platform, and we currently have a 1B and 7B open-source model.
+
+# Model Card: PAIXAI/Astrid-LLama-7B
+
+## Summary
+This model, Astrid-Llama-7B, is a Llama model for causal language modeling, designed to generate human-like text. It is part of the mission to make AI technology accessible to everyone, focusing on personalization, data privacy, and transparent AI governance. Trained in English, it is a versatile tool for a variety of applications. It is one of many models available on the platform, which offers both 1B and 7B open-source versions. The model was trained by PAIX.Cloud.
+
+## About PAIX
+PAIX is an AI ecosystem built on collaboration, innovation, data privacy, and transparent AI governance, providing a decentralized AI ecosystem accessible to all. By combining AI with the transparency of blockchain technology, PAIX supports the development of personalized AI assistants. With blockchain at its core, PAIX aims for stronger data security and user control, addressing the AI-alignment concerns raised across the industry.
+
+## PAIX Web4AI Sandbox Environment
+The PAIX ecosystem includes PAIX Web4AI, which lets users create, test, and train their AI models in a safe sandbox environment through APIs or a no-code setup. The PAIX Playground lets users test and compare different AI models to confirm they fit their specific requirements. The PAIX Gymnasium helps fine-tune AI assistants by injecting personal data, such as emails or social media content, to improve their understanding of users.
+
+## PAIX Marketplace
+The upcoming PAIX Marketplace will offer a range of AI models and extensions for voices, characters, and other customizable features, which users can integrate into their personalized AI assistants. PAIX also lets users commercialize their AI models by selling them on the marketplace, contributing to the growth of the ecosystem.
+
 
 This model was trained by [PAIX.Cloud](https://www.paix.cloud/).
 - Wait list: [Wait List](https://www.paix.cloud/join-waitlist)
@@ -78,13 +89,13 @@ from h2oai_pipeline import H2OTextGenerationPipeline
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 tokenizer = AutoTokenizer.from_pretrained(
-    "Stevross/Astrid-LLama-7B-1",
+    "PAIXAI/Astrid-LLama-7B",
     use_fast=False,
     padding_side="left",
     trust_remote_code=True,
 )
 model = AutoModelForCausalLM.from_pretrained(
-    "Stevross/Astrid-LLama-7B-1",
+    "PAIXAI/Astrid-LLama-7B",
     torch_dtype="auto",
     device_map={"": "cuda:0"},
     trust_remote_code=True,
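Between this hunk and the next, the card (unchanged by the commit) presumably wraps the loaded model and tokenizer in the `H2OTextGenerationPipeline` imported in the hunk header. A minimal sketch of that step, assuming the repo-local `h2oai_pipeline` module follows the usual H2O LLM Studio card template; the generation keywords are illustrative, not taken from this card:

```python
from h2oai_pipeline import H2OTextGenerationPipeline

# Wrap the model and tokenizer loaded above (assumes the standard
# H2O LLM Studio pipeline template, which applies the prompt format itself)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

# Illustrative settings, not from this card
res = generate_text("Why is drinking water so healthy?", max_new_tokens=256)
print(res[0]["generated_text"])
```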
@@ -110,7 +121,7 @@ You may also construct the pipeline from the loaded model and tokenizer yourself
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "Stevross/Astrid-LLama-7B-1"  # either local folder or huggingface model name
+model_name = "PAIXAI/Astrid-LLama-7B"  # either local folder or huggingface model name
 # Important: The prompt needs to be in the same format the model was trained with.
 # You can find an example prompt in the experiment logs.
 prompt = "<|prompt|>How are you?</s><|answer|>"
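The diff jumps from the prompt definition here to the decoding lines in the next hunk; a minimal sketch of the elided middle, assuming the standard `transformers` generation API (the keyword values are illustrative assumptions, not from the card):

```python
import torch

# Tokenize the prompt defined above and move it to the GPU
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# Generate a completion; the settings below are illustrative assumptions
with torch.no_grad():
    tokens = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
    )[0]  # take the single returned sequence so the slicing below works
```

The next hunk then strips the prompt tokens from `tokens` and decodes only the generated answer.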
@@ -144,53 +155,46 @@ tokens = model.generate(
 tokens = tokens[inputs["input_ids"].shape[1]:]
 answer = tokenizer.decode(tokens, skip_special_tokens=True)
 print(answer)
-```
 
-## Model Architecture
+## Usage
 
-```
-LlamaForCausalLM(
-  (model): LlamaModel(
-    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
-    (layers): ModuleList(
-      (0-31): 32 x LlamaDecoderLayer(
-        (self_attn): LlamaAttention(
-          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
-          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
-          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
-          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
-          (rotary_emb): LlamaRotaryEmbedding()
-        )
-        (mlp): LlamaMLP(
-          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
-          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
-          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
-          (act_fn): SiLUActivation()
-        )
-        (input_layernorm): LlamaRMSNorm()
-        (post_attention_layernorm): LlamaRMSNorm()
-      )
-    )
-    (norm): LlamaRMSNorm()
-  )
-  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
-)
+To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, `torch`, and `streamlit` libraries installed (`streamlit` is needed for the chatbot example below).
+
+```bash
+pip install transformers==4.30.1
+pip install accelerate==0.20.3
+pip install torch==2.0.0
+pip install streamlit
 ```
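This commit removes the architecture listing from the card, but since it is just the model's printed module tree, it can be reproduced locally (module names may differ slightly across `transformers` versions):

```python
from transformers import AutoModelForCausalLM

# Printing the model emits the LlamaForCausalLM module tree removed above
model = AutoModelForCausalLM.from_pretrained(
    "PAIXAI/Astrid-LLama-7B",
    trust_remote_code=True,
)
print(model)
```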
 
-## Model Configuration
-
-This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
-
-## Model Validation
-
-Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=Stevross/Astrid-LLama-7B-1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
+```python
+# Import necessary libraries
+import streamlit as st
+from transformers import pipeline
+
+# Initialize the chatbot model
+chatbot = pipeline("text-generation", model="PAIXAI/Astrid-LLama-7B")
+
+# Streamlit UI
+st.title("Astrid-LLama-7B Chatbot")
+
+# User input
+user_input = st.text_input("You: ", "")
+
+# Get response from the chatbot
+if st.button("Ask"):
+    with st.spinner("Generating response..."):
+        response = chatbot(user_input, max_length=100, do_sample=True, top_p=0.95, top_k=60)
+    st.write("Bot:", response[0]['generated_text'])
+
+st.sidebar.header("About")
+st.sidebar.text("This is a simple chatbot using\n"
+                "the Astrid-LLama-7B model from\n"
+                "Hugging Face and Streamlit UI.")
 ```
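The Streamlit app above is typically saved to a file and launched with `streamlit run app.py` (the filename is illustrative). For a quick check without the UI, the same pipeline can be called directly; this sketch reuses the prompt format shown earlier in the card and the app's sampling values:

```python
from transformers import pipeline

# Same text-generation pipeline the Streamlit app uses
chatbot = pipeline("text-generation", model="PAIXAI/Astrid-LLama-7B")

# Prompt format the card shows the model was trained with
prompt = "<|prompt|>How are you?</s><|answer|>"
response = chatbot(prompt, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(response[0]["generated_text"])
```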
 
 ## Disclaimer
 
 Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.