Random7878 committed on
Commit 5f3a226 · verified · 1 Parent(s): 16c8277

Update README.md

Files changed (1)
  1. README.md +119 -208
README.md CHANGED
@@ -4,216 +4,127 @@ datasets:
  - vidore/syntheticDocQA_artificial_intelligence_test
  - aps/super_glue
  metrics:
- - exact_match
- - f1
- - recall
- - perplexity
- - bleu
- - rouge
  - accuracy
+ language:
+ - en
  base_model:
  - openai-community/gpt2
  - deepseek-ai/DeepSeek-R1
- new_version: qualcomm/Stable-Diffusion-v2.1
- pipeline_tag: question-answering
- library_name: transformers
- tags:
- - code
- - finance
- - biology
- - chemistry
+ new_version: deepseek-ai/Janus-Pro-7B
+ library_name: diffusers
  ---
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
+ from flask import Flask, request, jsonify
+ from transformers import pipeline
+ import openai
+ from newsapi import NewsApiClient
+ from notion_client import Client
+ from datetime import datetime, timedelta
+ import torch
+ from diffusers import StableDiffusionPipeline
+
+ # Initialize Flask app
+ app = Flask(__name__)
+
+ # Load Hugging Face Question-Answering model
+ qa_pipeline = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")
+
+ # OpenAI API Key (Replace with your own)
+ openai.api_key = "your_openai_api_key"
+
+ # NewsAPI Key (Replace with your own)
+ newsapi = NewsApiClient(api_key="your_news_api_key")
+
+ # Notion API Key (Replace with your own)
+ notion = Client(auth="your_notion_api_key")
+
+ # Load Stable Diffusion for Image Generation
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ sd_model = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
+
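+ # (Editorial note, not part of the original commit: the imports above imply the
+ # dependencies can be installed with something along the lines of
+ # `pip install flask transformers torch diffusers openai newsapi-python notion-client`;
+ # exact package names and versions are an assumption.)
+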
+ # === FUNCTION 1: Answer Student Questions ===
+ @app.route("/ask", methods=["POST"])
+ def answer_question():
+     data = request.json
+     question = data.get("question", "")
+     # NOTE: this context is a fixed placeholder; an extractive QA model can only
+     # answer from the text given here, so real reference material should be supplied.
+     context = "This AI is trained to assist students with questions related to various subjects."
+
+     if not question:
+         return jsonify({"error": "Please provide a question."}), 400
+
+     answer = qa_pipeline(question=question, context=context)
+     return jsonify({"question": question, "answer": answer["answer"]})
+
+ # === FUNCTION 2: Generate Code ===
+ @app.route("/generate_code", methods=["POST"])
+ def generate_code():
+     data = request.json
+     prompt = data.get("prompt", "")
+
+     if not prompt:
+         return jsonify({"error": "Please provide a prompt for code generation."}), 400
+
+     # NOTE: this uses the legacy openai<1.0 Completions interface, and the
+     # code-davinci-002 (Codex) model has since been deprecated by OpenAI,
+     # so a currently available model may need to be substituted here.
+     response = openai.Completion.create(
+         engine="code-davinci-002",
+         prompt=prompt,
+         max_tokens=100
+     )
+     return jsonify({"code": response.choices[0].text.strip()})
+
+ # === FUNCTION 3: Get Daily News ===
+ @app.route("/news", methods=["GET"])
+ def get_news():
+     headlines = newsapi.get_top_headlines(language="en", category="technology")
+     news_list = [{"title": article["title"], "url": article["url"]} for article in headlines["articles"]]
+
+     return jsonify({"news": news_list})
+
+ # === FUNCTION 4: Create a Planner Task ===
+ @app.route("/planner", methods=["POST"])
+ def create_planner():
+     data = request.json
+     task = data.get("task", "")
+     days = int(data.get("days", 1))
+
+     if not task:
+         return jsonify({"error": "Please provide a task."}), 400
+
+     due_date = datetime.now() + timedelta(days=days)
+
+     return jsonify({"task": task, "due_date": due_date.strftime("%Y-%m-%d")})
+
+ # === FUNCTION 5: Save Notes to Notion ===
+ @app.route("/notion", methods=["POST"])
+ def save_notion_note():
+     data = request.json
+     title = data.get("title", "Untitled Note")
+     content = data.get("content", "")
+
+     if not content:
+         return jsonify({"error": "Please provide content for the note."}), 400
+
+     # NOTE: recent Notion API versions expect "rich_text" rather than "text"
+     # for paragraph blocks, so the key below is updated accordingly.
+     notion.pages.create(
+         parent={"database_id": "your_notion_database_id"},
+         properties={"title": {"title": [{"text": {"content": title}}]}},
+         children=[{"object": "block", "type": "paragraph", "paragraph": {"rich_text": [{"type": "text", "text": {"content": content}}]}}]
+     )
+
+     return jsonify({"message": "Note added successfully to Notion!"})
+
+ # === FUNCTION 6: Generate AI Images ===
+ @app.route("/generate_image", methods=["POST"])
+ def generate_image():
+     data = request.json
+     prompt = data.get("prompt", "")
+
+     if not prompt:
+         return jsonify({"error": "Please provide an image prompt."}), 400
+
+     image = sd_model(prompt).images[0]
+     image_path = "generated_image.png"
+     image.save(image_path)
+
+     return jsonify({"message": "Image generated successfully!", "image_path": image_path})
+
+ # === RUN THE APP ===
+ if __name__ == "__main__":
+     app.run(debug=True)
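
The body this commit adds to README.md is a small Flask service rather than model-card prose. As a rough illustration only, assuming the script is saved as `app.py`, the placeholder API keys and Notion database ID are filled in, and the server is started with `python app.py` (Flask's default host and port), the routes could be exercised with Python's `requests` library along the lines below; the file name, host, port, and prompts are assumptions, not part of the commit.

```python
# Hypothetical smoke test for the endpoints defined in the added README code.
# Assumes the Flask app is saved as app.py and running at Flask's default
# address, http://127.0.0.1:5000.
import requests

BASE = "http://127.0.0.1:5000"

# Extractive question answering against the hard-coded placeholder context
print(requests.post(f"{BASE}/ask", json={"question": "What is this AI trained to do?"}).json())

# Planner task with a due date three days from now
print(requests.post(f"{BASE}/planner", json={"task": "Revise chapter 4", "days": 3}).json())

# Top technology headlines (requires a valid NewsAPI key in the app)
print(requests.get(f"{BASE}/news").json())

# Text-to-image via Stable Diffusion (slow on CPU; weights download on first use)
print(requests.post(f"{BASE}/generate_image", json={"prompt": "a watercolor painting of a campus library"}).json())
```

The `/generate_code` and `/notion` routes are omitted from this sketch because they additionally need a working OpenAI key and a Notion integration with access to the target database.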