Raiff1982 committed
Commit 097d5a0 · verified · 1 Parent(s): 037fee9

Create prompt.txt

Files changed (1)
  1. prompt.txt +280 -0
prompt.txt ADDED
@@ -0,0 +1,280 @@
AI Agent Creation Prompt

Objective
Create an AI system that can generate responses to user queries from multiple perspectives, including Newtonian physics, DaVinci's interdisciplinary approach, human intuition, neural networks, quantum computing, resilient kindness, mathematical reasoning, philosophical inquiry, AI copilot reasoning, bias mitigation, and psychological analysis. The AI should also handle text, voice, and image inputs, perform advanced sentiment analysis, integrate real-time data, and address security and ethical considerations.

Functionalities

Configuration Management:
Use pydantic to manage configuration settings.
Load configuration from a JSON file and environment variables.
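A minimal sketch of this configuration layer, using a plain pydantic BaseModel with a hand-rolled environment-variable override. The AppConfig field names mirror keys read by the example code further down; the load_config helper and the LOG_LEVEL override are illustrative assumptions, not requirements of the prompt.

python
import json
import os
from typing import List

from pydantic import BaseModel, Field


class AppConfig(BaseModel):
    # Illustrative schema; the prompt leaves the exact fields open.
    logging_enabled: bool = True
    log_level: str = "DEBUG"
    enabled_perspectives: List[str] = Field(default_factory=list)
    enable_response_saving: bool = False
    response_save_path: str = "responses.txt"


def load_config(path: str = "config.json") -> AppConfig:
    data = {}
    if os.path.exists(path):
        with open(path, "r", encoding="utf-8") as fh:
            data = json.load(fh)
    # Environment variables override the JSON file, e.g. LOG_LEVEL=INFO.
    if "LOG_LEVEL" in os.environ:
        data["log_level"] = os.environ["LOG_LEVEL"]
    return AppConfig(**data)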

Sentiment Analysis:
Utilize the vaderSentiment library to analyze the sentiment of text.
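A short usage sketch for the vaderSentiment analyzer named above. polarity_scores returns negative, neutral, positive and compound scores; the ±0.05 compound cut-off used here is the conventional threshold and is shown only as an illustration.

python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("I love how helpful this assistant is!")
# polarity_scores returns a dict with 'neg', 'neu', 'pos' and 'compound' keys.

if scores["compound"] >= 0.05:
    label = "positive"
elif scores["compound"] <= -0.05:
    label = "negative"
else:
    label = "neutral"
print(label, scores)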

Dependency Injection:
Implement a simple dependency injection system to manage dependencies like configuration and sentiment analyzer.
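One possible shape for the simple dependency injection container described above; the Container class and its register/resolve methods are assumptions for illustration, not mandated by the prompt.

python
from typing import Any, Callable, Dict


class Container:
    """Minimal service container: register factories, resolve singletons lazily."""

    def __init__(self) -> None:
        self._factories: Dict[str, Callable[[], Any]] = {}
        self._instances: Dict[str, Any] = {}

    def register(self, name: str, factory: Callable[[], Any]) -> None:
        self._factories[name] = factory

    def resolve(self, name: str) -> Any:
        if name not in self._instances:
            self._instances[name] = self._factories[name]()
        return self._instances[name]


# Example wiring (names are illustrative):
# container = Container()
# container.register("config", lambda: load_json_config("config.json"))
# container.register("sentiment", SentimentIntensityAnalyzer)
# analyzer = container.resolve("sentiment")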

Error Handling and Logging:
Set up logging based on configuration settings.
Handle errors and log them appropriately.

Universal Reasoning Aggregator:
Initialize various perspectives (e.g., Newton, DaVinci, Human Intuition) and elements (e.g., Hydrogen, Diamond).
Use a custom recognizer to identify intents in questions.
Generate responses based on different perspectives and elements.
Handle ethical considerations and include them in responses.

Element Defense Logic:
Recognize elements and execute their defense abilities based on the context of the question.

Encryption and Security:
Encrypt and decrypt sensitive information using the cryptography library.
Securely destroy sensitive data when no longer needed.
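A hedged sketch of the encryption step, matching the cryptography imports used in the example code further down (AES-CBC with PKCS7 padding). Key management, the 32-byte key size, and the IV-prefixing scheme are illustrative choices, not requirements from the prompt.

python
import base64
import os

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def encrypt_text(plaintext: str, key: bytes) -> str:
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext.encode("utf-8")) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    # Prepend the IV so the token is self-contained.
    return base64.b64encode(iv + ciphertext).decode("ascii")


def decrypt_text(token: str, key: bytes) -> str:
    raw = base64.b64decode(token)
    iv, ciphertext = raw[:16], raw[16:]
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return (unpadder.update(padded) + unpadder.finalize()).decode("utf-8")


# Example (key management is out of scope here):
# key = os.urandom(32)  # 256-bit AES key
# token = encrypt_text("secret", key)
# print(decrypt_text(token, key))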

Contextual Awareness:
Maintain context throughout the conversation, ensuring coherent and relevant responses.
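A small sketch of one way to keep conversational context as described above; the ConversationContext name, the fixed-size deque, and the way history is folded into a prompt prefix are illustrative choices, not part of the original specification.

python
from collections import deque
from typing import Deque, Tuple


class ConversationContext:
    """Keeps the last few (question, response) pairs for context-aware prompting."""

    def __init__(self, max_turns: int = 5) -> None:
        self.turns: Deque[Tuple[str, str]] = deque(maxlen=max_turns)

    def add_turn(self, question: str, response: str) -> None:
        self.turns.append((question, response))

    def as_prompt_prefix(self) -> str:
        # Fold recent history into a text block a perspective can prepend to the new question.
        return "\n".join(f"Q: {q}\nA: {a}" for q, a in self.turns)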

Dynamic Perspective Expansion:
Add new perspectives dynamically based on user interactions.
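One way dynamic perspective expansion could work alongside the UniversalReasoning class in the example below; this standalone helper and its registry lookup are an assumed extension, not part of the original code. In that class it would be called with self.perspectives and the perspective_classes mapping from initialize_perspectives.

python
import logging
from typing import Any, Dict, List, Type


def add_perspective(perspectives: List[Any],
                    registry: Dict[str, Type],
                    name: str,
                    config: Dict[str, Any]) -> None:
    """Hypothetical helper: instantiate a registered perspective class at runtime."""
    cls = registry.get(name.lower())
    if cls is None:
        logging.warning(f"Perspective '{name}' is not registered; skipping.")
        return
    if any(isinstance(p, cls) for p in perspectives):
        logging.info(f"Perspective '{name}' is already active.")
        return
    perspectives.append(cls(config))
    logging.debug(f"Perspective '{name}' added dynamically.")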

User Feedback Mechanism:
Collect and process user feedback for continuous learning and improvement.

Multi-Modal Input Handling:
Process and respond to text-based queries.
Listen to and process voice commands.
Process and analyze images.

Response Saving and Backup:
Save and back up responses based on configuration settings.

Ethical Decision Making:
Integrate ethical principles into decision-making processes to ensure fairness, transparency, and respect for privacy.

Transparency and Explainability:
Provide transparency by explaining the reasoning behind decisions and actions taken by the AI.

Example Code Structure
python
import asyncio
import json
import logging
import os
from typing import List, Dict, Any
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from dotenv import load_dotenv
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.backends import default_backend
import base64

# Import perspectives
from module1 import (
    NewtonPerspective, DaVinciPerspective, HumanIntuitionPerspective,
    NeuralNetworkPerspective, QuantumComputingPerspective, ResilientKindnessPerspective,
    MathematicalPerspective, PhilosophicalPerspective, CopilotPerspective,
    BiasMitigationPerspective, PsychologicalPerspective
)
from defense import Element, CustomRecognizer, DataProtector

class UniversalReasoning:
    def __init__(self, config):
        self.config = config
        self.setup_logging()  # configure logging first so later initialization can log
        self.perspectives = self.initialize_perspectives()
        self.elements = self.initialize_elements()
        self.recognizer = CustomRecognizer()
        self.sentiment_analyzer = SentimentIntensityAnalyzer()

    def setup_logging(self):
        if self.config.get('logging_enabled', True):
            log_level = self.config.get('log_level', 'DEBUG').upper()
            numeric_level = getattr(logging, log_level, logging.DEBUG)
            logging.basicConfig(
                filename='universal_reasoning.log',
                level=numeric_level,
                format='%(asctime)s - %(levelname)s - %(message)s'
            )
        else:
            logging.disable(logging.CRITICAL)
    def initialize_perspectives(self):
        perspective_names = self.config.get('enabled_perspectives', [
            "newton", "davinci", "human_intuition", "neural_network",
            "quantum_computing", "resilient_kindness", "mathematical",
            "philosophical", "copilot", "bias_mitigation", "psychological"
        ])
        perspective_classes = {
            "newton": NewtonPerspective, "davinci": DaVinciPerspective,
            "human_intuition": HumanIntuitionPerspective, "neural_network": NeuralNetworkPerspective,
            "quantum_computing": QuantumComputingPerspective, "resilient_kindness": ResilientKindnessPerspective,
            "mathematical": MathematicalPerspective, "philosophical": PhilosophicalPerspective,
            "copilot": CopilotPerspective, "bias_mitigation": BiasMitigationPerspective,
            "psychological": PsychologicalPerspective
        }
        perspectives = []
        for name in perspective_names:
            cls = perspective_classes.get(name.lower())
            if cls:
                perspectives.append(cls(self.config))
                logging.debug(f"Perspective '{name}' initialized.")
            else:
                logging.warning(f"Perspective '{name}' is not recognized and will be skipped.")
        return perspectives

    def initialize_elements(self):
        elements = [
            Element(
                name="Hydrogen", symbol="H", representation="Lua",
                properties=["Simple", "Lightweight", "Versatile"],
                interactions=["Easily integrates with other languages and systems"],
                defense_ability="Evasion"
            ),
            Element(
                name="Diamond", symbol="D", representation="Kotlin",
                properties=["Modern", "Concise", "Safe"],
                interactions=["Used for Android development"],
                defense_ability="Adaptability"
            )
        ]
        return elements
    async def generate_response(self, question):
        responses = []
        tasks = []

        # Generate responses from perspectives concurrently
        for perspective in self.perspectives:
            if asyncio.iscoroutinefunction(perspective.generate_response):
                tasks.append(perspective.generate_response(question))
            else:
                # Wrap synchronous functions in a coroutine
                async def sync_wrapper(perspective, question):
                    return perspective.generate_response(question)
                tasks.append(sync_wrapper(perspective, question))

        perspective_results = await asyncio.gather(*tasks, return_exceptions=True)

        for perspective, result in zip(self.perspectives, perspective_results):
            if isinstance(result, Exception):
                logging.error(f"Error generating response from {perspective.__class__.__name__}: {result}")
            else:
                responses.append(result)
                logging.debug(f"Response from {perspective.__class__.__name__}: {result}")

        # Handle element defense logic
        recognizer_result = self.recognizer.recognize(question)
        top_intent = self.recognizer.get_top_intent(recognizer_result)
        if top_intent == "ElementDefense":
            element_name = recognizer_result.text.strip()
            element = next(
                (el for el in self.elements if el.name.lower() in element_name.lower()),
                None
            )
            if element:
                defense_message = element.execute_defense_function()
                responses.append(defense_message)
            else:
                logging.info(f"No matching element found for '{element_name}'")

        ethical_considerations = self.config.get(
            'ethical_considerations',
            "Always act with transparency, fairness, and respect for privacy."
        )
        responses.append(f"**Ethical Considerations:**\n{ethical_considerations}")

        formatted_response = "\n\n".join(responses)
        return formatted_response
    def save_response(self, response):
        if self.config.get('enable_response_saving', False):
            save_path = self.config.get('response_save_path', 'responses.txt')
            try:
                with open(save_path, 'a', encoding='utf-8') as file:
                    file.write(response + '\n')
                logging.info(f"Response saved to '{save_path}'.")
            except Exception as e:
                logging.error(f"Error saving response to '{save_path}': {e}")
    def backup_response(self, response):
        if self.config.get('backup_responses', {}).get('enabled', False):
            backup_path = self.config['backup_responses'].get('backup_path', 'backup_responses.txt')
            try:
                with open(backup_path, 'a', encoding='utf-8') as file:
                    file.write(response + '\n')
                logging.info(f"Response backed up to '{backup_path}'.")
            except Exception as e:
                logging.error(f"Error backing up response to '{backup_path}': {e}")

def load_json_config(file_path):
    if not os.path.exists(file_path):
        logging.error(f"Configuration file '{file_path}' not found.")
        return {}
    try:
        with open(file_path, 'r') as file:
            config = json.load(file)
        logging.info(f"Configuration loaded from '{file_path}'.")
        return config
    except json.JSONDecodeError as e:
        logging.error(f"Error decoding JSON from the configuration file '{file_path}': {e}")
        return {}
def select_perspective(question: str, config: Dict[str, Any]) -> Any:
    if is_scientific_or_technical(question):
        if involves_physical_forces_or_motion(question):
            return NewtonPerspective(config)
        elif involves_quantum_mechanics(question):
            return QuantumComputingPerspective(config)
        else:
            return MathematicalPerspective(config)
    elif is_data_driven(question):
        return NeuralNetworkPerspective(config)
    elif is_creative_or_innovative(question):
        return DaVinciPerspective(config)
    elif is_human_centric(question):
        if involves_empathy_or_resilience(question):
            return ResilientKindnessPerspective(config)
        else:
            return HumanIntuitionPerspective(config)
    elif is_ethical_or_philosophical(question):
        return PhilosophicalPerspective(config)
    else:
        return CopilotPerspective(config)

def is_scientific_or_technical(question: str) -> bool:
    # Placeholder logic to determine if the question is scientific or technical
    return "physics" in question or "engineering" in question

def involves_physical_forces_or_motion(question: str) -> bool:
    # Placeholder logic to detect physical forces or motion
    return "force" in question or "motion" in question

def involves_quantum_mechanics(question: str) -> bool:
    # Placeholder logic to detect quantum mechanics
    return "quantum" in question

def is_data_driven(question: str) -> bool:
    # Placeholder logic to determine if the question is data-driven
    return "data" in question or "AI" in question

def is_creative_or_innovative(question: str) -> bool:
    # Placeholder logic to determine if the question is creative or innovative
    return "creative" in question or "innovation" in question

def is_human_centric(question: str) -> bool:
    # Placeholder logic to determine if the question is human-centric
    return "human" in question or "people" in question

def involves_empathy_or_resilience(question: str) -> bool:
    # Placeholder logic to detect empathy or resilience
    return "empathy" in question or "resilience" in question

def is_ethical_or_philosophical(question: str) -> bool:
    # Placeholder logic to determine if the question is ethical or philosophical
    return "ethical" in question or "philosophical" in question

# Load configuration and run the example
if __name__ == "__main__":
    load_dotenv()  # load variables from a local .env file, if present
    config = load_json_config('config.json')
    universal_reasoning = UniversalReasoning(config)
    question = "Tell me about Hydrogen and its defense mechanisms."
    response = asyncio.run(universal_reasoning.generate_response(question))
    print(response)
    universal_reasoning.save_response(response)
    universal_reasoning.backup_response(response)
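For reference, a config.json consistent with the keys the example above actually reads (logging_enabled, log_level, enabled_perspectives, ethical_considerations, enable_response_saving, response_save_path, backup_responses). It is written here as a small Python snippet; the specific values are only illustrative.

python
import json

example_config = {
    "logging_enabled": True,
    "log_level": "INFO",
    "enabled_perspectives": ["newton", "davinci", "quantum_computing", "bias_mitigation"],
    "ethical_considerations": "Always act with transparency, fairness, and respect for privacy.",
    "enable_response_saving": True,
    "response_save_path": "responses.txt",
    "backup_responses": {"enabled": True, "backup_path": "backup_responses.txt"}
}

with open("config.json", "w", encoding="utf-8") as fh:
    json.dump(example_config, fh, indent=2)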