Art Prompts
Below is the list of prompts used to generate the sidebar and center-gallery pictures. File naming is currently manual; the names are the high-information words that repeat in discussions. This format can be explored computationally to create an idempotent, cached view of completed exploration results for a given MoE. Aspect ratios are chosen to fit each image to its target space: the left sidebar uses a 1:9 aspect ratio, and the gallery uses double-UHD resolution, i.e. two 3840x2160 monitors side by side for a total of 7680x2160, the 32:9 aspect ratio, so the image can double as a dual-monitor background at high upscale resolution for maximum fidelity (a short sketch of this arithmetic follows the prompt list).
Scrabble, Boggle, Crossword Puzzle, Bananagrams, Hangman, Words with Friends, Wordle, Letterpress, Alphabear, Gameplay Dynamics, Word Discovery, Vocabulary building 3d character photographic kit Peripheral drift illusion Hieronymous Bosch supercomputer android body map and body scan with futuristic similarity to humans showing organs on tile map of paintings with couples digital and robot with micro sized wire flows of light for outlines digital hud. The overall design conveys the energy and drama of the embrace, emphasizing the distinct characteristics of the realistic characters while maintaining science technology art feel. --v 6.0 --ar 1:9
scrabble tile placement word formation point scoring tilemap with all letter tiles and score values for each letter, and include blanks. account for every scrabble tile --v 6.0 --ar 32:9
luminous marble game tiles with gem stone letters vowels are color coded and consonants are white and black. scrabble tile placement word formation point scoring tilemap with all letter tiles and score values for each letter, and include blanks. account for every scrabble tile --v 6.0 --ar 32:9
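The aspect-ratio arithmetic above can be sanity-checked with a minimal sketch; the helper below is illustrative, not part of any library, and the monitor dimensions are the ones stated above:
from math import gcd

def simplest_ratio(width: int, height: int) -> str:
    # Reduce width:height by the greatest common divisor, e.g. 7680x2160 -> 32:9
    g = gcd(width, height)
    return f"{width // g}:{height // g}"

# Two UHD monitors side by side: 2 x 3840 wide, 2160 tall
total_w, total_h = 2 * 3840, 2160
print(simplest_ratio(total_w, total_h))   # -> 32:9
print(round(total_w / total_h, 4))        # -> 3.5556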
If you are using Midjourney, a single click into the image gives only a low-resolution copy. The aspect ratio is also visible in the Midjourney HTML div (3.5562, as shown below):
<div class="loadingOverlay__4d818" style="aspect-ratio: 3.5562 / 1;"><img alt="Image" src="https://media.discordapp.net/attachments/997514686608191558/1211348891627429928/aaronwacker_luminous_marble_game_tiles_with_gem_stone_letters_v_2317b3eb-404e-4c1b-8a6f-d8ff1d7b80c2.png?ex=65eddf91&is=65db6a91&hm=b921d1bceae3dc6c4aaedff1288e01d8b7997dc52b42847a93e0ccc9135f3d79&=&format=webp&quality=lossless&width=1651&height=464" style="width: 1835px; height: 516px;"></div>
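That decimal can also be pulled straight out of the div markup. A minimal sketch, assuming the HTML above is held in a string; the regex and variable names are illustrative:
import re

div_html = '<div class="loadingOverlay__4d818" style="aspect-ratio: 3.5562 / 1;"><img alt="Image" src="..."></div>'

# Extract the "aspect-ratio: X / Y" declaration from the inline style
match = re.search(r'aspect-ratio:\s*([\d.]+)\s*/\s*([\d.]+)', div_html)
if match:
    ratio = float(match.group(1)) / float(match.group(2))
    print(ratio)  # 3.5562, close to the 32:9 target (about 3.5556)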
Image models will eventually evolve to the point where they can maintain distinct, related diffusion sets keyed by combinations of two or three words or tokens, so they can disassemble a prompt and render each set of interacting terms with high fidelity. One way to explore this is to cull the high-information words with the NLTK library, using the code below:
https://huggingface.co/spaces/awacke1/Transcript-EDA-NLTK
An example is shown below using Andrej Karpathy's talk on the State of GPT.
Top 10 High Information Words
[
0:"training"
1:"models"
2:"rm"
3:"gpt"
4:"assistant"
5:"pipeline"
6:"data"
7:"collection"
8:"example"
9:"base"
]
This is called a relationship graph. A second graph type is even more important: it shows the high-information words, the fidelity of the topic exploration, and each word's ingress and egress terms. The ingress term is the prefix word that comes immediately before the high-information word, and the egress term is the suffix word that comes immediately after it.
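As a toy illustration of those ingress and egress terms (the sentence below is made up; the real extraction is done by extract_context_words in the full listing that follows):
import nltk
nltk.download('punkt')

sentence = "The training pipeline collects data before training the base model."
tokens = nltk.word_tokenize(sentence)
high_info = {"training", "pipeline", "data", "base"}

# For each high-information word, keep its ingress (previous) and egress (next) token
triples = [(tokens[i - 1] if i > 0 else None,
            tokens[i],
            tokens[i + 1] if i < len(tokens) - 1 else None)
           for i, tok in enumerate(tokens) if tok.lower() in high_info]
print(triples)  # e.g. ('The', 'training', 'pipeline'), ('training', 'pipeline', 'collects'), ...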
Full Code Listing for NLTK Information Graph for Knowledge Management is below:
# Import necessary libraries
import streamlit as st
import re
import nltk
import os
from nltk.corpus import stopwords
from nltk import FreqDist
from graphviz import Digraph
# Set page configuration with a title and favicon
st.set_page_config(
    page_title="📺 Transcript EDA NLTK",
    page_icon="📺",
    layout="wide",
    initial_sidebar_state="expanded",
    menu_items={
        'Get Help': 'https://huggingface.co/awacke1',
        'Report a bug': "https://huggingface.co/spaces/awacke1/WebDataDownload",
        'About': "# Midjourney: https://discord.com/channels/@me/997514686608191558"
    }
)
st.markdown('''
- **Exploratory Data Analysis (EDA)**: Dive deep into the sea of data with our EDA feature, unveiling hidden patterns and insights in your transcripts. Transform raw data into a treasure trove of information.
- **Natural Language Toolkit (NLTK)**: Harness the power of NLTK to process and understand human language. From tokenization to sentiment analysis, our toolkit is your compass in the vast landscape of natural language processing (NLP).
- **Transcript Analysis**: Elevate your text analysis with our advanced transcript analysis tools. Whether it's speech recognition or thematic extraction, turn your audiovisual content into actionable insights.
''')
# Download NLTK resources
nltk.download('punkt')
nltk.download('stopwords')
def remove_timestamps(text):
    # Remove "M:SS" / "MM:SS" timestamp lines together with the line that immediately follows them
    return re.sub(r'\d{1,2}:\d{2}\n.*\n', '', text)
def extract_high_information_words(text, top_n=10):
    # Tokenize, keep alphabetic tokens, drop English stopwords, return the top_n most frequent words
    words = nltk.word_tokenize(text)
    words = [word.lower() for word in words if word.isalpha()]
    stop_words = set(stopwords.words('english'))
    filtered_words = [word for word in words if word not in stop_words]
    freq_dist = FreqDist(filtered_words)
    return [word for word, _ in freq_dist.most_common(top_n)]
def create_relationship_graph(words):
    # Chain the high-information words into a simple directed path graph
    graph = Digraph()
    for index, word in enumerate(words):
        graph.node(str(index), word)
        if index > 0:
            graph.edge(str(index - 1), str(index), label=str(index))
    return graph

def display_relationship_graph(words):
    graph = create_relationship_graph(words)
    st.graphviz_chart(graph)
def extract_context_words(text, high_information_words):
    # For every occurrence of a high-information word, capture its ingress (previous) and egress (next) token
    words = nltk.word_tokenize(text)
    context_words = []
    for index, word in enumerate(words):
        if word.lower() in high_information_words:
            before_word = words[index - 1] if index > 0 else None
            after_word = words[index + 1] if index < len(words) - 1 else None
            context_words.append((before_word, word, after_word))
    return context_words
def create_context_graph(context_words):
    # Box = ingress word, ellipse = high-information word, diamond = egress word, linked left to right
    graph = Digraph()
    for index, (before_word, high_info_word, after_word) in enumerate(context_words):
        graph.node(f'high{index}', high_info_word, shape='ellipse')
        if before_word:
            graph.node(f'before{index}', before_word, shape='box')
            graph.edge(f'before{index}', f'high{index}')
        if after_word:
            graph.node(f'after{index}', after_word, shape='diamond')
            graph.edge(f'high{index}', f'after{index}')
    return graph

def display_context_graph(context_words):
    graph = create_context_graph(context_words)
    st.graphviz_chart(graph)
def display_context_table(context_words):
    # Render the (before, word, after) triples as a Markdown table
    table = "| Before | High Info Word | After |\n|--------|----------------|-------|\n"
    for before, high, after in context_words:
        table += f"| {before if before else ''} | {high} | {after if after else ''} |\n"
    st.markdown(table)
def showInnovationOutlines():
    st.markdown("""
# AI App Areas in Demand and Opportunities for 100x

## Creativity + Productivity

| **Area** | **Opportunity** | **Innovation Keywords** |
|---|---|---|
| **Content Generation** | Enable consumers to create art, music, videos, or graphics without complex training. | **Bridges creativity and craft**, making imagination a reality. |
| **Content Editing** | Automate editing workflows and introduce AI-native edits. | **Compose, refine, remix** content seamlessly. |
| **Productivity** | Transform tasks into actions, providing leverage on time. | **Executing tasks** and **giving leverage** on time. |

## High Opportunities for 100x

### Content Generation

| **What We're Looking For** | **Details** |
|---|---|
| **Killing the "blank page problem"** | From text prompts to slide decks, generation products that **create content** from "blank pages". |
| **Making open source models accessible** | Products that **utilize tech** in the browser or app, making open-source models accessible. |
| **Creating remixable outputs** | Platforms that allow creators to **make work instantly remixable**, enhancing creativity. |

### Content Editing

| **What We're Looking For** | **Details** |
|---|---|
| **Owning multi-media workflows** | Workflow products that allow users to **generate, refine, and stitch different content types**. |
| **Enabling in-platform refinement** | AI products that help users **automatically improve** their creations. |
| **Iterating with intelligent editors** | Products that enable users to **refine existing outputs** without starting from scratch. |

### Productivity

| **What We're Looking For** | **Details** |
|---|---|
| **Agents that act as systems of action** | General and specialized agents that **complete tasks**, like booking restaurants or analyzing data. |
| **Voice-first apps** | AI apps that prioritize **voice input**, making interaction more natural. |
| **Apps that provide in-flow assistance** | Tools that **minimize context switching** by offering information and actions within the workflow. |

## Companionship + Social

| **Area** | **Opportunity** | **Innovation Keywords** |
|---|---|---|
| **Companionship** | AI offers an **infinitely patient and engaging friend**. | **Engaging in conversation** about any topic. |
| **Social** | Enhancing interactions and helping **meet new people**. | **Fun interactions** and **enhanced matchmaking**. |

## Personal Growth

| **Area** | **Opportunity** | **Innovation Keywords** |
|---|---|---|
| **Education** | Personalized learning environments for every consumer. | **Personalized support** at a lower cost. |
| **Personal Finance** | AI-driven financial advice and portfolio management. | **Money on autopilot** and **self-managing assets**. |
| **Wellness** | Judgment-free expert advice for a better future. | **Judgment-free experts** and **personalized wellness plans**. |

This table encapsulates the essence of AI's transformative potential across creativity, productivity, companionship, social engagement, and personal growth. By focusing on these key areas and innovation keywords, we identify the high-impact opportunities where AI can multiply value and redefine experiences.
""")
def load_example_files():
    # Exclude specific files
    excluded_files = {'freeze.txt', 'requirements.txt', 'packages.txt', 'pre-requirements.txt'}
    # List all .txt files excluding the ones in excluded_files
    example_files = [f for f in os.listdir() if f.endswith('.txt') and f not in excluded_files]
    # Check if there are any files to select from
    if example_files:
        selected_file = st.selectbox("Select an example file:", example_files)
        if st.button(f"Load {selected_file}"):
            with open(selected_file, 'r', encoding="utf-8") as file:
                return file.read()
    else:
        st.write("No suitable example files found.")
    return None
# Older version of the example-file loader, kept for reference (not called by the UI below)
def load_example_files_old():
    example_files = [f for f in os.listdir() if f.endswith('.txt')]
    selected_file = st.selectbox("Select an example file:", example_files)
    if st.button(f"Load {selected_file}"):
        with open(selected_file, 'r', encoding="utf-8") as file:
            return file.read()
    return None
# Main code for UI
uploaded_file = st.file_uploader("Choose a .txt file", type=['txt'])
example_text = load_example_files()

# Prefer a loaded example file, then an uploaded file, otherwise start empty
if example_text:
    file_text = example_text
elif uploaded_file:
    file_text = uploaded_file.read().decode("utf-8")
else:
    file_text = ""

if file_text:
    text_without_timestamps = remove_timestamps(file_text)
    top_words = extract_high_information_words(text_without_timestamps, 10)
    with st.expander("Top 10 High Information Words"):
        st.write(top_words)
    with st.expander("Relationship Graph"):
        display_relationship_graph(top_words)
    context_words = extract_context_words(text_without_timestamps, top_words)
    with st.expander("Context Graph"):
        display_context_graph(context_words)
    with st.expander("Context Table"):
        display_context_table(context_words)
    with st.expander("Innovation Outlines"):
        showInnovationOutlines()
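The script is meant to be launched with streamlit run on the file that contains it (in a Space the entry point is typically app.py, but that is an assumption here). Outside Streamlit, the helpers above can also be exercised directly; a quick sketch with made-up sample text:
# Quick check of the helpers above outside the Streamlit UI (sample text is made up)
sample = "training models training pipeline data data collection example base gpt assistant"
top = extract_high_information_words(sample, top_n=5)
rel_graph = create_relationship_graph(top)
print(top)
print(rel_graph.source)  # DOT source; rel_graph.render('words', format='png') writes an image if Graphviz is installed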