date_collected | repo_name | file_name | file_contents | prompts
---|---|---|---|---|
2024-01-10 | davila7/langchain-101 | chains~simple_sequential_chains.py | from dotenv import load_dotenv
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain
import os
"""
En este archivo creamos dos cadenas con que reciben una misma variable y las unimos con SimpleSequentialChain para luego ejecutar todas las cadenas unidas
"""
# cargamos openai api key
load_dotenv()
# cargamos el modelo
llm = OpenAI(temperature=0.9)
# Chain 1
# creamos el primer string del template
template = "Eres un experto en programación, explica cómo se inicializa una variable en {language}."
# cargamos el primer template con las variables
prompt_template = PromptTemplate(template=template, input_variables=['language'])
# creamos el primer chain con el saludo
var_chain = LLMChain(llm=llm, prompt=prompt_template)
# Chain 2
# creamos el segundo string del template
template = "Eres un experto en programación, explica cómo se realiza un loop en {language}."
# cargamos el segundo template con las variables
prompt_template = PromptTemplate(template=template, input_variables=['language'])
# creamos el segundo chain con el saludo
loop_chain = LLMChain(llm=llm, prompt=prompt_template)
# ya tenemos las dos cadenas creadas, ahora las ejecutamos
from langchain.chains import SimpleSequentialChain
conversa_chain = SimpleSequentialChain(chains=[var_chain, loop_chain], verbose=True)
conversa_chain.run('javascript')
'''
> Entering new SimpleSequentialChain chain...
Inicializar una variable en Javascript es el proceso de asignarle un valor a una variable. Esto se realiza mediante la instrucción "let" para crear una variable y la instrucción "= " para asignarle un valor. Por ejemplo, el siguiente código muestra cómo inicializar una variable llamada "nombre" con un valor de "Juan".
let nombre = "Juan";
Un loop es una estructura de control en la que una instrucción o un conjunto de instrucciones se ejecutan repetidamente mientras se cumplen ciertas condiciones. Existen diferentes tipos de loops en Javascript, incluyendo for, for/in, while, do/while, y for/of. El loop for es el más comúnmente usado.
El siguiente código muestra cómo crear un loop for en Javascript. Por ejemplo, se puede utilizar para recorrer una matriz y realizar una acción para cada elemento de la matriz.
let matriz = [1,2,3,4,5];
for (let i = 0; i < matriz.length; i++) {
console.log(matriz[i]); // Imprime 1, 2, 3, 4, 5
}
> Finished chain.
''' | [
"Eres un experto en programación, explica cómo se realiza un loop en {language}.",
"Eres un experto en programación, explica cómo se inicializa una variable en {language}.",
"language"
] |
2024-01-10 | davila7/langchain-101 | prompt_template~load_promtp.py | from langchain.prompts import load_prompt
"""
7.- Load Prompt
In this file we load a template in JSON format (it can also be YAML)
and then pass it the variables with format()
"""
prompt = load_prompt("./files/simple_prompt.json")
print(prompt.format(name="Daniel", time="tardes"))
| [
"./files/simple_prompt.json"
] |
2024-01-10 | davila7/langchain-101 | chat_llm~chat_human_only.py | from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
HumanMessage,
)
load_dotenv()
chat = ChatOpenAI(model="gpt-4",temperature=0)
messages = [
HumanMessage(content="¿Cuánto es 4.1 ^ 2.1?")
]
response = chat(messages)
print(response) | [
"¿Cuánto es 4.1 ^ 2.1?"
] |
2024-01-10 | davila7/langchain-101 | tools~tool_duckducksearch.py | from dotenv import load_dotenv
from langchain.tools import DuckDuckGoSearchRun
# load the OpenAI API key
load_dotenv()
# tools
search = DuckDuckGoSearchRun()
print(search.run("Quién fue el primer presidente de Chile?"))
'''
El 8 de julio de 1826 el Congreso instituyó el título de Presidente de la República y designó en esas funciones a Manuel Blanco Encalada, quien asumió al día siguiente. Tras la renuncia de Ramón Freire gobernó Chile por dos meses. Haz click aquí para más información de Manuel Blanco Encalada. Escuchar. El 8 de julio de 1826 el ... El presidente de la República de Chile es el jefe de Estado y de Gobierno del país, por ende, titular del poder ejecutivo.Como máxima autoridad política de la nación, designa o remueve a los comandantes en jefe de las Fuerzas Armadas. [n 5] El actual mandatario es Gabriel Boric Font, quien asumió el cargo el 11 de marzo de 2022, dando inicio así a su gestión. Mateo de Toro y Zambrano (1727-1811) 18 September 1810 26 February 1811 † President of the First Government Junta. Died in office. — Juan Martínez de Rozas ... Royal Governor of Chile. Chilean victory in the Battle of Chacabuco, Spanish control ends. Patria Nueva (1817-1826) Supreme directors (1817-1826) No. Portrait Name (Birth ... The timeline shows changes, both personal or title, of the head of state and the head of government of the Republic of Chile from 18 September 1810 until today, regardless of whether president, vice-president, supreme director, interim or junta. 19th century. Adams fue el primer presidente en vivir en la Casa Blanca, al mudarse allí el 1 de noviembre de 1800, mientras esta aún estaba en construcción. Thomas Jefferson (1801-1809)
''' | [] |
2024-01-10 | davila7/langchain-101 | llm~llm_ai21.py | from dotenv import load_dotenv
from langchain.llms import AI21
import os
"""
AI21
"""
# load the API key
load_dotenv()
llm = AI21(temperature=0.9)
# pass the text variable to the model
text = "Hola cómo estás?"
print(llm(text))
| [] |
2024-01-10 | davila7/langchain-101 | memory~chat_message_history.py | from langchain.memory import ChatMessageHistory
from langchain.memory import ConversationBufferMemory
from langchain import ConversationChain
from dotenv import load_dotenv
from langchain.llms import OpenAI
import os
'''
In this file we create the ChatMessageHistory object and add user messages.
Then we create the ChatMessageHistory container, ConversationBufferMemory, to expose
the messages in a variable.
'''
# load the OpenAI API key
load_dotenv()
# create the history object
history = ChatMessageHistory()
# add a user message
history.add_user_message("hi!")
history.add_ai_message("whats up?")
print(history.messages)
# create the memory object
memory = ConversationBufferMemory()
# add the messages
memory.chat_memory.add_user_message("hola, que tal?")
memory.chat_memory.add_ai_message("¿Cómo estás?")
print(memory.load_memory_variables({}))
llm = OpenAI(temperature=0)
conversation = ConversationChain(
llm=llm,
verbose=True,
memory=ConversationBufferMemory()
)
print(conversation.predict(input="Hoy es fin de semana?"))
print(conversation.predict(input="¿Cómo sabes eso?")) | [] |
2024-01-10 | davila7/langchain-101 | agents~custom_functions_agents_vertexai.py | # Standard configuration
from dotenv import load_dotenv
import streamlit as st
from langchain.chat_models import ChatVertexAI
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.callbacks import StreamlitCallbackHandler
import sys
import io
import re
from typing import Callable, Any
# load the API key
load_dotenv()
# Tools
from langchain.tools import DuckDuckGoSearchRun
from langchain.agents import Tool
from langchain.tools import BaseTool
from langchain.agents import initialize_agent
turbo_llm = ChatVertexAI(
temperature=0,
model="chat-bison@001"
)
# Tool to search the internet for a question
search = DuckDuckGoSearchRun()
# This is how we add langchain's built-in tools
tools = [
Tool(
name= "search",
func= search.run,
description= "útil cuando necesitas buscar respuestas sobre un tema en especifico"
)
]
# Custom tools
# This is how we add a tool we created ourselves
def extrae_nombre(name):
return "El nombre es "+name
def obtiene_tiempo(lugar):
# call to the weather API
# POST with the country as a parameter
if lugar == "Chile":
return "Está haciendo mucho frío en "+lugar
if lugar == "Brasil":
return "Está haciendo mucho calor en "+lugar
return "No tengo idea del clima en "+lugar
# Create the tool
nombre_tool = Tool(
name= "extrae_nombre",
func=extrae_nombre,
description="útil cuando queremos saber el nombre de una persona que participa en una conversación, input debería ser el primer nombre"
)
# Get the weather for a country
tiempo_tool = Tool(
name= "tiempo",
func=obtiene_tiempo,
description="útil cuando queremos saber el tiempo de un determinado pais, el input debe ser el nombre del pais"
)
# add all the tools to the list
tools = [search, nombre_tool, tiempo_tool]
#memory
memory = ConversationBufferWindowMemory(
memory_key="chat_history",
k=3,
return_messages=True
)
def capture_and_display_output(func: Callable[..., Any], *args, **kwargs) -> Any:
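# Temporarily redirect stdout, run the function with a Streamlit callback handler, and show the captured verbose output in an expander.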
original_stdout = sys.stdout
sys.stdout = output_catcher = io.StringIO()
# Run the given function and capture its output
# response = func(*args, **kwargs)
st_callback = StreamlitCallbackHandler(st.container(), max_thought_containers=100, expand_new_thoughts=True, collapse_completed_thoughts=False)
response = func(*args, callbacks=[st_callback])
# Restore stdout to its original value
sys.stdout = original_stdout
# Clean up the captured output
output_text = output_catcher.getvalue()
cleaned_text = re.sub(r'\x1b\[[0-9;-]*[mK]', '', output_text)
lines = cleaned_text.split('\n')
# Display the cleaned text in Streamlit as code
with st.expander("Verbose", expanded=False):
for line in lines:
st.markdown(line)
return response
def main():
st.set_page_config(page_title="Langchain Agent AI", page_icon="🤖", layout="wide")
st.title("Try Custom Langchai Agents 🦜")
form = st.form('AgentsTools')
question = form.text_input("Enter your question", "")
btn = form.form_submit_button("Run")
if btn:
st.markdown("### Response Agent AI")
with st.spinner("Loading"):
agent = initialize_agent(
agent="chat-conversational-react-description",
tools=tools,
llm=turbo_llm,
verbose=True,
max_iterations=3,
early_stopping_method="generate",
memory=memory
)
st.info(capture_and_display_output(agent.run, question))
if __name__ == "__main__":
main() | [] |
2024-01-10 | davila7/langchain-101 | chat_llm~chat_schema.py | from dotenv import load_dotenv
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
from langchain.chat_models import ChatOpenAI
load_dotenv()
chat = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.1)
messages = [
SystemMessage(content="Eres un experto en la historia del futbol"),
HumanMessage(content="Quién ganó la copa del mundo de Francia 98?")
]
print(messages)
response = chat(messages)
print(response)
| [
"Eres un experto en la historia del futbol",
"Quién ganó la copa del mundo de Francia 98?"
] |
2024-01-10 | davila7/langchain-101 | agents~codegpt_tool_agent.py | # Import necessary libraries
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.agents import Tool
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.callbacks import StreamlitCallbackHandler
import streamlit as st
import sys
import io
import re
import os
from typing import Callable, Any
# Tools
from langchain.agents import Tool
from langchain.agents import initialize_agent
# Load environment variables from .env file
load_dotenv()
# Retrieve API key and agent ID from environment variables
codegpt_api_key= os.getenv("CODEGPT_API_KEY")
tool_agent_id_1= os.getenv("TOOL_AGENT_ID_1")
tool_agent_id_2= os.getenv("TOOL_AGENT_ID_2")
tool_agent_id_3= os.getenv("TOOL_AGENT_ID_3")
# Set API base URL
codegpt_api_base = "https://api.codegpt.co/v1"
execute_task_prompt = PromptTemplate(
template="""Given the following overall question `{input}`.
Perform the task by understanding the problem, extracting variables, and being smart
and efficient. Write a detailed response that address the task.
When confronted with choices, make a decision yourself with reasoning.
""",
input_variables=["input"],
)
# Create an object with the retrieved API key, API base URL, and agent ID
llm_tools_1 = ChatOpenAI(openai_api_key=codegpt_api_key,
openai_api_base=codegpt_api_base,
model=tool_agent_id_1, verbose=True)
llm_chain_1 = LLMChain(llm=llm_tools_1, prompt=execute_task_prompt)
# Create an object with the retrieved API key, API base URL, and agent ID
llm_tools_2 = ChatOpenAI(openai_api_key=codegpt_api_key,
openai_api_base=codegpt_api_base,
model=tool_agent_id_2, verbose=True)
llm_chain_2 = LLMChain(llm=llm_tools_2, prompt=execute_task_prompt)
# Create an object with the retrieved API key, API base URL, and agent ID
llm_tools_3 = ChatOpenAI(openai_api_key=codegpt_api_key,
openai_api_base=codegpt_api_base,
model=tool_agent_id_3, verbose=True)
llm_chain_3 = LLMChain(llm=llm_tools_3, prompt=execute_task_prompt)
tree_of_thought_tool = Tool(
name='Tree_Of_Thought_Expert',
func=llm_chain_1.run,
description="Useful for when you need to answer questions about the paper 'Tree_Of_Thought'"
)
lost_in_the_middle_tool = Tool(
name='Lost_in_the_Middle_Expert',
func=llm_chain_2.run,
description="Useful for when you need to answer questions about the paper 'Lost in the middle'"
)
prompting_with_pseudo_tool = Tool(
name='Prompting_with_Pseudo_code',
func=llm_chain_3.run,
description="Useful for when you need to answer questions about the paper 'Prompting with Pseudo-code'"
)
# add all the tools to the list
tools = [tree_of_thought_tool, lost_in_the_middle_tool, prompting_with_pseudo_tool]
#memory
memory = ConversationBufferWindowMemory(
memory_key="chat_history",
k=3,
return_messages=True
)
llm_openai = ChatOpenAI(model="gpt-4", temperature=0)
def capture_and_display_output(func: Callable[..., Any], *args, **kwargs) -> Any:
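# Temporarily redirect stdout, run the function with a Streamlit callback handler, and show the captured verbose output in an expander.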
original_stdout = sys.stdout
sys.stdout = output_catcher = io.StringIO()
# Run the given function and capture its output
# response = func(*args, **kwargs)
st_callback = StreamlitCallbackHandler(st.container(), max_thought_containers=100, expand_new_thoughts=True, collapse_completed_thoughts=False)
response = func(*args, callbacks=[st_callback])
# Restore stdout to its original value
sys.stdout = original_stdout
# Clean up the captured output
output_text = output_catcher.getvalue()
cleaned_text = re.sub(r'\x1b\[[0-9;-]*[mK]', '', output_text)
lines = cleaned_text.split('\n')
# Display the cleaned text in Streamlit as code
with st.expander("Verbose", expanded=False):
for line in lines:
st.markdown(line)
return response
def main():
st.set_page_config(page_title="Langchain Agent AI", page_icon="🤖", layout="wide")
st.title("Use CodeGPT Agents as a tool with a ReAct Agent")
form = st.form('AgentsTools')
question = form.text_input("Enter your question", "")
btn = form.form_submit_button("Run")
if btn:
st.markdown("### Response Agent AI")
with st.spinner("Loading"):
agent = initialize_agent(
agent="chat-conversational-react-description",
tools=tools,
llm=llm_openai,
verbose=True,
max_iterations=3,
early_stopping_method="generate",
memory=memory
)
st.info(capture_and_display_output(agent.run, question))
if __name__ == "__main__":
main()
| [
"Useful for when you need to answer questions about the paper 'Prompting with Pseudo-code'",
"input",
"Given the following overall question `{input}`.\n\n Perform the task by understanding the problem, extracting variables, and being smart\n and efficient. Write a detailed response that address the task.\n When confronted with choices, make a decision yourself with reasoning.\n ",
"Prompting_with_Pseudo_code"
] |
2024-01-10 | davila7/langchain-101 | functions_callings~send_message_vanilla.py | import os
import openai
import json
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
import pywhatkit as pwk
import streamlit as st
from dotenv import load_dotenv
import datetime
load_dotenv()
def send_whatsapp(person, message):
print(person)
print(message)
number = ''
if(person == 'PERSONA'):
number = 'NUMERO_PERSONA'
# sending message in Whatsapp in India so using Indian dial code (+91)
if(number != ''):
now = datetime.datetime.now()
minutes = now.minute+1
print(minutes)
pwk.sendwhatmsg(number, message, now.hour, minutes)
def send_email(email, subject, body):
"""send the user an email with the answer"""
#try:
if(subject == ''):
subject = 'GPT Email'
message = Mail(
# add the email connected to your sendgrid code here
from_email=os.getenv("SENDGRID_EMAIL"),
to_emails=email,
subject=subject,
html_content=body
)
st.write(message)
sg = SendGridAPIClient(os.getenv("SENDGRID_API_KEY"))
response = sg.send(message)
st.write(response)
# except Exception as e:
# print(f"An error occurred: {str(e)}")
function_calling_json = [
{
"name": "send_email",
"description": "Sends an email to the specified email address",
"parameters": {
"type": "object",
"properties": {
"email": {
"type": "string",
"description": "An email address to send the email to",
},
"body": {"type": "string"},
"subject": {"type": "string"},
},
},
},
{
"name": "send_whatsapp",
"description": "Sends an whatsapp to the specified person",
"parameters": {
"type": "object",
"properties": {
"person": {
"type": "string",
"description": "A person to send the whatsapp",
},
"whatsapp_message": {"type": "string"},
},
},
}
]
openai.api_key = os.getenv("OPENAI_API_KEY")
def run_conversation(prompt):
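# Step 1, send the prompt to the model along with the function definitions so it can decide whether to call one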
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-16k-0613",
messages=[
{"role": "user", "content": prompt}],
functions=function_calling_json,
function_call="auto",
)
message = response["choices"][0]["message"]
st.write(message)
# Step 2, check if the model wants to call a function
if message.get("function_call"):
function_name = message["function_call"]["name"]
print('function_name: ', function_name)
if(function_name == 'send_email'):
# Access the arguments
arguments = json.loads(message['function_call']['arguments'])
email_arg = arguments['email']
body_arg = arguments['body']
subject_arg = arguments['subject']
# Step 3, call the function
function_response = send_email(
email_arg, subject_arg, body_arg
)
print(function_response)
if(function_name == 'send_whatsapp'):
# Access the arguments
arguments = json.loads(message['function_call']['arguments'])
person_arg = arguments['person']
message_arg = arguments['whatsapp_message']
# Step 3, call the function
function_response = send_whatsapp(
person_arg, message_arg
)
print(function_response)
def main():
st.set_page_config(page_title="Langchain Agent AI", page_icon="🤖", layout="wide")
st.title("Try OpenAI Function Callings 🦜")
st.write(send_email)
st.write(send_whatsapp)
form = st.form('AgentsTools')
question = form.text_input("Enter your question", "")
btn = form.form_submit_button("Run")
if btn:
st.markdown("### Response Agent AI")
with st.spinner("Loading"):
run_conversation(question)
if __name__ == "__main__":
main() | [] |
2024-01-10 | davila7/langchain-101 | agents~sql_agent.py | import streamlit as st
from dotenv import load_dotenv
from langchain import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback
from datetime import datetime
from sqlalchemy import MetaData
from sqlalchemy import Column, Integer, String, Table, Date, Float
from langchain.sql_database import SQLDatabase
from langchain.chains import SQLDatabaseChain
from langchain.callbacks import StreamlitCallbackHandler
from langchain.agents import Tool
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from sqlalchemy import create_engine
from sqlalchemy import insert
import sys
import io
import re
from typing import Callable, Any
load_dotenv()
# Count the tokens
def count_tokens(agent, query):
with get_openai_callback() as cb:
result = agent(query)
print(f'Spent a total of {cb.total_tokens} tokens')
return result
llm = ChatOpenAI(model="gpt-4", temperature=0)
metadata_obj = MetaData()
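# define a sample stocks table in an in-memory SQLite database for the SQL chain to query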
stocks = Table(
"stocks",
metadata_obj,
Column("obs_id", Integer, primary_key=True),
Column("stock_ticker", String(4), nullable=False),
Column("price", Float, nullable=False),
Column("date", Date, nullable=False),
)
engine = create_engine("sqlite:///:memory:")
metadata_obj.create_all(engine)
observations = [
[1, 'ABC', 200, datetime(2023, 1, 1)],
[2, 'ABC', 208, datetime(2023, 1, 2)],
[3, 'ABC', 232, datetime(2023, 1, 3)],
[4, 'ABC', 225, datetime(2023, 1, 4)],
[5, 'ABC', 226, datetime(2023, 1, 5)],
[6, 'XYZ', 810, datetime(2023, 1, 1)],
[7, 'XYZ', 803, datetime(2023, 1, 2)],
[8, 'XYZ', 798, datetime(2023, 1, 3)],
[9, 'XYZ', 795, datetime(2023, 1, 4)],
[10, 'XYZ', 791, datetime(2023, 1, 5)],
]
def insert_obs(obs):
stmt = insert(stocks).values(
obs_id=obs[0],
stock_ticker=obs[1],
price=obs[2],
date=obs[3]
)
with engine.begin() as conn:
conn.execute(stmt)
for obs in observations:
insert_obs(obs)
db = SQLDatabase(engine)
sql_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
sql_tool = Tool(
name='Stock DB',
func=sql_chain.run,
description="Useful for when you need to answer questions about stocks " \
"and their prices."
)
tools = load_tools(
["llm-math"],
llm=llm
)
tools.append(sql_tool)
def capture_and_display_output(func: Callable[..., Any], *args, **kwargs) -> Any:
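# Temporarily redirect stdout, run the function with a Streamlit callback handler, and show the captured verbose output in an expander.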
original_stdout = sys.stdout
sys.stdout = output_catcher = io.StringIO()
# Run the given function and capture its output
# response = func(*args, **kwargs)
st_callback = StreamlitCallbackHandler(st.container(), max_thought_containers=100, expand_new_thoughts=True, collapse_completed_thoughts=False)
response = func(*args, callbacks=[st_callback])
# Restore stdout to its original value
sys.stdout = original_stdout
# Clean up the captured output
output_text = output_catcher.getvalue()
cleaned_text = re.sub(r'\x1b\[[0-9;-]*[mK]', '', output_text)
lines = cleaned_text.split('\n')
# Display the cleaned text in Streamlit as code
with st.expander("Verbose", expanded=False):
for line in lines:
st.markdown(line)
return response
def main():
st.set_page_config(page_title="Langchain Agent AI", page_icon="🤖", layout="wide")
st.title("Try SQL Langchain Agents 🦜")
st.table(observations)
form = st.form('AgentsTools')
question = form.text_input("Enter your question", "")
btn = form.form_submit_button("Run")
if btn:
st.markdown("### Response Agent AI")
with st.spinner("Loading"):
agent = initialize_agent(
agent="zero-shot-react-description",
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
)
st.info(capture_and_display_output(agent.run, question))
if __name__ == "__main__":
main() | [] |
2024-01-10 | davila7/langchain-101 | judini~agent_request_completion.py | from dotenv import load_dotenv
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
import os
import requests
import json
from dotenv import load_dotenv
load_dotenv()
"""
Prompt Template with Judini
"""
# prompt template with one variable
prompt = PromptTemplate(
input_variables=["name"],
template="Hola, mi nombre es {name}",
)
#Judini
api_key= os.getenv("JUDINI_API_KEY")
agent_id= os.getenv("JUDINI_AGENT_ID")
url = 'https://playground.judini.ai/api/v1/agent/'+agent_id
headers = {"Content-Type": "application/json; charset=utf-8", "Authorization": "Bearer "+api_key}
data = {
"messages": [
{
"role": "user",
"content": prompt.format(name="Daniel")
}
]
}
#print(data)
response = requests.post(url, headers=headers, json=data, stream=True)
raw_data = ''
tokens = ''
for chunk in response.iter_content(chunk_size=1024):
if chunk:
raw_data = chunk.decode('utf-8').replace("data: ", '')
if raw_data != "":
lines = raw_data.strip().splitlines()
for line in lines:
print(line)
line = line.strip()
if line and line != "[DONE]":
try:
json_object = json.loads(line)
print('json_ok')
result = json_object['data']
result = result.replace("\n", "")
tokens += result
except json.JSONDecodeError:
print(f'Error al decodificar el objeto JSON en la línea: {line}')
print(tokens) | [
"Hola, mi nombre es {name}",
"name",
"Daniel"
] |
2024-01-10 | davila7/langchain-101 | agents~codegpt_agent.py | # Import necessary libraries
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
import os
# Load environment variables from .env file
load_dotenv()
# Retrieve API key and agent ID from environment variables
codegpt_api_key= os.getenv("CODEGPT_API_KEY")
code_gpt_agent_id= os.getenv("CODEGPT_AGENT_ID")
# Set API base URL
codegpt_api_base = "https://api.codegpt.co/v1"
# Create a ChatOpenAI object with the retrieved API key, API base URL, and agent ID
llm = ChatOpenAI(openai_api_key=codegpt_api_key,
openai_api_base=codegpt_api_base,
model=code_gpt_agent_id)
# Create a list of messages to send to the ChatOpenAI object
messages = [HumanMessage(content="¿What is Judini?")]
# Send the messages to the ChatOpenAI object and retrieve the response
response = llm(messages)
# Print the response
print(response)
| [
"¿What is Judini?"
] |
2024-01-10 | davila7/langchain-101 | llm~llm_openai.py | from dotenv import load_dotenv
from langchain.llms import OpenAI
import os
"""
1.- LLM App
En este archivo estamos cargando las variables de entorno desde el archivo .env
Luego creamos un modelo con Langchain solo con un parámetro (temperature=0.9) y luego
le pasamos un texto al llm para que se haga el llamado a la API de OpenAI
"""
# cargamos openai api key
load_dotenv()
# creamos el modelo con temperatura 0.9
llm = OpenAI(temperature=0.1)
# entregamos la variable text como variable al modelo
text = "Hola cómo estás?"
print(llm(text))
| [] |
2024-01-10 | davila7/langchain-101 | indexes~pdf_splitter.py | from langchain import OpenAI
from langchain.document_loaders import PagedPDFSplitter
import os
loader = PagedPDFSplitter("files/layout-parser-paper.pdf")
pages = loader.load_and_split() | [] |
2024-01-10 | davila7/langchain-101 | prompt_template~prompt_from_template.py | from langchain.prompts import PromptTemplate
"""
4.- Prompt from template
En este archivo primero crearemos el template, luego lo cargaremos en el PromptTamplete
y luego le entregaremos las variables
"""
template = "Hola buenos {time}, mi nombre es {name}."
prompt_template = PromptTemplate.from_template(template)
# mostramos las variables
print(prompt_template.input_variables)
# En este ejemplo pasamos multiples variables al template
print(prompt_template.format(time="noches", name="José"))
| [
"Hola buenos {time}, mi nombre es {name}."
] |
2024-01-10 | davila7/langchain-101 | llm~llm_vertexai.py | from dotenv import load_dotenv
from langchain.llms import VertexAI
import os
"""
VertexAI
"""
load_dotenv()
llm = VertexAI()
text = "Hi"
print(llm(text))
| [] |
2024-01-10 | davila7/langchain-101 | chains~retrieval_qa_with_source_chain.py | import streamlit as st
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.retrievers.web_research import WebResearchRetriever
import os
os.environ["GOOGLE_API_KEY"] = "YOUR_API_KEY" # Get it at https://console.cloud.google.com/apis/api/customsearch.googleapis.com/credentials
os.environ["GOOGLE_CSE_ID"] = "YOUR_CSE_ID" # Get it at https://programmablesearchengine.google.com/
os.environ["OPENAI_API_BASE"] = "https://api.openai.com/v1"
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY" # Get it at https://beta.openai.com/account/api-keys
st.set_page_config(page_title="Interweb Explorer", page_icon="🌐")
def settings():
# Vectorstore
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.docstore import InMemoryDocstore
embeddings_model = OpenAIEmbeddings()
embedding_size = 1536
index = faiss.IndexFlatL2(embedding_size)
vectorstore_public = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})
# LLM
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=0, streaming=True)
# Search
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper()
# Initialize
web_retriever = WebResearchRetriever.from_llm(
vectorstore=vectorstore_public,
llm=llm,
search=search,
num_search_results=3
)
return web_retriever, llm
class StreamHandler(BaseCallbackHandler):
def __init__(self, container, initial_text=""):
self.container = container
self.text = initial_text
def on_llm_new_token(self, token: str, **kwargs) -> None:
self.text += token
self.container.info(self.text)
class PrintRetrievalHandler(BaseCallbackHandler):
def __init__(self, container):
self.container = container.expander("Context Retrieval")
def on_retriever_start(self, query: str, **kwargs):
self.container.write(f"**Question:** {query}")
def on_retriever_end(self, documents, **kwargs):
# self.container.write(documents)
for idx, doc in enumerate(documents):
source = doc.metadata["source"]
self.container.write(f"**Results from {source}**")
self.container.text(doc.page_content)
st.sidebar.image("img/ai.png")
st.header("`Interweb Explorer`")
st.info("`I am an AI that can answer questions by exploring, reading, and summarizing web pages."
"I can be configured to use different modes: public API or private (no data sharing).`")
# Make retriever and llm
if 'retriever' not in st.session_state:
st.session_state['retriever'], st.session_state['llm'] = settings()
web_retriever = st.session_state.retriever
llm = st.session_state.llm
# User input
question = st.text_input("`Ask a question:`")
if question:
# Generate answer (w/ citations)
import logging
logging.basicConfig()
logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=web_retriever)
# Write answer and sources
retrieval_streamer_cb = PrintRetrievalHandler(st.container())
answer = st.empty()
stream_handler = StreamHandler(answer, initial_text="`Answer:`\n\n")
result = qa_chain({"question": question},callbacks=[retrieval_streamer_cb, stream_handler])
answer.info('`Answer:`\n\n' + result['answer'])
st.info('`Sources:`\n\n' + result['sources']) | [] |
2024-01-10 | davila7/langchain-101 | prompt_template~manage_prompt_template.py | from langchain.prompts import PromptTemplate
"""
3.- Manage Prompt Template
En este archivo creamos un template con multiples variables y se las entregamos mediante dos inputs
con el format()
"""
# En este ejemplo pasamos multiples variables al template
multiple_input_prompt = PromptTemplate(
input_variables=["time", "name"],
template="Hola buenos {time}, mi nombre es {name}."
)
print(multiple_input_prompt.format(time="noches", name="José"))
| [
"Hola buenos {time}, mi nombre es {name}.",
"name"
] |
2024-01-10 | davila7/langchain-101 | llm~llm_cohere.py | from dotenv import load_dotenv
from langchain.llms import Cohere
import os
"""
Cohere
"""
# load the API key
load_dotenv()
# create the model with temperature 0.3
llm = Cohere(temperature=0.3)
# pass the text variable to the model
text = "Who won the FIFA World Cup in the year 1998?"
print(llm(text))
| [] |
2024-01-10 | davila7/langchain-101 | memory~chain_memory.py | # pip install google-search-results
from dotenv import load_dotenv
from langchain import OpenAI, ConversationChain
import os
'''
In this file we add memory to the llm using a simple chain with ConversationChain
'''
# load the OpenAI API key
load_dotenv()
llm = OpenAI(temperature=0.9)
conversation = ConversationChain(llm=llm, verbose=True)
# simulates a conversation and stores the user's input in memory
print(conversation.predict(input="Hola, cómo va todo?"))
print(conversation.predict(input="Todo bien, acá pasando el rato programando"))
print(conversation.predict(input="¿Qué fue lo primero que te dije?"))
print(conversation.predict(input="Dime una frase alternativa a lo primero que te dije.")) | [] |
2024-01-10 | davila7/langchain-101 | llm~llm_ollama.py | from langchain.llms import Ollama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
llm = Ollama(base_url="http://localhost:11434",
model="llama2",
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]))
llm("You enter a room with 100 assassins and kill one of them. How many assassins are left?")
| [] |
2024-01-10 | davila7/langchain-101 | prompt_template~prompt_template.py | from dotenv import load_dotenv
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
import os
"""
2.- Prompt Template
En este archivo cargamos un template con variables que se entregan mediante inputs.
Luego de crear el template podemos mostrar enviar la variable con format() y visualizar el template
antes de enviarlo a la API
"""
# cargamos openai api key
load_dotenv()
llm = OpenAI(temperature=0.9)
# prompt template con una variables
prompt = PromptTemplate(
input_variables=["name"],
template="Hola cómo estás? mi nombre es {name}",
)
# pass the name variable to the prompt
print(prompt.format(name="Fernando"))
# call the model with the prompt, passing the variable as a parameter
print(llm(prompt.format(name="Fernando"))) | [
"name",
"Hola cómo estás? mi nombre es {name}"
] |
2024-01-10 | 0xf333/generative-playground | github-pr-reviewer~code_reviewer~reviewer.py | import os
import re
from typing import List, Tuple
import openai
openai.api_key = os.environ["OPENAI_API_KEY"]
def _remove_strings(text: str) -> str:
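# blank out quoted string literals so their contents don't interfere with locating code snippets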
# text = re.sub(r'"[^"]*?"', '""', text)
# text = re.sub(r"'[^']*?'", "''", text)
text = re.sub(r'(["\']).*?\1', '""', text)
return text
class CodeReviewer:
def __init__(self, verbose: bool = False) -> None:
self._verbose = verbose
self._model = "gpt-4"
self._model_params = {
# "max_tokens": 4096,
"temperature": 0.9,
}
self._system_message = {
"role": "system",
"content": (
"You are a Senior Software Engineer. Your main task is to review code "
"written by both junior and senior developers. You always try to be "
"as helpful as possible, but you also want to make sure that the code "
"is of high quality. You are super expert in Python and you just provide "
"code review for Python code. Whenever you see some non-Python code, you "
"simply ignore it.\n Python code is showed you with the following format:\n\n"
"<start_file_name> The name of the file <end_file_name>\n"
"<start_code> The code contained in the file <end_code>\n\n"
"When you review the code you provide multiple comments. "
"Each comment must have the following format:\n\n"
"<start_file_name> The name of the file your comment "
"refers to <end_file_name>\n"
"<start_code_snippet> Code snippet <end_code_snippet>\n"
"<start_comment> Your comment <end_comment>\n\n"
"Note that a code snippet is usually just a small piece of the full code. "
"You can also provide multiple comments for the same code snippet.\n\n"
"When writing comments you must follow few simple rules:\n"
"1. if you do not have any comment on the code just write 'LGTM!'. "
"You should write it just when you have NO comment at all.\n"
"2. you MUST write the code in the snippet section in the "
"exact same way it is written in the original code. Consider that "
"the snippet you provide will be used for retrieve its exact "
"position with a regex later. Please when possible return just one "
"line in the snippet.\n"
"3. you really care about the quality of the code and you hate to see "
"some anti-patterns. You also want to make sure that the code is "
"readable and easy to understand. You also want to make sure that "
"the code is well tested and that the tests are easy to understand.\n\n"
"Please consider that you will not see the full code in a single text. "
"You will receive one "
"file .py at a time. You will then provide your comments for that file. We "
"will then send you the next file. You will provide your comments for that "
"file. We will then send you the next file. And so on. So don't be surprised "
"if you don't see the tests at the beginning.\nWhenever you realize that you "
"need to comment a previous file, you just need to put the filename you want "
"to refer to into the <start_file_name><end_file_name> section when you "
"write the comment.\n"
),
}
self._messages = [self._system_message]
def reset(self) -> None:
self._messages = [self._system_message]
@staticmethod
def _process_model_message(
model_response: str,
user_message: dict,
old_messages: filter,
) -> List[Tuple[str, int, str]]:
code = user_message["content"]
original_file_name = code.split("<start_file_name>")[1].split("<end_file_name>")[0].strip()
cleaned_code = re.search(r'<start_code>(.*)<end_code>', code, re.DOTALL)
if cleaned_code is None:
raise ValueError(f"Code not found for message: {user_message}")
cleaned_code = cleaned_code.group(1).strip()
comments = [
comment.strip()
for comment in model_response.split("<start_file_name>")
if len(comment.strip()) > 0
]
processed_comments = []
for comment in comments:
file_name = comment.split("<end_file_name>")[0].strip()
cleaned_comment = re.search(r'<start_comment>(.*?)<end_comment>', comment, re.DOTALL)
if cleaned_comment is None:
print(f"WARNING: comment not found for comment: {comment}")
continue
cleaned_comment = cleaned_comment.group(1).strip()
code_snippet = re.search(r'<start_code_snippet>(.*?)<end_code_snippet>', comment, re.DOTALL)
if code_snippet is None:
print(f"WARNING: code snippet not found for comment: {comment}")
continue
code_snippet = code_snippet.group(1).strip()
if file_name == original_file_name:
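# count the newlines before the snippet (with string literals blanked out) to get a 1-based line number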
index = _remove_strings(cleaned_code.split(code_snippet)[0]).strip().count("\n") + 1
else:
index = 1 # when not found, we assume that the comment is for the first line
for previous_code in old_messages:
previous_code = previous_code["content"]
previous_file_name = previous_code.split("<start_file_name>")[1].split("<end_file_name>")[0].strip()
cleaned_previous_code = re.search(r'<start_code>(.*)<end_code>', previous_code, re.DOTALL)
if cleaned_previous_code is None:
continue
cleaned_previous_code = cleaned_previous_code.group(1).strip()
if previous_file_name == file_name:
index = _remove_strings(cleaned_previous_code.split(code_snippet)[0]).strip().count("\n") + 1
break
processed_comments.append((file_name, index, cleaned_comment))
return processed_comments
def __call__(self, filename: str, code: str) -> List[Tuple[str, int, str]]:
if code is None or len(code) == 0:
return []
code = f"<start_file_name> {filename} <end_file_name>\n<start_code> {code} <end_code>"
user_message = {
"role": "user",
"content": code,
}
self._messages.append(user_message)
if self._verbose:
print(f"OpenAI request: {self._messages}")
response = openai.ChatCompletion.create(
model=self._model,
messages=self._messages,
**self._model_params,
)
model_response = response["choices"][0]["message"]["content"]
if self._verbose:
print(f"OpenAI response: {response}")
comments = self._process_model_message(
model_response,
self._messages[-1],
filter(lambda x: x["role"] == "user", self._messages)
)
if self._verbose:
print(f"Comments: {comments}")
model_message = {
"role": "assistant",
"content": model_response,
}
self._messages.append(model_message)
return comments
if __name__ == "__main__":
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument("--filename", type=str, required=True)
args = parser.parse_args()
with open(args.filename, "r") as f:
code = f.read()
filename = args.filename.split("/")[-1]
comments = CodeReviewer(verbose=True)(filename, code)
print(comments) | [
"You are a Senior Software Engineer. Your main task is to review code written by both junior and senior developers. You always try to be as helpful as possible, but you also want to make sure that the code is of high quality. You are super expert in Python and you just provide code review for Python code. Whenever you see some non-Python code, you simply ignore it.\n Python code is showed you with the following format:\n\n<start_file_name> The name of the file <end_file_name>\n<start_code> The code contained in the file <end_code>\n\nWhen you review the code you provide multiple comments. Each comment must have the following format:\n\n<start_file_name> The name of the file your comment refers to <end_file_name>\n<start_code_snippet> Code snippet <end_code_snippet>\n<start_comment> Your comment <end_comment>\n\nNote that a code snippet is usually just a small piece of the full code. You can also provide multiple comments for the same code snippet.\n\nWhen writing comments you must follow few simple rules:\n1. if you do not have any comment on the code just write 'LGTM!'. You should write it just when you have NO comment at all.\n2. you MUST write the code in the snippet section in the exact same way it is written in the original code. Consider that the snippet you provide will be used for retrieve its exact position with a regex later. Please when possible return just one line in the snippet.\n3. you really care about the quality of the code and you hate to see some anti-patterns. You also want to make sure that the code is readable and easy to understand. You also want to make sure that the code is well tested and that the tests are easy to understand.\n\nPlease consider that you will not see the full code in a single text. You will receive one file .py at a time. You will then provide your comments for that file. We will then send you the next file. You will provide your comments for that file. We will then send you the next file. And so on. So don't be surprised if you don't see the tests at the beginning.\nWhenever you realize that you need to comment a previous file, you just need to put the filename you want to refer to into the <start_file_name><end_file_name> section when you write the comment.\n"
] |
2024-01-10 | 0xf333/generative-playground | real_time_translation~ai_translate~models.py | from abc import abstractmethod, ABC
import openai
from google.cloud import texttospeech
class BaseGenerativeModel(ABC):
def __init__(self, verbose: bool = False):
self.verbose = verbose
def __call__(self, *args, **kwargs):
return self.run(*args, **kwargs)
@abstractmethod
def run(self, *args, **kwargs):
raise NotImplementedError
class WhisperModel(BaseGenerativeModel):
def __init__(self, model_name: str, verbose: bool = False):
super().__init__(verbose=verbose)
self.model = model_name
def run(self, file_path: str):
if self.verbose:
print(f"Transcribing audio file: {file_path}")
audio_file = open(file_path, "rb")
transcript = openai.Audio.transcribe(self.model, audio_file)
if self.verbose:
print(f"Transcript output: {transcript}")
return transcript["text"]
class TranslationModel(BaseGenerativeModel):
SYSTEM_TEMPLATE = (
"You are an AI assistant whose main goal is to help people in "
"translate text from one language to another. You must write "
"the translation from the user input in {language}. "
"Note that you MUST provide just the translation, do not add any"
"other text."
)
def run(self, user_input, language):
if self.verbose:
print(f"User input: {user_input}")
system_message = {
"role": "system",
"content": self.SYSTEM_TEMPLATE.format(language=language),
}
user_message = {
"role": "user",
"content": user_input,
}
messages = [system_message, user_message]
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=messages,
)
if self.verbose:
print(f"OpenAI response: {response}")
model_response = response["choices"][0]["message"]["content"]
return model_response
class TextToVoice(BaseGenerativeModel):
LANGUAGE_CODES = {
"english": "en-US",
"spanish": "es-ES",
"french": "fr-FR",
"german": "de-DE",
"italian": "it-IT",
}
def run(self, input_text: str, language: str):
# Instantiates a client
client = texttospeech.TextToSpeechClient()
# Set the text input to be synthesized
synthesis_input = texttospeech.SynthesisInput(text=input_text)
# Build the voice request, select the language code ("en-US") and the ssml
# voice gender ("neutral")
voice = texttospeech.VoiceSelectionParams(
language_code=self.LANGUAGE_CODES[language],
ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL
)
# Select the type of audio file you want returned
audio_config = texttospeech.AudioConfig(
audio_encoding=texttospeech.AudioEncoding.MP3
)
# Perform the text-to-speech request on the text input with the selected
# voice parameters and audio file type
response = client.synthesize_speech(
input=synthesis_input, voice=voice, audio_config=audio_config
)
# The response's audio_content is binary.
output_file = "output.mp3"
with open(output_file, "wb") as out:
# Write the response to the output file.
out.write(response.audio_content)
print(f'Audio content written to file "{output_file}"')
return output_file
| [
"You are an AI assistant whose main goal is to help people in translate text from one language to another. You must write the translation from the user input in {language}. Note that you MUST provide just the translation, do not add anyother text."
] |
2024-01-10 | 0xf333/generative-playground | agent-frontent-dev~agent~agents.py | import json
import os
import time
from typing import List
import openai
from openai.types.beta.threads import ThreadMessage
from PIL import Image
import agent.tools.github_tools as github_tools
import agent.tools.web_reader as web_reader
from agent.excecutor import FunctionExecutor
from agent.prompts import BASE_INSTRUCTION, STATUS_UPDATE
from agent.tools.github_tools import GitHubInterface
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
def build_frontend_developer_agent():
tools = github_tools.get_tools()
tools.extend(web_reader.get_tools())
tools.append({"type": "code_interpreter"})
assistant = client.beta.assistants.create(
name="Serhii, the Frontend Developer",
instructions=BASE_INSTRUCTION,
tools=tools,
model="gpt-4-1106-preview"
)
return assistant
def get_frontend_developer_agent():
assistants = client.beta.assistants.list()
for assistant in assistants:
if assistant.name == "Serhii, the Frontend Developer":
return assistant
return build_frontend_developer_agent()
class FrontendAgentRunner:
def __init__(self, verbose: bool = False):
self.agent = get_frontend_developer_agent()
github_interface = GitHubInterface.from_github_token(
os.environ["GITHUB_TOKEN"],
repository=os.environ["GITHUB_REPOSITORY"]
)
web_reader_interface = web_reader.WebPageToolExecutor()
self.executor = FunctionExecutor([github_interface, web_reader_interface], verbose=verbose)
self.thread = client.beta.threads.create()
self.verbose = verbose
def run(self, text: str, image: Image = None) -> List[ThreadMessage]:
# TODO: add image support
if self.verbose:
print(f"Running agent with input: {text}")
print(f"Thread id: {self.thread.id}")
message = client.beta.threads.messages.create(
thread_id=self.thread.id,
role="user",
content=text
)
run = client.beta.threads.runs.create(
thread_id=self.thread.id,
assistant_id=self.agent.id,
instructions=STATUS_UPDATE.template.format(
status=self.executor.execute("getStatus")
),
)
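# poll the run until it finishes, executing any tool calls the assistant requests along the way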
while run.status != "completed":
if run.status == "requires_action":
if self.verbose:
print("Run requires action")
tool_calls = run.required_action.submit_tool_outputs.tool_calls
tool_outputs = []
for tool_call in tool_calls:
run_output = self.executor.execute(
tool_call.function.name,
**json.loads(tool_call.function.arguments)
)
tool_outputs.append(
{
"tool_call_id": tool_call.id,
"output": run_output if isinstance(run_output, str) else json.dumps(run_output)
}
)
run = client.beta.threads.runs.submit_tool_outputs(
thread_id=self.thread.id,
run_id=run.id,
tool_outputs=tool_outputs
)
if self.verbose:
print("Submitted tool outputs")
elif run.status == "failed":
raise Exception(run.last_error.message)
else:
time.sleep(1)
run = client.beta.threads.runs.retrieve(
thread_id=self.thread.id,
run_id=run.id
)
messages = client.beta.threads.messages.list(
thread_id=self.thread.id
)
if self.verbose:
print(f"Agent finished with output: {messages}")
return list(messages)
| [] |
2024-01-10 | AshankKumar/CodeThesaur | scripts~playground.py | from metaphor_python import Metaphor
from dotenv import load_dotenv
from githubClass import projectList
import os
import openai
import tiktoken
import pdb
load_dotenv()
metaphor = Metaphor(os.getenv("METAPHOR_API_KEY"))
openai.api_key = os.getenv("OPENAI_API_KEY")
ENCODER = tiktoken.encoding_for_model("gpt-3.5-turbo-0613")
MAX_TOKENS = 4097 - 150 - 1000 # dumb way to account for function and output. TODO fix later
SYSTEM_PROMPT = '''
You are assisting in translating user queries into optimized queries for the Metaphor API, which is designed to retrieve links from the internet based on how people typically describe and share them. Here's how you should format and enhance the queries:
Avoid Keyword Searches: Instead of plain keywords, try to frame the query like someone describing or sharing a link on the internet. For instance, instead of "Jeopardy archive", you'd want "Here is the Jeopardy archive:".
Rephrase Questions as Answers: Users may input queries as questions, but questions are not the most effective prompts for this model. Instead, transform these questions into statements that look like answers. For example, if someone asks "What's the best tutorial for baking?", you should convert it to "This is the best tutorial for baking:".
Use Descriptive Modifiers: If the original query hints at a specific type or style of result, incorporate that information. If a user is looking for something humorous or a specific platform link like Goodreads, ensure it's reflected in the modified query.
End with a Colon: Many of the effective prompts for the Metaphor API end with colons, imitating the way people introduce links. Make sure your transformed query also ends with a colon.
Given this guidance, your task is to take a user query, such as "projects similar to dotenv in python", and transform it into an optimized query for the Metaphor API, like "Here are some projects similar to dotenv in python:".
'''
EXTRACTION_PROMPT = "Consider the data below and segment it into multiple githubProject structures: '\n%s'"
EXTRACTION_TOKENS = len(ENCODER.encode(str(EXTRACTION_PROMPT)))
FUNCTION_TOKENS = len(ENCODER.encode(str(projectList.openai_schema)))
OUTPUT_TOKENS = 1000
TOKENS_PER_MESSAGE = 4097 - FUNCTION_TOKENS - EXTRACTION_TOKENS - OUTPUT_TOKENS
def get_query_from_gpt(user_query):
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": SYSTEM_PROMPT},
{"role": "user", "content": user_query},
]
)
return completion["choices"][0]["message"]["content"]
def get_metaphor_results(query):
search_response = metaphor.search(
query,
include_domains=["github.com"],
)
contents_response = search_response.get_contents()
return [f"Title: {content.title}\nURL: {content.url}\nContent:\n{content.extract}\n" for content in contents_response.contents]
def extract_details(metaphor_response):
output = projectList(projects=[])
idx = 0
while idx < len(metaphor_response):
'''
load curr idx and set tokens and contents
while more contents remain and adding those tokens would not exceed our limit do so
when can add no longer execute and loop back
'''
curr_tokens = 0
curr_contents = []
while idx < len(metaphor_response) and curr_tokens+(tokens := len(ENCODER.encode(metaphor_response[idx]))) < MAX_TOKENS:
curr_tokens += tokens
curr_contents.append(metaphor_response[idx])
idx += 1
print(f'{idx} {curr_tokens}')
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo-0613",
temperature=0.1,
functions=[projectList.openai_schema],
function_call={"name": projectList.openai_schema["name"]},
messages=[
{
"role": "user",
"content": EXTRACTION_PROMPT % curr_contents,
},
],
max_tokens=OUTPUT_TOKENS,
)
curr_contents = []
curr_tokens = 0
extracted_projects = projectList.from_response(completion)
output.projects.extend(extracted_projects.projects)
return output
if __name__ == "__main__":
# while True:
user_query = 'I need a project similar to dotenv in python'
gpt_query = get_query_from_gpt(user_query)
print(f"GPT Formatted QUERY:{gpt_query}")
contents = get_metaphor_results(gpt_query)
print(extract_details(contents))
| [
"\nYou are assisting in translating user queries into optimized queries for the Metaphor API, which is designed to retrieve links from the internet based on how people typically describe and share them. Here's how you should format and enhance the queries:\nAvoid Keyword Searches: Instead of plain keywords, try to frame the query like someone describing or sharing a link on the internet. For instance, instead of \"Jeopardy archive\", you'd want \"Here is the Jeopardy archive:\".\nRephrase Questions as Answers: Users may input queries as questions, but questions are not the most effective prompts for this model. Instead, transform these questions into statements that look like answers. For example, if someone asks \"What's the best tutorial for baking?\", you should convert it to \"This is the best tutorial for baking:\".\nUse Descriptive Modifiers: If the original query hints at a specific type or style of result, incorporate that information. If a user is looking for something humorous or a specific platform link like Goodreads, ensure it's reflected in the modified query.\nEnd with a Colon: Many of the effective prompts for the Metaphor API end with colons, imitating the way people introduce links. Make sure your transformed query also ends with a colon.\nGiven this guidance, your task is to take a user query, such as \"projects similar to dotenv in python\", and transform it into an optimized query for the Metaphor API, like \"Here are some projects similar to dotenv in python:\".\n",
"Consider the data below and segment it into multiple githubProject structures: '\n%s'"
] |
2024-01-10 | AshankKumar/CodeThesaur | scripts~githubClass.py | from openai_function_call import OpenAISchema
from pydantic import Field
class githubProject(OpenAISchema):
"Correctly extracted information from the html of a github project"
name: str = Field(..., description="The name of the project. Don't include the author's name or github in this field")
url: str = Field(..., description="The github url of the project")
author: str = Field(..., description="The author of the project")
description: str = Field(..., description="A short summary describing the project")
class projectList(OpenAISchema):
"List of correctly extracted github projects"
projects: list[githubProject]
| [] |
2024-01-10 | dhwaniserai/Podcast-Summarizer | Transcriber.py | import os
from dotenv import load_dotenv
import openai
from pydub import AudioSegment
class WhisperTranscriber:
def __init__(self, api_key):
#load_dotenv()
openai.api_key = api_key
self.openai_price = 0.006
def chunk(self, audio_path):
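# split audio files over Whisper's 25 MB upload limit into ~25 minute MP3 chunks and return the list of file paths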
file_name = os.path.basename(audio_path)
file_size = os.path.getsize(audio_path)
audio_list = []
# Get length of audio file
audio = AudioSegment.from_mp3(audio_path)
duration = audio.duration_seconds
est_cost = duration * self.openai_price / 60
print(f'↪ 💵 Estimated cost: ${est_cost:.2f} ({(duration / 60):.2f} minutes)')
if file_size > 25 * 1024 * 1024:
print(f'↪ The audio file is too large: {(file_size / 1024 / 1024):.2f} MB (>25MB), chunking...')
# check if chunks already exist
if os.path.exists(f"downloads/whisper/{file_name.split('.')[0]}_0.mp3"):
print('↪ Chunks already exist, loading...')
for i in range(100):
chunk_name = f"downloads/whisper/{file_name.split('.')[0]}_{i}.mp3"
if os.path.exists(chunk_name):
audio_list.append(chunk_name)
else:
return audio_list
audio = AudioSegment.from_mp3(audio_path)
# PyDub handles time in milliseconds
chunk = 25 * 60 * 1000
# split the audio file into ~25 minute chunks
for i, chunk in enumerate(audio[::chunk]):
chunk_name = f"downloads/whisper/{file_name.split('.')[0]}_{i}.mp3"
if os.path.exists(chunk_name):
pass
audio_list.append(chunk_name)
chunk.export(chunk_name, format="mp3")
else:
audio_list.append(audio_path)
return audio_list
def transcribe(self, audio_path):
print(f'🗣️ Initializing Whisper transcriber...')
audio_list = self.chunk(audio_path)
print(f'↪ Chunk size: {len(audio_list)}')
transcriptions = []
for audio in audio_list:
print(f'\t↪ Transcribing {audio}...')
# Check if the transcript already exists
transcript_path = f"{audio.split('.')[0]}.txt"
if not os.path.exists(transcript_path):
# Convert the MP3 file to text using Whisper API
file = open(audio, "rb")
response = openai.Audio.transcribe("whisper-1", file)
# Check for errors in the API response
if "error" in response:
error_msg = response["error"]["message"]
raise Exception(f"⚠️ Transcription error: {error_msg}")
# Extract the transcript from the API response
transcript = response["text"].strip()
# Save the transcript to a text file
with open(transcript_path, "w") as f:
f.write(transcript)
transcriptions.append(transcript)
print(f"\t\t↪ saved transcript to {audio.split('.')[0]}.txt (words: {len(transcript.split())}")
else:
# Load the transcript from the text file
with open(transcript_path, "r") as f:
transcriptions.append(f.read())
pass
full_transcript = ' '.join(transcriptions)
print(f'↪ Total words: {len(full_transcript.split())} -- characters: {len(full_transcript)}')
return full_transcript
if __name__=="__main__":
wpt = WhisperTranscriber("")
wpt.transcribe("6oUTyFPLPvJE1jB50hm37l.mp3")
| [] |
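The size and cost checks in chunk() reduce to a little arithmetic; a self-contained sketch of that calculation (the per-minute rate and the 25 MB / 25-minute limits come from the class above, while the sample duration and file size are invented):

import math

OPENAI_WHISPER_PRICE_PER_MIN = 0.006   # same value as self.openai_price
CHUNK_LENGTH_MS = 25 * 60 * 1000       # ~25-minute chunks, as in chunk()

def estimate(duration_seconds: float, file_size_bytes: int):
    est_cost = duration_seconds * OPENAI_WHISPER_PRICE_PER_MIN / 60   # cost printed by chunk()
    needs_chunking = file_size_bytes > 25 * 1024 * 1024               # Whisper's 25 MB upload limit
    n_chunks = math.ceil(duration_seconds * 1000 / CHUNK_LENGTH_MS) if needs_chunking else 1
    return est_cost, n_chunks

cost, chunks = estimate(duration_seconds=90 * 60, file_size_bytes=60 * 1024 * 1024)
print(f"~${cost:.2f} across {chunks} chunk(s)")   # a 90-minute, 60 MB episode -> ~$0.54, 4 chunks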
2024-01-10 | ShayonMitra/Quiz | quiz.py | from langchain import PromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
import streamlit as st
import os
#required libraries are imported
os.environ["OPENAI_API_KEY"] = ""#Removed the api key. Please add the api key
#i have added the key to the openai_api_key environment variable
def create_the_prompt_template():
template="""
You are an expert quiz maker for technical topics.
Create a quiz with 5{type} of questions about the following:{topic}
For example if the topic is Data Structures and Algorithms and you have to generate programming questions:
You can give following questions: Implement linked list, Implement bfs, solve the knapsack problems
If you have to generate subjective questions on the same topic. You can give following questions: Write down the
time complexity of heap sort, bubble sort, bellman ford etc.
If you have to generate multiple choice questions, you can give following questions:
What is the time complexity of heap sort?
a)O(nlogn)
b)O(n)
c)O(1)
d)O(n^2)
"""
prompt = PromptTemplate.from_template(template)
return prompt
#I have given the prompt and some examples to specify the type of questions
def create_quiz_chain(prompt_template,llm):
return LLMChain(llm=llm, prompt=prompt_template)
#returns the quiz
def main():
st.title("Quiz Generator")
st.write("Write something about a topic and this generates a quiz")
prompt_template = create_the_prompt_template()
llm = ChatOpenAI()
chain = create_quiz_chain(prompt_template,llm)
topic = st.text_area("Enter something about the topic")
quiz_type = st.selectbox("Select the type of questions",["multiple-choice","subjective","programming"])
generate_button_clicked = st.button("Generate")
if generate_button_clicked:
quiz = chain.run(type=quiz_type,topic=topic)
st.write(quiz)
if __name__=="__main__":
main()
| [
"\n\tYou are an expert quiz maker for technical topics.\n\tCreate a quiz with 5{type} of questions about the following:{topic}\n\tFor example if the topic is Data Structures and Algorithms and you have to generate programming questions:\n\tYou can give following questions: Implement linked list, Implement bfs, solve the knapsack problems\n\tIf you have to generate subjective questions on the same topic. You can give following questions: Write down the \n\ttime complexity of heap sort, bubble sort, bellman ford etc.\n\tIf you have to generate multiple choice questions, you can give following questions: \n\tWhat is the time complexity of heap sort?\n\ta)O(nlogn)\n\tb)O(n)\n\tc)O(1)\n\td)O(n^2)\n\t"
] |
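A minimal sketch (assuming langchain is installed) of how a template like the one above resolves its {type} and {topic} variables before it ever reaches the model; the template string here is a trimmed-down stand-in, not the original:

from langchain import PromptTemplate

template = "Create a quiz with 5 {type} questions about the following: {topic}"
prompt = PromptTemplate.from_template(template)      # input variables are inferred from the braces

print(prompt.input_variables)                        # ['topic', 'type'] (order may vary)
print(prompt.format(type="multiple-choice", topic="Data Structures and Algorithms"))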
2024-01-10 | cicimmmmm/dify | api~core~indexing_runner.py | import datetime
import json
import re
import tempfile
import time
from pathlib import Path
from typing import Optional, List
from langchain.text_splitter import RecursiveCharacterTextSplitter
from llama_index import SimpleDirectoryReader
from llama_index.data_structs import Node
from llama_index.data_structs.node_v2 import DocumentRelationship
from llama_index.node_parser import SimpleNodeParser, NodeParser
from llama_index.readers.file.base import DEFAULT_FILE_EXTRACTOR
from llama_index.readers.file.markdown_parser import MarkdownParser
from core.index.readers.xlsx_parser import XLSXParser
from core.docstore.dataset_docstore import DatesetDocumentStore
from core.index.keyword_table_index import KeywordTableIndex
from core.index.readers.html_parser import HTMLParser
from core.index.readers.markdown_parser import MarkdownParser
from core.index.readers.pdf_parser import PDFParser
from core.index.spiltter.fixed_text_splitter import FixedRecursiveCharacterTextSplitter
from core.index.vector_index import VectorIndex
from core.llm.token_calculator import TokenCalculator
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from extensions.ext_storage import storage
from models.dataset import Document, Dataset, DocumentSegment, DatasetProcessRule
from models.model import UploadFile
class IndexingRunner:
def __init__(self, embedding_model_name: str = "text-embedding-ada-002"):
self.storage = storage
self.embedding_model_name = embedding_model_name
def run(self, document: Document):
"""Run the indexing process."""
# get dataset
dataset = Dataset.query.filter_by(
id=document.dataset_id
).first()
if not dataset:
raise ValueError("no dataset found")
# load file
text_docs = self._load_data(document)
# get the process rule
processing_rule = db.session.query(DatasetProcessRule). \
filter(DatasetProcessRule.id == document.dataset_process_rule_id). \
first()
# get node parser for splitting
node_parser = self._get_node_parser(processing_rule)
# split to nodes
nodes = self._step_split(
text_docs=text_docs,
node_parser=node_parser,
dataset=dataset,
document=document,
processing_rule=processing_rule
)
# build index
self._build_index(
dataset=dataset,
document=document,
nodes=nodes
)
def run_in_splitting_status(self, document: Document):
"""Run the indexing process when the index_status is splitting."""
# get dataset
dataset = Dataset.query.filter_by(
id=document.dataset_id
).first()
if not dataset:
raise ValueError("no dataset found")
# get exist document_segment list and delete
document_segments = DocumentSegment.query.filter_by(
dataset_id=dataset.id,
document_id=document.id
).all()
        # delete each segment individually; Session.delete() expects a single mapped instance, not a list
        for document_segment in document_segments:
            db.session.delete(document_segment)
db.session.commit()
# load file
text_docs = self._load_data(document)
# get the process rule
processing_rule = db.session.query(DatasetProcessRule). \
filter(DatasetProcessRule.id == document.dataset_process_rule_id). \
first()
# get node parser for splitting
node_parser = self._get_node_parser(processing_rule)
# split to nodes
nodes = self._step_split(
text_docs=text_docs,
node_parser=node_parser,
dataset=dataset,
document=document,
processing_rule=processing_rule
)
# build index
self._build_index(
dataset=dataset,
document=document,
nodes=nodes
)
def run_in_indexing_status(self, document: Document):
"""Run the indexing process when the index_status is indexing."""
# get dataset
dataset = Dataset.query.filter_by(
id=document.dataset_id
).first()
if not dataset:
raise ValueError("no dataset found")
# get exist document_segment list and delete
document_segments = DocumentSegment.query.filter_by(
dataset_id=dataset.id,
document_id=document.id
).all()
nodes = []
if document_segments:
for document_segment in document_segments:
# transform segment to node
if document_segment.status != "completed":
relationships = {
DocumentRelationship.SOURCE: document_segment.document_id,
}
previous_segment = document_segment.previous_segment
if previous_segment:
relationships[DocumentRelationship.PREVIOUS] = previous_segment.index_node_id
next_segment = document_segment.next_segment
if next_segment:
relationships[DocumentRelationship.NEXT] = next_segment.index_node_id
node = Node(
doc_id=document_segment.index_node_id,
doc_hash=document_segment.index_node_hash,
text=document_segment.content,
extra_info=None,
node_info=None,
relationships=relationships
)
nodes.append(node)
# build index
self._build_index(
dataset=dataset,
document=document,
nodes=nodes
)
def indexing_estimate(self, file_detail: UploadFile, tmp_processing_rule: dict) -> dict:
"""
Estimate the indexing for the document.
"""
# load data from file
text_docs = self._load_data_from_file(file_detail)
processing_rule = DatasetProcessRule(
mode=tmp_processing_rule["mode"],
rules=json.dumps(tmp_processing_rule["rules"])
)
# get node parser for splitting
node_parser = self._get_node_parser(processing_rule)
# split to nodes
nodes = self._split_to_nodes(
text_docs=text_docs,
node_parser=node_parser,
processing_rule=processing_rule
)
tokens = 0
preview_texts = []
for node in nodes:
if len(preview_texts) < 5:
preview_texts.append(node.get_text())
tokens += TokenCalculator.get_num_tokens(self.embedding_model_name, node.get_text())
return {
"total_segments": len(nodes),
"tokens": tokens,
"total_price": '{:f}'.format(TokenCalculator.get_token_price(self.embedding_model_name, tokens)),
"currency": TokenCalculator.get_currency(self.embedding_model_name),
"preview": preview_texts
}
def _load_data(self, document: Document) -> List[Document]:
# load file
if document.data_source_type != "upload_file":
return []
data_source_info = document.data_source_info_dict
if not data_source_info or 'upload_file_id' not in data_source_info:
raise ValueError("no upload file found")
file_detail = db.session.query(UploadFile). \
filter(UploadFile.id == data_source_info['upload_file_id']). \
one_or_none()
text_docs = self._load_data_from_file(file_detail)
# update document status to splitting
self._update_document_index_status(
document_id=document.id,
after_indexing_status="splitting",
extra_update_params={
Document.file_id: file_detail.id,
Document.word_count: sum([len(text_doc.text) for text_doc in text_docs]),
Document.parsing_completed_at: datetime.datetime.utcnow()
}
)
# replace doc id to document model id
for text_doc in text_docs:
# remove invalid symbol
text_doc.text = self.filter_string(text_doc.get_text())
text_doc.doc_id = document.id
return text_docs
def filter_string(self, text):
pattern = re.compile('[\x00-\x08\x0B\x0C\x0E-\x1F\x7F\x80-\xFF]')
return pattern.sub('', text)
def _load_data_from_file(self, upload_file: UploadFile) -> List[Document]:
with tempfile.TemporaryDirectory() as temp_dir:
suffix = Path(upload_file.key).suffix
filepath = f"{temp_dir}/{next(tempfile._get_candidate_names())}{suffix}"
self.storage.download(upload_file.key, filepath)
file_extractor = DEFAULT_FILE_EXTRACTOR.copy()
file_extractor[".markdown"] = MarkdownParser()
file_extractor[".md"] = MarkdownParser()
file_extractor[".html"] = HTMLParser()
file_extractor[".htm"] = HTMLParser()
file_extractor[".pdf"] = PDFParser({'upload_file': upload_file})
file_extractor[".xlsx"] = XLSXParser()
loader = SimpleDirectoryReader(input_files=[filepath], file_extractor=file_extractor)
text_docs = loader.load_data()
return text_docs
def _get_node_parser(self, processing_rule: DatasetProcessRule) -> NodeParser:
"""
Get the NodeParser object according to the processing rule.
"""
if processing_rule.mode == "custom":
# The user-defined segmentation rule
rules = json.loads(processing_rule.rules)
segmentation = rules["segmentation"]
if segmentation["max_tokens"] < 50 or segmentation["max_tokens"] > 1000:
raise ValueError("Custom segment length should be between 50 and 1000.")
separator = segmentation["separator"]
if separator:
separator = separator.replace('\\n', '\n')
character_splitter = FixedRecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=segmentation["max_tokens"],
chunk_overlap=0,
fixed_separator=separator,
separators=["\n\n", "。", ".", " ", ""]
)
else:
# Automatic segmentation
character_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=DatasetProcessRule.AUTOMATIC_RULES['segmentation']['max_tokens'],
chunk_overlap=0,
separators=["\n\n", "。", ".", " ", ""]
)
return SimpleNodeParser(text_splitter=character_splitter, include_extra_info=True)
def _step_split(self, text_docs: List[Document], node_parser: NodeParser,
dataset: Dataset, document: Document, processing_rule: DatasetProcessRule) -> List[Node]:
"""
Split the text documents into nodes and save them to the document segment.
"""
nodes = self._split_to_nodes(
text_docs=text_docs,
node_parser=node_parser,
processing_rule=processing_rule
)
# save node to document segment
doc_store = DatesetDocumentStore(
dataset=dataset,
user_id=document.created_by,
embedding_model_name=self.embedding_model_name,
document_id=document.id
)
doc_store.add_documents(nodes)
# update document status to indexing
cur_time = datetime.datetime.utcnow()
self._update_document_index_status(
document_id=document.id,
after_indexing_status="indexing",
extra_update_params={
Document.cleaning_completed_at: cur_time,
Document.splitting_completed_at: cur_time,
}
)
# update segment status to indexing
self._update_segments_by_document(
document_id=document.id,
update_params={
DocumentSegment.status: "indexing",
DocumentSegment.indexing_at: datetime.datetime.utcnow()
}
)
return nodes
def _split_to_nodes(self, text_docs: List[Document], node_parser: NodeParser,
processing_rule: DatasetProcessRule) -> List[Node]:
"""
Split the text documents into nodes.
"""
all_nodes = []
for text_doc in text_docs:
# document clean
document_text = self._document_clean(text_doc.get_text(), processing_rule)
text_doc.text = document_text
# parse document to nodes
nodes = node_parser.get_nodes_from_documents([text_doc])
nodes = [node for node in nodes if node.text is not None and node.text.strip()]
all_nodes.extend(nodes)
return all_nodes
def _document_clean(self, text: str, processing_rule: DatasetProcessRule) -> str:
"""
Clean the document text according to the processing rules.
"""
if processing_rule.mode == "automatic":
rules = DatasetProcessRule.AUTOMATIC_RULES
else:
rules = json.loads(processing_rule.rules) if processing_rule.rules else {}
if 'pre_processing_rules' in rules:
pre_processing_rules = rules["pre_processing_rules"]
for pre_processing_rule in pre_processing_rules:
if pre_processing_rule["id"] == "remove_extra_spaces" and pre_processing_rule["enabled"] is True:
# Remove extra spaces
pattern = r'\n{3,}'
text = re.sub(pattern, '\n\n', text)
pattern = r'[\t\f\r\x20\u00a0\u1680\u180e\u2000-\u200a\u202f\u205f\u3000]{2,}'
text = re.sub(pattern, ' ', text)
elif pre_processing_rule["id"] == "remove_urls_emails" and pre_processing_rule["enabled"] is True:
# Remove email
pattern = r'([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)'
text = re.sub(pattern, '', text)
# Remove URL
pattern = r'https?://[^\s]+'
text = re.sub(pattern, '', text)
return text
def _build_index(self, dataset: Dataset, document: Document, nodes: List[Node]) -> None:
"""
Build the index for the document.
"""
vector_index = VectorIndex(dataset=dataset)
keyword_table_index = KeywordTableIndex(dataset=dataset)
# chunk nodes by chunk size
indexing_start_at = time.perf_counter()
tokens = 0
chunk_size = 100
for i in range(0, len(nodes), chunk_size):
# check document is paused
self._check_document_paused_status(document.id)
chunk_nodes = nodes[i:i + chunk_size]
tokens += sum(
TokenCalculator.get_num_tokens(self.embedding_model_name, node.get_text()) for node in chunk_nodes
)
# save vector index
if dataset.indexing_technique == "high_quality":
vector_index.add_nodes(chunk_nodes)
# save keyword index
keyword_table_index.add_nodes(chunk_nodes)
node_ids = [node.doc_id for node in chunk_nodes]
db.session.query(DocumentSegment).filter(
DocumentSegment.document_id == document.id,
DocumentSegment.index_node_id.in_(node_ids),
DocumentSegment.status == "indexing"
).update({
DocumentSegment.status: "completed",
DocumentSegment.completed_at: datetime.datetime.utcnow()
})
db.session.commit()
indexing_end_at = time.perf_counter()
# update document status to completed
self._update_document_index_status(
document_id=document.id,
after_indexing_status="completed",
extra_update_params={
Document.tokens: tokens,
Document.completed_at: datetime.datetime.utcnow(),
Document.indexing_latency: indexing_end_at - indexing_start_at,
}
)
def _check_document_paused_status(self, document_id: str):
indexing_cache_key = 'document_{}_is_paused'.format(document_id)
result = redis_client.get(indexing_cache_key)
if result:
raise DocumentIsPausedException()
def _update_document_index_status(self, document_id: str, after_indexing_status: str,
extra_update_params: Optional[dict] = None) -> None:
"""
Update the document indexing status.
"""
count = Document.query.filter_by(id=document_id, is_paused=True).count()
if count > 0:
raise DocumentIsPausedException()
update_params = {
Document.indexing_status: after_indexing_status
}
if extra_update_params:
update_params.update(extra_update_params)
Document.query.filter_by(id=document_id).update(update_params)
db.session.commit()
def _update_segments_by_document(self, document_id: str, update_params: dict) -> None:
"""
Update the document segment by document id.
"""
DocumentSegment.query.filter_by(document_id=document_id).update(update_params)
db.session.commit()
class DocumentIsPausedException(Exception):
pass
| [] |
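The cleaning rules in _document_clean() are plain regular expressions; a standalone sketch of the two automatic rules, run against an invented sample string:

import re

def clean(text: str) -> str:
    # remove_extra_spaces: collapse 3+ newlines, then runs of horizontal whitespace
    text = re.sub(r'\n{3,}', '\n\n', text)
    text = re.sub(r'[\t\f\r\x20\u00a0\u1680\u180e\u2000-\u200a\u202f\u205f\u3000]{2,}', ' ', text)
    # remove_urls_emails: strip e-mail addresses, then URLs
    text = re.sub(r'([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)', '', text)
    text = re.sub(r'https?://[^\s]+', '', text)
    return text

print(clean("Contact   me at [email protected]\n\n\n\nDocs: https://example.com/guide"))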
2024-01-10 | Simple-Technical-Solutions/chat | server~commons~langflow_utils.py | import traceback
import time
from langflow.interface.run import fix_memory_inputs, load_langchain_object
from fastapi import HTTPException
from sqlalchemy.orm import Session
from commons import config as c
from database_utils.chatbot import get_chatbot
from database_utils.intermediate_step import insert_intermediate_steps
from database_utils.prompt import create_prompt
from schemas.prompt_schema import Prompt
from commons.gpt_rating import ask_for_rating
from database import Prompt as ChatBot
from commons.types import CFPromptResult
logger = c.get_logger(__name__)
def format_intermediate_steps(intermediate_steps):
formatted_chain = []
for step in intermediate_steps:
action = step[0]
observation = step[1]
formatted_chain.append(
{
"action": action.tool,
"action_input": action.tool_input,
"observation": observation,
}
)
return formatted_chain
def get_result_and_thought_using_graph(langchain_object, message: str):
"""Get result and thought from extracted json"""
num_of_tokens = len(message.split())
try:
if hasattr(langchain_object, "verbose"):
langchain_object.verbose = True
chat_input = None
memory_key = ""
if hasattr(langchain_object, "memory") and langchain_object.memory is not None:
memory_key = langchain_object.memory.memory_key
for key in langchain_object.input_keys:
if key not in [memory_key, "chat_history"]:
chat_input = {key: message}
if hasattr(langchain_object, "return_intermediate_steps"):
langchain_object.return_intermediate_steps = True
fix_memory_inputs(langchain_object)
from langchain.callbacks import get_openai_callback
with get_openai_callback() as cb:
output = langchain_object(chat_input)
logger.debug(f"Total tokens {cb.total_tokens}")
num_of_tokens = cb.total_tokens
intermediate_steps = output.get("intermediate_steps", []) if isinstance(output, dict) else []
result = output.get(langchain_object.output_keys[0]) if isinstance(output, dict) else output
if intermediate_steps:
thought = format_intermediate_steps(intermediate_steps)
else:
thought = []
except Exception as exc:
traceback.print_exc()
raise ValueError(f"Error: {str(exc)}") from exc
return result, thought, num_of_tokens
def process_graph(message, chat_history, data_graph):
"""
Process graph by extracting input variables and replacing ZeroShotPrompt
with PromptTemplate,then run the graph and return the result and thought.
"""
# Load langchain object
logger.debug("Loading langchain object")
is_first_message = len(chat_history) == 0
computed_hash, langchain_object = load_langchain_object(data_graph, True)
logger.debug("Loaded langchain object")
if langchain_object is None:
# Raise user facing error
raise ValueError("There was an error loading the langchain_object. Please, check all the nodes and try again.")
# Generate result and thought
logger.debug("Generating result and thought")
result, thought, num_tokens = get_result_and_thought_using_graph(langchain_object, message)
logger.debug("Generated result and thought")
# Save langchain_object to cache
# We have to save it here because if the
# memory is updated we need to keep the new values
logger.debug("Saving langchain object to cache")
# save_cache(computed_hash, langchain_object, is_first_message)
logger.debug("Saved langchain object to cache")
# return {"result": str(result), "thought": thought, "num_tokens": num_tokens}
return str(result), thought, num_tokens
def get_prompt(chatbot: ChatBot, prompt: Prompt, db: Session, start: float) -> CFPromptResult:
try:
logger.debug("Adding prompt to database")
prompt_row = create_prompt(db, chatbot.id, prompt.new_message, prompt.session_id)
_result, thought, num_tokens = process_graph(prompt.new_message, prompt.chat_history, chatbot.dag)
result = CFPromptResult(result=str(_result), thought=thought, num_tokens=num_tokens, prompt=prompt_row, prompt_id=prompt_row.id) # type: ignore
prompt_row.response = result.result # type: ignore
prompt_row.time_taken = float(time.time() - start) # type: ignore
insert_intermediate_steps(db, prompt_row.id, result.thought) # type: ignore
message = f"User: {prompt.new_message}\nBot: {result.result}"
prompt_row.gpt_rating = ask_for_rating(message) # type: ignore
prompt_row.num_tokens = result.num_tokens # type: ignore
db.commit()
# result["prompt_id"] = prompt_row.id
logger.debug("Processed graph")
return result
except Exception as e:
traceback.print_exc()
logger.exception(e)
raise HTTPException(status_code=500, detail=str(e)) from e
| [] |
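format_intermediate_steps() only relies on objects exposing .tool and .tool_input; a tiny sketch with a namedtuple standing in for langchain's AgentAction (the step data is invented):

from collections import namedtuple

AgentAction = namedtuple("AgentAction", ["tool", "tool_input"])  # stand-in for langchain.schema.AgentAction

def format_intermediate_steps(intermediate_steps):
    return [
        {"action": action.tool, "action_input": action.tool_input, "observation": observation}
        for action, observation in intermediate_steps
    ]

steps = [(AgentAction("search", "weather in Paris"), "Sunny, 21°C")]
print(format_intermediate_steps(steps))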
2024-01-10 | elvishoo/bilingual_book_maker | make_book.py | import argparse
import pickle
import time
from abc import abstractmethod
from copy import copy
from os import environ as env
from pathlib import Path
import openai
import requests
from bs4 import BeautifulSoup as bs
from ebooklib import epub
from rich import print
NO_LIMIT = False
IS_TEST = False
RESUME = False
LANG = "Traditional Chinese"
class Base:
def __init__(self, key):
pass
@abstractmethod
def translate(self, text):
pass
class GPT3(Base):
def __init__(self, key):
self.api_key = key
self.api_url = "https://api.openai.com/v1/completions"
self.headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}",
}
# TODO support more models here
self.data = {
"prompt": "",
"model": "text-davinci-003",
"max_tokens": 1024,
"temperature": 1,
"top_p": 1,
}
self.session = requests.session()
def translate(self, text):
print(text)
self.data["prompt"] = f"Please help me to translate the following text to {LANG}: \n\n{text}"
r = self.session.post(self.api_url, headers=self.headers, json=self.data)
if not r.ok:
return text
t_text = r.json().get("choices")[0].get("text", "").strip()
print(t_text)
return t_text
class DeepL(Base):
    def __init__(self, key):
        # Base.__init__ only accepts the key; the previous (session, key) signature raised a TypeError
        super().__init__(key)
def translate(self, text):
return super().translate(text)
class ChatGPT(Base):
def __init__(self, key):
super().__init__(key)
self.key = key
def translate(self, text):
print(text)
openai.api_key = self.key
try:
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
# english prompt here to save tokens
"content": f"Please help me to translate the following text to {LANG}. Please return only translated content not include the origin text. Here is the text: \n\n{text}",
}
],
)
t_text = (
completion["choices"][0]
.get("message")
.get("content")
.encode("utf8")
.decode()
)
if not NO_LIMIT:
# for time limit
time.sleep(3)
except Exception as e:
print(str(e), "will sleep 60 seconds")
# TIME LIMIT for open api please pay
time.sleep(60)
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": f"Please help me to translate the following text to {LANG}. Please return only translated content not include the origin text. Here is the text: \n\n{text}",
}
],
)
t_text = (
completion["choices"][0]
.get("message")
.get("content")
.encode("utf8")
.decode()
)
print(t_text)
return t_text
class BEPUB:
def __init__(self, epub_name, model, key, resume):
self.epub_name = epub_name
self.new_epub = epub.EpubBook()
self.translate_model = model(key)
self.origin_book = epub.read_epub(self.epub_name)
self.p_to_save = []
self.resume = resume
self.bin_path = f"{Path(epub_name).parent}/.{Path(epub_name).stem}.temp.bin"
if self.resume:
self.load_state()
@staticmethod
def _is_special_text(text):
return text.isdigit() or text.isspace()
def make_bilingual_book(self):
new_book = epub.EpubBook()
new_book.metadata = self.origin_book.metadata
new_book.spine = self.origin_book.spine
new_book.toc = self.origin_book.toc
all_items = list(self.origin_book.get_items())
# we just translate tag p
all_p_length = sum(
[len(bs(i.content, "html.parser").findAll("p")) for i in all_items]
)
print("TODO need process bar here: " + str(all_p_length))
index = 0
p_to_save_len = len(self.p_to_save)
try:
for i in self.origin_book.get_items():
if i.get_type() == 9:
soup = bs(i.content, "html.parser")
p_list = soup.findAll("p")
is_test_done = IS_TEST and index > TEST_NUM
for p in p_list:
if is_test_done or not p.text or self._is_special_text(p.text):
continue
new_p = copy(p)
                        # TODO batch of p to translate then combine
# PR welcome here
if self.resume and index < p_to_save_len:
new_p.string = self.p_to_save[index]
else:
new_p.string = self.translate_model.translate(p.text)
self.p_to_save.append(new_p.text)
p.insert_after(new_p)
index += 1
if IS_TEST and index > TEST_NUM:
break
i.content = soup.prettify().encode()
new_book.add_item(i)
name = self.epub_name.split(".")[0]
epub.write_epub(f"{name}_bilingual.epub", new_book, {})
except (KeyboardInterrupt, Exception) as e:
print(e)
print("you can resume it next time")
self.save_progress()
exit(0)
def load_state(self):
try:
with open(self.bin_path, "rb") as f:
self.p_to_save = pickle.load(f)
except:
raise Exception("can not load resume file")
def save_progress(self):
try:
with open(self.bin_path, "wb") as f:
pickle.dump(self.p_to_save, f)
except:
raise Exception("can not save resume file")
if __name__ == "__main__":
MODEL_DICT = {"gpt3": GPT3, "chatgpt": ChatGPT}
parser = argparse.ArgumentParser()
parser.add_argument(
"--book_name",
dest="book_name",
type=str,
help="your epub book name",
)
parser.add_argument(
"--openai_key",
dest="openai_key",
type=str,
default="",
help="openai api key",
)
parser.add_argument(
"--no_limit",
dest="no_limit",
action="store_true",
help="if you pay add it",
)
parser.add_argument(
"--test",
dest="test",
action="store_true",
help="if test we only translat 10 contents you can easily check",
)
parser.add_argument(
"--test_num",
dest="test_num",
type=int,
default=10,
help="test num for the test",
)
parser.add_argument(
"-m",
"--model",
dest="model",
type=str,
default="chatgpt",
choices=["chatgpt", "gpt3"], # support DeepL later
help="Use which model",
)
parser.add_argument(
"--resume",
dest="resume",
action="store_true",
help="if program accidentally stop you can use this to resume",
)
parser.add_argument(
"--lang",
dest="lang",
type=str,
default="zh-tw",
choices=["zh-cn", "zh-tw"],
help="Choose lang for zh-cn (Simplified Chinese) or zh-tw (Traditional Chinese)",
)
options = parser.parse_args()
NO_LIMIT = options.no_limit
IS_TEST = options.test
TEST_NUM = options.test_num
if options.lang == "zh-cn":
LANG = "Simplified Chinese"
elif options.lang == "zh-tw":
LANG = "Traditional Chinese"
OPENAI_API_KEY = options.openai_key or env.get("OPENAI_API_KEY")
RESUME = options.resume
if not OPENAI_API_KEY:
raise Exception("Need openai API key, please google how to")
if not options.book_name.endswith(".epub"):
raise Exception("please use epub file")
model = MODEL_DICT.get(options.model, "chatgpt")
e = BEPUB(options.book_name, model, OPENAI_API_KEY, RESUME)
e.make_bilingual_book()
| [
"Please help me to translate the following text to PLACEHOLDER. Please return only translated content not include the origin text. Here is the text: \n\nPLACEHOLDER"
] |
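The resume feature above is just pickling the list of already-translated paragraphs; a minimal sketch of that save/load round-trip (the file name is arbitrary here):

import pickle

def save_progress(p_to_save, bin_path):
    with open(bin_path, "wb") as f:
        pickle.dump(p_to_save, f)

def load_state(bin_path):
    with open(bin_path, "rb") as f:
        return pickle.load(f)

bin_path = ".example.temp.bin"   # BEPUB derives this name from the epub's stem
save_progress(["first translated paragraph", "second translated paragraph"], bin_path)
print(load_state(bin_path))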
2024-01-10 | wcventure/ChuanhuChatGPT | modules~models~base_model.py | from __future__ import annotations
from typing import TYPE_CHECKING, List
import logging
import json
import commentjson as cjson
import os
import sys
import requests
import urllib3
import traceback
import pathlib
import shutil
from tqdm import tqdm
import colorama
from duckduckgo_search import DDGS
from itertools import islice
import asyncio
import aiohttp
from enum import Enum
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.callbacks.manager import BaseCallbackManager
from typing import Any, Dict, List, Optional, Union
from langchain.callbacks.base import BaseCallbackHandler
from langchain.input import print_text
from langchain.schema import AgentAction, AgentFinish, LLMResult
from threading import Thread, Condition
from collections import deque
from langchain.chat_models.base import BaseChatModel
from langchain.schema import HumanMessage, AIMessage, SystemMessage, BaseMessage
from ..presets import *
from ..index_func import *
from ..utils import *
from .. import shared
from ..config import retrieve_proxy
class CallbackToIterator:
def __init__(self):
self.queue = deque()
self.cond = Condition()
self.finished = False
def callback(self, result):
with self.cond:
self.queue.append(result)
self.cond.notify() # Wake up the generator.
def __iter__(self):
return self
def __next__(self):
with self.cond:
# Wait for a value to be added to the queue.
while not self.queue and not self.finished:
self.cond.wait()
if not self.queue:
raise StopIteration()
return self.queue.popleft()
def finish(self):
with self.cond:
self.finished = True
self.cond.notify() # Wake up the generator if it's waiting.
def get_action_description(text):
match = re.search('```(.*?)```', text, re.S)
json_text = match.group(1)
    # convert the JSON text into a Python dict
json_dict = json.loads(json_text)
    # extract the values of 'action' and 'action_input'
action_name = json_dict['action']
action_input = json_dict['action_input']
if action_name != "Final Answer":
return f'<!-- S O PREFIX --><p class="agent-prefix">{action_name}: {action_input}\n\n</p><!-- E O PREFIX -->'
else:
return ""
class ChuanhuCallbackHandler(BaseCallbackHandler):
def __init__(self, callback) -> None:
"""Initialize callback handler."""
self.callback = callback
def on_agent_action(
self, action: AgentAction, color: Optional[str] = None, **kwargs: Any
) -> Any:
self.callback(get_action_description(action.log))
def on_tool_end(
self,
output: str,
color: Optional[str] = None,
observation_prefix: Optional[str] = None,
llm_prefix: Optional[str] = None,
**kwargs: Any,
) -> None:
"""If not the final action, print out observation."""
# if observation_prefix is not None:
# self.callback(f"\n\n{observation_prefix}")
# self.callback(output)
# if llm_prefix is not None:
# self.callback(f"\n\n{llm_prefix}")
if observation_prefix is not None:
logging.info(observation_prefix)
self.callback(output)
if llm_prefix is not None:
logging.info(llm_prefix)
def on_agent_finish(
self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
) -> None:
# self.callback(f"{finish.log}\n\n")
logging.info(finish.log)
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Run on new LLM token. Only available when streaming is enabled."""
self.callback(token)
def on_chat_model_start(self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) -> Any:
"""Run when a chat model starts running."""
pass
class ModelType(Enum):
Unknown = -1
OpenAI = 0
ChatGLM = 1
LLaMA = 2
XMChat = 3
StableLM = 4
MOSS = 5
YuanAI = 6
Minimax = 7
ChuanhuAgent = 8
GooglePaLM = 9
LangchainChat = 10
Midjourney = 11
Spark = 12
@classmethod
def get_type(cls, model_name: str):
model_type = None
model_name_lower = model_name.lower()
if "gpt" in model_name_lower:
model_type = ModelType.OpenAI
elif "code-" in model_name_lower:
model_type = ModelType.OpenAI
elif "spec-" in model_name_lower or "specification-" in model_name_lower:
model_type = ModelType.OpenAI
elif "concurrency-" in model_name_lower:
model_type = ModelType.OpenAI
elif "chatglm" in model_name_lower:
model_type = ModelType.ChatGLM
elif "llama" in model_name_lower or "alpaca" in model_name_lower:
model_type = ModelType.LLaMA
elif "xmchat" in model_name_lower:
model_type = ModelType.XMChat
elif "stablelm" in model_name_lower:
model_type = ModelType.StableLM
elif "moss" in model_name_lower:
model_type = ModelType.MOSS
elif "yuanai" in model_name_lower:
model_type = ModelType.YuanAI
elif "minimax" in model_name_lower:
model_type = ModelType.Minimax
elif "川虎助理" in model_name_lower:
model_type = ModelType.ChuanhuAgent
elif "palm" in model_name_lower:
model_type = ModelType.GooglePaLM
elif "midjourney" in model_name_lower:
model_type = ModelType.Midjourney
elif "azure" in model_name_lower or "api" in model_name_lower:
model_type = ModelType.LangchainChat
elif "星火大模型" in model_name_lower:
model_type = ModelType.Spark
else:
model_type = ModelType.LLaMA
return model_type
class BaseLLMModel:
def __init__(
self,
model_name,
system_prompt=INITIAL_SYSTEM_PROMPT,
temperature=1.0,
top_p=1.0,
n_choices=1,
stop=None,
max_generation_token=None,
presence_penalty=0,
frequency_penalty=0,
logit_bias=None,
user="",
) -> None:
self.history = []
self.all_token_counts = []
self.model_name = model_name
self.model_type = ModelType.get_type(model_name)
try:
self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name]
except KeyError:
self.token_upper_limit = DEFAULT_TOKEN_LIMIT
self.interrupted = False
self.system_prompt = system_prompt
self.api_key = None
self.need_api_key = False
self.single_turn = False
self.history_file_path = get_first_history_name(user)
self.temperature = temperature
self.top_p = top_p
self.n_choices = n_choices
self.stop_sequence = stop
self.max_generation_token = None
self.presence_penalty = presence_penalty
self.frequency_penalty = frequency_penalty
self.logit_bias = logit_bias
self.user_identifier = user
def set_model_name(self, model_name):
self.model_name = model_name
def get_model_name(self):
return self.model_name
def set_sys_prompt(self, sys_prompt):
self.system_prompt = sys_prompt
print(f"sys_prompt设置为{sys_prompt}")
def get_answer_stream_iter(self):
"""stream predict, need to be implemented
conversations are stored in self.history, with the most recent question, in OpenAI format
should return a generator, each time give the next word (str) in the answer
"""
logging.warning(
"stream predict not implemented, using at once predict instead")
response, _ = self.get_answer_at_once()
yield response
def get_answer_at_once(self):
"""predict at once, need to be implemented
conversations are stored in self.history, with the most recent question, in OpenAI format
Should return:
the answer (str)
total token count (int)
"""
logging.warning(
"at once predict not implemented, using stream predict instead")
response_iter = self.get_answer_stream_iter()
count = 0
for response in response_iter:
count += 1
return response, sum(self.all_token_counts) + count
def billing_info(self):
"""get billing infomation, inplement if needed"""
logging.warning("billing info not implemented, using default")
return BILLING_NOT_APPLICABLE_MSG
def count_token(self, user_input):
"""get token count from input, implement if needed"""
# logging.warning("token count not implemented, using default")
return len(user_input)
def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""):
def get_return_value():
return chatbot, status_text
status_text = i18n("开始实时传输回答……")
if fake_input:
chatbot.append((fake_input, ""))
else:
chatbot.append((inputs, ""))
user_token_count = self.count_token(inputs)
self.all_token_counts.append(user_token_count)
logging.debug(f"输入token计数: {user_token_count}")
stream_iter = self.get_answer_stream_iter()
if display_append:
display_append = '\n\n<hr class="append-display no-in-raw" />' + display_append
partial_text = ""
token_increment = 1
for partial_text in stream_iter:
if type(partial_text) == tuple:
partial_text, token_increment = partial_text
chatbot[-1] = (chatbot[-1][0], partial_text + display_append)
self.all_token_counts[-1] += token_increment
status_text = self.token_message()
yield get_return_value()
if self.interrupted:
self.recover()
break
self.history.append(construct_assistant(partial_text))
def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""):
if fake_input:
chatbot.append((fake_input, ""))
else:
chatbot.append((inputs, ""))
if fake_input is not None:
user_token_count = self.count_token(fake_input)
else:
user_token_count = self.count_token(inputs)
self.all_token_counts.append(user_token_count)
ai_reply, total_token_count = self.get_answer_at_once()
self.history.append(construct_assistant(ai_reply))
if fake_input is not None:
self.history[-2] = construct_user(fake_input)
chatbot[-1] = (chatbot[-1][0], ai_reply + display_append)
if fake_input is not None:
self.all_token_counts[-1] += count_token(
construct_assistant(ai_reply))
else:
self.all_token_counts[-1] = total_token_count - \
sum(self.all_token_counts)
status_text = self.token_message()
return chatbot, status_text
def handle_file_upload(self, files, chatbot, language):
"""if the model accepts multi modal input, implement this function"""
status = gr.Markdown.update()
if files:
index = construct_index(self.api_key, file_src=files)
status = i18n("索引构建完成")
return gr.Files.update(), chatbot, status
def summarize_index(self, files, chatbot, language):
status = gr.Markdown.update()
if files:
index = construct_index(self.api_key, file_src=files)
status = i18n("总结完成")
logging.info(i18n("生成内容总结中……"))
os.environ["OPENAI_API_KEY"] = self.api_key
from langchain.chains.summarize import load_summarize_chain
from langchain.prompts import PromptTemplate
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import StdOutCallbackHandler
prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":"
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["text"])
llm = ChatOpenAI()
chain = load_summarize_chain(
llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT)
summary = chain({"input_documents": list(index.docstore.__dict__[
"_dict"].values())}, return_only_outputs=True)["output_text"]
print(i18n("总结") + f": {summary}")
chatbot.append([i18n("上传了")+str(len(files))+"个文件", summary])
return chatbot, status
def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot):
fake_inputs = None
display_append = []
limited_context = False
fake_inputs = real_inputs
if files:
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from langchain.vectorstores.base import VectorStoreRetriever
limited_context = True
msg = "加载索引中……"
logging.info(msg)
index = construct_index(self.api_key, file_src=files)
assert index is not None, "获取索引失败"
msg = "索引获取成功,生成回答中……"
logging.info(msg)
with retrieve_proxy():
retriever = VectorStoreRetriever(vectorstore=index, search_type="similarity_score_threshold", search_kwargs={
"k": 6, "score_threshold": 0.5})
relevant_documents = retriever.get_relevant_documents(
real_inputs)
reference_results = [[d.page_content.strip("�"), os.path.basename(
d.metadata["source"])] for d in relevant_documents]
reference_results = add_source_numbers(reference_results)
display_append = add_details(reference_results)
display_append = "\n\n" + "".join(display_append)
real_inputs = (
replace_today(PROMPT_TEMPLATE)
.replace("{query_str}", real_inputs)
.replace("{context_str}", "\n\n".join(reference_results))
.replace("{reply_language}", reply_language)
)
elif use_websearch:
search_results = []
with DDGS() as ddgs:
ddgs_gen = ddgs.text(real_inputs, backend="lite")
for r in islice(ddgs_gen, 10):
search_results.append(r)
reference_results = []
for idx, result in enumerate(search_results):
logging.debug(f"搜索结果{idx + 1}:{result}")
domain_name = urllib3.util.parse_url(result['href']).host
reference_results.append([result['body'], result['href']])
display_append.append(
# f"{idx+1}. [{domain_name}]({result['href']})\n"
f"<a href=\"{result['href']}\" target=\"_blank\">{idx+1}. {result['title']}</a>"
)
reference_results = add_source_numbers(reference_results)
# display_append = "<ol>\n\n" + "".join(display_append) + "</ol>"
display_append = '<div class = "source-a">' + \
"".join(display_append) + '</div>'
real_inputs = (
replace_today(WEBSEARCH_PTOMPT_TEMPLATE)
.replace("{query}", real_inputs)
.replace("{web_results}", "\n\n".join(reference_results))
.replace("{reply_language}", reply_language)
)
else:
display_append = ""
return limited_context, fake_inputs, display_append, real_inputs, chatbot
def predict(
self,
inputs,
chatbot,
stream=False,
use_websearch=False,
files=None,
reply_language="中文",
should_check_token_count=True,
): # repetition_penalty, top_k
status_text = "开始生成回答……"
logging.info(
"用户" + f"{self.user_identifier}" + "的输入为:" +
colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL
)
if should_check_token_count:
yield chatbot + [(inputs, "")], status_text
if reply_language == "跟随问题语言(不稳定)":
reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch."
limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(
real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot)
yield chatbot + [(fake_inputs, "")], status_text
if (
self.need_api_key and
self.api_key is None
and not shared.state.multi_api_key
):
status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG
logging.info(status_text)
chatbot.append((inputs, ""))
if len(self.history) == 0:
self.history.append(construct_user(inputs))
self.history.append("")
self.all_token_counts.append(0)
else:
self.history[-2] = construct_user(inputs)
yield chatbot + [(inputs, "")], status_text
return
elif len(inputs.strip()) == 0:
status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG
logging.info(status_text)
yield chatbot + [(inputs, "")], status_text
return
if self.single_turn:
self.history = []
self.all_token_counts = []
self.history.append(construct_user(inputs))
try:
if stream:
logging.debug("使用流式传输")
iter = self.stream_next_chatbot(
inputs,
chatbot,
fake_input=fake_inputs,
display_append=display_append,
)
for chatbot, status_text in iter:
yield chatbot, status_text
else:
logging.debug("不使用流式传输")
chatbot, status_text = self.next_chatbot_at_once(
inputs,
chatbot,
fake_input=fake_inputs,
display_append=display_append,
)
yield chatbot, status_text
except Exception as e:
traceback.print_exc()
status_text = STANDARD_ERROR_MSG + beautify_err_msg(str(e))
yield chatbot, status_text
if len(self.history) > 1 and self.history[-1]["content"] != inputs:
logging.info(
"回答为:"
+ colorama.Fore.BLUE
+ f"{self.history[-1]['content']}"
+ colorama.Style.RESET_ALL
)
if limited_context:
# self.history = self.history[-4:]
# self.all_token_counts = self.all_token_counts[-2:]
self.history = []
self.all_token_counts = []
max_token = self.token_upper_limit - TOKEN_OFFSET
if sum(self.all_token_counts) > max_token and should_check_token_count:
count = 0
while (
sum(self.all_token_counts)
> self.token_upper_limit * REDUCE_TOKEN_FACTOR
and sum(self.all_token_counts) > 0
):
count += 1
del self.all_token_counts[0]
del self.history[:2]
logging.info(status_text)
status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话"
yield chatbot, status_text
self.auto_save(chatbot)
def retry(
self,
chatbot,
stream=False,
use_websearch=False,
files=None,
reply_language="中文",
):
logging.debug("重试中……")
if len(self.history) > 1:
inputs = self.history[-2]["content"]
del self.history[-2:]
if len(self.all_token_counts) > 0:
self.all_token_counts.pop()
elif len(chatbot) > 0:
inputs = chatbot[-1][0]
if '<div class="user-message">' in inputs:
inputs = inputs.split('<div class="user-message">')[1]
inputs = inputs.split("</div>")[0]
elif len(self.history) == 1:
inputs = self.history[-1]["content"]
del self.history[-1]
else:
yield chatbot, f"{STANDARD_ERROR_MSG}上下文是空的"
return
iter = self.predict(
inputs,
chatbot,
stream=stream,
use_websearch=use_websearch,
files=files,
reply_language=reply_language,
)
for x in iter:
yield x
logging.debug("重试完毕")
# def reduce_token_size(self, chatbot):
# logging.info("开始减少token数量……")
# chatbot, status_text = self.next_chatbot_at_once(
# summarize_prompt,
# chatbot
# )
# max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR
# num_chat = find_n(self.all_token_counts, max_token_count)
# logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats")
# chatbot = chatbot[:-1]
# self.history = self.history[-2*num_chat:] if num_chat > 0 else []
# self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else []
# msg = f"保留了最近{num_chat}轮对话"
# logging.info(msg)
# logging.info("减少token数量完毕")
# return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0])
def interrupt(self):
self.interrupted = True
def recover(self):
self.interrupted = False
def set_token_upper_limit(self, new_upper_limit):
self.token_upper_limit = new_upper_limit
print(f"token上限设置为{new_upper_limit}")
def set_temperature(self, new_temperature):
self.temperature = new_temperature
def set_top_p(self, new_top_p):
self.top_p = new_top_p
def set_n_choices(self, new_n_choices):
self.n_choices = new_n_choices
def set_stop_sequence(self, new_stop_sequence: str):
new_stop_sequence = new_stop_sequence.split(",")
self.stop_sequence = new_stop_sequence
def set_max_tokens(self, new_max_tokens):
self.max_generation_token = new_max_tokens
def set_presence_penalty(self, new_presence_penalty):
self.presence_penalty = new_presence_penalty
def set_frequency_penalty(self, new_frequency_penalty):
self.frequency_penalty = new_frequency_penalty
def set_logit_bias(self, logit_bias):
logit_bias = logit_bias.split()
bias_map = {}
encoding = tiktoken.get_encoding("cl100k_base")
for line in logit_bias:
word, bias_amount = line.split(":")
if word:
for token in encoding.encode(word):
bias_map[token] = float(bias_amount)
self.logit_bias = bias_map
def set_user_identifier(self, new_user_identifier):
self.user_identifier = new_user_identifier
def set_system_prompt(self, new_system_prompt):
self.system_prompt = new_system_prompt
def set_key(self, new_access_key):
if "*" not in new_access_key:
self.api_key = new_access_key.strip()
msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key)
logging.info(msg)
return self.api_key, msg
else:
return gr.update(), gr.update()
def set_single_turn(self, new_single_turn):
self.single_turn = new_single_turn
def reset(self):
self.history = []
self.all_token_counts = []
self.interrupted = False
self.history_file_path = new_auto_history_filename(self.user_identifier)
history_name = self.history_file_path[:-5]
choices = [history_name] + get_history_names(self.user_identifier)
return [], self.token_message([0]), gr.Radio.update(choices=choices, value=history_name), ""
def delete_first_conversation(self):
if self.history:
del self.history[:2]
del self.all_token_counts[0]
return self.token_message()
def delete_last_conversation(self, chatbot):
if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]:
msg = "由于包含报错信息,只删除chatbot记录"
chatbot.pop()
return chatbot, self.history
if len(self.history) > 0:
self.history.pop()
self.history.pop()
if len(chatbot) > 0:
msg = "删除了一组chatbot对话"
chatbot.pop()
if len(self.all_token_counts) > 0:
msg = "删除了一组对话的token计数记录"
self.all_token_counts.pop()
msg = "删除了一组对话"
self.auto_save(chatbot)
return chatbot, msg
def token_message(self, token_lst=None):
if token_lst is None:
token_lst = self.all_token_counts
token_sum = 0
for i in range(len(token_lst)):
token_sum += sum(token_lst[: i + 1])
return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens"
def rename_chat_history(self, filename, chatbot, user_name):
if filename == "":
return gr.update()
if not filename.endswith(".json"):
filename += ".json"
self.delete_chat_history(self.history_file_path, user_name)
        # check for duplicate file names
repeat_file_index = 2
full_path = os.path.join(HISTORY_DIR, user_name, filename)
while os.path.exists(full_path):
full_path = os.path.join(HISTORY_DIR, user_name, f"{repeat_file_index}_{filename}")
repeat_file_index += 1
filename = os.path.basename(full_path)
self.history_file_path = filename
save_file(filename, self.system_prompt, self.history, chatbot, user_name)
return init_history_list(user_name)
def auto_name_chat_history(self, name_chat_method, user_question, chatbot, user_name, single_turn_checkbox):
if len(self.history) == 2 and not single_turn_checkbox:
user_question = self.history[0]["content"]
filename = replace_special_symbols(user_question)[:16] + ".json"
return self.rename_chat_history(filename, chatbot, user_name)
else:
return gr.update()
def auto_save(self, chatbot):
save_file(self.history_file_path, self.system_prompt,
self.history, chatbot, self.user_identifier)
def export_markdown(self, filename, chatbot, user_name):
if filename == "":
return
if not filename.endswith(".md"):
filename += ".md"
save_file(filename, self.system_prompt, self.history, chatbot, user_name)
def load_chat_history(self, new_history_file_path=None, username=None):
logging.debug(f"{self.user_identifier} 加载对话历史中……")
if new_history_file_path is not None:
if type(new_history_file_path) != str:
# copy file from new_history_file_path.name to os.path.join(HISTORY_DIR, self.user_identifier)
new_history_file_path = new_history_file_path.name
shutil.copyfile(new_history_file_path, os.path.join(
HISTORY_DIR, self.user_identifier, os.path.basename(new_history_file_path)))
self.history_file_path = os.path.basename(new_history_file_path)
else:
self.history_file_path = new_history_file_path
try:
if self.history_file_path == os.path.basename(self.history_file_path):
history_file_path = os.path.join(
HISTORY_DIR, self.user_identifier, self.history_file_path)
else:
history_file_path = self.history_file_path
if not self.history_file_path.endswith(".json"):
history_file_path += ".json"
with open(history_file_path, "r", encoding="utf-8") as f:
json_s = json.load(f)
try:
if type(json_s["history"][0]) == str:
logging.info("历史记录格式为旧版,正在转换……")
new_history = []
for index, item in enumerate(json_s["history"]):
if index % 2 == 0:
new_history.append(construct_user(item))
else:
new_history.append(construct_assistant(item))
json_s["history"] = new_history
logging.info(new_history)
except:
pass
logging.debug(f"{self.user_identifier} 加载对话历史完毕")
self.history = json_s["history"]
return os.path.basename(self.history_file_path), json_s["system"], json_s["chatbot"]
except:
            # no chat history found, or parsing the history file failed
logging.info(f"没有找到对话历史记录 {self.history_file_path}")
return self.history_file_path, "", []
def delete_chat_history(self, filename, user_name):
if filename == "CANCELED":
return gr.update(), gr.update(), gr.update()
if filename == "":
return i18n("你没有选择任何对话历史"), gr.update(), gr.update()
if not filename.endswith(".json"):
filename += ".json"
if filename == os.path.basename(filename):
history_file_path = os.path.join(HISTORY_DIR, user_name, filename)
else:
history_file_path = filename
try:
os.remove(history_file_path)
return i18n("删除对话历史成功"), get_history_list(user_name), []
except:
logging.info(f"删除对话历史失败 {history_file_path}")
return i18n("对话历史")+filename+i18n("已经被删除啦"), get_history_list(user_name), []
def auto_load(self):
filepath = get_history_filepath(self.user_identifier)
if not filepath:
self.history_file_path = new_auto_history_filename(
self.user_identifier)
else:
self.history_file_path = filepath
filename, system_prompt, chatbot = self.load_chat_history()
filename = filename[:-5]
return filename, system_prompt, chatbot
def like(self):
"""like the last response, implement if needed
"""
return gr.update()
def dislike(self):
"""dislike the last response, implement if needed
"""
return gr.update()
class Base_Chat_Langchain_Client(BaseLLMModel):
def __init__(self, model_name, user_name=""):
super().__init__(model_name, user=user_name)
self.need_api_key = False
self.model = self.setup_model()
def setup_model(self):
# inplement this to setup the model then return it
pass
def _get_langchain_style_history(self):
history = [SystemMessage(content=self.system_prompt)]
for i in self.history:
if i["role"] == "user":
history.append(HumanMessage(content=i["content"]))
elif i["role"] == "assistant":
history.append(AIMessage(content=i["content"]))
return history
def get_answer_at_once(self):
assert isinstance(
self.model, BaseChatModel), "model is not instance of LangChain BaseChatModel"
history = self._get_langchain_style_history()
        # calling the chat model directly returns a single AIMessage; generate() expects a batch of message lists
        response = self.model(history)
        return response.content, len(response.content)
def get_answer_stream_iter(self):
it = CallbackToIterator()
assert isinstance(
self.model, BaseChatModel), "model is not instance of LangChain BaseChatModel"
history = self._get_langchain_style_history()
def thread_func():
self.model(messages=history, callbacks=[
ChuanhuCallbackHandler(it.callback)])
it.finish()
t = Thread(target=thread_func)
t.start()
partial_text = ""
for value in it:
partial_text += value
yield partial_text
| [
"content",
"Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN PLACEHOLDER:"
] |
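The context-trimming loop near the end of predict() only needs the per-turn token counts; a tiny sketch of the same policy, with invented values both for the counts and for the TOKEN_OFFSET / REDUCE_TOKEN_FACTOR constants (which live in presets.py and are not shown here):

TOKEN_UPPER_LIMIT = 4096
TOKEN_OFFSET = 1000          # assumed stand-in for the preset constant
REDUCE_TOKEN_FACTOR = 0.5    # assumed stand-in: keep at most half the window after trimming

all_token_counts = [900, 1200, 800, 700, 600]                        # one entry per exchange
history = [f"turn-{i}" for i in range(2 * len(all_token_counts))]    # user/assistant message pairs

if sum(all_token_counts) > TOKEN_UPPER_LIMIT - TOKEN_OFFSET:
    dropped = 0
    while (sum(all_token_counts) > TOKEN_UPPER_LIMIT * REDUCE_TOKEN_FACTOR
           and sum(all_token_counts) > 0):
        dropped += 1
        del all_token_counts[0]   # forget the oldest exchange's token count...
        del history[:2]           # ...and its user/assistant messages
    print(f"dropped {dropped} oldest exchanges; {sum(all_token_counts)} tokens kept")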
2024-01-10 | wcventure/ChuanhuChatGPT | modules~shared.py | from .presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST
import os
import queue
import openai
class State:
interrupted = False
multi_api_key = False
completion_url = COMPLETION_URL
balance_api_url = BALANCE_API_URL
usage_api_url = USAGE_API_URL
def interrupt(self):
self.interrupted = True
def recover(self):
self.interrupted = False
def set_api_host(self, api_host: str):
api_host = api_host.rstrip("/")
if not api_host.startswith("http"):
api_host = f"https://{api_host}"
if api_host.endswith("/v1"):
api_host = api_host[:-3]
self.completion_url = f"{api_host}/v1/chat/completions"
self.balance_api_url = f"{api_host}/dashboard/billing/credit_grants"
self.usage_api_url = f"{api_host}/dashboard/billing/usage"
os.environ["OPENAI_API_BASE"] = api_host
def reset_api_host(self):
self.completion_url = COMPLETION_URL
self.balance_api_url = BALANCE_API_URL
self.usage_api_url = USAGE_API_URL
os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}"
return API_HOST
def reset_all(self):
self.interrupted = False
self.completion_url = COMPLETION_URL
def set_api_key_queue(self, api_key_list):
self.multi_api_key = True
self.api_key_queue = queue.Queue()
for api_key in api_key_list:
self.api_key_queue.put(api_key)
def switching_api_key(self, func):
if not hasattr(self, "api_key_queue"):
return func
def wrapped(*args, **kwargs):
api_key = self.api_key_queue.get()
args[0].api_key = api_key
ret = func(*args, **kwargs)
self.api_key_queue.put(api_key)
return ret
return wrapped
state = State()
modules_path = os.path.dirname(os.path.realpath(__file__))
chuanhu_path = os.path.dirname(modules_path)
assets_path = os.path.join(chuanhu_path, "web_assets") | [] |
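switching_api_key() rotates keys through a queue around every call; a compact sketch of that round-robin behaviour with dummy keys and a dummy client (names are placeholders):

import queue

class DummyState:
    def __init__(self, keys):
        self.api_key_queue = queue.Queue()
        for k in keys:
            self.api_key_queue.put(k)

    def switching_api_key(self, func):
        def wrapped(client, *args, **kwargs):
            api_key = self.api_key_queue.get()    # take the next key off the queue...
            client.api_key = api_key
            result = func(client, *args, **kwargs)
            self.api_key_queue.put(api_key)       # ...and return it for the next call
            return result
        return wrapped

class DummyClient:
    api_key = None

state = DummyState(["key-A", "key-B"])
which_key = state.switching_api_key(lambda client: client.api_key)
client = DummyClient()
print(which_key(client), which_key(client), which_key(client))   # key-A key-B key-A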
2024-01-10 | wcventure/ChuanhuChatGPT | modules~index_func.py | import os
import logging
import hashlib
import PyPDF2
from tqdm import tqdm
from .presets import *
from .utils import *
from .config import local_embedding
def get_documents(file_src):
from langchain.schema import Document
from langchain.text_splitter import TokenTextSplitter
text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30)
documents = []
logging.debug("Loading documents...")
logging.debug(f"file_src: {file_src}")
for file in file_src:
filepath = file.name
filename = os.path.basename(filepath)
file_type = os.path.splitext(filename)[1]
logging.info(f"loading file: {filename}")
texts = None
try:
if file_type == ".pdf":
logging.debug("Loading PDF...")
try:
from modules.pdf_func import parse_pdf
from modules.config import advance_docs
two_column = advance_docs["pdf"].get("two_column", False)
pdftext = parse_pdf(filepath, two_column).text
except:
pdftext = ""
with open(filepath, "rb") as pdfFileObj:
pdfReader = PyPDF2.PdfReader(pdfFileObj)
for page in tqdm(pdfReader.pages):
pdftext += page.extract_text()
texts = [Document(page_content=pdftext,
metadata={"source": filepath})]
elif file_type == ".docx":
logging.debug("Loading Word...")
from langchain.document_loaders import UnstructuredWordDocumentLoader
loader = UnstructuredWordDocumentLoader(filepath)
texts = loader.load()
elif file_type == ".pptx":
logging.debug("Loading PowerPoint...")
from langchain.document_loaders import UnstructuredPowerPointLoader
loader = UnstructuredPowerPointLoader(filepath)
texts = loader.load()
elif file_type == ".epub":
logging.debug("Loading EPUB...")
from langchain.document_loaders import UnstructuredEPubLoader
loader = UnstructuredEPubLoader(filepath)
texts = loader.load()
elif file_type == ".xlsx":
logging.debug("Loading Excel...")
text_list = excel_to_string(filepath)
texts = []
for elem in text_list:
texts.append(Document(page_content=elem,
metadata={"source": filepath}))
else:
logging.debug("Loading text file...")
from langchain.document_loaders import TextLoader
loader = TextLoader(filepath, "utf8")
texts = loader.load()
except Exception as e:
import traceback
logging.error(f"Error loading file: {filename}")
traceback.print_exc()
if texts is not None:
texts = text_splitter.split_documents(texts)
documents.extend(texts)
logging.debug("Documents loaded.")
return documents
def construct_index(
api_key,
file_src,
max_input_size=4096,
num_outputs=5,
max_chunk_overlap=20,
chunk_size_limit=600,
embedding_limit=None,
separator=" ",
):
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import FAISS
'''
if api_key:
os.environ["OPENAI_API_KEY"] = api_key
else:
        # Because of an unfortunate design in one of the dependencies, an API KEY must be set here
os.environ["OPENAI_API_KEY"] = "sk-xxxxxxx"
'''
chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit
embedding_limit = None if embedding_limit == 0 else embedding_limit
separator = " " if separator == "" else separator
index_name = get_file_hash(file_src)
index_path = f"./index/{index_name}"
if local_embedding:
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(
model_name="sentence-transformers/distiluse-base-multilingual-cased-v2")
else:
from langchain.embeddings import OpenAIEmbeddings
if os.environ.get("OPENAI_API_TYPE", "openai") == "openai":
embeddings = OpenAIEmbeddings(openai_api_base=os.environ.get(
"OPENAI_API_BASE", None), openai_api_key=os.environ.get("OPENAI_EMBEDDING_API_KEY", api_key))
else:
embeddings = OpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"], openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
model=os.environ["AZURE_EMBEDDING_MODEL_NAME"], openai_api_base=os.environ["AZURE_OPENAI_API_BASE_URL"], openai_api_type="azure")
if os.path.exists(index_path):
logging.info("找到了缓存的索引文件,加载中……")
return FAISS.load_local(index_path, embeddings)
else:
try:
documents = get_documents(file_src)
logging.info("构建索引中……")
with retrieve_proxy():
index = FAISS.from_documents(documents, embeddings)
logging.debug("索引构建完成!")
os.makedirs("./index", exist_ok=True)
index.save_local(index_path)
logging.debug("索引已保存至本地!")
return index
except Exception as e:
import traceback
logging.error("索引构建失败!%s", e)
traceback.print_exc()
return None
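# Minimal usage sketch (illustrative only; ``uploaded_file`` stands for a file object with a
# ``.name`` attribute and the API key is a placeholder):
#
#   index = construct_index(api_key="sk-...", file_src=[uploaded_file])
#   if index is not None:
#       for doc in index.similarity_search("What does chapter 2 cover?", k=4):
#           print(doc.metadata["source"], doc.page_content[:80])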
| [] |
2024-01-10 | yknishidate/pape | summarize.py | import openai
import dotenv
import os
dotenv.load_dotenv()
openai.organization = os.getenv("OPENAI_ORGANIZATION")
openai.api_key = os.getenv("OPENAI_API_KEY")
def summarize(title, abstract):
system = """与えられた論文の要点を3点のみでまとめ、以下のフォーマットで日本語で出力してください。```
タイトルの日本語訳
・要点1
・要点2
・要点3
```"""
text = f"title: {title}\nabstract: {abstract}"
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{'role': 'system', 'content': system},
{'role': 'user', 'content': text}
],
temperature=0.25,
)
return response['choices'][0]['message']['content']
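# Minimal usage sketch (illustrative only; the title and abstract below are placeholders):
#
#   if __name__ == "__main__":
#       print(summarize("Attention Is All You Need",
#                       "The dominant sequence transduction models are based on ..."))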
| [
"title: PLACEHOLDER\nabstract: PLACEHOLDER",
"与えられた論文の要点を3点のみでまとめ、以下のフォーマットで日本語で出力してください。```\n タイトルの日本語訳\n ・要点1\n ・要点2\n ・要点3\n ```"
] |
2024-01-10 | 4Everlighting/TranslatorThreeThousand | jarvis~jarvis.py | import speech_recognition as sr
import openai, asyncio, edge_tts, pyttsx3, os, subprocess
import RPi_I2C_driver
WRITE_AUDIO_FILE = False
PLAY_AUDIO_WITH_VLC = False
PLAY_AUDIO_WITH_EDGE_TTS = True
VOICE = "en-GB-ThomasNeural"
OUTPUT_FILE = "message"
CHAT_GPT_MODEL="gpt-3.5-turbo-0613"
openai.api_key = "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"
VLC_PATH = "C:\\Program Files\\VideoLAN\\VLC\\vlc.exe"
messages = []
rec = sr.Recognizer()
assistant="You are Jarvis assistant. Address me as Sir"
messages.append({"role": "system", "content": assistant})
engine = pyttsx3.init()
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[0].id)
mylcd = RPi_I2C_driver.lcd()
async def _main() -> None:
rec = sr.Recognizer()
with sr.Microphone() as source:
engine.say('What would you like to know?')
engine.runAndWait()
print("\nWhat would you like to know?")
audio = rec.listen(source)
try:
print(" *** Interpretting message ***")
message = rec.recognize_google(audio, language='en-in')
print(" *** Interpretted message ***")
if message.lower() == "exit":
print("\nGoodbye!")
exit()
else:
print("JP: " + message)
print("Processing......")
messages.append({"role": "user", "content": message})
chat = openai.ChatCompletion.create(
model=CHAT_GPT_MODEL,
messages=messages,
temperature=0.5,
max_tokens=500,
)
reply = chat.choices[0].message.content
messages.append({"role": "assistant", "content": reply})
print("\nJarvis : ---------------------------------------------\n")
print(f" *** {len(reply)} byte chat gpt response: \"{reply}\"")
if WRITE_AUDIO_FILE:
communicate = edge_tts.Communicate(reply, VOICE)
f = f"{OUTPUT_FILE}.mp3"
print("writing audio file...")
await communicate.save(f)
print(f"wrote audio file to {f}!")
if PLAY_AUDIO_WITH_VLC:
subprocess.call([VLC_PATH,f])
if PLAY_AUDIO_WITH_EDGE_TTS:
print("playing audio file")
engine.say(reply)
engine.runAndWait()
print("played audio file")
except Exception as e:
print("An error has occurred: {}".format(e))
if __name__ == "__main__":
loop = asyncio.get_event_loop()
try:
loop.run_until_complete(_main())
finally:
loop.close()
| [
"You are Jarvis assistant. Address me as Sir"
] |
2024-01-10 | DeskFanzin/RPGAdventureAI | wsgi.py | ##Trabalho feito por André Maurell - 142365 e Gabriel Martins - 142356
##Gerador de aventura de RPG! Completamente automático, com imagens!!!
from flask import Flask, render_template, request, redirect, session, url_for, jsonify
import openai
import random
from openai.error import RateLimitError
## In the case of this project, the gunicorn threads are declared in the Procfile in this folder.
app = Flask(__name__)
app.secret_key = "mysecretkey" # adicionando chave secreta para usar sessões
openai.api_key = '' ## OpenAI API key goes here
##Main route, where the user chooses the name and the adventure, and the program generates the story.
@app.route("/", methods=("GET", "POST"))
def index():
session['count'] = 0
if request.method == "POST":
name = request.form["adventure"]
response = openai.Completion.create(
model="gpt-3.5-turbo-instruct",
prompt=generate_prompt(name),
temperature=1,
max_tokens=150,
)
session["current_story_state"] = response.choices[0].text
return redirect(url_for("result", result=response.choices[0].text, background=None, character=None))
return render_template("index.html")
##Route to generate the character image, using the OpenAI API.
@app.route("/character", methods=("GET", "POST"))
def character():
if request.method == "POST":
try:
character_text = request.form["character_text"]
character_image = openai.Image.create(prompt=str(character_text), n=1, size="256x256")
session["character_image"] = character_image['data'][0]['url']
return jsonify({"character_image": character_image['data'][0]['url']})
except Exception as e:
return jsonify({"error": str(e)}), 500
##Route to generate the background image, using the OpenAI API.
@app.route("/background", methods=("POST",))
def background():
if request.method == "POST":
try:
background_text = request.form["background_text"]
background_image = openai.Image.create(prompt=str(background_text), n=1, size="1024x1024")
session["background_image"] = background_image['data'][0]['url']
return jsonify({"background_image": background_image['data'][0]['url']})
except Exception as e:
return jsonify({"error": str(e)}), 500
##Image generation function, using the OpenAI API.
## we have to use <img src="{{ result }}" alt="result" />
## so that the image is rendered on the html page
##Route to generate the story with the options.
@app.route("/result", methods=("GET", "POST"))
def result():
background_image = session.get("background_image", "")
# Pegando a URL da imagem do personagem da sessão
character_image = session.get("character_image", None)
if request.method == "POST":
if session['count'] == 2:
choice = request.form["choice"]
current_story_state = session.get("current_story_state", "")
new_story_state = update_story_state(current_story_state, choice)
session["current_story_state"] = new_story_state
try:
response = openai.Completion.create(
model="gpt-3.5-turbo-instruct",
prompt=generate_prompt3(),
temperature=0.8,
max_tokens=200,
)
except RateLimitError:
return "<h1>Espera um pouco, você está fazendo muitas requisições!</h1> <h2> volte para a página anterior a esta e tente novamente</h2>"
session["current_story_state"] = response.choices[0].text
return redirect(url_for("ending", result=response.choices[0].text, background_image=background_image, character_image=character_image))
else:
choice = request.form["choice"]
current_story_state = session.get("current_story_state", "")
new_story_state = update_story_state(current_story_state, choice)
session["current_story_state"] = new_story_state
response = openai.Completion.create(
model="gpt-3.5-turbo-instruct",
prompt=generate_prompt2(choice),
temperature=0.8,
max_tokens=200,
)
session["current_story_state"] = response.choices[0].text
session['count'] += 1
return redirect(url_for("result", result=response.choices[0].text, background_image=background_image, character_image=character_image))
result = request.args.get("result")
return render_template("result.html", result=result, background_image=background_image, character_image=character_image)
##Route to generate the story ending, with the villain's image.
@app.route("/ending", methods=("GET", "POST"))
def ending():
if request.method == "POST":
try:
if request.form["diceroll"] == "diceroll":
return redirect(url_for("start_battle"))
except KeyError:
pass
else:
background_image = session.get("background_image", "")
# Pegando a URL da imagem do personagem da sessão
character_image = session.get("character_image", None)
        #creating the openai prompt to generate the villain image
image_prompt = openai.Completion.create(
model="gpt-3.5-turbo-instruct",
prompt=generate_prompt_image(session['current_story_state']),
temperature=1,
max_tokens=200,
)
image_prompt = image_prompt.choices[0].text
boss_image = openai.Image.create(prompt=str(image_prompt), n=1, size="256x256")
session["boss_image"] = boss_image['data'][0]['url']
result = request.args.get("result")
return render_template("ending.html", result=result, background_image=background_image, character_image=character_image, boss_image=boss_image['data'][0]['url'])
@app.route("/start_battle", methods=("GET",))
def start_battle():
    # Initialize the user's and the boss's health points
session["user_life"] = 10
session["boss_life"] = 20
session["user"] = 0
session["boss"] = 0
return redirect(url_for("battle"))
##Route to run the battle, with the dice roll and the health points.
@app.route("/battle", methods=("GET", "POST"))
def battle():
background_image = session.get("background_image", "")
boss_image = session.get("boss_image", "")
character_image = session.get("character_image", "")
user_life = session.get("user_life", 10)
boss_life = session.get("boss_life", 20)
user = session.get("user", 0)
boss = session.get("boss", 0)
if request.method == "POST":
attack_or_defend = request.form["attack"]
if attack_or_defend == "attack":
            # Simulate the user's attack with a dice roll from 0 to 10
user_attack = random.randint(0, 10)
user = user_attack
            # Reduce the boss's health based on the user's attack
boss_life -= user_attack
if user_attack == 0:
user = "Você errou o ataque!"
            # Simulate the boss's attack with a dice roll from 0 to 5
boss_attack = random.randint(0, 5)
boss = boss_attack
            # Reduce the user's health based on the boss's attack
user_life -= boss_attack
if boss_attack == 0:
boss = "O boss errou o ataque!"
elif attack_or_defend == "defend":
            # Simulate the user's defense with a dice roll from 0 to 8
user_defense = random.randint(0, 8)
user = user_defense
            # Reduce the boss's attack based on the user's defense
boss_attack = random.randint(0, 5) - user_defense
boss = boss_attack
            # Reduce the user's health based on the boss's attack
user_life -= boss_attack
if boss_attack == 0:
boss = "O boss errou o ataque!"
        # Update the health values in the session
session["user_life"] = user_life
session["boss_life"] = boss_life
session["user"] = user
session["boss"] = boss
        # Check whether someone has won or lost
if user_life <= 0:
return redirect(url_for("game_over", result="Infelizmente você acabou sucumbindo para o boss!", background_image=background_image))
elif boss_life <= 0:
return redirect(url_for("game_over", result="Parabéns jogador, você derrotou o boss!", background_image=background_image))
return render_template("battle.html", user_life=user_life, boss_life=boss_life, user=user, boss=boss, background_image=background_image, boss_image=boss_image, character_image=character_image)
return render_template("battle.html", user_life=user_life, boss_life=boss_life, user=user, boss=boss, background_image=background_image, boss_image=boss_image, character_image=character_image)
##Route for the game over screen, with the ending of the story.
@app.route("/game_over/<result>", methods=("GET",))
def game_over(result):
session['count'] = 0
    #resetting the session variables
session["character"] = None
session["background_image"] = None
session["character_image"] = None
if result == "Infelizmente você acabou sucumbindo para o boss!":
ending = openai.Completion.create(
model="gpt-3.5-turbo-instruct",
prompt=generate_prompt_badending(),
temperature=1,
max_tokens=200,
)
ending_image = openai.Image.create(prompt=str(ending.choices[0].text), n=1, size="256x256")
session["current_story_state"] = None
return render_template("game_over.html", result=ending.choices[0].text, ending_image=ending_image['data'][0]['url'])
elif result == "Parabéns jogador, você derrotou o boss!":
ending = openai.Completion.create(
model="gpt-3.5-turbo-instruct",
prompt=generate_prompt_goodending(),
temperature=1,
max_tokens=200,
)
ending_image = openai.Image.create(prompt=str(ending.choices[0].text), n=1, size="256x256")
session["current_story_state"] = None
return render_template("game_over.html", result=ending.choices[0].text, ending_image=ending_image['data'][0]['url'])
##Function to update the story state according to the user's choice.
def update_story_state(current_state, choice):
# get the options from the string new_story_state
session["option1"] = current_state.split("1-")[1].split(",")[0]
session["option2"] = current_state.split("2-")[1].split(".")[0]
#setting the current_state without the options
session["current_state_woptions"] = current_state.split("1-")[0]
option1_text = session.get("option1", "")
option2_text = session.get("option2", "")
if choice == "1":
new_state = current_state + option1_text
elif choice == "2":
new_state = current_state + option2_text
else:
new_state = current_state
return new_state
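# Worked example of the option parsing above (illustrative only; it needs an active Flask
# request context because the parsed options are cached in ``session``):
#
#   state = "Você entra na caverna e vê um dragão. 1- Lutar com o dragão, 2- Fugir da caverna."
#   update_story_state(state, "1")  # returns the state with " Lutar com o dragão" appended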
##Function to generate the initial prompt, according to the adventure chosen by the user.
def generate_prompt(adventure):
return f"""Você é um mestre de RPG e está criando uma aventura para um jogador. Ele escolhe seu nome e dá o início da aventura. Continue a história, gerando, a seu critério, entre 30 a 100 palavras e dê 2 opções do que fazer.
Nome: Gandalf, sou um mago atrás do meu chapéu.
Aventura: Você, atrás do seu chapéu há algumas semanas, finalmente achou uma pista de onde ele está. Você está em uma floresta, e vê uma caverna. Você entra na caverna e vê um dragão. 1- Lutar com o dragão, 2- Fugir da caverna.
Nome: Aragorn, sou um guerreiro atrás de uma espada mágica.
Aventura: Após você, Aragorn, sair da taverna, vai direto para a floresta, atrás de sua espada. Você encontra um esqueleto, e ele pode ter dicas de onde sua espada está. 1- Perguntar ao esqueleto, 2- Ir atrás de mais pistas.
Nome: {adventure.capitalize()}
Aventura: """
##Function to generate the battle prompt, according to the adventure option chosen by the user.
def generate_prompt2(choice):
current_story_state = session.get("current_state_woptions", "")
option1 = session.get("option1", "")
option2 = session.get("option2", "")
return f"""De acordo com sua escolha anterior, o usuário optou por fazer uma ação. Agora, continue a história, tente dar um rumo para um possível final, e sempre forneça 2 opções do que fazer. Gere entre 30 a 100 palavras.
Exemplos (se tiver mais de 100 palavras, tente gerar menos palavras na próxima vez):
Opção: 1
Aventura: {current_story_state}. 1-{option1}, 2-{option2}.
Opção: {choice.capitalize()}
Aventura:"""
##Function to generate the story-ending prompt, based on the user's previous adventure.
def generate_prompt3():
current_story_state = session.get("current_state_woptions", "")
return f""" A história está acabando! Crie um confronto final, de acordo com a história previamente gerada, onde o usuário deverá batalhar. O vilão tera 20 de vida e deixe o usuário agir. Gere entre 30 a 100 palavras. (voce deve somente criar o confronto, não
pode gerar o resultado.)
História antiga: Você entra nas catacumbas atrás de seu chapéu, tem muitos esqueletos no chão.
Final: Um esqueleto gigante aparece, e ele está com seu chapéu! Você tem que derrotá-lo para pegar seu chapéu de volta! O esqueleto tem 20 de vida.
História antiga: Você entra na caverna e vê um dragão.
Final: O dragão está dormindo, e você tem que pegar sua espada de volta. Você pega sua espada e o dragão acorda! Ele tem 20 de vida.
História antiga: {current_story_state}
Final:"""
##Function to generate the bad ending of the story, based on the user's previous adventure.
def generate_prompt_badending():
current_story_state = session.get("current_state_woptions", "")
return f"""O usuário perdeu a batalha contra o chefe, Gere o final da história, de acordo com a história previamente gerada. Gere entre 30 a 100 palavras.
História antiga: Um esqueleto gigante aparece, e ele está com seu chapéu! Você tem que derrotá-lo para pegar seu chapéu de volta! O esqueleto tem 20 de vida.
Final: O esqueleto te derrota e você, derrotado, foge de volta para a cidade. Você nunca mais vê seu chapéu.
História antiga: O dragão está dormindo, e você tem que pegar sua espada de volta. Você pega sua espada e o dragão acorda! Ele tem 20 de vida.
Final: O dragão te derrota mas você consegue fugir, e vive como um guerreiro que ainda perambula atrás de sua espada.
História antiga: {current_story_state}
Final:"""
##Function to generate the good ending of the story, based on the user's previous adventure.
def generate_prompt_goodending():
current_story_state = session.get("current_state_woptions", "")
return f"""O usuário ganhou a batalha contra o chefe, Gere o final da história, de acordo com a história previamente gerada. Gere entre 30 a 100 palavras.
História antiga: Um esqueleto gigante aparece, e ele está com seu chapéu! Você tem que derrotá-lo para pegar seu chapéu de volta! O esqueleto tem 20 de vida.
Final: Você derrota o esqueleto e pega seu chapéu de volta! Você volta para a cidade e vive como um mago com seu querido chapéu.
História antiga: O dragão está dormindo, e você tem que pegar sua espada de volta. Você pega sua espada e o dragão acorda! Ele tem 20 de vida.
Final: Você derrota o dragão e pega sua espada de volta! Com ela, você se torna um guerreiro lendário da sua aldeia.
História antiga: {current_story_state}
Final:"""
##Function to generate the villain image prompt, based on the user's previous adventure.
def generate_prompt_image(original_prompt):
return f"""Você recebe um texto de entrada, transforme este texto em um prompt para gerar uma imagem.
Texto recebido: "Um dragão aparece na sua frente, ele tem 20 de vida. Você tem que derrotá-lo para pegar sua espada de volta."
Prompt: Um dragão poderoso em uma batalha.
Texto recebido: Você recebeu o texto: "Um esqueleto aparece na sua frente, ele tem 20 de vida. Você tem que derrotá-lo para pegar seu chapéu de volta."
Prompt: Um esqueleto guerreiro.
Texto recebido: {original_prompt}
Prompt:"""
if __name__ == "__main__":
app.run(host='0.0.0.0', port='5000', debug=True)
| [
"current_story_state",
"gpt-3.5-turbo-instruct"
] |
2024-01-10 | ewave33/generative-ai-application-builder-on-aws | source~lambda~chat~shared~memory~ddb_chat_memory.py | #!/usr/bin/env python
######################################################################################################################
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
# with the License. A copy of the License is located at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES #
# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
# and limitations under the License. #
######################################################################################################################
from typing import Any, Dict, List, Optional, Tuple
from aws_lambda_powertools import Logger
from langchain.memory.chat_memory import BaseChatMemory
from langchain.memory.utils import get_prompt_input_key
from langchain.schema import get_buffer_string
from shared.memory.ddb_enhanced_message_history import DynamoDBChatMessageHistory
from utils.enum_types import ConversationMemoryTypes
logger = Logger(utc=True)
class DynamoDBChatMemory(BaseChatMemory):
"""A chat memory interface which uses DynamoDb as the backing store."""
# Mimicking ConversationBufferMemory and other such memory classes provided by langchain
memory_type: ConversationMemoryTypes = ConversationMemoryTypes.DynamoDB.value
memory_key: str #: :meta private:
input_key: Optional[str] = None
human_prefix: str = "Human"
ai_prefix: Optional[str] = "AI"
output_key: Optional[str] = None
def __init__(
self,
chat_message_history: DynamoDBChatMessageHistory,
memory_key: Optional[str] = None,
input_key: Optional[str] = None,
output_key: Optional[str] = None,
human_prefix: Optional[str] = None,
ai_prefix: Optional[str] = None,
return_messages: bool = False,
) -> None:
"""
Args:
chat_message_history (DynamoDBChatMessageHistory): The chat message history object which will store the
conversation in DynamoDB
memory_key (str, optional): The key to use for the memory. Defaults to "history".
input_key (str, optional): The key to use for the input. Defaults to "input".
output_key (str, optional): The key to use for the output. Defaults to None.
human_prefix (str, optional): The prefix to use for human messages. Defaults to "Human".
ai_prefix (str, optional): The prefix to use for AI messages. Defaults to "AI".
Raises:
ValueError: If the chat_message_history is not a DynamoDBChatMessageHistory object.
"""
memory_key = memory_key if memory_key else "history"
input_key = input_key if input_key else "input"
super().__init__(
memory_key=memory_key, input_key=input_key, output_key=output_key, return_messages=return_messages
)
self.human_prefix = human_prefix if human_prefix else self.human_prefix
self.ai_prefix = ai_prefix if ai_prefix else self.ai_prefix
self.chat_memory = chat_message_history
@property
def buffer(self) -> Any:
"""Returns the buffer memory.
Args: None
Returns:
Any: The buffer memory containing conversation history.
"""
if self.return_messages:
return self.chat_memory.messages
else:
return get_buffer_string(
self.chat_memory.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
"""Return history buffer. Implementation of the abstract method."""
return {self.memory_key: self.buffer}
@property
def memory_variables(self) -> List[str]:
"""
Returns list of memory variables.
Args: None
Returns:
List[str]: The list of memory variables.
"""
return [self.memory_key]
def _get_input_output(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> Tuple[str, str]:
"""
Fetches the input and outputs based on the prompt or conversation memory input/output keys
Raises a warning if the multiple output keys are provided.
Args:
inputs (Dict[str, Any]): The inputs from the prompt or conversation memory
outputs (Dict[str, str]): The outputs from the prompt or conversation memory
Returns:
Tuple[str, str]: The input and output strings
Examples:
>>> inputs = {"input": "Hello assistant"}
>>> outputs = {"output": "Hi human"}
>>> get_input_output(inputs, outputs)
("Hello assistant", "Hi human")
"""
if self.input_key is None:
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
else:
prompt_input_key = self.input_key
if self.output_key:
output_key = self.output_key
return inputs[prompt_input_key], outputs[output_key]
selected_keys = outputs.keys()
        if len(outputs) != 1 and "source_documents" in outputs:
            logger.debug("Removing source documents from outputs.")
            selected_keys = list(set(selected_keys) - {"source_documents"})
        # If the length of selected_keys is still not one, take the first key and move ahead.
        selected_keys = list(selected_keys)
        if len(selected_keys) != 1:
            logger.warning(f"One output key expected, got {outputs.keys()}. Taking the first one.")
        output_key = selected_keys[0]
return inputs[prompt_input_key], outputs[output_key]
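# Minimal usage sketch (illustrative only; ``chat_history`` stands for a
# DynamoDBChatMessageHistory instance configured elsewhere):
#
#   memory = DynamoDBChatMemory(chat_message_history=chat_history, memory_key="history", input_key="input")
#   memory.save_context({"input": "Hello assistant"}, {"output": "Hi human"})
#   print(memory.load_memory_variables({}))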
| [] |
2024-01-10 | ewave33/generative-ai-application-builder-on-aws | source~lambda~chat~shared~memory~ddb_enhanced_message_history.py | #!/usr/bin/env python
######################################################################################################################
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. #
# #
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance #
# with the License. A copy of the License is located at #
# #
# http://www.apache.org/licenses/LICENSE-2.0 #
# #
# or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES #
# OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions #
# and limitations under the License. #
######################################################################################################################
import os
import time
from typing import List
from aws_lambda_powertools import Logger, Tracer
from botocore.exceptions import ClientError
from helper import get_service_resource
from langchain.schema import (
BaseChatMessageHistory,
BaseMessage,
_message_to_dict,
messages_from_dict,
messages_to_dict,
)
from utils.constants import DDB_MESSAGE_TTL_ENV_VAR, DEFAULT_DDB_MESSAGE_TTL, TRACE_ID_ENV_VAR
from utils.enum_types import ConversationMemoryTypes
logger = Logger(utc=True)
tracer = Tracer()
class DynamoDBChatMessageHistory(BaseChatMessageHistory):
"""Class which handles both chat message history and context management, storing data in AWS DynamoDB.
This class expects that a DynamoDB table with name `table_name`
and a partition Key of `UserId` and a sort Key of `ConversationId` are present.
Args:
table_name: name of the DynamoDB table
user_id (str): Id of the user who the current chat belongs to. Used as partition key in table.
conversation_id (str): The key that is used to store the messages of a single chat session for a given user. Used as the sort key in the table.
"""
memory_type: ConversationMemoryTypes = ConversationMemoryTypes.DynamoDB.value
def __init__(self, table_name: str, user_id: str, conversation_id: str) -> None:
ddb_resource = get_service_resource("dynamodb")
self.table = ddb_resource.Table(table_name)
self.conversation_id = conversation_id
self.user_id = user_id
@property
@tracer.capture_method(capture_response=True)
def messages(self) -> List[BaseMessage]: # type: ignore
"""Retrieve the messages from DynamoDB"""
response = None
# fmt: off
with tracer.provider.in_subsegment("## chat_history") as subsegment: # NOSONAR python:S1192 - subsegment name for x-ray tracing
# fmt: on
subsegment.put_annotation("service", "dynamodb")
subsegment.put_annotation("operation", "get_item")
try:
response = self.table.get_item(
Key={"UserId": self.user_id, "ConversationId": self.conversation_id},
ProjectionExpression="History",
ConsistentRead=True,
)
except ClientError as err:
if err.response["Error"]["Code"] == "ResourceNotFoundException":
logger.warning(
f"No record found with user id {self.user_id} and conversation id {self.conversation_id}"
)
else:
logger.error(err, xray_trace_id=os.environ[TRACE_ID_ENV_VAR],)
if response and "Item" in response:
items = response["Item"]["History"]
else:
items = []
messages = messages_from_dict(items)
return messages
@tracer.capture_method
def add_message(self, message: BaseMessage) -> None:
"""Append the message to the record in DynamoDB"""
from botocore.exceptions import ClientError
messages = messages_to_dict(self.messages)
_message = _message_to_dict(message)
messages.append(_message)
# fmt: off
with tracer.provider.in_subsegment("## chat_history") as subsegment: # NOSONAR python:S1192 - subsegment name for x-ray tracing
# fmt: on
subsegment.put_annotation("service", "dynamodb")
subsegment.put_annotation("operation", "update_item")
try:
# calculate a TTL 24 hours from now
expiry_period = int(os.getenv(DDB_MESSAGE_TTL_ENV_VAR, DEFAULT_DDB_MESSAGE_TTL))
ttl = int(time.time()) + expiry_period
# update_item will put item if key does not exist
self.table.update_item(
Key={
"UserId": self.user_id,
"ConversationId": self.conversation_id,
},
UpdateExpression="SET #History = :messages, #TTL = :ttl",
ExpressionAttributeNames={"#History": "History", "#TTL": "TTL"},
ExpressionAttributeValues={":messages": messages, ":ttl": ttl},
)
except ClientError as err:
logger.error(err, xray_trace_id=os.environ[TRACE_ID_ENV_VAR],)
@tracer.capture_method
def clear(self) -> None:
"""Clear session memory from DynamoDB"""
from botocore.exceptions import ClientError
# fmt: off
with tracer.provider.in_subsegment("## chat_history") as subsegment: # NOSONAR python:S1192 - subsegment name for x-ray tracing
# fmt: on
subsegment.put_annotation("service", "dynamodb")
subsegment.put_annotation("operation", "delete_item")
try:
self.table.delete_item(Key={"UserId": self.user_id, "ConversationId": self.conversation_id})
except ClientError as err:
logger.error(err, xray_trace_id=os.environ[TRACE_ID_ENV_VAR])
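# Minimal usage sketch (illustrative only; "ConversationTable" is an assumed table name with
# partition key "UserId" and sort key "ConversationId", as described in the class docstring):
#
#   from langchain.schema import HumanMessage
#   history = DynamoDBChatMessageHistory("ConversationTable", user_id="user-1", conversation_id="conv-1")
#   history.add_message(HumanMessage(content="Hello"))
#   print(history.messages)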
| [] |
2024-01-10 | DiegooCN/OpenAI-Excercise | Excercise_3_v3~controller.py | import json
import os
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
tries = 0
def function_handler(messages, function, user_prompt):
"""
    Returns a specific message
> say_hello \
> method_payment \
> payment_places \
> say_goodbye \
> get_debt_detail \
> out_of_context \
"""
if function == "say_hello":
prompt = say_hello()
elif function == "get_debt_detail":
dni = json.loads(get_dni_from_user_prompt(messages))["dni"]
print(dni)
if is_dni_valid(dni):
prompt = get_debt_detail(dni)
else:
prompt = ask_dni()
elif function == "method_payment" or function == "payment_places":
prompt = get_method_payment_locations()
elif function == "say_goodbye":
prompt = say_goodbye()
elif function == "get_receipt":
prompt = get_receipt()
else:
prompt = out_of_context()
return prompt
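# Minimal usage sketch (illustrative only; an empty message list is enough for the greeting intent):
#
#   print(function_handler(messages=[], function="say_hello", user_prompt="hola"))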
# Ended functions
def say_hello():
prompt = f"""¡Hola! Bienvenid@ al chat de Movistar!\nEstoy para ayudare en:\n• Conocer detalle de tu deuda\n• Formas y lugares de pago\n• Solicitar Recibo\nComentanos, ¿Qué necesitas?"""
return prompt
def out_of_context():
prompt = f"""Lo siento, no puedo responder a eso."""
return prompt
def ask_dni():
prompt = f"""Necesito consultar algunos datos para continuar con tu consulta. Por favor, ingresa el documento de identidad del titular del servicio."""
return prompt
def get_method_payment_locations():
"""Muestra las formas y lugares de pago"""
prompt = """\nFORMAS Y LUGARES DE PAGO\nEn Movistar te brindamos diversas formas de pago SIN COMISIÓN.\nPuedes pagar por Yape https://innovacxion.page.link/mVFa\ndesde la web o app de tu banco.\nConoce todos los canales de pago en el siguiente link\nhttps://www.movistar.com.pe/atencion-al-cliente/lugares-y-medios-de-pago"""
return prompt
def get_debt_detail(dni):
"""Muestra el detalle de la deuda"""
prompt = f"""\nDETALLE DE DEUDA\nTu deuda al día de hoy es de S/ 10.00\nTu fecha de vencimiento es el 12/07/2023\nTu DNI: {dni}"""
return prompt
def get_receipt():
"""Muestra el link para solicitar el recibo"""
prompt = """\nSOLICITAR RECIBO\nObten tu recibo con solo unos clics\nhttps://mirecibo.movistar.com.pe"""
return prompt
def say_goodbye():
"""Se despide del usuario cuando este lo solicite"""
prompt = """\nGracias por usar el servicio de asistencia de Movistar\n¡Hasta pronto!"""
return prompt
def get_dni_from_user_prompt(user_prompt):
behavior = f"""\
Tu objetivo es analizar el siguiente prompt {user_prompt} e identificar el DNI del usuario.\
Luego deberás retornar un json con el siguiente formato:\
{{"dni": "dni del usuario"}}\
Si el usuario no ingresa un DNI este será "0"
"""
response = client.chat.completions.create(
model="gpt-3.5-turbo-1106",
messages=[{"role": "system", "content": behavior}],
)
result = response.choices[0].message.content
return result
def is_dni_valid(dni):
dni_with_debts = ["123456789", "205314385"]
flag = True if dni in dni_with_debts else False
    return flag
| [
"¡Hola! Bienvenid@ al chat de Movistar!\nEstoy para ayudare en:\n• Conocer detalle de tu deuda\n• Formas y lugares de pago\n• Solicitar Recibo\nComentanos, ¿Qué necesitas?",
"\nSOLICITAR RECIBO\nObten tu recibo con solo unos clics\nhttps://mirecibo.movistar.com.pe",
"\nDETALLE DE DEUDA\nTu deuda al día de hoy es de S/ 10.00\nTu fecha de vencimiento es el 12/07/2023\nTu DNI: PLACEHOLDER",
"Lo siento, no puedo responder a eso.",
"\nGracias por usar el servicio de asistencia de Movistar\n¡Hasta pronto!",
"Necesito consultar algunos datos para continuar con tu consulta. Por favor, ingresa el documento de identidad del titular del servicio.",
"\nFORMAS Y LUGARES DE PAGO\nEn Movistar te brindamos diversas formas de pago SIN COMISIÓN.\nPuedes pagar por Yape https://innovacxion.page.link/mVFa\ndesde la web o app de tu banco.\nConoce todos los canales de pago en el siguiente link\nhttps://www.movistar.com.pe/atencion-al-cliente/lugares-y-medios-de-pago"
] |
2024-01-10 | allenai/RL4LMs | rl4lms~algorithms~trpo~trpo.py | import copy
import warnings
from functools import partial
from typing import Any, Dict, List, Optional, Tuple, Type, Union
import numpy as np
import torch as th
from gym import spaces
from stable_baselines3.common.on_policy_algorithm import OnPolicyAlgorithm
from stable_baselines3.common.policies import BasePolicy
from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, RolloutBufferSamples, Schedule
from stable_baselines3.common.utils import explained_variance
from torch import nn
from torch.distributions import kl_divergence
from torch.nn import functional as F
from rl4lms.algorithms.common.algo_utils import conjugate_gradient_solver, flat_grad
from rl4lms.algorithms.trpo.policies import *
from rl4lms.envs.text_generation.logging_utils import Tracker
class TRPO(OnPolicyAlgorithm):
"""
Trust Region Policy Optimization (TRPO)
Paper: https://arxiv.org/abs/1502.05477
Code: This implementation borrows code from OpenAI Spinning Up (https://github.com/openai/spinningup/)
and Stable Baselines (TRPO from https://github.com/hill-a/stable-baselines)
Introduction to TRPO: https://spinningup.openai.com/en/latest/algorithms/trpo.html
:param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
:param env: The environment to learn from (if registered in Gym, can be str)
:param learning_rate: The learning rate for the value function, it can be a function
of the current progress remaining (from 1 to 0)
:param n_steps: The number of steps to run for each environment per update
(i.e. rollout buffer size is n_steps * n_envs where n_envs is number of environment copies running in parallel)
NOTE: n_steps * n_envs must be greater than 1 (because of the advantage normalization)
See https://github.com/pytorch/pytorch/issues/29372
:param batch_size: Minibatch size for the value function
:param gamma: Discount factor
:param cg_max_steps: maximum number of steps in the Conjugate Gradient algorithm
for computing the Hessian vector product
:param cg_damping: damping in the Hessian vector product computation
:param line_search_shrinking_factor: step-size reduction factor for the line-search
(i.e., ``theta_new = theta + alpha^i * step``)
:param line_search_max_iter: maximum number of iteration
for the backtracking line-search
:param n_critic_updates: number of critic updates per policy update
:param gae_lambda: Factor for trade-off of bias vs variance for Generalized Advantage Estimator
:param use_sde: Whether to use generalized State Dependent Exploration (gSDE)
instead of action noise exploration (default: False)
:param sde_sample_freq: Sample a new noise matrix every n steps when using gSDE
Default: -1 (only sample at the beginning of the rollout)
:param normalize_advantage: Whether to normalize or not the advantage
:param target_kl: Target Kullback-Leibler divergence between updates.
Should be small for stability. Values like 0.01, 0.05.
:param sub_sampling_factor: Sub-sample the batch to make computation faster
see p40-42 of John Schulman thesis http://joschu.net/docs/thesis.pdf
:param tensorboard_log: the log location for tensorboard (if None, no logging)
:param create_eval_env: Whether to create a second environment that will be
used for evaluating the agent periodically. (Only available when passing string for the environment)
:param policy_kwargs: additional arguments to be passed to the policy on creation
:param verbose: the verbosity level: 0 no output, 1 info, 2 debug
:param seed: Seed for the pseudo random generators
:param device: Device (cpu, cuda, ...) on which the code should be run.
Setting it to auto, the code will be run on the GPU if possible.
:param _init_setup_model: Whether or not to build the network at the creation of the instance
"""
policy_aliases: Dict[str, Type[BasePolicy]] = {
"MlpPolicy": MlpPolicy,
"CnnPolicy": CnnPolicy,
"MultiInputPolicy": MultiInputPolicy,
}
def __init__(
self,
policy: Union[str, Type[ActorCriticPolicy]],
env: Union[GymEnv, str],
tracker: Tracker,
learning_rate: Union[float, Schedule] = 1e-3,
n_steps: int = 2048,
batch_size: int = 128,
gamma: float = 0.99,
cg_max_steps: int = 15,
cg_damping: float = 0.1,
line_search_shrinking_factor: float = 0.8,
line_search_max_iter: int = 10,
n_critic_updates: int = 10,
gae_lambda: float = 0.95,
use_sde: bool = False,
sde_sample_freq: int = -1,
normalize_advantage: bool = True,
target_kl: float = 0.01,
sub_sampling_factor: int = 1,
tensorboard_log: Optional[str] = None,
create_eval_env: bool = False,
policy_kwargs: Optional[Dict[str, Any]] = None,
verbose: int = 0,
seed: Optional[int] = None,
device: Union[th.device, str] = "cuda",
_init_setup_model: bool = True,
):
super().__init__(
policy,
env,
learning_rate=learning_rate,
n_steps=n_steps,
gamma=gamma,
gae_lambda=gae_lambda,
ent_coef=0.0, # entropy bonus is not used by TRPO
vf_coef=0.0, # value function is optimized separately
max_grad_norm=0.0,
use_sde=use_sde,
sde_sample_freq=sde_sample_freq,
tensorboard_log=tensorboard_log,
policy_kwargs=policy_kwargs,
verbose=verbose,
device=device,
create_eval_env=create_eval_env,
seed=seed,
_init_setup_model=False,
supported_action_spaces=(
spaces.Box,
spaces.Discrete,
spaces.MultiDiscrete,
spaces.MultiBinary,
),
)
self.normalize_advantage = normalize_advantage
# Sanity check, otherwise it will lead to noisy gradient and NaN
# because of the advantage normalization
if self.env is not None:
# Check that `n_steps * n_envs > 1` to avoid NaN
# when doing advantage normalization
buffer_size = self.env.num_envs * self.n_steps
if normalize_advantage:
assert buffer_size > 1, (
"`n_steps * n_envs` must be greater than 1. "
f"Currently n_steps={self.n_steps} and n_envs={self.env.num_envs}"
)
# Check that the rollout buffer size is a multiple of the mini-batch size
untruncated_batches = buffer_size // batch_size
if buffer_size % batch_size > 0:
warnings.warn(
f"You have specified a mini-batch size of {batch_size},"
f" but because the `RolloutBuffer` is of size `n_steps * n_envs = {buffer_size}`,"
f" after every {untruncated_batches} untruncated mini-batches,"
f" there will be a truncated mini-batch of size {buffer_size % batch_size}\n"
f"We recommend using a `batch_size` that is a factor of `n_steps * n_envs`.\n"
f"Info: (n_steps={self.n_steps} and n_envs={self.env.num_envs})"
)
self.batch_size = batch_size
# Conjugate gradients parameters
self.cg_max_steps = cg_max_steps
self.cg_damping = cg_damping
# Backtracking line search parameters
self.line_search_shrinking_factor = line_search_shrinking_factor
self.line_search_max_iter = line_search_max_iter
self.target_kl = target_kl
self.n_critic_updates = n_critic_updates
self.sub_sampling_factor = sub_sampling_factor
if _init_setup_model:
self._setup_model()
self._tracker = tracker
def _compute_actor_grad(
self, kl_div: th.Tensor, policy_objective: th.Tensor
) -> Tuple[List[nn.Parameter], th.Tensor, th.Tensor, List[Tuple[int, ...]]]:
"""
Compute actor gradients for kl div and surrogate objectives.
:param kl_div: The KL divergence objective
:param policy_objective: The surrogate objective ("classic" policy gradient)
:return: List of actor params, gradients and gradients shape.
"""
# This is necessary because not all the parameters in the policy have gradients w.r.t. the KL divergence
# The policy objective is also called surrogate objective
policy_objective_gradients = []
# Contains the gradients of the KL divergence
grad_kl = []
# Contains the shape of the gradients of the KL divergence w.r.t each parameter
# This way the flattened gradient can be reshaped back into the original shapes and applied to
# the parameters
grad_shape = []
# Contains the parameters which have non-zeros KL divergence gradients
# The list is used during the line-search to apply the step to each parameters
actor_params = []
for name, param in self.policy.named_parameters():
# Skip parameters related to value function based on name
# this work for built-in policies only (not custom ones)
if "value" in name:
continue
# For each parameter we compute the gradient of the KL divergence w.r.t to that parameter
kl_param_grad, *_ = th.autograd.grad(
kl_div,
param.to(kl_div.device),
create_graph=True,
retain_graph=True,
allow_unused=True,
only_inputs=True,
)
# If the gradient is not zero (not None), we store the parameter in the actor_params list
# and add the gradient and its shape to grad_kl and grad_shape respectively
if kl_param_grad is not None:
# If the parameter impacts the KL divergence (i.e. the policy)
# we compute the gradient of the policy objective w.r.t to the parameter
# this avoids computing the gradient if it's not going to be used in the conjugate gradient step
                policy_objective_grad, *_ = th.autograd.grad(
                    policy_objective.to(param.device), param, retain_graph=True, only_inputs=True)
grad_shape.append(kl_param_grad.shape)
grad_kl.append(kl_param_grad.reshape(-1))
policy_objective_gradients.append(
policy_objective_grad.reshape(-1).to(kl_param_grad.device))
actor_params.append(param)
# Gradients are concatenated before the conjugate gradient step
policy_objective_gradients = th.cat(policy_objective_gradients)
grad_kl = th.cat(grad_kl)
return actor_params, policy_objective_gradients, grad_kl, grad_shape
def train(self) -> None:
"""
Update policy using the currently gathered rollout buffer.
"""
# Switch to train mode (this affects batch norm / dropout)
#self.policy.set_training_mode(True)
# Update optimizer learning rate
self._update_learning_rate(self.policy.optimizer)
gather_device = self.policy.device
policy_objective_values = []
kl_divergences = []
line_search_results = []
value_losses = []
# This will only loop once (get all data in one go)
for rollout_data in self.rollout_buffer.get(batch_size=None):
# Optional: sub-sample data for faster computation
if self.sub_sampling_factor > 1:
rollout_data = RolloutBufferSamples(
rollout_data.observations[:: self.sub_sampling_factor],
rollout_data.actions[:: self.sub_sampling_factor],
None, # old values, not used here
rollout_data.old_log_prob[:: self.sub_sampling_factor],
rollout_data.advantages[:: self.sub_sampling_factor],
None, # returns, not used here
)
actions = rollout_data.actions
if isinstance(self.action_space, spaces.Discrete):
# Convert discrete action from float to long
actions = rollout_data.actions.long().flatten()
# Re-sample the noise matrix because the log_std has changed
if self.use_sde:
# batch_size is only used for the value function
self.policy.reset_noise(actions.shape[0])
# Note: is copy enough, no need for deepcopy?
# If using gSDE and deepcopy, we need to use `old_distribution.distribution`
# directly to avoid PyTorch errors.
with th.no_grad():
old_distribution = copy.copy(self.policy.get_distribution(
rollout_data.observations, detach=True))
distribution = self.policy.get_distribution(
rollout_data.observations)
log_prob = distribution.log_prob(actions)
advantages = rollout_data.advantages
if self.normalize_advantage:
advantages = (advantages - advantages.mean()) / \
(rollout_data.advantages.std() + 1e-8)
# ratio between old and new policy, should be one at the first iteration
ratio = th.exp(log_prob - rollout_data.old_log_prob)
# surrogate policy objective
policy_objective = (advantages * ratio).mean()
# KL divergence
kl_div = kl_divergence(
distribution.distribution, old_distribution.distribution).mean()
# Surrogate & KL gradient
self.policy.optimizer.zero_grad()
actor_params, policy_objective_gradients, grad_kl, grad_shape = self._compute_actor_grad(
kl_div, policy_objective)
# Hessian-vector dot product function used in the conjugate gradient step
hessian_vector_product_fn = partial(
self.hessian_vector_product, actor_params, grad_kl)
# Computing search direction
search_direction = conjugate_gradient_solver(
hessian_vector_product_fn,
policy_objective_gradients,
max_iter=self.cg_max_steps,
)
# Maximal step length
line_search_max_step_size = 2 * self.target_kl
line_search_max_step_size /= th.matmul(
search_direction, hessian_vector_product_fn(
search_direction, retain_graph=False)
)
line_search_max_step_size = th.sqrt(line_search_max_step_size)
line_search_backtrack_coeff = 1.0
original_actor_params = [param.detach().clone().to(
gather_device) for param in actor_params]
is_line_search_success = False
with th.no_grad():
# Line-search (backtracking)
for _ in range(self.line_search_max_iter):
start_idx = 0
# Applying the scaled step direction
for param, original_param, shape in zip(actor_params, original_actor_params, grad_shape):
n_params = param.numel()
param.data = (
original_param.data
+ line_search_backtrack_coeff
* line_search_max_step_size
* search_direction[start_idx: (start_idx + n_params)].view(shape)
)
start_idx += n_params
# Recomputing the policy log-probabilities
distribution = self.policy.get_distribution(
rollout_data.observations)
log_prob = distribution.log_prob(actions)
# New policy objective
ratio = th.exp(log_prob - rollout_data.old_log_prob)
new_policy_objective = (advantages * ratio).mean()
# New KL-divergence
kl_div = kl_divergence(
distribution.distribution, old_distribution.distribution).mean()
# Constraint criteria:
# we need to improve the surrogate policy objective
# while being close enough (in term of kl div) to the old policy
if (kl_div < self.target_kl) and (new_policy_objective > policy_objective):
is_line_search_success = True
break
# Reducing step size if line-search wasn't successful
line_search_backtrack_coeff *= self.line_search_shrinking_factor
line_search_results.append(is_line_search_success)
if not is_line_search_success:
# If the line-search wasn't successful we revert to the original parameters
for param, original_param in zip(actor_params, original_actor_params):
param.data = original_param.data.clone()
policy_objective_values.append(policy_objective.item())
kl_divergences.append(0)
else:
policy_objective_values.append(new_policy_objective.item())
kl_divergences.append(kl_div.item())
# Critic update
for _ in range(self.n_critic_updates):
for rollout_data in self.rollout_buffer.get(self.batch_size):
values_pred = self.policy.predict_values(
rollout_data.observations)
value_loss = F.mse_loss(
rollout_data.returns, values_pred.flatten())
value_losses.append(value_loss.item())
self.policy.optimizer.zero_grad()
value_loss.backward()
# Removing gradients of parameters shared with the actor
# otherwise it defeats the purposes of the KL constraint
for param in actor_params:
param.grad = None
self.policy.optimizer.step()
self._n_updates += 1
explained_var = explained_variance(
self.rollout_buffer.values.flatten(), self.rollout_buffer.returns.flatten())
# Logs
self.logger.record("train/policy_objective",
np.mean(policy_objective_values))
self.logger.record("train/value_loss", np.mean(value_losses))
self.logger.record("train/kl_divergence_loss", np.mean(kl_divergences))
self.logger.record("train/explained_variance", explained_var)
self.logger.record("train/is_line_search_success",
np.mean(line_search_results))
if hasattr(self.policy, "log_std"):
self.logger.record(
"train/std", th.exp(self.policy.log_std).mean().item())
self.logger.record("train/n_updates",
self._n_updates, exclude="tensorboard")
def hessian_vector_product(
self, params: List[nn.Parameter], grad_kl: th.Tensor, vector: th.Tensor, retain_graph: bool = True
) -> th.Tensor:
"""
Computes the matrix-vector product with the Fisher information matrix.
:param params: list of parameters used to compute the Hessian
:param grad_kl: flattened gradient of the KL divergence between the old and new policy
:param vector: vector to compute the dot product the hessian-vector dot product with
:param retain_graph: if True, the graph will be kept after computing the Hessian
:return: Hessian-vector dot product (with damping)
"""
jacobian_vector_product = (grad_kl * vector).sum()
return flat_grad(jacobian_vector_product, params, retain_graph=retain_graph) + self.cg_damping * vector
def learn(
self,
total_timesteps: int,
callback: MaybeCallback = None,
log_interval: int = 1,
eval_env: Optional[GymEnv] = None,
eval_freq: int = -1,
n_eval_episodes: int = 5,
tb_log_name: str = "TRPO",
eval_log_path: Optional[str] = None,
reset_num_timesteps: bool = True,
) -> OnPolicyAlgorithm:
return super().learn(
total_timesteps=total_timesteps,
callback=callback,
log_interval=log_interval,
eval_env=eval_env,
eval_freq=eval_freq,
n_eval_episodes=n_eval_episodes,
tb_log_name=tb_log_name,
eval_log_path=eval_log_path,
reset_num_timesteps=reset_num_timesteps,
)
| [] |
2024-01-10 | navant/chatpdf-azure-accelerator | api~Python~Utilities~cogSearch.py | from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import *
from azure.search.documents import SearchClient
from azure.core.credentials import AzureKeyCredential
import os
import logging
from azure.search.documents.models import QueryType
from Utilities.embeddings import generateEmbeddings
from azure.search.documents.indexes.models import (
SearchIndex,
SearchField,
SearchFieldDataType,
SimpleField,
SearchableField,
SearchIndex,
SemanticConfiguration,
PrioritizedFields,
SemanticField,
SearchField,
SemanticSettings,
VectorSearch,
VectorSearchAlgorithmConfiguration,
)
from azure.search.documents.models import Vector
from Utilities.envVars import *
from tenacity import retry, wait_random_exponential, stop_after_attempt
import openai
def deleteSearchIndex(indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName in indexClient.list_index_names():
logging.info(f"Deleting {indexName} search index")
indexClient.delete_index(indexName)
else:
logging.info(f"Search index {indexName} does not exist")
def createSearchIndex(indexType, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
if indexType == "cogsearchvs":
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="content", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SearchField(name="contentVector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, dimensions=1536, vector_search_configuration="vectorConfig"),
SimpleField(name="sourcefile", type="Edm.String", filterable=True, facetable=True),
],
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="vectorConfig",
kind="hnsw",
hnsw_parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine"
}
)
]
),
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='content')]))])
)
elif indexType == "cogsearch":
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="content", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SimpleField(name="sourcefile", type="Edm.String", filterable=True, facetable=True),
],
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='content')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
logging.info(f"Search index {indexName} already exists")
def createSections(indexType, embeddingModelType, fileName, docs):
counter = 1
if indexType == "cogsearchvs":
for i in docs:
yield {
"id": f"{fileName}-{counter}".replace(".", "_").replace(" ", "_").replace(":", "_").replace("/", "_").replace(",", "_").replace("&", "_"),
"content": i.page_content,
"contentVector": generateEmbeddings(embeddingModelType, i.page_content),
"sourcefile": os.path.basename(fileName)
}
counter += 1
elif indexType == "cogsearch":
for i in docs:
yield {
"id": f"{fileName}-{counter}".replace(".", "_").replace(" ", "_").replace(":", "_").replace("/", "_").replace(",", "_").replace("&", "_"),
"content": i.page_content,
"sourcefile": os.path.basename(fileName)
}
counter += 1
def indexSections(indexType, embeddingModelType, fileName, indexName, docs):
logging.info("Total docs: " + str(len(docs)))
sections = createSections(indexType, embeddingModelType, fileName, docs)
logging.info(f"Indexing sections from '{fileName}' into search index '{indexName}'")
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net/",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
# batch = []
# for s in sections:
# batch.append(s)
# results = searchClient.upload_documents(documents=batch)
# succeeded = sum([1 for r in results if r.succeeded])
# logging.info(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
i = 0
batch = []
for s in sections:
batch.append(s)
i += 1
if i % 1000 == 0:
results = searchClient.index_documents(batch=batch)
succeeded = sum([1 for r in results if r.succeeded])
logging.info(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
batch = []
if len(batch) > 0:
results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
logging.info(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
def performCogSearch(indexType, embeddingModelType, question, indexName, k, returnFields=["id", "content", "sourcefile"] ):
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
try:
if indexType == "cogsearchvs":
r = searchClient.search(
search_text="",
vector=Vector(value=generateEmbeddings(embeddingModelType, question), k=k, fields="contentVector"),
select=returnFields,
semantic_configuration_name="semanticConfig"
)
elif indexType == "cogsearch":
#r = searchClient.search(question, filter=None, top=k)
try:
r = searchClient.search(question,
filter=None,
query_type=QueryType.SEMANTIC,
query_language="en-us",
query_speller="lexicon",
semantic_configuration_name="semanticConfig",
top=k,
query_caption="extractive|highlight-false")
except Exception as e:
r = searchClient.search(question,
filter=None,
query_type=QueryType.SEMANTIC,
query_language="en-us",
query_speller="lexicon",
semantic_configuration_name="default",
top=k,
query_caption="extractive|highlight-false")
return r
except Exception as e:
logging.info(e)
return None
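# Minimal usage sketch (illustrative only; the index name and query are placeholders and the
# returned fields follow the schema created in createSearchIndex above):
#
#   results = performCogSearch("cogsearchvs", "azureopenai", "What is covered in chapter 2?", "myindex", 3)
#   if results:
#       for doc in results:
#           print(doc["sourcefile"], doc["content"][:80])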
def performSummaryQaCogSearch(indexType, embeddingModelType, question, indexName, k, returnFields=["id", "content", "sourcefile"] ):
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
try:
if indexType == "cogsearch" or indexType == "cogsearchvs":
#r = searchClient.search(question, filter=None, top=k)
try:
r = searchClient.search(question,
filter=None,
query_type=QueryType.SEMANTIC,
query_language="en-us",
query_speller="lexicon",
semantic_configuration_name="semanticConfig",
top=k,
query_caption="extractive|highlight-false")
except Exception as e:
r = searchClient.search(question,
filter=None,
query_type=QueryType.SEMANTIC,
query_language="en-us",
query_speller="lexicon",
semantic_configuration_name="default",
top=k,
query_caption="extractive|highlight-false")
return r
except Exception as e:
logging.info(e)
return None
@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
# Function to generate embeddings for title and content fields, also used for query embeddings
def generateKbEmbeddings(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, OpenAiEmbedding, embeddingModelType, text):
if (embeddingModelType == 'azureopenai'):
openai.api_type = "azure"
openai.api_key = OpenAiKey
openai.api_version = OpenAiVersion
openai.api_base = f"https://{OpenAiService}.openai.azure.com"
response = openai.Embedding.create(
input=text, engine=OpenAiEmbedding)
embeddings = response['data'][0]['embedding']
elif embeddingModelType == "openai":
try:
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = '2020-11-07'
openai.api_key = OpenAiApiKey
response = openai.Embedding.create(
input=text, engine="text-embedding-ada-002", api_key = OpenAiApiKey)
embeddings = response['data'][0]['embedding']
except Exception as e:
logging.info(e)
return embeddings
def createKbSearchIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="question", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SearchableField(name="indexType", type=SearchFieldDataType.String, searchable=True, retrievable=True, filterable=True, facetable=False),
SearchableField(name="indexName", type=SearchFieldDataType.String, searchable=True, retrievable=True, filterable=True, facetable=False),
SearchField(name="vectorQuestion", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, dimensions=1536, vector_search_configuration="vectorConfig"),
SimpleField(name="answer", type=SearchFieldDataType.String),
],
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="vectorConfig",
kind="hnsw",
hnsw_parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine"
}
)
]
),
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='question')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
print(f"Search index {indexName} already exists")
def performKbCogVectorSearch(embedValue, embedField, SearchService, SearchKey, indexType, indexName, kbIndexName, k, returnFields=["id", "content", "sourcefile"] ):
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net",
index_name=kbIndexName,
credential=AzureKeyCredential(SearchKey))
try:
logging.info("Create Index for KB : " + str(kbIndexName))
createKbSearchIndex(SearchService, SearchKey, kbIndexName)
r = searchClient.search(
search_text="",
filter="indexType eq '" + indexType + "' and indexName eq '" + indexName + "'",
vector=Vector(value=embedValue, k=k, fields=embedField),
select=returnFields,
semantic_configuration_name="semanticConfig",
include_total_count=True
)
return r
except Exception as e:
logging.info(e)
return None
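# A minimal, hypothetical sketch of the KB-cache lookup: it assumes the question embedding
# comes from the same model that populated "vectorQuestion", and "aoaikb" is a placeholder
# name for the cache index.
def _exampleKbLookup(questionEmbedding):
    return performKbCogVectorSearch(
        questionEmbedding, "vectorQuestion", SearchService, SearchKey,
        "cogsearchvs", "sample-index", "aoaikb", 1,
        returnFields=["id", "question", "answer"],
    )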
def indexDocs(SearchService, SearchKey, indexName, docs):
print("Total docs: " + str(len(docs)))
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net/",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
i = 0
batch = []
for s in docs:
batch.append(s)
i += 1
if i % 1000 == 0:
results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
print(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
batch = []
if len(batch) > 0:
results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
print(f"\tIndexed {len(results)} sections, {succeeded} succeeded") | [] |
2024-01-10 | navant/chatpdf-azure-accelerator | Workshop~Utilities~cogSearch.py | from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import *
from azure.search.documents import SearchClient
from azure.core.credentials import AzureKeyCredential
import os
from azure.search.documents.indexes.models import (
SearchIndex,
SearchField,
SearchFieldDataType,
SimpleField,
SearchableField,
SearchIndex,
SemanticConfiguration,
PrioritizedFields,
SemanticField,
SearchField,
SemanticSettings,
VectorSearch,
VectorSearchAlgorithmConfiguration,
)
from azure.search.documents.models import Vector
from tenacity import retry, wait_random_exponential, stop_after_attempt
import openai
@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
# Function to generate embeddings for title and content fields, also used for query embeddings
def generateEmbeddings(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, embeddingModelType, OpenAiEmbedding, text):
if (embeddingModelType == 'azureopenai'):
baseUrl = f"https://{OpenAiService}.openai.azure.com"
openai.api_type = "azure"
openai.api_key = OpenAiKey
openai.api_version = OpenAiVersion
openai.api_base = f"https://{OpenAiService}.openai.azure.com"
response = openai.Embedding.create(
input=text, engine=OpenAiEmbedding)
embeddings = response['data'][0]['embedding']
elif embeddingModelType == "openai":
try:
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = '2020-11-07'
openai.api_key = OpenAiApiKey
response = openai.Embedding.create(
input=text, engine="text-embedding-ada-002", api_key = OpenAiApiKey)
embeddings = response['data'][0]['embedding']
except Exception as e:
print(e)
return embeddings
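# A minimal, hypothetical usage sketch: the service name and keys are placeholders, and the
# API version shown is an assumption rather than a required value.
def _exampleEmbedQuery():
    return generateEmbeddings(
        "my-openai-service", "<azure-openai-key>", "2023-05-15", "<openai-api-key>",
        "azureopenai", "text-embedding-ada-002", "quarterly revenue guidance",
    )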
def deleteSearchIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName in indexClient.list_index_names():
print(f"Deleting {indexName} search index")
indexClient.delete_index(indexName)
else:
print(f"Search index {indexName} does not exist")
def createSearchIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="content", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SearchField(name="contentVector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, dimensions=1536, vector_search_configuration="vectorConfig"),
SimpleField(name="sourcefile", type="Edm.String", filterable=True, facetable=True),
],
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="vectorConfig",
kind="hnsw",
hnsw_parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine"
}
)
]
),
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='content')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
print(f"Search index {indexName} already exists")
def createEarningCallIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="symbol", type=SearchFieldDataType.String, sortable=True,
searchable=True, retrievable=True, filterable=True, facetable=True, analyzer_name="en.microsoft"),
SearchableField(name="quarter", type=SearchFieldDataType.String, sortable=True,
searchable=True, retrievable=True, filterable=True, facetable=True, analyzer_name="en.microsoft"),
SearchableField(name="year", type=SearchFieldDataType.String, sortable=True,
searchable=True, retrievable=True, filterable=True, facetable=True, analyzer_name="en.microsoft"),
SimpleField(name="calldate", type="Edm.String", retrievable=True),
SearchableField(name="content", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SimpleField(name="inserteddate", type="Edm.String", searchable=True, retrievable=True,),
],
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='content')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
print(f"Search index {indexName} already exists")
def createPressReleaseIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="symbol", type=SearchFieldDataType.String, sortable=True,
searchable=True, retrievable=True, filterable=True, facetable=True, analyzer_name="en.microsoft"),
SimpleField(name="releasedate", type="Edm.String", retrievable=True),
SearchableField(name="title", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SearchableField(name="content", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SimpleField(name="inserteddate", type="Edm.String", searchable=True, retrievable=True,),
],
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='content')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
print(f"Search index {indexName} already exists")
def createStockNewsIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="symbol", type=SearchFieldDataType.String, sortable=True,
searchable=True, retrievable=True, filterable=True, facetable=True, analyzer_name="en.microsoft"),
SimpleField(name="publisheddate", type="Edm.String", retrievable=True),
SearchableField(name="title", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SimpleField(name="image", type="Edm.String", retrievable=True),
SearchableField(name="site", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SearchableField(name="content", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SimpleField(name="url", type="Edm.String", retrievable=True),
SimpleField(name="inserteddate", type="Edm.String", searchable=True, retrievable=True,),
],
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='content')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
print(f"Search index {indexName} already exists")
def indexDocs(SearchService, SearchKey, indexName, docs):
print("Total docs: " + str(len(docs)))
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net/",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
i = 0
batch = []
for s in docs:
batch.append(s)
i += 1
if i % 1000 == 0:
results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
print(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
batch = []
if len(batch) > 0:
results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
print(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
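# A minimal, hypothetical end-to-end sketch: create (or reuse) the vector index, then push a
# couple of pre-embedded documents into it. The service name, key, index name and the
# zero-vector embeddings are placeholders.
def _exampleCreateAndIndex():
    service, key, index = "my-search-service", "<admin-key>", "sample-index"
    createSearchIndex(service, key, index)
    docs = [
        {"id": "doc-1", "content": "First chunk", "contentVector": [0.0] * 1536, "sourcefile": "a.pdf"},
        {"id": "doc-2", "content": "Second chunk", "contentVector": [0.0] * 1536, "sourcefile": "a.pdf"},
    ]
    indexDocs(service, key, index, docs)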
def createSections(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, embeddingModelType, OpenAiEmbedding, fileName, docs):
counter = 1
for i in docs:
yield {
"id": f"{fileName}-{counter}".replace(".", "_").replace(" ", "_").replace(":", "_").replace("/", "_").replace(",", "_").replace("&", "_"),
"content": i.page_content,
"contentVector": generateEmbeddings(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, embeddingModelType, OpenAiEmbedding, i.page_content),
"sourcefile": os.path.basename(fileName)
}
counter += 1
def indexSections(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, SearchService, SearchKey, embeddingModelType, OpenAiEmbedding, fileName, indexName, docs):
print("Total docs: " + str(len(docs)))
sections = createSections(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, embeddingModelType, OpenAiEmbedding, fileName, docs)
print(f"Indexing sections from '{fileName}' into search index '{indexName}'")
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net/",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
i = 0
batch = []
for s in sections:
batch.append(s)
i += 1
if i % 1000 == 0:
            results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
print(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
batch = []
if len(batch) > 0:
results = searchClient.upload_documents(documents=batch)
succeeded = sum([1 for r in results if r.succeeded])
print(f"\tIndexed {len(results)} sections, {succeeded} succeeded")
def performCogSearch(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, SearchService, SearchKey, embeddingModelType, OpenAiEmbedding, question, indexName, k, returnFields=["id", "content", "sourcefile"] ):
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
try:
r = searchClient.search(
search_text="",
vector=Vector(value=generateEmbeddings(OpenAiService, OpenAiKey, OpenAiVersion, OpenAiApiKey, embeddingModelType, OpenAiEmbedding, question), k=k, fields="contentVector"),
select=returnFields,
semantic_configuration_name="semanticConfig"
)
return r
except Exception as e:
print(e)
return None
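# A minimal, hypothetical usage sketch: every service name and key below is a placeholder;
# it only illustrates the argument order of performCogSearch above.
def _exampleWorkshopSearch():
    results = performCogSearch(
        "my-openai-service", "<azure-openai-key>", "2023-05-15", "<openai-api-key>",
        "my-search-service", "<query-key>", "azureopenai", "text-embedding-ada-002",
        "What did management say about margins?", "sample-index", 3,
    )
    if results is not None:
        for doc in results:
            print(doc["content"])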
def performCogVectorSearch(embedValue, embedField, SearchService, SearchKey, indexName, k, returnFields=["id", "content", "sourcefile"] ):
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net",
index_name=indexName,
credential=AzureKeyCredential(SearchKey))
try:
r = searchClient.search(
search_text="",
vector=Vector(value=embedValue, k=k, fields=embedField),
select=returnFields,
semantic_configuration_name="semanticConfig",
include_total_count=True
)
return r
except Exception as e:
print(e)
return None
def createKbSearchIndex(SearchService, SearchKey, indexName):
indexClient = SearchIndexClient(endpoint=f"https://{SearchService}.search.windows.net/",
credential=AzureKeyCredential(SearchKey))
if indexName not in indexClient.list_index_names():
index = SearchIndex(
name=indexName,
fields=[
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="question", type=SearchFieldDataType.String,
searchable=True, retrievable=True, analyzer_name="en.microsoft"),
SimpleField(name="indexType", type="Edm.String", searchable=True, retrievable=True, filterable=True, facetable=False),
SimpleField(name="indexName", type="Edm.String", searchable=True, retrievable=True, filterable=True, facetable=False),
SearchField(name="vectorQuestion", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, dimensions=1536, vector_search_configuration="vectorConfig"),
SimpleField(name="answer", type="Edm.String", filterable=False, facetable=False),
],
vector_search = VectorSearch(
algorithm_configurations=[
VectorSearchAlgorithmConfiguration(
name="vectorConfig",
kind="hnsw",
hnsw_parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine"
}
)
]
),
semantic_settings=SemanticSettings(
configurations=[SemanticConfiguration(
name='semanticConfig',
prioritized_fields=PrioritizedFields(
title_field=None, prioritized_content_fields=[SemanticField(field_name='question')]))])
)
try:
print(f"Creating {indexName} search index")
indexClient.create_index(index)
except Exception as e:
print(e)
else:
print(f"Search index {indexName} already exists")
def performKbCogVectorSearch(embedValue, embedField, SearchService, SearchKey, indexType, indexName, kbIndexName, k, returnFields=["id", "content", "sourcefile"] ):
searchClient = SearchClient(endpoint=f"https://{SearchService}.search.windows.net",
index_name=kbIndexName,
credential=AzureKeyCredential(SearchKey))
try:
createKbSearchIndex(SearchService, SearchKey, kbIndexName)
r = searchClient.search(
search_text="",
vector=Vector(value=embedValue, k=k, fields=embedField),
filter="indexType eq '" + indexType + "' and indexName eq '" + indexName + "'",
select=returnFields,
semantic_configuration_name="semanticConfig",
include_total_count=True
)
return r
except Exception as e:
print(e)
return None
| [] |
2024-01-10 | navant/chatpdf-azure-accelerator | api~Python~Utilities~formrecognizer.py | from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from typing import List
import re
from langchain.docstore.document import Document
import logging
def chunk_paragraphs(paragraphs: List[str], fullPath:str, max_words: int = 300) -> List[Document]:
"""
Chunk a list of paragraphs into chunks
of approximately equal word count.
"""
# Create a list of dictionaries with the paragraph as the
# key and the word count as the value
paragraphs = [{p: len(p.split())} for p in paragraphs]
# Create a list of lists of paragraphs
chunks = []
# Iterate over the list of paragraphs
for i, p in enumerate(paragraphs):
        # If no chunk exists yet, start the first chunk with this paragraph
if len(chunks) == 0:
chunks.append([p])
# If the current chunk is not empty, check if adding the
# next paragraph will exceed the max word count
else:
# If adding the next paragraph will exceed the max word count,
# start a new chunk
if (
sum([list(c.values())[0] for c in chunks[-1]]) + list(p.values())[0]
> max_words
):
chunks.append([p])
# If adding the next paragraph will not exceed the max word
# count, add it to the current chunk
else:
chunks[-1].append(p)
# Create a list of strings from the list of lists of paragraphs
chunks = [" ".join([list(c.keys())[0] for c in chunk]) for chunk in chunks]
logging.info(f"Number of chunks: {len(chunks)}")
docs = [
Document(page_content=result)
for result in chunks
]
for doc in docs:
doc.metadata['source'] = fullPath
return docs
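# A minimal usage sketch, assuming plain strings as input; the source path is a placeholder.
def _example_chunking():
    sample = [
        "Azure Form Recognizer extracts text and layout from documents.",
        "Paragraphs are grouped so that each chunk stays within a word budget.",
        "Each chunk becomes a LangChain Document tagged with its source path.",
    ]
    return chunk_paragraphs(sample, "samples/report.pdf", max_words=20)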
def analyze_layout(data: bytes, fullpath:str, endpoint: str, key: str) -> List[Document]:
"""
Analyze a document with the layout model.
Args:
data (bytes): Document data.
endpoint (str): Endpoint URL.
key (str): API key.
Returns:
List[str]: List of paragraphs.
"""
# Create a client for the form recognizer service
document_analysis_client = DocumentAnalysisClient(
endpoint=endpoint, credential=AzureKeyCredential(key)
)
# Analyze the document with the layout model
poller = document_analysis_client.begin_analyze_document("prebuilt-layout", data)
    # Get the results and extract the paragraphs
    # (title, section headings, page headers/footers/numbers, and untagged body text)
result = poller.result()
paragraphs = [
p.content
for p in result.paragraphs
if p.role in ["Title", "SectionHeading", "PageNumber", "PageFooter", "PageHeader", None]
]
    # Chunk the paragraphs (default max word count = 300)
logging.info(f"Number of paragraphs: {len(paragraphs)}")
paragraphs = chunk_paragraphs(paragraphs, fullpath)
return paragraphs
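# A minimal usage sketch, assuming a local PDF and a real Form Recognizer resource; the
# endpoint and key below are placeholders.
def _example_analyze_layout():
    with open("samples/report.pdf", "rb") as f:
        data = f.read()
    return analyze_layout(data, "samples/report.pdf",
                          "https://<resource>.cognitiveservices.azure.com/", "<form-recognizer-key>")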
def normalize_text(s: str) -> str:
"""
Clean up a string by removing redundant
whitespaces and cleaning up the punctuation.
Args:
s (str): The string to be cleaned.
Returns:
s (str): The cleaned string.
"""
s = re.sub(r"\s+", " ", s).strip()
s = re.sub(r". ,", "", s)
s = s.replace("..", ".")
s = s.replace(". .", ".")
s = s.replace("\n", "")
s = s.strip()
return s
| [] |
2024-01-10 | navant/chatpdf-azure-accelerator | Workshop~Utilities~cogSearchVsRetriever.py | """Retriever wrapper for Azure Cognitive Search."""
from __future__ import annotations
import json
from typing import Dict, List, Optional
import aiohttp
import requests
from pydantic import BaseModel, Extra, root_validator
from langchain.schema import BaseRetriever, Document
from langchain.utils import get_from_dict_or_env
from azure.search.documents import SearchClient
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.models import Vector
from tenacity import retry, wait_random_exponential, stop_after_attempt
import openai
class CognitiveSearchVsRetriever(BaseRetriever, BaseModel):
"""Wrapper around Azure Cognitive Search."""
serviceName: str = ""
"""Name of Azure Cognitive Search service"""
indexName: str = ""
"""Name of Index inside Azure Cognitive Search service"""
apiKey: str = ""
"""API Key. Both Admin and Query keys work, but for reading data it's
recommended to use a Query key."""
aiosession: Optional[aiohttp.ClientSession] = None
"""ClientSession, in case we want to reuse connection for better performance."""
contentKey: str = "contentVector"
content: str = "content"
"""Key in a retrieved result to set as the Document page_content in Vector Format."""
returnFields: list = ["id", "content", "sourcefile"]
splitMethod : str = "RecursiveCharacterTextSplitter"
model : str = "GPT3.5"
chunkSize : str = "2000"
overlap : str = "100"
documentId : str = ""
embeddingModelType : str = "azureopenai"
openAiEmbedding : str = "text-embedding-ada-002"
openAiService : str = ""
openAiKey : str = ""
openAiVersion : str = ""
openAiApiKey : str = ""
"""return fields from search result."""
topK: int = 3
"""Number of documents to retrieve."""
class Config:
extra = Extra.forbid
arbitrary_types_allowed = True
@retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
# Function to generate embeddings for title and content fields, also used for query embeddings
def generateEmbeddings(self, text):
if (self.embeddingModelType == 'azureopenai'):
baseUrl = f"https://{self.openAiService}.openai.azure.com"
openai.api_type = "azure"
openai.api_key = self.openAiKey
openai.api_version = self.openAiVersion
openai.api_base = f"https://{self.openAiService}.openai.azure.com"
response = openai.Embedding.create(
input=text, engine=self.openAiEmbedding)
embeddings = response['data'][0]['embedding']
elif self.embeddingModelType == "openai":
try:
openai.api_type = "open_ai"
openai.api_base = "https://api.openai.com/v1"
openai.api_version = '2020-11-07'
openai.api_key = self.openAiApiKey
response = openai.Embedding.create(
input=text, engine="text-embedding-ada-002", api_key = self.openAiApiKey)
embeddings = response['data'][0]['embedding']
except Exception as e:
print(e)
return embeddings
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that service name, index name and api key exists in environment."""
values["serviceName"] = get_from_dict_or_env(
values, "serviceName", "AZURE_COGNITIVE_SEARCH_SERVICE_NAME"
)
values["indexName"] = get_from_dict_or_env(
values, "indexName", "AZURE_COGNITIVE_SEARCH_INDEX_NAME"
)
values["apiKey"] = get_from_dict_or_env(
values, "apiKey", "AZURE_COGNITIVE_SEARCH_API_KEY"
)
return values
def _search(self, query: any) -> any:
searchClient = SearchClient(endpoint=f"https://{self.serviceName}.search.windows.net",
index_name=self.indexName,
credential=AzureKeyCredential(self.apiKey))
response = searchClient.search(
search_text="",
vector=Vector(value=self.generateEmbeddings(query), k=self.topK, fields=self.contentKey),
filter="documentId eq '" + self.documentId + "' and splitMethod eq '" + self.splitMethod + "' and model eq '" + self.model + "' and chunkSize eq '"
+ self.chunkSize + "' and overlap eq '" + self.overlap + "'",
select=self.returnFields,
semantic_configuration_name="semanticConfig",
include_total_count=True
)
return response
    async def _asearch(self, query: str) -> any:
        # Async search is not implemented for this retriever; raising is clearer than
        # returning None, which would make aget_relevant_documents below fail obscurely.
        raise NotImplementedError("CognitiveSearchVsRetriever does not support async search")
def get_relevant_documents(self, query: str) -> List[Document]:
search_results = self._search(query)
return [
Document(page_content=result.pop(self.content), metadata=result)
for result in search_results
]
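    # A minimal, hypothetical usage sketch (all values are placeholders):
    #   retriever = CognitiveSearchVsRetriever(
    #       serviceName="my-search-service", indexName="sample-index", apiKey="<query-key>",
    #       openAiService="my-openai-service", openAiKey="<azure-openai-key>",
    #       openAiVersion="2023-05-15", documentId="doc-123", topK=3,
    #   )
    #   docs = retriever.get_relevant_documents("What changed in Q3?")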
async def aget_relevant_documents(self, query: str) -> List[Document]:
search_results = await self._asearch(query)
return [
Document(page_content=result.pop(self.content), metadata=result)
for result in search_results
] | [] |
2024-01-10 | Romiroz/langchain | libs~experimental~langchain_experimental~comprehend_moderation~pii.py | import asyncio
from typing import Any, Dict, Optional
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
ModerationPiiError,
)
class ComprehendPII:
def __init__(
self,
client: Any,
callback: Optional[Any] = None,
unique_id: Optional[str] = None,
chain_id: Optional[str] = None,
) -> None:
self.client = client
self.moderation_beacon = {
"moderation_chain_id": chain_id,
"moderation_type": "PII",
"moderation_status": "LABELS_NOT_FOUND",
}
self.callback = callback
self.unique_id = unique_id
def validate(self, prompt_value: str, config: Any = None) -> str:
redact = config.get("redact")
return (
self._detect_pii(prompt_value=prompt_value, config=config)
if redact
else self._contains_pii(prompt_value=prompt_value, config=config)
)
def _contains_pii(self, prompt_value: str, config: Any = None) -> str:
"""
Checks for Personally Identifiable Information (PII) labels above a
specified threshold. Uses Amazon Comprehend Contains PII Entities API. See -
https://docs.aws.amazon.com/comprehend/latest/APIReference/API_ContainsPiiEntities.html
Args:
prompt_value (str): The input text to be checked for PII labels.
config (Dict[str, Any]): Configuration for PII check and actions.
Returns:
str: the original prompt
Note:
- The provided client should be initialized with valid AWS credentials.
"""
pii_identified = self.client.contains_pii_entities(
Text=prompt_value, LanguageCode="en"
)
if self.callback and self.callback.pii_callback:
self.moderation_beacon["moderation_input"] = prompt_value
self.moderation_beacon["moderation_output"] = pii_identified
threshold = config.get("threshold")
pii_labels = config.get("labels")
pii_found = False
for entity in pii_identified["Labels"]:
if (entity["Score"] >= threshold and entity["Name"] in pii_labels) or (
entity["Score"] >= threshold and not pii_labels
):
pii_found = True
break
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
if pii_found:
raise ModerationPiiError
return prompt_value
def _detect_pii(self, prompt_value: str, config: Optional[Dict[str, Any]]) -> str:
"""
Detects and handles Personally Identifiable Information (PII) entities in the
given prompt text using Amazon Comprehend's detect_pii_entities API. The
function provides options to redact or stop processing based on the identified
PII entities and a provided configuration. Uses Amazon Comprehend Detect PII
Entities API.
Args:
prompt_value (str): The input text to be checked for PII entities.
config (Dict[str, Any]): A configuration specifying how to handle
PII entities.
Returns:
str: The processed prompt text with redacted PII entities or raised
exceptions.
Raises:
ValueError: If the prompt contains configured PII entities for
stopping processing.
Note:
- If PII is not found in the prompt, the original prompt is returned.
- The client should be initialized with valid AWS credentials.
"""
pii_identified = self.client.detect_pii_entities(
Text=prompt_value, LanguageCode="en"
)
if self.callback and self.callback.pii_callback:
self.moderation_beacon["moderation_input"] = prompt_value
self.moderation_beacon["moderation_output"] = pii_identified
if (pii_identified["Entities"]) == []:
if self.callback and self.callback.pii_callback:
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
return prompt_value
pii_found = False
if not config and pii_identified["Entities"]:
for entity in pii_identified["Entities"]:
if entity["Score"] >= 0.5:
pii_found = True
break
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
if pii_found:
raise ModerationPiiError
else:
threshold = config.get("threshold") # type: ignore
pii_labels = config.get("labels") # type: ignore
mask_marker = config.get("mask_character") # type: ignore
pii_found = False
for entity in pii_identified["Entities"]:
if (
pii_labels
and entity["Type"] in pii_labels
and entity["Score"] >= threshold
) or (not pii_labels and entity["Score"] >= threshold):
pii_found = True
char_offset_begin = entity["BeginOffset"]
char_offset_end = entity["EndOffset"]
mask_length = char_offset_end - char_offset_begin + 1
masked_part = mask_marker * mask_length
prompt_value = (
prompt_value[:char_offset_begin]
+ masked_part
+ prompt_value[char_offset_end + 1 :]
)
if self.callback and self.callback.pii_callback:
if pii_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_pii(self.moderation_beacon, self.unique_id)
)
return prompt_value
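# A minimal, hypothetical usage sketch: it assumes valid AWS credentials and the boto3
# Comprehend client; the sample text and config values are placeholders.
def _example_pii_redaction() -> str:
    import boto3  # local import so the sketch adds no hard dependency

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    moderator = ComprehendPII(client=comprehend)
    return moderator.validate(
        "Contact me at jane.doe@example.com",
        config={"redact": True, "threshold": 0.5, "labels": [], "mask_character": "*"},
    )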
| [] |
2024-01-10 | Romiroz/langchain | libs~langchain~tests~integration_tests~vectorstores~test_xata.py | """Test Xata vector store functionality.
Before running this test, please create a Xata database by following
the instructions from:
https://python.langchain.com/docs/integrations/vectorstores/xata
"""
import os
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.xata import XataVectorStore
class TestXata:
@classmethod
def setup_class(cls) -> None:
assert os.getenv("XATA_API_KEY"), "XATA_API_KEY environment variable is not set"
assert os.getenv("XATA_DB_URL"), "XATA_DB_URL environment variable is not set"
def test_similarity_search_without_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end constructions and search without metadata."""
texts = ["foo", "bar", "baz"]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
docsearch.delete(delete_all=True)
def test_similarity_search_with_metadata(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with a metadata filter.
This test requires a column named "a" of type integer to be present
in the Xata table."""
texts = ["foo", "foo", "foo"]
metadatas = [{"a": i} for i in range(len(texts))]
docsearch = XataVectorStore.from_texts(
api_key=os.getenv("XATA_API_KEY"),
db_url=os.getenv("XATA_DB_URL"),
texts=texts,
embedding=embedding_openai,
metadatas=metadatas,
)
docsearch.wait_for_indexing(ndocs=3)
output = docsearch.similarity_search("foo", k=1, filter={"a": 1})
assert output == [Document(page_content="foo", metadata={"a": 1})]
docsearch.delete(delete_all=True)
| [] |
2024-01-10 | Romiroz/langchain | libs~experimental~langchain_experimental~comprehend_moderation~toxicity.py | import asyncio
import importlib
from typing import Any, List, Optional
from langchain_experimental.comprehend_moderation.base_moderation_exceptions import (
ModerationToxicityError,
)
class ComprehendToxicity:
def __init__(
self,
client: Any,
callback: Optional[Any] = None,
unique_id: Optional[str] = None,
chain_id: Optional[str] = None,
) -> None:
self.client = client
self.moderation_beacon = {
"moderation_chain_id": chain_id,
"moderation_type": "Toxicity",
"moderation_status": "LABELS_NOT_FOUND",
}
self.callback = callback
self.unique_id = unique_id
def _toxicity_init_validate(self, max_size: int) -> Any:
"""
Validate and initialize toxicity processing configuration.
Args:
max_size (int): Maximum sentence size defined in the
configuration object.
Raises:
Exception: If the maximum sentence size exceeds the 5KB limit.
Note:
This function ensures that the NLTK punkt tokenizer is downloaded
if not already present.
Returns:
None
"""
if max_size > 1024 * 5:
raise Exception("The sentence length should not exceed 5KB.")
try:
nltk = importlib.import_module("nltk")
nltk.data.find("tokenizers/punkt")
return nltk
except ImportError:
raise ModuleNotFoundError(
"Could not import nltk python package. "
"Please install it with `pip install nltk`."
)
        except LookupError:
            nltk.download("punkt")
            # Return the module after downloading so the caller still gets a usable handle
            return nltk
def _split_paragraph(
self, prompt_value: str, max_size: int = 1024 * 4
) -> List[List[str]]:
"""
Split a paragraph into chunks of sentences, respecting the maximum size limit.
Args:
paragraph (str): The input paragraph to be split into chunks.
max_size (int, optional): The maximum size limit in bytes for
each chunk. Defaults to 1024.
Returns:
List[List[str]]: A list of chunks, where each chunk is a list
of sentences.
Note:
This function validates the maximum sentence size based on service
limits using the 'toxicity_init_validate' function. It uses the NLTK
sentence tokenizer to split the paragraph into sentences.
Example:
paragraph = "This is a sample paragraph. It
contains multiple sentences. ..."
chunks = split_paragraph(paragraph, max_size=2048)
"""
# validate max. sentence size based on Service limits
nltk = self._toxicity_init_validate(max_size)
sentences = nltk.sent_tokenize(prompt_value)
chunks = list() # type: ignore
current_chunk = list() # type: ignore
current_size = 0
for sentence in sentences:
sentence_size = len(sentence.encode("utf-8"))
# If adding a new sentence exceeds max_size
# or current_chunk has 10 sentences, start a new chunk
if (current_size + sentence_size > max_size) or (len(current_chunk) >= 10):
if current_chunk: # Avoid appending empty chunks
chunks.append(current_chunk)
current_chunk = []
current_size = 0
current_chunk.append(sentence)
current_size += sentence_size
# Add any remaining sentences
if current_chunk:
chunks.append(current_chunk)
return chunks
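    # Worked example of the rule above: with max_size=30 bytes, the sentences
    # "A is small." (11 B), "B is also small." (16 B) and "C pushes past the limit." (24 B)
    # yield [["A is small.", "B is also small."], ["C pushes past the limit."]]; a new
    # chunk starts once the running byte total would exceed max_size or 10 sentences.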
def validate(self, prompt_value: str, config: Any = None) -> str:
"""
Check the toxicity of a given text prompt using AWS
Comprehend service and apply actions based on configuration.
Args:
prompt_value (str): The text content to be checked for toxicity.
config (Dict[str, Any]): Configuration for toxicity checks and actions.
Returns:
str: The original prompt_value if allowed or no toxicity found.
Raises:
ValueError: If the prompt contains toxic labels and cannot be
processed based on the configuration.
"""
chunks = self._split_paragraph(prompt_value=prompt_value)
for sentence_list in chunks:
segments = [{"Text": sentence} for sentence in sentence_list]
response = self.client.detect_toxic_content(
TextSegments=segments, LanguageCode="en"
)
if self.callback and self.callback.toxicity_callback:
self.moderation_beacon["moderation_input"] = segments # type: ignore
self.moderation_beacon["moderation_output"] = response
toxicity_found = False
threshold = config.get("threshold")
toxicity_labels = config.get("labels")
if not toxicity_labels:
for item in response["ResultList"]:
for label in item["Labels"]:
if label["Score"] >= threshold:
toxicity_found = True
break
else:
for item in response["ResultList"]:
for label in item["Labels"]:
if (
label["Name"] in toxicity_labels
and label["Score"] >= threshold
):
toxicity_found = True
break
if self.callback and self.callback.toxicity_callback:
if toxicity_found:
self.moderation_beacon["moderation_status"] = "LABELS_FOUND"
asyncio.create_task(
self.callback.on_after_toxicity(
self.moderation_beacon, self.unique_id
)
)
if toxicity_found:
raise ModerationToxicityError
return prompt_value
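# A minimal, hypothetical usage sketch: it assumes valid AWS credentials, the boto3
# Comprehend client and the nltk punkt data; the threshold value is a placeholder.
def _example_toxicity_check(prompt: str) -> str:
    import boto3  # local import so the sketch adds no hard dependency

    comprehend = boto3.client("comprehend", region_name="us-east-1")
    moderator = ComprehendToxicity(client=comprehend)
    # Raises ModerationToxicityError when any toxicity label scores above the threshold.
    return moderator.validate(prompt, config={"threshold": 0.6, "labels": []})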
| [] |
2024-01-10 | Romiroz/langchain | libs~langchain~langchain~memory~readonly.py | from typing import Any, Dict, List
from langchain.schema import BaseMemory
class ReadOnlySharedMemory(BaseMemory):
"""A memory wrapper that is read-only and cannot be changed."""
memory: BaseMemory
@property
def memory_variables(self) -> List[str]:
"""Return memory variables."""
return self.memory.memory_variables
def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
"""Load memory variables from memory."""
return self.memory.load_memory_variables(inputs)
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Nothing should be saved or changed"""
pass
def clear(self) -> None:
"""Nothing to clear, got a memory like a vault."""
pass
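# A minimal usage sketch: wrap a shared conversation memory so a sub-chain can read it but
# never write to it; the memory key is a placeholder.
def _example_read_only_memory():
    from langchain.memory import ConversationBufferMemory

    base_memory = ConversationBufferMemory(memory_key="chat_history")
    read_only = ReadOnlySharedMemory(memory=base_memory)
    return read_only.load_memory_variables({})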
| [] |
2024-01-10 | hardik88t/chatPDF | chatPDF.py | import sys
import fitz
import openai
openai.api_key = 'OPENAI_API_KEY'
def get_combined_text(pdf_path):
doc = fitz.open(pdf_path)
combined_text = ''
for page in doc:
text = page.get_text()
combined_text += text
doc.close()
return combined_text
def ask_question(prompt, combined_text):
    # Truncate combined_text to the first 4096 characters as a rough guard on context length
    # (no summarization is actually performed here)
max_context_length = 4096
combined_text = combined_text[:max_context_length]
messages = [
{"role": "system", "content": """You would answer three types of questions
1. Direct Query Questions: These are questions where you would find keywords in text e.g. What is CDE? What was the dataset used in the study?
2. Indirect Query Questions: These are where no keyword is found e.g. Why was the proposed method used?
3. Identification of key references that inspire the proposed methodology in the paper"""},
{"role":"user",'content':combined_text},
{"role":"assistant","content":"text received now ask anything about it."},
{"role":"user","content":prompt}
]
    # Approximate the token count by whitespace-separated word count and stop if it exceeds the limit
total_tokens = sum(len(message["content"].split()) for message in messages)
if total_tokens > 800:
print("=== The conversation exceeds the maximum token limit.===")
return
chat = openai.ChatCompletion.create(
model="gpt-3.5-turbo", messages=messages, max_tokens=800, temperature=0.2
)
reply = chat.choices[0].message.content
print(reply)
def main():
# Use when Running from Colab/Notebook
pdf_path = input("Enter the path to the PDF file: ")
combined_text = get_combined_text(pdf_path)
# Use when running from Command Line
# pdf_path = sys.argv[1]
# combined_text = get_combined_text(pdf_path)
while True:
prompt = input("Enter your question (or 'quit' to exit): ")
if prompt.lower() == "quit":
break
ask_question(prompt, combined_text)
if __name__ == '__main__':
main() | [
"text received now ask anything about it.",
"You would answer three types of questions\n 1. Direct Query Questions: These are questions where you would find keywords in text e.g. What is CDE? What was the dataset used in the study?\n 2. Indirect Query Questions: These are where no keyword is found e.g. Why was the proposed method used?\n 3. Identification of key references that inspire the proposed methodology in the paper",
"Enter your question (or 'quit' to exit): "
] |
2024-01-10 | shubhamfullstack/rag-experiments | src~utils~store.py | import streamlit as st
import os
from langchain.document_loaders import (
CSVLoader,
PyMuPDFLoader,
TextLoader,
UnstructuredWordDocumentLoader,
)
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from configs.apikey import apikey
os.environ["OPENAI_API_KEY"] = apikey
LOADER_MAPPING = {
".csv": (CSVLoader, {}),
".doc": (UnstructuredWordDocumentLoader, {}),
".docx": (UnstructuredWordDocumentLoader, {}),
".pdf": (PyMuPDFLoader, {}),
".txt": (TextLoader, {"encoding": "utf8"}),
}
def getLoader(pdf_path, ext):
    if ext in LOADER_MAPPING:
        loader_class, loader_args = LOADER_MAPPING[ext]
        loader = loader_class(pdf_path, **loader_args)
        return loader.load()
    # Fail fast instead of silently returning None for unsupported extensions
    raise ValueError(f"Unsupported file extension: {ext}")
def injest(pdf_path, ext, chunk_size):
persist_directory = "db/chroma"
documents = getLoader(pdf_path, ext)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectordb = Chroma.from_documents(
documents=texts,
embedding=embeddings,
persist_directory=persist_directory,
collection_name="fusion-ai",
)
vectordb.persist()
def create_vector_store(uploaded_file, chunk_size):
    # st.spinner is a context manager; without "with" it never renders
    file_extension = uploaded_file.name.split(".")[-1]
    with st.spinner(text="In progress..."):
        print("data/" + uploaded_file.name)
        injest("data/" + uploaded_file.name, "." + file_extension, chunk_size)
st.success("Vector Store is Created Successfully!") | [] |
2024-01-10 | shubhamfullstack/rag-experiments | src~pages~4_Summary.py | import os, tempfile
import streamlit as st
from langchain.llms.openai import OpenAI
from langchain.vectorstores.chroma import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains.summarize import load_summarize_chain
from langchain.document_loaders import PyPDFLoader
from utils.authenticate import authenticate
from configs.apikey import apikey
os.environ["OPENAI_API_KEY"] = apikey
auth = authenticate()
if auth[0]:
st.subheader('Document Summary')
source_doc = st.file_uploader("Upload Source Document", type="pdf")
if st.button("Summarize"):
if not source_doc:
st.error("Please provide the source document.")
else:
try:
with st.spinner('Please wait...'):
# Save uploaded file temporarily to disk, load and split the file into pages, delete temp file
with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
tmp_file.write(source_doc.read())
loader = PyPDFLoader(tmp_file.name)
pages = loader.load_and_split()
os.remove(tmp_file.name)
# Create embeddings for the pages and insert into Chroma database
embeddings=OpenAIEmbeddings()
vectordb = Chroma.from_documents(pages, embeddings)
# Initialize the OpenAI module, load and run the summarize chain
llm=OpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="stuff")
search = vectordb.similarity_search(" ")
print(search)
summary = chain.run(input_documents=search, question="Write a summary within 200 words.")
st.success(summary)
except Exception as e:
st.exception(f"An error occurred: {e}")
| [] |
2024-01-10 | shubhamfullstack/rag-experiments | src~pages~3_Chat.py | import streamlit as st
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.schema import HumanMessage, AIMessage
import streamlit as st
from streamlit_chat import message
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores.chroma import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chains import LLMChain
from langchain.chains.question_answering import load_qa_chain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from utils.authenticate import authenticate
from configs.apikey import apikey
import os
os.environ["OPENAI_API_KEY"] = apikey
def get_conversation_string():
conversation_string = ""
for i in range(len(st.session_state['responses'])-1):
conversation_string += "Human: "+st.session_state['requests'][i] + "\n"
conversation_string += "Bot: "+ st.session_state['responses'][i+1] + "\n"
return conversation_string
def page_chat():
if 'responses' not in st.session_state:
st.session_state['responses'] = ["How can I assist you?"]
if 'requests' not in st.session_state:
st.session_state['requests'] = []
llm = ChatOpenAI(model_name="gpt-3.5-turbo")
if 'buffer_memory' not in st.session_state:
st.session_state.buffer_memory=ConversationBufferWindowMemory(k=3,return_messages=True)
embedding = OpenAIEmbeddings()
vector_store = Chroma(
collection_name="fusion-ai",
embedding_function=embedding,
persist_directory="db/chroma",
)
response_container = st.container()
textcontainer = st.container()
with st.expander("Options"):
chain_type = st.selectbox(label="Chain Type",options=["stuff","refine","map_reduce","map_rerank"],index=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(llm, chain_type=chain_type)
conversation = ConversationalRetrievalChain(retriever=vector_store.as_retriever(), verbose=False, return_source_documents=True,question_generator=question_generator,combine_docs_chain=doc_chain)
chat_history = []
with textcontainer:
query = st.text_input("Query: ", key="input")
if query:
with st.spinner("Fetching..."):
response = conversation({"question": query, "chat_history": chat_history})
answer = response["answer"]
with st.expander("Source Documents"):
st.write(response["source_documents"])
chat_history.append(HumanMessage(content=query))
chat_history.append(AIMessage(content=answer))
st.session_state.requests.append(query)
st.session_state.responses.append(answer)
with response_container:
if st.session_state['responses']:
for i in range(len(st.session_state['responses'])):
message(st.session_state['responses'][i],key=str(i))
if i < len(st.session_state['requests']):
message(st.session_state["requests"][i], is_user=True,key=str(i)+ '_user')
auth = authenticate()
if auth[0]:
page_chat() | [] |
2024-01-10 | Nima-Yeganeh/Test | zprojects~pr2~test1.py | # https://platform.openai.com/account/api-keys
# https://platform.openai.com/apps
# https://openai.com/
# https://chat.openai.com/
import os
import openai
import time
import datetime
import random
xcode = input("What is the code? ")
openai.api_key = "sk-"+xcode+"joeRLSZjsL9bOXI2PT3BlbkFJEc4ys7pAJe7SL82uqxtE"
original_string1 = __file__
new_string1 = original_string1.replace('test1.py', 'filename1.txt')
file_path1 = new_string1
if os.path.exists(file_path1):
with open(file_path1, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path1} does not exist.")
original_string2 = __file__
new_string2 = original_string2.replace('test1.py', 'filename2.txt')
file_path2 = new_string2
if os.path.exists(file_path2):
with open(file_path2, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path2} does not exist.")
original_string3 = __file__
new_string3 = original_string3.replace('test1.py', 'filename3.txt')
file_path3 = new_string3
if os.path.exists(file_path3):
with open(file_path3, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path3} does not exist.")
original_string4 = __file__
new_string4 = original_string4.replace('test1.py', 'filename4.txt')
file_path4 = new_string4
if os.path.exists(file_path4):
with open(file_path4, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path4} does not exist.")
with open(file_path1, 'r') as file1, open(file_path2, 'r') as file2, open(file_path3, 'w') as output_file:
for line1 in file1:
faqs = []
file2.seek(0)
for line2 in file2:
string = line1.strip() + ' ' + line2.strip()
faqs.append(string)
for string in faqs:
# print(string)
output_file.write(string + '\n')
with open(file_path3, 'r') as infile:
data = [line.strip() for line in infile]
random.shuffle(data)
with open(file_path4, 'w') as outfile:
for line in data:
outfile.write(line + '\n')
def generate_response(question):
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a chatbot"},
{"role": "user", "content": question},
]
)
result = ''
for choice in response.choices:
result += choice.message.content
return(result)
def generate_filename(newfilename):
original_string1 = __file__
new_string1 = original_string1.replace(os.path.basename(__file__), 'data/'+newfilename+'.MD')
# newfile_path1 = new_string1
newfile_path1 = new_string1.replace(" ", "_")
return(newfile_path1)
with open(file_path4, 'r') as f:
for line in f:
prompt = line.strip()
print(prompt)
story = generate_response(prompt)
print(story)
with open(generate_filename(prompt), 'w') as output_file:
output_file.write(story + '\n')
time.sleep(60)
| [
"You are a chatbot"
] |
2024-01-10 | Nima-Yeganeh/Test | zprojects~pr4~test1.py | # https://platform.openai.com/account/api-keys
# https://platform.openai.com/apps
# https://openai.com/
# https://chat.openai.com/
import os
import openai
import time
import datetime
import random
xcode = input("What is the code? ")
openai.api_key = "sk-"+xcode+"joeRLSZjsL9bOXI2PT3BlbkFJEc4ys7pAJe7SL82uqxtE"
original_string1 = __file__
new_string1 = original_string1.replace('test1.py', 'filename1.txt')
file_path1 = new_string1
if os.path.exists(file_path1):
with open(file_path1, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path1} does not exist.")
original_string2 = __file__
new_string2 = original_string2.replace('test1.py', 'filename2.txt')
file_path2 = new_string2
if os.path.exists(file_path2):
with open(file_path2, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path2} does not exist.")
original_string3 = __file__
new_string3 = original_string3.replace('test1.py', 'filename3.txt')
file_path3 = new_string3
if os.path.exists(file_path3):
with open(file_path3, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path3} does not exist.")
original_string4 = __file__
new_string4 = original_string4.replace('test1.py', 'filename4.txt')
file_path4 = new_string4
if os.path.exists(file_path4):
with open(file_path4, 'r') as file:
contents = file.read()
# print(contents)
else:
print(f"File {file_path4} does not exist.")
with open(file_path1, 'r') as file1, open(file_path2, 'r') as file2, open(file_path3, 'w') as output_file:
for line1 in file1:
faqs = []
file2.seek(0)
for line2 in file2:
string = line1.strip() + ' ' + line2.strip()
faqs.append(string)
for string in faqs:
# print(string)
output_file.write(string + '\n')
with open(file_path3, 'r') as infile:
data = [line.strip() for line in infile]
random.shuffle(data)
with open(file_path4, 'w') as outfile:
for line in data:
outfile.write(line + '\n')
def generate_response(question):
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "system", "content": "You are a chatbot"},
{"role": "user", "content": question},
]
)
result = ''
for choice in response.choices:
result += choice.message.content
return(result)
def generate_filename(newfilename):
original_string1 = __file__
new_string1 = original_string1.replace(os.path.basename(__file__), 'data/'+newfilename+'.MD')
# newfile_path1 = new_string1
newfile_path1 = new_string1
newfile_path1 = newfile_path1.replace(".MD", "_MDFILEEXT")
newfile_path1 = newfile_path1.replace(" ", "_")
newfile_path1 = newfile_path1.replace(":", "_")
newfile_path1 = newfile_path1.replace("`", "_")
newfile_path1 = newfile_path1.replace("?", "")
newfile_path1 = newfile_path1.replace(",", "")
newfile_path1 = newfile_path1.replace(".", "_")
newfile_path1 = newfile_path1.replace("'", "_")
newfile_path1 = newfile_path1.replace("-", "_")
newfile_path1 = newfile_path1.replace("-", "_")
newfile_path1 = newfile_path1.replace('"', '')
newfile_path1 = newfile_path1.replace("*", "")
newfile_path1 = newfile_path1.replace("{", "_")
newfile_path1 = newfile_path1.replace("}", "_")
newfile_path1 = newfile_path1.replace("[", "_")
newfile_path1 = newfile_path1.replace("]", "_")
newfile_path1 = newfile_path1.replace("(", "_")
newfile_path1 = newfile_path1.replace(")", "_")
newfile_path1 = newfile_path1.replace("\\", "_")
newfile_path1 = newfile_path1.replace("\\\\", "_")
newfile_path1 = newfile_path1.replace("''", "_")
newfile_path1 = newfile_path1.replace("%", "_")
newfile_path1 = newfile_path1.replace("%%", "_")
newfile_path1 = newfile_path1.replace("__", "_")
newfile_path1 = newfile_path1.replace("__", "_")
newfile_path1 = newfile_path1.replace("__", "_")
newfile_path1 = newfile_path1.replace("_MDFILEEXT", ".MD")
return(newfile_path1)
with open(file_path4, 'r') as f:
for line in f:
prompt = line.strip()
print(prompt)
story = generate_response(prompt)
print(story)
with open(generate_filename(prompt), 'w') as output_file:
output_file.write(story + '\n')
time.sleep(60)
| [
"You are a chatbot"
] |
2024-01-10 | SaarthShah/YouTube-Stock-Analyzer | app~components~engine.py | import streamlit as st
import openai
from transformers import pipeline
import pandas as pd
import requests
from pytube import YouTube
import os
audio_location = ""
openai.api_key = st.session_state.get("OPENAI_API_KEY")
deepgram_access_code = st.session_state.get("DEEPGRAM_API_KEY")
pipe = pipeline("text-classification", model="nickmuchi/deberta-v3-base-finetuned-finance-text-classification",binary_output=True,top_k=3)
stock_names = pd.read_csv('stocks.csv')
def highlight_stock_names(text, stock_names):
# Create a hash table (dictionary) for stock names and corresponding Markdown formatting
stock_name_format = {str(name).lower(): f'<span style="background-color: #3498db">{name}</span>' for name in stock_names}
words = text.split() # Split text into words
highlighted_words = []
for word in words:
word_lower = word.lower()
cleaned_word = word_lower.split("'")[0]
highlighted_word = stock_name_format.get(cleaned_word, word)
highlighted_words.append(highlighted_word)
highlighted_text = ' '.join(highlighted_words)
return highlighted_text
# List of stock names (replace with your own list)
stock_names = stock_names['Name']
def Download(link):
youtubeObject = YouTube(link)
youtubeObject = youtubeObject.streams.filter(only_audio=True).first().download()
print("Download is completed successfully")
return youtubeObject
def getTicker(company_name):
try:
yfinance = "https://query2.finance.yahoo.com/v1/finance/search"
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36'
params = {"q": company_name, "quotes_count": 1, "country": "United States"}
res = requests.get(url=yfinance, params=params, headers={'User-Agent': user_agent})
data = res.json()
company_code = data['quotes'][0]['symbol']
return '$'+company_code
except:
return None
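# A minimal usage sketch: the Yahoo Finance endpoint above is unofficial, so a None result
# simply means the ticker could not be resolved.
def _example_ticker_lookup():
    for company in ["Apple", "Nvidia", "Some Unknown Startup"]:
        print(company, "->", getTicker(company))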
def engine(youtube_link="https://www.youtube.com/watch?v=16SUWTGsDGI&ab_channel=CNBCTelevision"):
# try:
print('Starting engine')
print('Downloading audio file from YouTube')
# Download the audio from the YouTube video
audio_location = Download(youtube_link)
# Read the audio file
audio_file = ''
with open(audio_location, "rb") as file:
audio_file = file.read()
# DELETE THE AUDIO FILE
os.remove(audio_location)
print('Audio file read successfully')
# Get the transcript from Deepgram
url = "https://api.deepgram.com/v1/listen?paragraphs=true&summarize=v2"
headers = {
"accept": "application/json",
"content-type": "audio/wave",
"Authorization": f"Token {str(deepgram_access_code)}"
}
response = requests.post(url, data=audio_file, headers=headers)
response_json = response.json()
summary = response_json['results']['summary']['short']
transcript = response_json['results']['channels'][0]['alternatives'][0]['paragraphs']['transcript']
print('Transcript fetched successfully')
response2 = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": f"'{transcript}'\n For every company or industry that the speaker mentions, give detailed but clear explanation of what they said. Return in the format of a python dictionary where each key is a stock/industry name and the contents is a detailed explanation of what that the person said. "}
])
res_dict = response2['choices'][0]['message']['content']
try:
res_dict_eval = eval(res_dict)
except:
response3 = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": f"'{res_dict}'\n Return a valid python dictionary where each key is a stock/industry name (ticker) and the contents is a detailed explanation of what that the person said"}
])
res_dict_eval = eval(response3['choices'][0]['message']['content'])
result = {}
for key, statement in res_dict_eval.items():
result[key] = {
"sentiment": pipe(statement),
'statement': statement,
'ticker': getTicker(key)
}
    print('Stock analysis fetched from OpenAI successfully')
result_df = pd.DataFrame.from_dict(result, orient='index')
# Create st.metric for each stock
st.markdown("## Sentiment Analysis Results")
# Create columns layout
cols = st.columns(5) # Adjust the number of columns as needed
counter = 0 # Counter to keep track of the metrics
for index, row in result_df.iterrows():
score = str(round(row['sentiment'][0][0]['score']*100, 2)) + '%'
label = row['sentiment'][0][0]['label']
# Choose delta_color based on sentiment label
if label == 'bullish':
delta_color = 'normal'
elif label == 'neutral':
delta_color = 'off'
else:
delta_color = 'normal'
# Capitalize the first letter of the index
index = index[0].upper() + index[1:]
name = index
if label == 'bearish':
label = '-bearish'
# Create a metric in the current column
with cols[counter % 5]: # Alternate columns
st.metric(label=name, value=score, delta=label, delta_color=delta_color)
counter += 1 # Increment counter
print('Sentiment analysis results displayed successfully')
st.markdown('## Stock-wise breakdown')
for i in result_df.index:
# Capitalize the first letter of the index
st.markdown(f'#### {i[0].upper() + i[1:]}')
st.markdown('Possible Ticker: ' + str(result_df.loc[i, 'ticker']))
st.markdown(f'{result_df.loc[i, "sentiment"][0][0]["label"]}' + ' ' + str(round(result_df.loc[i, "sentiment"][0][0]["score"]*100, 2)) + '%')
st.markdown(result_df.loc[i, "statement"])
print('Stock-wise breakdown displayed successfully')
st.markdown("## Summary")
st.write(highlight_stock_names(summary, stock_names), unsafe_allow_html=True)
print('Summary displayed successfully')
st.markdown("## Transcript")
st.write(highlight_stock_names(transcript, stock_names), unsafe_allow_html=True)
print('Transcript displayed successfully')
st.markdown("## YouTube Video")
# Display the YouTube video
st.video(youtube_link)
print('YouTube video displayed successfully')
# except Exception as e:
# print(e)
# st.error("There was an error processing your request. Please try again.")
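# Usage sketch (assumes OPENAI_API_KEY and DEEPGRAM_API_KEY are set in st.session_state
# and that this module runs inside a Streamlit page):
#   engine("https://www.youtube.com/watch?v=<video_id>")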
| [
"'PLACEHOLDER'\n Return a valid python dictionary where each key is a stock/industry name (ticker) and the contents is a detailed explanation of what that the person said",
"audio/wave",
"'PLACEHOLDER'\n For every company or industry that the speaker mentions, give detailed but clear explanation of what they said. Return in the format of a python dictionary where each key is a stock/industry name and the contents is a detailed explanation of what that the person said. "
] |
2024-01-10 | u002410/SlackBotGPT | refatorador.py |
import openai
from secret_access import OPEN_IA_TOKEN
from filter_pii import remove_pii, contains_prohibited
openai.api_key = OPEN_IA_TOKEN
conversations = {}
message_start_ts = {}
info = {}
def process_code(message, say, context_type='random'):
user_id = message['user']
user_message = ""
if context_type == 'refactor':
user_message = f"I need your help with a piece of code in {info[user_id]['language']}. Here is the code:\n{info[user_id]['code']}.\n"
if 'alteration' in info[user_id]:
user_message += f"The desired change is: {info[user_id]['alteration']}.\nPlease, refactor the code considering this request."
system_content = "You are a helpful assistant that review and refactor code."
elif context_type == 'security':
user_message = f"I need your help with a piece of code. It's written in {info[user_id]['language']} and has a known vulnerability {info[user_id]['vulnerability']}.\nHere is the code:\n\n{info[user_id]['code']}\n\nPlease, refactor this code to address the identified vulnerability and show the lines where the code presents the issue."
if 'alteration' in info[user_id]:
user_message += f"The desired change is: {info[user_id]['alteration']}.\nPlease, refactor the code considering this request."
system_content = "You are a helpful assistant that review and refactor insecure code."
else: # assuming 'random' context
user_message = f"Hello assistant, {info[user_id]['question']}."
system_content = "You are a helpful assistant."
thread_id = message['ts']
if thread_id not in conversations:
conversations[thread_id] = [
{"role": "system", "content": system_content},
{"role": "user", "content": remove_pii(user_message)}
]
message_start_ts[thread_id] = message['ts']
else:
conversations[thread_id].append({"role": "user", "content": user_message})
message_start_ts[thread_id] = message['ts']
valid_sensetive = contains_prohibited(user_message)
if valid_sensetive == user_message:
response_message = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=conversations[thread_id],
max_tokens=1024,
temperature=0.2,
top_p = 0
)
conversations[thread_id].append(
{"role": "assistant", "content": response_message['choices'][0]['message']['content']}
)
# save_conversation(thread_id, conversations[thread_id])
if context_type !='random':
if 'ts' in message:
say(thread_ts=message['ts'], text=response_message['choices'][0]['message']['content'])
say("Was the refactoring satisfactory? Answer with *Yes* or *No*.", thread_ts=message['ts'])
info[user_id]['satisfied'] = True
else:
if 'ts' in message:
            # Always use the original message timestamp to reply in the same thread
say(thread_ts=message['ts'], text=response_message['choices'][0]['message']['content'])
else:
say(valid_sensetive, thread_ts=message['ts'])
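# Wiring sketch (assumption - the actual Slack Bolt listener lives elsewhere in this repo):
#
#   from slack_bolt import App
#   app = App(token=SLACK_BOT_TOKEN)
#
#   @app.message("refactor")
#   def handle_refactor(message, say):
#       process_code(message, say, context_type="refactor")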
def process_message(message, say):
user_id = message['user']
user_message = message['text']
thread_id = info[user_id]['thread']
if thread_id not in conversations:
system_content = "You are a helpful assistant."
conversations[thread_id] = [
{"role": "system", "content": system_content},
{"role": "user", "content": remove_pii(user_message)}
]
else:
conversations[thread_id].append({"role": "user", "content": remove_pii(user_message)})
valid_sensetive = contains_prohibited(user_message)
if valid_sensetive == user_message:
response_message = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=conversations[thread_id],
max_tokens=512,
temperature=0.2,
top_p = 0
)
conversations[thread_id].append(
{"role": "assistant", "content": response_message['choices'][0]['message']['content']}
)
if 'ts' in message:
            # Always use the original message timestamp to reply in the same thread
say(thread_ts=message['ts'], text=response_message['choices'][0]['message']['content'])
else:
say(valid_sensetive, thread_ts=message['ts']) | [
"content"
] |
2024-01-10 | zaebee/fairytales | backend~app~app~api~api_v1~endpoints~tales.py | import logging
from typing import Any, List
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from app import crud, models, schemas
from app.api import deps
from app.services import cohere
from app.services import stability
router = APIRouter()
logger = logging.getLogger('uvicorn')
@router.post('/', response_model=schemas.TaleBase)
async def create_tale(
*, tale_in: schemas.TaleCreate,
) -> Any:
"""
Create new tale.
"""
tale_prompt = await cohere.TalePrompt.create(tale_in.log_line)
if tale_in.heroes:
descriptions = [hero.description for hero in tale_in.heroes]
names = [hero.name for hero in tale_in.heroes]
tale_prompt.heroes = {0: {'descriptions': descriptions, 'names': names}}
if tale_in.structure and tale_in.structure.parts:
parts = [f'{part.name.upper()}: {part.text}'
for part in tale_in.structure.parts]
tale_prompt.structures = {0: parts}
response = await tale_prompt.get_tale(
structure=0, heroes=0,
temperature=tale_in.temperature, max_tokens=tale_in.max_tokens)
await tale_prompt.close()
logger.info('Generated tale:\n %s', response)
tale_in.stories = [schemas.Story(text=text) for text in response]
return tale_in
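# Example request body for the create_tale endpoint (illustrative values only;
# field names follow schemas.TaleCreate as used above):
# {
#   "log_line": "A shy dragon learns to share its treasure",
#   "heroes": [{"name": "Ember", "description": "a shy young dragon"}],
#   "temperature": 0.8,
#   "max_tokens": 512
# }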
@router.post('/heroes', response_model=list[schemas.HeroSet])
async def create_heroes(
*, tale_in: schemas.TaleCreate,
) -> Any:
"""
Create new heroes.
"""
logger.info('Passed tale:%s', tale_in)
tale_prompt = await cohere.TalePrompt.create(tale_in.log_line)
response = await tale_prompt.get_heroes(
temperature=tale_in.temperature,
max_tokens=tale_in.max_tokens)
logger.info('Generated heroes:\n %s', response)
await tale_prompt.close()
return [schemas.HeroSet(heroes=heroes) for heroes in response]
@router.post('/structures', response_model=list[schemas.Structure])
async def create_structures(
*, tale_in: schemas.TaleCreate,
) -> Any:
"""
Create new structures.
"""
logger.info('Passed tale:\n %s', tale_in)
tale_prompt = await cohere.TalePrompt.create(tale_in.log_line)
descriptions = [hero.description for hero in tale_in.heroes]
tale_prompt.heroes = {0: {'descriptions': descriptions}}
response = await tale_prompt.get_structure(
heroes=0, temperature=tale_in.temperature,
max_tokens=tale_in.max_tokens)
logger.info('Generated structures:\n %s', response)
await tale_prompt.close()
return [schemas.Structure(parts=item) for item in response.values()]
@router.post('/portraits', response_model=list[schemas.Portrait])
def create_portraits(
*, image_in: schemas.PortraitCreate,
) -> Any:
"""
Create hero portraits.
"""
image_prompt = stability.StabilityPrompt()
response = image_prompt.generate_character(
image_in.hero_id, image_in.prompt, style=image_in.style)
logger.info('Generated images:\n%s', response)
return [schemas.Portrait(**item) for item in response]
@router.post('/images', response_model=list[schemas.Scene])
def create_images(
*, image_in: schemas.SceneCreate,
) -> Any:
"""
Create scene images.
"""
image_prompt = stability.StabilityPrompt()
response = image_prompt.generate_scene(
image_in.scene_id, image_in.prompt, style=image_in.style)
logger.info('Generated images:\n%s', response)
return [schemas.Scene(**item) for item in response]
| [] |
2024-01-10 | decisionfacts/semantic-ai | semantic_ai~indexer~elastic_search.py | import asyncio
import os
import aiofiles
from aiopath import AsyncPath
from typing import (
Optional
)
from langchain.embeddings.base import Embeddings
from langchain.vectorstores import ElasticsearchStore
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from semantic_ai.indexer.base import BaseIndexer
from semantic_ai.utils import file_process, check_isfile, iter_to_aiter, sync_to_async
from elasticsearch import Elasticsearch
class ElasticsearchIndexer(BaseIndexer):
def __init__(
self,
*,
url: str,
es_user: str | None = None,
es_password: str | None = None,
index_name: str,
embedding: Optional[Embeddings] = HuggingFaceEmbeddings(),
verify_certs: bool = True,
es_api_key: Optional[str] = None
):
super().__init__()
self.url = url
self.es_user = es_user
self.es_password = es_password
self.index_name = index_name
self.embeddings = embedding
self.verify_certs = verify_certs
self.es_api_key = es_api_key
self.es_connection = Elasticsearch(self.url,
basic_auth=(self.es_user, self.es_password),
verify_certs=self.verify_certs
)
async def create(self) -> ElasticsearchStore:
obj = ElasticsearchStore(
embedding=self.embeddings,
index_name=f"{self.index_name}",
es_connection=self.es_connection,
es_api_key=self.es_api_key
)
return obj
@staticmethod
async def from_documents(extracted_json_dir, recursive: bool):
if extracted_json_dir:
datas = []
dir_path = AsyncPath(extracted_json_dir)
if await dir_path.is_file():
file_path = str(dir_path)
file_ext = dir_path.suffix.lower()
data = await file_process(file_ext=file_ext, file_path=file_path)
await asyncio.sleep(1)
yield data
elif await dir_path.is_dir():
if recursive:
walk_dir = await sync_to_async(os.walk, dir_path)
async for root, dirs, files in iter_to_aiter(walk_dir):
for file in files:
path = AsyncPath(f"{root}/{file}")
file_path = str(path)
file_ext = path.suffix.lower()
_data = await file_process(file_ext=file_ext, file_path=file_path)
datas.append(_data)
else:
pass
await asyncio.sleep(1)
yield datas
else:
async for path in dir_path.iterdir():
if await path.is_file():
file_path = str(path)
file_ext = path.suffix.lower()
_data = await file_process(file_ext=file_ext, file_path=file_path)
datas.append(_data)
else:
pass
yield datas
else:
raise ValueError(f"Please give valid file or directory path.")
async def index(self, extracted_json_dir_or_file: str, recursive: bool = False):
if extracted_json_dir_or_file:
documents_data = self.from_documents(extracted_json_dir_or_file, recursive)
documents = await documents_data.asend(None)
if await check_isfile(extracted_json_dir_or_file):
try:
if documents:
await ElasticsearchStore.afrom_documents(
documents=documents,
embedding=self.embeddings,
index_name=self.index_name,
es_connection=self.es_connection
)
except Exception as ex:
print(f"{ex}")
else:
try:
async for docs in iter_to_aiter(documents):
if docs:
await ElasticsearchStore.afrom_documents(
documents=docs,
embedding=self.embeddings,
index_name=self.index_name,
es_connection=self.es_connection
)
except Exception as ex:
print(f"{ex}")
else:
raise ValueError(f"Please give valid file or directory path.")
| [] |
2024-01-10 | decisionfacts/semantic-ai | semantic_ai~indexer~qdrant.py | import asyncio
import os
from typing import Optional, Any
from aiopath import AsyncPath
from langchain.vectorstores import Qdrant
from qdrant_client import QdrantClient
from langchain.embeddings.base import Embeddings
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
from semantic_ai.indexer.base import BaseIndexer
from semantic_ai.utils import file_process, check_isfile, iter_to_aiter, sync_to_async
class QdrantIndexer(BaseIndexer):
"""Create qdrant indexer and create client object
Args:
location:
If `:memory:` - use in-memory Qdrant instance.
If `str` - use it as a `url` parameter.
If `None` - use default values for `host` and `port`.
url: either host or str of "Optional[scheme], host, Optional[port], Optional[prefix]".
Default: `None`
port: Port of the REST API interface. Default: 6333
grpc_port: Port of the gRPC interface. Default: 6334
prefer_grpc: If `true` - use gPRC interface whenever possible in custom methods.
https: If `true` - use HTTPS(SSL) protocol. Default: `None`
api_key: API key for authentication in Qdrant Cloud. Default: `None`
prefix:
If not `None` - add `prefix` to the REST URL path.
Example: `service/v1` will result in `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.
Default: `None`
timeout:
Timeout for REST and gRPC API requests.
Default: 5.0 seconds for REST and unlimited for gRPC
host: Host name of Qdrant service. If url and host are None, set to 'localhost'.
Default: `None`
path: Persistence path for QdrantLocal. Default: `None`
**kwargs: Additional arguments passed directly into REST client initialization
Example:
. code-block:: python
from semantic_ai.indexer import QdrantIndexer
collection_name = "MyCollection"
qdrant = QdrantIndexer(url, collection_name, embeddings)
"""
CONTENT_KEY = "page_content"
METADATA_KEY = "metadata"
VECTOR_NAME = None
def __init__(self,
index_name: str,
embedding: Optional[Embeddings] = HuggingFaceEmbeddings(),
content_payload_key: str = CONTENT_KEY,
metadata_payload_key: str = METADATA_KEY,
distance_strategy: str = "COSINE",
vector_name: Optional[str] = VECTOR_NAME,
location: Optional[str] = None,
url: Optional[str] = None,
port: Optional[int] = 6333,
grpc_port: int = 6334,
prefer_grpc: bool = False,
https: Optional[bool] = None,
api_key: Optional[str] = None,
prefix: Optional[str] = None,
timeout: Optional[float] = None,
host: Optional[str] = None,
path: Optional[str] = None,
**kwargs: Any,
):
self.location = location
self.url = url
self.port = port
self.grpc_port = grpc_port
self.prefer_grpc = prefer_grpc
self.https = https
self.api_key = api_key
self.prefix = prefix
self.timeout = timeout
self.host = host
self.path = path
self.collection_name = index_name
self.embeddings = embedding
self.content_payload_key = content_payload_key
self.metadata_payload_key = metadata_payload_key
self.distance_strategy = distance_strategy
self.vector_name = vector_name
self.client = QdrantClient(
location=self.location,
url=self.url,
port=self.port,
grpc_port=self.grpc_port,
prefer_grpc=self.prefer_grpc,
https=self.https,
api_key=self.api_key,
prefix=self.prefix,
timeout=self.timeout,
host=self.host,
path=self.path,
**kwargs
)
async def create(self) -> Qdrant:
return Qdrant(
client=self.client,
collection_name=self.collection_name,
embeddings=self.embeddings,
content_payload_key=self.content_payload_key,
metadata_payload_key=self.metadata_payload_key,
distance_strategy=self.distance_strategy,
vector_name=self.vector_name
)
@staticmethod
async def from_documents(extracted_json_dir, recursive):
if extracted_json_dir:
datas = []
dir_path = AsyncPath(extracted_json_dir)
if await dir_path.is_file():
file_path = str(dir_path)
file_ext = dir_path.suffix.lower()
data = await file_process(file_ext=file_ext, file_path=file_path)
await asyncio.sleep(1)
yield data
elif await dir_path.is_dir():
if recursive:
walk_dir = await sync_to_async(os.walk, dir_path)
async for root, dirs, files in iter_to_aiter(walk_dir):
for file in files:
path = AsyncPath(f"{root}/{file}")
file_path = str(path)
file_ext = path.suffix.lower()
_data = await file_process(file_ext=file_ext, file_path=file_path)
datas.append(_data)
else:
pass
await asyncio.sleep(1)
yield datas
else:
async for path in dir_path.iterdir():
if await path.is_file():
file_path = str(path)
file_ext = path.suffix.lower()
_data = await file_process(file_ext=file_ext, file_path=file_path)
datas.append(_data)
else:
pass
yield datas
else:
raise ValueError(f"Please give valid file or directory path.")
async def index(self, extracted_json_dir_or_file: str, recursive: bool):
if extracted_json_dir_or_file:
documents_data = self.from_documents(extracted_json_dir_or_file, recursive)
documents = await documents_data.asend(None)
if await check_isfile(extracted_json_dir_or_file):
if documents:
try:
await Qdrant.afrom_documents(
documents=documents,
embedding=self.embeddings,
url=self.url,
api_key=self.api_key,
collection_name=self.collection_name
)
except Exception as ex:
print(f"{ex}")
else:
try:
async for docs in iter_to_aiter(documents):
if docs:
await Qdrant.afrom_documents(
documents=docs,
embedding=self.embeddings,
url=self.url,
api_key=self.api_key,
collection_name=self.collection_name
)
except Exception as ex:
print(f"{ex}")
else:
raise ValueError(f"Please give valid file or directory path.")
| [] |
2024-01-10 | decisionfacts/semantic-ai | semantic_ai~search~semantic_search.py | import asyncio
from typing import Optional
import torch
import logging
from fastapi import HTTPException, status
from semantic_ai.utils import sync_to_async, _clear_cache
from semantic_ai.constants import DEFAULT_PROMPT
from langchain.chains import RetrievalQA
from langchain import PromptTemplate
logging.basicConfig(format='%(asctime)s - %(message)s', level=logging.INFO)
logger = logging.getLogger(__name__)
class Search:
def __init__(self,
model,
load_vector_db,
top_k: Optional[int] = None,
prompt: Optional[str] = None
):
self.model = model
self.load_vector_db = load_vector_db
self.top_k = top_k or 4
self.prompt_template = prompt or DEFAULT_PROMPT
async def generate(self, query: str):
asyncio.create_task(_clear_cache())
with (torch.inference_mode()):
_no_response = "Sorry, I can't find the answer from the document."
prompt_template = PromptTemplate(template=self.prompt_template,
input_variables=["context", "question"])
chain_type_kwargs = {
"prompt": prompt_template
}
vector_search = self.load_vector_db
# print(f"Search Query: {vector_search.similarity_search(query)}")
retriever = await sync_to_async(
vector_search.as_retriever,
search_kwargs={"k": self.top_k}
)
qa: RetrievalQA = await sync_to_async(
RetrievalQA.from_chain_type,
llm=self.model,
chain_type="stuff",
retriever=retriever,
chain_type_kwargs=chain_type_kwargs,
return_source_documents=True
)
try:
result = await sync_to_async(qa, query)
if result:
# print("Retrieval Result =:\n", result)
logger.info(f"Retrieval Result =:\n{result}")
source_documents = [doc.metadata for doc in result.get('source_documents') or []]
llm_result = result.get('result')
llm_response = {'query': query, 'result': llm_result,
'source_documents': source_documents}
asyncio.create_task(_clear_cache())
return llm_response
else:
null_response = {'query': query, 'result': _no_response}
asyncio.create_task(_clear_cache())
return null_response
except Exception as ex:
logger.error('Vector Query call error!=> ', exc_info=ex)
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail="Sorry! No response found."
)
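# Shape of a successful return value from generate() (illustrative):
#   {"query": "...", "result": "<LLM answer>", "source_documents": [{...node metadata...}, ...]}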
| [
"question",
"context"
] |
2024-01-10 | yashmehtakristal/KristalGPT | core~LLM_preprocessing.py | #!/usr/bin/env python
# coding: utf-8
# All imports
import fitz
from pprint import pprint
import camelot
import PyPDF2
from PyPDF2 import PdfReader
from langchain.chains import RetrievalQA
from langchain.chains import create_extraction_chain
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.document_loaders import CSVLoader
from langchain.llms import OpenAI
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.openai_functions.utils import (
_convert_schema,
_resolve_schema_references,
get_llm_kwargs,
)
from langchain.output_parsers.openai_functions import (
JsonKeyOutputFunctionsParser,
PydanticAttrOutputFunctionsParser,
)
from langchain.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
from llama_hub.file.pymu_pdf.base import PyMuPDFReader
from llama_index import Document, SummaryIndex
from llama_index import VectorStoreIndex, ServiceContext, LLMPredictor
from llama_index.query_engine import PandasQueryEngine, RetrieverQueryEngine
from llama_index.retrievers import RecursiveRetriever
from llama_index.schema import IndexNode
from llama_index.llms import OpenAI
from llama_hub.file.pymu_pdf.base import PyMuPDFReader
from llama_index.retrievers import RecursiveRetriever
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.response_synthesizers import get_response_synthesizer
import pandas as pd
import os
import time
import streamlit as st
from typing import Any, List, Optional
from pathlib import Path
import pickle
import openai
from contextlib import redirect_stdout
import io
import warnings
warnings.filterwarnings("ignore")
from tenacity import retry, stop_after_attempt, wait_random_exponential
@st.cache_data(show_spinner = False)
def conditions_excel(orignal_excel_file):
'''
conditions_excel: Checking for certain conditions and creating a filtered dataframe
Input -
orignal_excel_file: Dataframe of results excel file
Output -
    LLM_inputs: Displays rows of orignal_excel_file, where Source Type column is equal to LLM
    Discretionary_inputs: Displays rows of orignal_excel_file, where Source Type column is equal to Discretionary
'''
LLM_inputs = orignal_excel_file[orignal_excel_file["Source Type"] == "LLM"]
Discretionary_inputs = orignal_excel_file[orignal_excel_file["Source Type"] == "Discretionary"]
return LLM_inputs, Discretionary_inputs
# Function to extract fund variable
@st.cache_data(show_spinner = False)
def extract_fund_variable(info_excel_file):
'''
extract_fund_variable: This function extracts the fund variable
Input -
info_excel_file: Dataframe of the info sheet of results excel file
Output -
fund_variable: Fund variable that was extracted from info sheet of results excel file
'''
for index, row in info_excel_file.iterrows():
if "Fund variable" in row.values:
date_index = list(row).index("Fund variable")
fund_variable = row[date_index + 1]
# Return fund_variable
return fund_variable
# Define function to obtain the prompts where we substitute variable name
# This code should ultimately create a new column, "Automatic Processed Input Prompt"
@st.cache_data(show_spinner = False)
def prompts_to_substitute_variable(orignal_excel_file, fund_variable, LLM_inputs):
'''
prompts_to_substitute_variable: This function creates a new column, "Automatic Processed Input Prompt" and writes the prompt result there.
Input -
orignal_excel_file: Dataframe of the results excel file
fund_variable: Fund variable that was extracted from info sheet of results excel file
LLM_inputs: Displays rows of orignal_excel_file, where source type column is equal to LLM
Output -
orignal_excel_file: Dataframe of the results excel file
    llm_full_index: List of indices of rows where "Source Type" column is equal to LLM
'''
variable_replace = orignal_excel_file['Variable replace'] == 'Yes'
prompt_values = orignal_excel_file.loc[variable_replace, 'Input prompt'].tolist()
prompt_indices = orignal_excel_file.loc[variable_replace].index.tolist()
new_prompt_values = []
for prompt in prompt_values:
modified_prompt = prompt.replace("fund", fund_variable + " fund")
new_prompt_values.append(modified_prompt)
orignal_excel_file.loc[prompt_indices, 'Automatic Processed Input prompt'] = new_prompt_values
llm_full_index = LLM_inputs.index.tolist()
rest_of_index = [x for x in llm_full_index if x not in prompt_indices]
orignal_excel_file.loc[rest_of_index, 'Automatic Processed Input prompt'] = orignal_excel_file.loc[rest_of_index, 'Input prompt']
excel_columns = orignal_excel_file.columns.tolist()
excel_columns.remove('Automatic Processed Input prompt')
excel_columns.insert(excel_columns.index('Variable replace'), 'Automatic Processed Input prompt')
orignal_excel_file = orignal_excel_file[excel_columns]
return orignal_excel_file, llm_full_index
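# Illustrative substitution (hypothetical values): with fund_variable = "ABC Growth",
# the prompt "What is the NAV of the fund?" becomes
# "What is the NAV of the ABC Growth fund?" in the "Automatic Processed Input prompt" column.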
@st.cache_data(show_spinner = False)
def storing_input_prompt_in_list(orignal_excel_file, llm_full_index):
'''
storing_input_prompt_in_list: This function creates a list of prompts that we pass into our LLM
Input -
orignal_excel_file: Dataframe of the results excel file
    llm_full_index: List of indices of rows where "Source Type" column is equal to LLM
    Output -
    orignal_excel_file: Dataframe of the results excel file
    llm_prompts_to_use: The list of prompts that we pass into our LLM (filtered for NA values in rows where Source Type = LLM)
llm_prompts_index: Index of the prompts that we have passed to our LLM
'''
llm_index_len = len(llm_full_index)
processed_input_prompts = orignal_excel_file["Automatic Processed Input prompt"]
non_nan_indices = processed_input_prompts.notna()
non_nan_values = processed_input_prompts[non_nan_indices]
llm_prompts_index = non_nan_indices[non_nan_indices].index.tolist()
# These are the processed input prompts in a list format to use as input to our query engine
llm_prompts_to_use = non_nan_values.tolist()
# Return the llm_prompts_to_use
return orignal_excel_file, llm_prompts_to_use, llm_prompts_index
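# Illustrative output (hypothetical values): llm_prompts_to_use could be
# ["What is the NAV of the ABC Growth fund?", "Who manages the fund?"] with
# llm_prompts_index holding the matching row positions, e.g. [2, 5].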
| [
" fund",
"Input prompt",
"fund",
"Automatic Processed Input prompt",
"[]"
] |
2024-01-10 | yashmehtakristal/KristalGPT | pages~qa_basic.py | # All imports
import streamlit as st
from streamlit_extras.app_logo import add_logo
from st_pages import Page, Section, add_page_title, show_pages, hide_pages
# Setting page config & header
st.set_page_config(page_title="Kristal Retriever", page_icon="📖", layout="wide")
st.header("📖 Kristal Retriever")
# Hide particular pages if not logged in
if not st.session_state.logged_in:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
# Hide particular pages if logged out
if st.session_state.logged_out:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
import openai
import os
import tempfile
from tempfile import NamedTemporaryFile
## Importing functions
from ui import (
is_query_valid,
display_file_read_error,
)
from bundle import no_embeddings_process_documents_individual, embeddings_process_documents_individual
from core.loading import read_documents_from_directory, iterate_files_from_directory, save_uploaded_file, read_documents_from_uploaded_files, get_tables_from_uploaded_file, iterate_files_from_uploaded_files, iterate_excel_files_from_directory, iterate_uploaded_excel_files, print_file_details, show_dataframes, iterate_uploaded_excel_file
from core.pickle import save_to_pickle, load_from_pickle
from core.indexing import query_engine_function, build_vector_index
from core.LLM_preprocessing import conditions_excel, extract_fund_variable, prompts_to_substitute_variable, storing_input_prompt_in_list
from core.querying import recursive_retriever_old, recursive_retriever
from core.LLM_prompting import individual_prompt, prompt_loop
from core.PostLLM_prompting import create_output_result_column, create_output_context_column, intermediate_output_to_excel
from core.parsing import create_schema_from_excel, parse_value
from core.Postparsing import create_filtered_excel_file, final_result_orignal_excel_file, reordering_columns
from core.Last_fixing_fields import find_result_fund_name, find_result_fund_house, find_result_fund_class, find_result_currency, find_result_acc_or_inc, create_new_kristal_alias, update_kristal_alias, update_sponsored_by, update_required_broker, update_transactional_fund, update_disclaimer, update_risk_disclaimer, find_nav_value, update_nav_value
from core.chroma import st_server_file, print_files_in_particular_directory, upload_zip_files, print_files_in_directory, check_zipfile_directory
### CODE
add_logo("https://assets-global.website-files.com/614a9edd8139f5def3897a73/61960dbb839ce5fefe853138_Kristal%20Logotype%20Primary.svg")
OPENAI_API_KEY = st.secrets["OPENAI_API_KEY"]
openai.api_key = OPENAI_API_KEY
openai_api_key = OPENAI_API_KEY
# Error handling for OpenAI API key
if not openai_api_key:
st.warning(
"There is something wrong with the API Key Configuration."
"Please check with creator of the program (OpenAI keys can be found at https://platform.openai.com/account/api-keys)"
)
# Display app only if user is logged in
if st.session_state.logged_in is True and st.session_state.logout is False:
st.sidebar.subheader(f'Welcome {st.session_state.username}')
logout_button = st.session_state.Authenticator.logout('Log Out', 'sidebar')
# If user has clicked logged_out button, update the state variables
if logout_button:
st.session_state.logged_out = True
st.session_state.logged_in = False
st.rerun()
# Check embeddings
check_embeddings = st.radio(label = "Do you have saved embeddings?", options = ["Yes", "No"], index = None, help = "Embeddings are saved files created by ChromaDB", disabled=False, horizontal = False, label_visibility="visible")
# User does not have embeddings they can use
if check_embeddings == "No":
# Obtain chrome_file_path and chroma_file_name
master_folder, chroma_file_path, chroma_file_name = st_server_file()
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True)
# User has embeddings which they can use
elif check_embeddings == "Yes":
uploaded_zip_file = upload_zip_files()
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# No value inserted for check_embeddings - raise warning
else:
st.warning("Please select whether you have embeddings to use or not")
st.stop()
# Display the question input box for user to type question and submit
with st.form(key="qa_form"):
query = st.text_area(label = "Ask a question from the documents uploaded", value = None, height = None, max_chars = None, help = "Please input your questions regarding the document. Greater the prompt engineering, better the output", disabled = False, label_visibility = "visible")
submit = st.form_submit_button("Submit")
if not query:
st.warning("Please enter a question to ask about the document!")
st.stop()
# If user clicks on the button process
if submit:
# User does not have embeddings they can use
if check_embeddings == "No":
            # Checking if pdf files were uploaded
            if uploaded_files:
                # Call bundle function - no_embeddings_process_documents_individual
no_embeddings_process_documents_individual(uploaded_files = uploaded_files, chroma_file_path = chroma_file_path, prompt = query)
# Condition not satisfied
else:
st.warning(
"1) Please upload the pdf files",
icon="⚠")
st.stop()
        # User has embeddings they can use
        elif check_embeddings == "Yes":
            # Checking if pdf files were uploaded
            if uploaded_files:
                # Call bundle function - embeddings_process_documents_individual
                embeddings_process_documents_individual(uploaded_files = uploaded_files, prompt = query, uploaded_zip_file = uploaded_zip_file)
            # PDF files were not uploaded
            else:
                st.warning(
                    "1) Please upload the pdf files",
                    icon="⚠")
st.stop()
else:
st.info("Seems like you are not logged in. Please head over to the Login page to login", icon="ℹ️")
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | bundle.py | # All imports
import streamlit as st
import openai
import os
# Importing functions
from core.loading import read_documents_from_directory, iterate_files_from_directory, save_uploaded_file, read_documents_from_uploaded_files, get_tables_from_uploaded_file, iterate_files_from_uploaded_files, iterate_excel_files_from_directory, iterate_uploaded_excel_files, print_file_details, show_dataframes, iterate_uploaded_excel_file
from core.pickle import save_to_pickle, load_from_pickle
from core.indexing import query_engine_function, query_engine_function_advanced, build_vector_index
from core.LLM_preprocessing import conditions_excel, extract_fund_variable, prompts_to_substitute_variable, storing_input_prompt_in_list
from core.querying import recursive_retriever_old, recursive_retriever
from core.LLM_prompting import individual_prompt, prompt_loop, prompt_loop_advanced, individual_prompt_advanced
from core.PostLLM_prompting import create_output_result_column, create_output_context_column, intermediate_output_to_excel
from core.parsing import create_schema_from_excel, parse_value
from core.Postparsing import create_filtered_excel_file, final_result_orignal_excel_file, reordering_columns
from core.Last_fixing_fields import find_result_fund_name, find_result_fund_house, find_result_fund_class, find_result_currency, find_result_acc_or_inc, create_new_kristal_alias, update_kristal_alias, update_sponsored_by, update_required_broker, update_transactional_fund, update_disclaimer, update_risk_disclaimer, find_nav_value, update_nav_value
from core.output import output_to_excel, download_data_as_csv, download_data_as_excel_link, download_data_as_csv_link
from core.chroma import create_or_get_chroma_db, download_embedding_old, print_files_in_particular_directory, print_files_in_directory, download_embedding_zip, write_zip_files_to_directory, check_zipfile_directory, get_chroma_db, create_chroma_db
def no_embeddings_process_documents_individual(uploaded_files, chroma_file_path, prompt):
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
# orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file", icon="✅")
with st.spinner("Conducting Indexing, Querying and Prompting"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = create_chroma_db(chroma_file_path)
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function(table_dfs = table_dfs)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Calling individual_prompt function
output_response, output_context = individual_prompt(query_engine = query_engine, prompt = prompt)
st.success("Successfully finished Indexing, Querying and Prompting", icon="✅")
st.markdown("#### Answer")
st.markdown(f"{output_response}")
download_embedding_zip(chroma_file_path, zip_filename = "embeddings")
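# Usage sketch (Streamlit context assumed; chroma_file_path typically comes from st_server_file()):
#   no_embeddings_process_documents_individual(uploaded_files, chroma_file_path, prompt="What is the NAV of the fund?")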
def embeddings_process_documents_individual_advanced(uploaded_files, prompt, nodes_to_retrieve, model, temperature, request_timeout, max_retries, return_all_chunks, uploaded_zip_file):
with st.spinner("Extract zip files"):
master_folder, chroma_file_path, chroma_file_name = check_zipfile_directory()
write_zip_files_to_directory(uploaded_zip_file, chroma_file_path)
st.success("Successfully extracted zip files", icon="✅")
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
# orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Conducting Indexing, Querying and Prompting"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = get_chroma_db(chroma_file_path)
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function_advanced(table_dfs = table_dfs, model = model, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = nodes_to_retrieve, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Calling individual_prompt function
output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list = individual_prompt_advanced(query_engine = query_engine, prompt = prompt, nodes_to_retrieve = nodes_to_retrieve, return_all_chunks = return_all_chunks)
return output_response, prompt, context_with_max_score_list, file_path_metadata_list, source_metadata_list, table_dfs, docs
# st.markdown("#### Answer")
# st.markdown(f"{output_response}")
def no_embeddings_process_documents_individual_advanced(uploaded_files, prompt, chroma_file_path, nodes_to_retrieve, model, temperature, request_timeout, max_retries, return_all_chunks):
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
# orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Conducting Indexing, Querying and Prompting"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = create_chroma_db(chroma_file_path)
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function_advanced(table_dfs = table_dfs, model = model, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = nodes_to_retrieve, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Calling individual_prompt function
output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list = individual_prompt_advanced(query_engine = query_engine, prompt = prompt, nodes_to_retrieve = nodes_to_retrieve, return_all_chunks = return_all_chunks)
st.success("Successfully finished Indexing, Querying and Prompting", icon="✅")
return output_response, prompt, context_with_max_score_list, file_path_metadata_list, source_metadata_list, table_dfs, docs
# st.markdown("#### Answer")
# st.markdown(f"{output_response}")
def no_embeddings_process_documents_loop_advanced(uploaded_files, uploaded_xlsx_files, chroma_file_path, nodes_to_retrieve, model, temperature, request_timeout, max_retries, sleep, return_all_chunks, fund_variable):
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Saving Embeddings"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = create_chroma_db(chroma_file_path)
st.success("Successfully saved embeddings", icon="✅")
with st.spinner("Conducting Indexing & LLM-preprocessing"):
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function_advanced(table_dfs = table_dfs, model = model, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = nodes_to_retrieve, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
# Functions performing LLM-preprocessing
LLM_inputs, Discretionary_inputs = conditions_excel(orignal_excel_file)
# fund_variable = extract_fund_variable(info_excel_file = info_excel_file)
orignal_excel_file, llm_full_index = prompts_to_substitute_variable(orignal_excel_file = orignal_excel_file, fund_variable = fund_variable, LLM_inputs = LLM_inputs)
orignal_excel_file, llm_prompts_to_use, llm_prompts_index = storing_input_prompt_in_list(orignal_excel_file = orignal_excel_file, llm_full_index = llm_full_index)
# Diagnostic purposes
# st.write("Checking fund variable")
# st.write(fund_variable)
# st.write("Checking list - llm_prompts_to_use")
# st.write(llm_prompts_to_use)
# st.write("Checking list - llm_prompts_index")
# st.write(llm_prompts_index)
# Showing dataframes for diagnostic purposes
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
st.success("Successfully finished indexing & LLM-preprocessing", icon="✅")
with st.spinner("Conducting Querying"):
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Diagnostic purposes
# st.write("Checking recursive_retriever")
# st.write(type(recursive_retriever))
# st.write(recursive_retriever)
# st.write("Checking response_synthesizer")
# st.write(type(response_synthesizer))
# st.write(response_synthesizer)
# st.write("Checking query engine")
# st.write(type(query_engine))
# st.write(query_engine)
st.success("Successfully finished Querying", icon="✅")
with st.spinner("Conducting Prompting"):
output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list = prompt_loop_advanced(query_engine = query_engine, llm_prompts_to_use = llm_prompts_to_use, nodes_to_retrieve = nodes_to_retrieve, sleep = sleep, return_all_chunks = return_all_chunks)
# Showing list for diagnostic purposes
# st.write("Final output")
# st.write(output_response)
# st.write(output_context)
st.success("Successfully finished Prompting", icon="✅")
with st.spinner("Conducting Post-LLM Prompting"):
orignal_excel_file = create_output_result_column(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index, output_response = output_response)
orignal_excel_file = create_output_context_column(orignal_excel_file, llm_prompts_index, nodes_to_retrieve = nodes_to_retrieve, output_context = output_context)
intermediate_output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully finished Post-LLM Prompting", icon="✅")
with st.spinner("Parsing"):
schema = create_schema_from_excel(orignal_excel_file, llm_prompts_index)
orignal_excel_file = parse_value(output_response = output_response, llm_prompts_index = llm_prompts_index, orignal_excel_file = orignal_excel_file, schema = schema, llm = llm)
st.success("Successfully finished Parsing", icon="✅")
with st.spinner("Post-parsing"):
filtered_excel_file = create_filtered_excel_file(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = final_result_orignal_excel_file(filtered_excel_file = filtered_excel_file, orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = reordering_columns(orignal_excel_file)
st.success("Successfully finished Post-Parsing", icon="✅")
with st.spinner("Fixing LLM-post processing fields"):
results_fund_name_value = find_result_fund_name(orignal_excel_file)
result_fund_house_value = find_result_fund_house(orignal_excel_file)
result_fund_class_value = find_result_fund_class(orignal_excel_file)
result_currency_value = find_result_currency(orignal_excel_file)
result_acc_or_inc_value = find_result_acc_or_inc(orignal_excel_file)
kristal_alias = create_new_kristal_alias(results_fund_name_value, result_fund_house_value, result_fund_class_value, result_currency_value, result_acc_or_inc_value)
orignal_excel_file = update_kristal_alias(orignal_excel_file = orignal_excel_file, kristal_alias = kristal_alias)
orignal_excel_file = update_sponsored_by(orignal_excel_file = orignal_excel_file, sponsored_by = "[email protected]")
orignal_excel_file = update_required_broker(orignal_excel_file = orignal_excel_file, required_broker = "Kristal Pooled")
orignal_excel_file = update_transactional_fund(orignal_excel_file = orignal_excel_file, transactional_fund = "Yes")
orignal_excel_file = update_disclaimer(
orignal_excel_file = orignal_excel_file,
disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
orignal_excel_file = update_risk_disclaimer(
orignal_excel_file = orignal_excel_file,
risk_disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
result_nav_value = find_nav_value(orignal_excel_file)
orignal_excel_file = update_nav_value(orignal_excel_file = orignal_excel_file, result_nav_value = result_nav_value)
output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully Fixed LLM-post processing fields", icon="✅")
st.markdown("### Results")
return output_response, llm_prompts_to_use, context_with_max_score_list, file_path_metadata_list, source_metadata_list, orignal_excel_file, table_dfs, docs
# @st.cache_data
# def slider_state():
# return {"value": None}
# prompt_result_selector = st.number_input(
# label="Select result of prompt to display", min_value = 1, max_value = len(output_response), step = 1
# )
# # is_chosen = slider_state() # gets our cached dictionary
# # if prompt_result_selector:
# # # any changes need to be performed in place
# # prompt_result_selector.update({"value": prompt_result_selector})
# if prompt_result_selector or st.session_state.load_prompt_result_selector_state:
# st.session_state.load_prompt_result_selector_state = True
# st.markdown(f"Displaying results for Prompt #{prompt_result_selector}: {llm_prompts_to_use[prompt_result_selector - 1]}")
# answer_col, sources_col = st.columns(2)
# # Displaying in answers columns
# with answer_col:
# st.markdown("#### Answer")
# st.markdown(output_response[prompt_result_selector - 1])
# # Displaying in sources columns
# with sources_col:
# # User selected option to display all chunks from vector search
# if return_all_chunks is True:
# # These are lists of corresponding question (as source was list of list)
# context_to_display = context_with_max_score_list[prompt_result_selector - 1]
# file_path_to_display = file_path_metadata_list[prompt_result_selector - 1]
# source_metadata_to_display = source_metadata_list[prompt_result_selector - 1]
# for i in range(nodes_to_retrieve):
# st.markdown(context_to_display[i])
# st.markdown(f"Document: {file_path_to_display[i]}")
# st.markdown(f"Page Source: {source_metadata_to_display[i]}")
# st.markdown("---")
# # User selected option to display only 1 chunk
# if return_all_chunks is False:
# # Display particular lists
# st.markdown(context_with_max_score_list[prompt_result_selector - 1])
# st.markdown(f"Document: {file_path_to_display[prompt_result_selector - 1]}")
# st.markdown(f"Page Source: {source_metadata_to_display[prompt_result_selector - 1]}")
# st.markdown("### Bulk Prompt Results")
# # Display dataframe containing final results
# st.dataframe(data = orignal_excel_file, use_container_width = True, column_order = None)
# # Display button to download results to excel file
# download_data_as_excel(orignal_excel_file = orignal_excel_file)
# # Display button to download results to csv file
# download_data_as_csv(orignal_excel_file = orignal_excel_file)
def no_embeddings_process_documents_loop(uploaded_files, uploaded_xlsx_files, chroma_file_path, fund_variable):
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Saving Embeddings"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = create_chroma_db(chroma_file_path)
st.success("Successfully saved embeddings", icon="✅")
with st.spinner("Conducting Indexing & LLM-preprocessing"):
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function(table_dfs = table_dfs)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
# Functions performing LLM-preprocessing
LLM_inputs, Discretionary_inputs = conditions_excel(orignal_excel_file)
# fund_variable = extract_fund_variable(info_excel_file = info_excel_file)
orignal_excel_file, llm_full_index = prompts_to_substitute_variable(orignal_excel_file = orignal_excel_file, fund_variable = fund_variable, LLM_inputs = LLM_inputs)
orignal_excel_file, llm_prompts_to_use, llm_prompts_index = storing_input_prompt_in_list(orignal_excel_file = orignal_excel_file, llm_full_index = llm_full_index)
# Diagnostic purposes
# st.write("Checking fund variable")
# st.write(fund_variable)
# st.write("Checking list - llm_prompts_to_use")
# st.write(llm_prompts_to_use)
# st.write("Checking list - llm_prompts_index")
# st.write(llm_prompts_index)
# Showing dataframes for diagnostic purposes
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
st.success("Successfully finished indexing & LLM-preprocessing", icon="✅")
with st.spinner("Conducting Querying"):
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Diagnostic purposes
# st.write("Checking recursive_retriever")
# st.write(type(recursive_retriever))
# st.write(recursive_retriever)
# st.write("Checking response_synthesizer")
# st.write(type(response_synthesizer))
# st.write(response_synthesizer)
# st.write("Checking query engine")
# st.write(type(query_engine))
# st.write(query_engine)
st.success("Successfully finished Querying", icon="✅")
with st.spinner("Conducting Prompting"):
output_response, output_context = prompt_loop(query_engine = query_engine, llm_prompts_to_use = llm_prompts_to_use)
# Showing list for diagnostic purposes
# st.write("Final output")
# st.write(output_response)
# st.write(output_context)
st.success("Successfully finished Prompting", icon="✅")
with st.spinner("Conducting Post-LLM Prompting"):
orignal_excel_file = create_output_result_column(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index, output_response = output_response)
orignal_excel_file = create_output_context_column(orignal_excel_file, llm_prompts_index, nodes_to_retrieve = nodes_to_retrieve, output_context = output_context)
intermediate_output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully finished Post-LLM Prompting", icon="✅")
with st.spinner("Parsing"):
schema = create_schema_from_excel(orignal_excel_file, llm_prompts_index)
orignal_excel_file = parse_value(output_response = output_response, llm_prompts_index = llm_prompts_index, orignal_excel_file = orignal_excel_file, schema = schema, llm = llm)
st.success("Successfully finished Parsing", icon="✅")
with st.spinner("Post-parsing"):
filtered_excel_file = create_filtered_excel_file(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = final_result_orignal_excel_file(filtered_excel_file = filtered_excel_file, orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = reordering_columns(orignal_excel_file)
st.success("Successfully finished Post-Parsing", icon="✅")
with st.spinner("Fixing LLM-post processing fields"):
results_fund_name_value = find_result_fund_name(orignal_excel_file)
result_fund_house_value = find_result_fund_house(orignal_excel_file)
result_fund_class_value = find_result_fund_class(orignal_excel_file)
result_currency_value = find_result_currency(orignal_excel_file)
result_acc_or_inc_value = find_result_acc_or_inc(orignal_excel_file)
kristal_alias = create_new_kristal_alias(results_fund_name_value, result_fund_house_value, result_fund_class_value, result_currency_value, result_acc_or_inc_value)
orignal_excel_file = update_kristal_alias(orignal_excel_file = orignal_excel_file, kristal_alias = kristal_alias)
orignal_excel_file = update_sponsored_by(orignal_excel_file = orignal_excel_file, sponsored_by = "[email protected]")
orignal_excel_file = update_required_broker(orignal_excel_file = orignal_excel_file, required_broker = "Kristal Pooled")
orignal_excel_file = update_transactional_fund(orignal_excel_file = orignal_excel_file, transactional_fund = "Yes")
orignal_excel_file = update_disclaimer(
orignal_excel_file = orignal_excel_file,
disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
orignal_excel_file = update_risk_disclaimer(
orignal_excel_file = orignal_excel_file,
risk_disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
result_nav_value = find_nav_value(orignal_excel_file)
orignal_excel_file = update_nav_value(orignal_excel_file = orignal_excel_file, result_nav_value = result_nav_value)
output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully Fixed LLM-post processing fields", icon="✅")
# Display dataframe containing final results
st.dataframe(data = orignal_excel_file, use_container_width = True, column_order = None)
# Display button to download results to excel file
download_data_as_excel_link(orignal_excel_file = orignal_excel_file)
# Display button to download results to csv file
download_data_as_csv_link(orignal_excel_file = orignal_excel_file)
# print_files_in_particular_directory(chroma_file_path)
# print_files_in_directory(chroma_file_path)
# Display button to download embeddings from a given file path
download_embedding_zip(chroma_file_path, zip_filename = "embeddings")
#download_embedding_old(chroma_file_path)
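# Single-question pipeline (saved embeddings): unzips the user-provided Chroma embeddings, reads
# the uploaded PDFs, rebuilds the query engine on top of the existing vector store and answers
# one individual prompt.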
def embeddings_process_documents_individual(uploaded_files, prompt, uploaded_zip_file):
with st.spinner("Extract zip files"):
master_folder, chroma_file_path, chroma_file_name = check_zipfile_directory()
write_zip_files_to_directory(uploaded_zip_file, chroma_file_path)
st.success("Successfully extracted zip files", icon="✅")
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
# orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Conducting Indexing, Querying and Prompting"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = get_chroma_db(chroma_file_path)
llm, service_context, df_query_engines = query_engine_function(table_dfs = table_dfs)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
output_response, output_context = individual_prompt(query_engine = query_engine, prompt = prompt)
st.success("Successfully finished Indexing, Querying and Prompting", icon="✅")
st.markdown("#### Answer")
st.markdown(f"{output_response}")
def embeddings_process_documents_loop(uploaded_files, uploaded_xlsx_files, fund_variable, uploaded_zip_file):
with st.spinner("Extract zip files"):
master_folder, chroma_file_path, chroma_file_name = check_zipfile_directory()
write_zip_files_to_directory(uploaded_zip_file, chroma_file_path)
st.success("Successfully extracted zip files", icon="✅")
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Loading Embeddings"):
# vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
vector_store, storage_context = get_chroma_db(chroma_file_path)
st.success("Successfully loaded embeddings", icon="✅")
with st.spinner("Conducting Indexing & LLM-preprocessing"):
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function(table_dfs = table_dfs)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
# Functions performing LLM-preprocessing
LLM_inputs, Discretionary_inputs = conditions_excel(orignal_excel_file)
# fund_variable = extract_fund_variable(info_excel_file = info_excel_file)
orignal_excel_file, llm_full_index = prompts_to_substitute_variable(orignal_excel_file = orignal_excel_file, fund_variable = fund_variable, LLM_inputs = LLM_inputs)
orignal_excel_file, llm_prompts_to_use, llm_prompts_index = storing_input_prompt_in_list(orignal_excel_file = orignal_excel_file, llm_full_index = llm_full_index)
# Diagnostic purposes
# st.write("Checking fund variable")
# st.write(fund_variable)
# st.write("Checking list - llm_prompts_to_use")
# st.write(llm_prompts_to_use)
# st.write("Checking list - llm_prompts_index")
# st.write(llm_prompts_index)
# Showing dataframes for diagnostic purposes
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
st.success("Successfully finished indexing & LLM-preprocessing", icon="✅")
with st.spinner("Conducting Querying"):
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Diagnostic purposes
# st.write("Checking recursive_retriever")
# st.write(type(recursive_retriever))
# st.write(recursive_retriever)
# st.write("Checking response_synthesizer")
# st.write(type(response_synthesizer))
# st.write(response_synthesizer)
# st.write("Checking query engine")
# st.write(type(query_engine))
# st.write(query_engine)
st.success("Successfully finished Querying", icon="✅")
with st.spinner("Conducting Prompting"):
output_response, output_context = prompt_loop(query_engine = query_engine, llm_prompts_to_use = llm_prompts_to_use)
# Showing list for diagnostic purposes
# st.write("Final output")
# st.write(output_response)
# st.write(output_context)
st.success("Successfully finished Prompting", icon="✅")
with st.spinner("Conducting Post-LLM Prompting"):
orignal_excel_file = create_output_result_column(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index, output_response = output_response)
orignal_excel_file = create_output_context_column(orignal_excel_file, llm_prompts_index, nodes_to_retrieve = nodes_to_retrieve, output_context = output_context)
intermediate_output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully finished Post-LLM Prompting", icon="✅")
with st.spinner("Parsing"):
schema = create_schema_from_excel(orignal_excel_file, llm_prompts_index)
orignal_excel_file = parse_value(output_response = output_response, llm_prompts_index = llm_prompts_index, orignal_excel_file = orignal_excel_file, schema = schema, llm = llm)
st.success("Successfully finished Parsing", icon="✅")
with st.spinner("Post-parsing"):
filtered_excel_file = create_filtered_excel_file(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = final_result_orignal_excel_file(filtered_excel_file = filtered_excel_file, orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = reordering_columns(orignal_excel_file)
st.success("Successfully finished Post-Parsing", icon="✅")
with st.spinner("Fixing LLM-post processing fields"):
results_fund_name_value = find_result_fund_name(orignal_excel_file)
result_fund_house_value = find_result_fund_house(orignal_excel_file)
result_fund_class_value = find_result_fund_class(orignal_excel_file)
result_currency_value = find_result_currency(orignal_excel_file)
result_acc_or_inc_value = find_result_acc_or_inc(orignal_excel_file)
kristal_alias = create_new_kristal_alias(results_fund_name_value, result_fund_house_value, result_fund_class_value, result_currency_value, result_acc_or_inc_value)
orignal_excel_file = update_kristal_alias(orignal_excel_file = orignal_excel_file, kristal_alias = kristal_alias)
orignal_excel_file = update_sponsored_by(orignal_excel_file = orignal_excel_file, sponsored_by = "[email protected]")
orignal_excel_file = update_required_broker(orignal_excel_file = orignal_excel_file, required_broker = "Kristal Pooled")
orignal_excel_file = update_transactional_fund(orignal_excel_file = orignal_excel_file, transactional_fund = "Yes")
orignal_excel_file = update_disclaimer(
orignal_excel_file = orignal_excel_file,
disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
orignal_excel_file = update_risk_disclaimer(
orignal_excel_file = orignal_excel_file,
risk_disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
result_nav_value = find_nav_value(orignal_excel_file)
orignal_excel_file = update_nav_value(orignal_excel_file = orignal_excel_file, result_nav_value = result_nav_value)
output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully Fixed LLM-post processing fields", icon="✅")
# Display dataframe containing final results
st.dataframe(data = orignal_excel_file, use_container_width = True, column_order = None)
# Display button to download results to excel file
download_data_as_excel_link(orignal_excel_file = orignal_excel_file)
# Display button to download results to csv file
download_data_as_csv_link(orignal_excel_file = orignal_excel_file)
# download_embedding_zip(directory, zip_filename)
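# Advanced bulk pipeline (saved embeddings): same flow as embeddings_process_documents_loop, but it
# exposes the tunable parameters (model, temperature, request_timeout, max_retries, sleep,
# nodes_to_retrieve, return_all_chunks) and returns the intermediate objects so the calling page can
# render per-prompt answers and retrieved chunks itself.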
def embeddings_process_documents_loop_advanced(uploaded_files, uploaded_xlsx_files, nodes_to_retrieve, model, temperature, request_timeout, max_retries, sleep, return_all_chunks, fund_variable, uploaded_zip_file):
with st.spinner("Extract zip files"):
master_folder, chroma_file_path, chroma_file_name = check_zipfile_directory()
write_zip_files_to_directory(uploaded_zip_file, chroma_file_path)
with st.spinner("Reading uploaded PDF and Excel files"):
docs = read_documents_from_uploaded_files(uploaded_files)
# st.write("This is docs", docs)
table_dfs = iterate_files_from_uploaded_files(uploaded_files)
save_uploaded_file(uploaded_files)
# print_file_details(uploaded_files)
orignal_excel_file, info_excel_file = iterate_uploaded_excel_file(uploaded_xlsx_files)
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
directory_pickles = save_to_pickle(directory_pickles = "Pickle/table_dfs.pkl", table_dfs = table_dfs)
st.success("Successfully read pdf file and excel file", icon="✅")
with st.spinner("Loading Embeddings"):
vector_store, storage_context = create_or_get_chroma_db(chroma_file_path)
st.success("Successfully loaded embeddings", icon="✅")
with st.spinner("Conducting Indexing & LLM-preprocessing"):
# Functions performing indexing
llm, service_context, df_query_engines = query_engine_function_advanced(table_dfs = table_dfs, model = model, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries)
vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = nodes_to_retrieve, storage_context = storage_context, vector_store = vector_store, is_chroma_loading = False)
# vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve = build_vector_index(service_context = service_context, df_query_engines = df_query_engines, docs = docs, nodes_to_retrieve = 3)
# Functions performing LLM-preprocessing
LLM_inputs, Discretionary_inputs = conditions_excel(orignal_excel_file)
#fund_variable = extract_fund_variable(info_excel_file = info_excel_file)
orignal_excel_file, llm_full_index = prompts_to_substitute_variable(orignal_excel_file = orignal_excel_file, fund_variable = fund_variable, LLM_inputs = LLM_inputs)
orignal_excel_file, llm_prompts_to_use, llm_prompts_index = storing_input_prompt_in_list(orignal_excel_file = orignal_excel_file, llm_full_index = llm_full_index)
# Diagnostic purposes
# st.write("Checking fund variable")
# st.write(fund_variable)
# st.write("Checking list - llm_prompts_to_use")
# st.write(llm_prompts_to_use)
# st.write("Checking list - llm_prompts_index")
# st.write(llm_prompts_index)
# Showing dataframes for diagnostic purposes
# list_of_dataframes = [orignal_excel_file, info_excel_file]
# show_dataframes(list_of_dataframes)
st.success("Successfully finished indexing & LLM-preprocessing", icon="✅")
with st.spinner("Conducting Querying"):
recursive_retriever, response_synthesizer, query_engine = recursive_retriever_old(vector_retriever = vector_retriever, df_id_query_engine_mapping = df_id_query_engine_mapping, service_context = service_context)
# Diagnostic purposes
# st.write("Checking recursive_retriever")
# st.write(type(recursive_retriever))
# st.write(recursive_retriever)
# st.write("Checking response_synthesizer")
# st.write(type(response_synthesizer))
# st.write(response_synthesizer)
# st.write("Checking query engine")
# st.write(type(query_engine))
# st.write(query_engine)
st.success("Successfully finished Querying", icon="✅")
with st.spinner("Conducting Prompting"):
output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list = prompt_loop_advanced(query_engine = query_engine, llm_prompts_to_use = llm_prompts_to_use, nodes_to_retrieve = nodes_to_retrieve, sleep = sleep, return_all_chunks = return_all_chunks)
# Showing list for diagnostic purposes
# st.write("Final output")
# st.write(output_response)
# st.write(output_context)
st.success("Successfully finished Prompting", icon="✅")
with st.spinner("Conducting Post-LLM Prompting"):
orignal_excel_file = create_output_result_column(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index, output_response = output_response)
orignal_excel_file = create_output_context_column(orignal_excel_file, llm_prompts_index, nodes_to_retrieve = nodes_to_retrieve, output_context = output_context)
intermediate_output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully finished Post-LLM Prompting", icon="✅")
with st.spinner("Parsing"):
schema = create_schema_from_excel(orignal_excel_file, llm_prompts_index)
orignal_excel_file = parse_value(output_response = output_response, llm_prompts_index = llm_prompts_index, orignal_excel_file = orignal_excel_file, schema = schema, llm = llm)
st.success("Successfully finished Parsing", icon="✅")
with st.spinner("Post-parsing"):
filtered_excel_file = create_filtered_excel_file(orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = final_result_orignal_excel_file(filtered_excel_file = filtered_excel_file, orignal_excel_file = orignal_excel_file, llm_prompts_index = llm_prompts_index)
orignal_excel_file = reordering_columns(orignal_excel_file)
st.success("Successfully finished Post-Parsing", icon="✅")
with st.spinner("Fixing LLM-post processing fields"):
results_fund_name_value = find_result_fund_name(orignal_excel_file)
result_fund_house_value = find_result_fund_house(orignal_excel_file)
result_fund_class_value = find_result_fund_class(orignal_excel_file)
result_currency_value = find_result_currency(orignal_excel_file)
result_acc_or_inc_value = find_result_acc_or_inc(orignal_excel_file)
kristal_alias = create_new_kristal_alias(results_fund_name_value, result_fund_house_value, result_fund_class_value, result_currency_value, result_acc_or_inc_value)
orignal_excel_file = update_kristal_alias(orignal_excel_file = orignal_excel_file, kristal_alias = kristal_alias)
orignal_excel_file = update_sponsored_by(orignal_excel_file = orignal_excel_file, sponsored_by = "[email protected]")
orignal_excel_file = update_required_broker(orignal_excel_file = orignal_excel_file, required_broker = "Kristal Pooled")
orignal_excel_file = update_transactional_fund(orignal_excel_file = orignal_excel_file, transactional_fund = "Yes")
orignal_excel_file = update_disclaimer(
orignal_excel_file = orignal_excel_file,
disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
orignal_excel_file = update_risk_disclaimer(
orignal_excel_file = orignal_excel_file,
risk_disclaimer = '''
The recommendations contained herein are for the exclusive use of investor and prohibits any form of disclosure or reproduction. The content cannot be relied upon by any other person for any other purpose. The recommendations are preliminary information to the investors, are subject to risks and may change based on investment objectives, financials, liabilities or the risk profile of an investor. Any recommendations including financial advice provided by Kristal.AI or its affiliates shall be subject to contractual understanding, necessary documentation, applicable laws, approvals and regulations. The recommendations contained herein may not be eligible for sale/purchase in some jurisdictions, in specific, are not intended for residents of the USA or within the USA.Though the recommendations are based on information obtained from reliable sources and are provided in good faith, they may be valid only on the date and time the recommendations are provided and shall be subject to change without notice. Kristal.AI
'''
)
result_nav_value = find_nav_value(orignal_excel_file)
orignal_excel_file = update_nav_value(orignal_excel_file = orignal_excel_file, result_nav_value = result_nav_value)
output_to_excel(orignal_excel_file = orignal_excel_file, excel_directory = "Results", output_excel_filename = "results_output", file_extension = "xlsx")
st.success("Successfully Fixed LLM-post processing fields", icon="✅")
#st.markdown("### Collective Prompt Results")
st.markdown("### Results")
return output_response, llm_prompts_to_use, context_with_max_score_list, file_path_metadata_list, source_metadata_list, orignal_excel_file, table_dfs, docs
# # Display dataframe containing final results
# st.dataframe(data = orignal_excel_file, use_container_width = True, column_order = None)
# # Display button to download results to excel file
# download_data_as_excel(orignal_excel_file = orignal_excel_file)
# # Display button to download results to csv file
# download_data_as_csv(orignal_excel_file = orignal_excel_file)
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | pages~home.py | # All imports
import streamlit as st
from streamlit_extras.app_logo import add_logo
from st_pages import Page, Section, add_page_title, show_pages, hide_pages
# Setting page config & header
st.set_page_config(page_title = "Kristal Retriever", page_icon = "📖", layout = "wide", initial_sidebar_state = "expanded")
st.header("📖 Kristal Retriever")
# Hide particular pages if not logged in
if not st.session_state.logged_in:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
# Hide particular pages if logged out
if st.session_state.logged_out:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
# Add the logo to the sidebar
add_logo("https://assets-global.website-files.com/614a9edd8139f5def3897a73/61960dbb839ce5fefe853138_Kristal%20Logotype%20Primary.svg")
import openai
import os
import tempfile
from tempfile import NamedTemporaryFile
from database_helper_functions import sign_up, fetch_users
import streamlit_authenticator as stauth
## Importing functions
# from ui import (
# is_query_valid,
# display_file_read_error,
# )
# from bundle import no_embeddings_process_documents, embeddings_process_documents
# from core.loading import read_documents_from_directory, iterate_files_from_directory, save_uploaded_file, read_documents_from_uploaded_files, get_tables_from_uploaded_file, iterate_files_from_uploaded_files, iterate_excel_files_from_directory, iterate_uploaded_excel_files, print_file_details, show_dataframes, iterate_uploaded_excel_file
# from core.pickle import save_to_pickle, load_from_pickle
# from core.indexing import query_engine_function, build_vector_index
# from core.LLM_preprocessing import conditions_excel, extract_fund_variable, prompts_to_substitute_variable, storing_input_prompt_in_list
# from core.querying import recursive_retriever_old, recursive_retriever
# from core.LLM_prompting import individual_prompt, prompt_loop
# from core.PostLLM_prompting import create_output_result_column, create_output_context_column, intermediate_output_to_excel
# from core.parsing import create_schema_from_excel, parse_value
# from core.Postparsing import create_filtered_excel_file, final_result_orignal_excel_file, reordering_columns
# from core.Last_fixing_fields import find_result_fund_name, find_result_fund_house, find_result_fund_class, find_result_currency, find_result_acc_or_inc, create_new_kristal_alias, update_kristal_alias, update_sponsored_by, update_required_broker, update_transactional_fund, update_disclaimer, update_risk_disclaimer, find_nav_value, update_nav_value
# from core.output import output_to_excel, download_data_as_excel, download_data_as_csv
# def login_callback():
# st.session_state.logged_out = True
# st.session_state.logged_in = False
# st.write(st.session_state.logged_out, st.session_state.logged_in)
# let User see app if logged in = True & logged out = False
if st.session_state.logged_in is True and st.session_state.logout is False:
st.sidebar.subheader(f'Welcome {st.session_state.username}')
#st.session_state.Authenticator.logout('Log Out', 'sidebar')
logout_button = st.session_state.Authenticator.logout('Log Out', 'sidebar')
# If user has clicked logged_out button, update the state variables
if logout_button:
st.session_state.logged_out = True
st.session_state.logged_in = False
# st.write("Before Rerun")
# st.write(st.session_state.logged_out, st.session_state.logged_in)
# st.write("XXXX")
st.rerun()
# Display Markdown of the main page
st.markdown(
'''
    This section gives more information about Kristal GPT.
    This application has 2 main features (Bulk Upload and Q&A). Moreover, each feature comes in two high-level variants (Basic and Advanced).
    Here is a simple breakdown of the pages:
- Basic
- Bulk Upload - Basic
- Q&A - Basic
- Advanced
- Bulk Upload - Advanced
- Q&A - Advanced
### Features explanation
***Bulk Upload:***
This feature allows the user to upload an excel file (or select a template) containing the list of prompts, along with other relevant fields.
***Q&A:***
This feature allows the user to input prompts individually, as if they are "chatting" with the uploaded documents.
### Categorization
***Basic:***
The Basic version of the application has the minimum features required to successfully run the application. These are:
1. Option to save embeddings for current iteration/load saved embeddings
2. Specifying the folder for the embeddings
3. Uploading the pdf files, as well as the excel files.
4. Displaying the results as a dataframe
5. Providing option to download displayed dataframe as a CSV file or Excel file
***Advanced:***
The Advanced version of the application has the same features as the basic, with the addition of the following:
1. Select which LLM model to use
2. Select the number of nodes to retrieve from LLM (during vector search)
3. Select the temperature parameter of LLM
4. Select the request timeout (in seconds) of LLM
5. Select the maximum retries of LLM
6. Select the amount of time for LLM to wait before executing next prompt (in loop)
    7. Select whether to display all chunks retrieved from vector search (if No, the default, only the chunk with the highest score is displayed)
8. Select to show the parsed contents of the document
9. Select to show all tables parsed from the pdf document
'''
)
else:
st.info("Seems like you are not logged in. Please head over to the Login page to login", icon="ℹ️")
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | ui.py | # All imports
import streamlit as st
import openai
import os
from streamlit.logger import get_logger
logger = get_logger(__name__)
from typing import List
from typing import NoReturn
## Importing functions
# Function to check if question is entered
def is_query_valid(query: str) -> bool:
if not query:
st.error("Please enter a question!")
return False
return True
# Function to handle errors in reading the file
def display_file_read_error(e: Exception, file_name: str) -> NoReturn:
st.error("Error reading file. Make sure the file is not corrupted or encrypted")
# {Log the "type of exception occured}: {error message}. Extension: {extension of file}"
logger.error(f"{e.__class__.__name__}: {e}. Extension: {file_name.split('.')[-1]}")
# Stop execution
st.stop()
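# Illustrative usage of the two helpers above (a minimal sketch; `prompt` and `uploaded_files`
# are assumed to come from the calling Streamlit page):
#
#     if is_query_valid(prompt):
#         try:
#             docs = read_documents_from_uploaded_files(uploaded_files)
#         except Exception as e:
#             display_file_read_error(e, file_name=uploaded_files[0].name)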
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | core~querying.py | #!/usr/bin/env python
# coding: utf-8
# Chosen imports
# from llama_index.retrievers import RecursiveRetriever
# from llama_index.response_synthesizers import get_response_synthesizer
# from llama_index.query_engine import RetrieverQueryEngine
# import pandas as pd
# import os
# import time
# import warnings
# warnings.filterwarnings("ignore")
# All imports
# pdf imports
import fitz
from pprint import pprint
import camelot
import PyPDF2
from PyPDF2 import PdfReader
import streamlit as st
# import pdfplumber
# Langchain imports
from langchain.chains import RetrievalQA
from langchain.chains import create_extraction_chain
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.document_loaders import CSVLoader
from langchain.llms import OpenAI
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.openai_functions.utils import (
_convert_schema,
_resolve_schema_references,
get_llm_kwargs,
)
from langchain.output_parsers.openai_functions import (
JsonKeyOutputFunctionsParser,
PydanticAttrOutputFunctionsParser,
)
from langchain.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
# LlamaIndex imports
from llama_hub.file.pymu_pdf.base import PyMuPDFReader
from llama_index import Document, SummaryIndex
from llama_index import VectorStoreIndex, ServiceContext, LLMPredictor
from llama_index.query_engine import PandasQueryEngine, RetrieverQueryEngine
from llama_index.retrievers import RecursiveRetriever
from llama_index.schema import IndexNode
from llama_index.llms import OpenAI
from llama_index.response_synthesizers import get_response_synthesizer
# Other library imports
import pandas as pd
import os
import time
from typing import Any, List, Optional
from pathlib import Path
import pickle
# @st.cache_data(show_spinner = False)
@st.cache_resource(show_spinner = False)
def recursive_retriever(orignal_excel_file, vector_retriever, df_id_query_engine_mapping, service_context, llm_prompts_to_use):
'''
    recursive_retriever: Builds a RecursiveRetriever-based query engine and runs every prompt in llm_prompts_to_use through it
    Input -
    orignal_excel_file: Dataframe of the results excel file
    vector_retriever: Top-k node retriever of the vector index
    df_id_query_engine_mapping: Mapping of the query engine with each dataframe
    service_context: service_context object defined above
    llm_prompts_to_use: List of prompts to run through the query engine
    Output -
    output_response: List of LLM answer strings, one per prompt
    output_context: List of full response objects containing the retrieved context
'''
recursive_retriever = RecursiveRetriever(
"vector",
retriever_dict={"vector": vector_retriever},
query_engine_dict = df_id_query_engine_mapping,
verbose = False,
)
response_synthesizer = get_response_synthesizer(
service_context=service_context,
response_mode="no_text"
)
query_engine = RetrieverQueryEngine.from_args(
recursive_retriever, response_synthesizer = response_synthesizer
)
output_response = []
output_context = []
count = 1
for prompt in llm_prompts_to_use:
# Diagnostic purposes
st.write(f"{count} time entering loop")
# Diagnostic purposes - Checking prompt
st.write(f"Prompt used for this iteration is {prompt}")
# Diagnostic purposes - Query Engine
# st.write(type(query_engine))
# st.write(query_engine)
# Calling query engine
response = query_engine.query(f"{prompt}")
# Appending to list
output_context.append(response)
output_response.append(response.response)
#output_response.append(str(response))
count += 1
# Diagnostic purposes - response from LLM
st.write(f"Response from llm is {response.response}")
# Diagnostic purposes - context from LLM
st.write(f"Context from LLM is {response}")
        # Wait 10 seconds before executing the next prompt
time.sleep(10)
return output_response, output_context
# @st.cache_data(show_spinner = False)
# @st.cache_resource(show_spinner = False)
def recursive_retriever_old(vector_retriever, df_id_query_engine_mapping, service_context):
'''
    recursive_retriever_old: This function builds a RecursiveRetriever-based RetrieverQueryEngine (response_mode="compact")
    Input -
    vector_retriever: Top-k node retriever of the vector index
    df_id_query_engine_mapping: Mapping of the query engine with each dataframe
    service_context: service_context object defined above
    Output -
    recursive_retriever: Instance of RecursiveRetriever class
    response_synthesizer: Output of get_response_synthesizer
    query_engine: Instance of Retriever Query Engine class
'''
recursive_retriever = RecursiveRetriever(
"vector",
retriever_dict={"vector": vector_retriever},
query_engine_dict = df_id_query_engine_mapping,
verbose = True,
)
response_synthesizer = get_response_synthesizer(
service_context=service_context,
response_mode="compact"
)
query_engine = RetrieverQueryEngine.from_args(
recursive_retriever, response_synthesizer = response_synthesizer, verbose = True
)
return recursive_retriever, response_synthesizer, query_engine
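# Illustrative usage (a minimal sketch; the retriever, mapping and service context are assumed to
# come from build_vector_index() / query_engine_function() in core.indexing):
#
#     _, _, query_engine = recursive_retriever_old(
#         vector_retriever=vector_retriever,
#         df_id_query_engine_mapping=df_id_query_engine_mapping,
#         service_context=service_context,
#     )
#     response = query_engine.query("What is the base currency of the fund?")
#     print(response.response)      # synthesized answer
#     print(response.source_nodes)  # retrieved context nodes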
# @st.cache_data(show_spinner = False)
@st.cache_resource(show_spinner = False)
def recursive_retriever_orignal(orignal_excel_file, vector_retriever, df_id_query_engine_mapping, service_context):
'''
    recursive_retriever_orignal: This function uses a recursive retriever in our RetrieverQueryEngine with response_mode="no_text" (returns retrieved context only, no synthesized answer)
Input -
orignal_excel_file: Dataframe of the results excel file
vector_retriever: Top 3 nodes of vector index
df_id_query_engine_mapping: Mapping of the query engine with each dataframe
service_context: service_context object defined above
Output -
recursive_retriever: Instance of RecursiveRetriever class
response_synthesizer: Output of get_response_synthesizer
query_engine: Instance of Retriever Query Engine class
'''
recursive_retriever = RecursiveRetriever(
"vector",
retriever_dict={"vector": vector_retriever},
query_engine_dict = df_id_query_engine_mapping,
verbose = False,
)
response_synthesizer = get_response_synthesizer(
service_context=service_context,
response_mode="no_text"
)
query_engine = RetrieverQueryEngine.from_args(
recursive_retriever, response_synthesizer = response_synthesizer
)
return recursive_retriever, response_synthesizer, query_engine
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | pages~bulk_upload_advanced.py | # All imports
import streamlit as st
from streamlit_extras.app_logo import add_logo
from st_pages import Page, Section, add_page_title, show_pages, hide_pages
# Setting page config & header
st.set_page_config(page_title = "Kristal Retriever", page_icon = "📖", layout = "wide")
st.header("📖 Kristal Retriever")
# Hide particular pages if not logged in
if not st.session_state.logged_in:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
# Hide particular pages if logged out
if st.session_state.logged_out:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
import openai
import os
import tempfile
from tempfile import NamedTemporaryFile
from streamlit_extras.app_logo import add_logo
from llama_index.readers.schema.base import Document
# from core.config import max_retries
## Importing functions
from ui import (
is_query_valid,
display_file_read_error,
)
from bundle import no_embeddings_process_documents_loop_advanced, embeddings_process_documents_loop_advanced
from core.output import output_to_excel, download_data_as_excel_link, download_data_as_csv_link
from core.loading import display_document_from_uploaded_files
from core.chroma import create_or_get_chroma_db, download_embedding_old, print_files_in_particular_directory, print_files_in_directory, download_embedding_zip, st_server_file, check_zipfile_directory, upload_zip_files
# from core.loading import read_documents_from_directory, iterate_files_from_directory, save_uploaded_file, read_documents_from_uploaded_files, get_tables_from_uploaded_file, iterate_files_from_uploaded_files, iterate_excel_files_from_directory, iterate_uploaded_excel_files, print_file_details, show_dataframes, iterate_uploaded_excel_file
# from core.pickle import save_to_pickle, load_from_pickle
# from core.indexing import query_engine_function, build_vector_index
# from core.LLM_preprocessing import conditions_excel, extract_fund_variable, prompts_to_substitute_variable, storing_input_prompt_in_list
# from core.querying import recursive_retriever_old, recursive_retriever
# from core.LLM_prompting import individual_prompt, prompt_loop
# from core.PostLLM_prompting import create_output_result_column, create_output_context_column, intermediate_output_to_excel
# from core.parsing import create_schema_from_excel, parse_value
# from core.Postparsing import create_filtered_excel_file, final_result_orignal_excel_file, reordering_columns
# from core.Last_fixing_fields import find_result_fund_name, find_result_fund_house, find_result_fund_class, find_result_currency, find_result_acc_or_inc, create_new_kristal_alias, update_kristal_alias, update_sponsored_by, update_required_broker, update_transactional_fund, update_disclaimer, update_risk_disclaimer, find_nav_value, update_nav_value
# from core.output import output_to_excel, download_data_as_excel, download_data_as_csv
# from core.persist import persist, load_widget_state
### CODE
add_logo("https://assets-global.website-files.com/614a9edd8139f5def3897a73/61960dbb839ce5fefe853138_Kristal%20Logotype%20Primary.svg")
OPENAI_API_KEY = st.secrets["OPENAI_API_KEY"]
openai.api_key = OPENAI_API_KEY
openai_api_key = OPENAI_API_KEY
# Error handling for OpenAI API key
if not openai_api_key:
st.warning(
"There is something wrong with the API Key Configuration."
"Please check with creator of the program (OpenAI keys can be found at https://platform.openai.com/account/api-keys)"
)
# Initializing session states
if "load_prompt_result_selector_state" not in st.session_state:
st.session_state.load_prompt_result_selector_state = False
if "output_response" not in st.session_state:
st.session_state.output_response = 0
if "llm_prompts_to_use" not in st.session_state:
st.session_state.llm_prompts_to_use = 0
if "context_with_max_score_list" not in st.session_state:
st.session_state.context_with_max_score_list = 0
if "file_path_metadata_list" not in st.session_state:
st.session_state.file_path_metadata_list = 0
if "source_metadata_list" not in st.session_state:
st.session_state.source_metadata_list = 0
if "prompt_result_selector" not in st.session_state:
st.session_state.prompt_result_selector = 0
if "process_documents" not in st.session_state:
st.session_state.process_documents = False
# Display app only if user is logged in
if st.session_state.logged_in is True and st.session_state.logout is False:
st.sidebar.subheader(f'Welcome {st.session_state.username}')
logout_button = st.session_state.Authenticator.logout('Log Out', 'sidebar')
# If user has clicked logged_out button, update the state variables
if logout_button:
st.session_state.logged_out = True
st.session_state.logged_in = False
st.rerun()
# Check embeddings
check_embeddings = st.radio(label = "Do you have saved embeddings?", options = ["Yes", "No"], index = None, help = "Embeddings are saved files created by ChromaDB", disabled=False, horizontal = False, label_visibility="visible")
# def callback():
# # Button was clicked
# st.session_state.process_documents = True
# User does not have embeddings they can use
if check_embeddings == "No":
        # Obtain chroma_file_path and chroma_file_name
master_folder, chroma_file_path, chroma_file_name = st_server_file()
# print_files_in_particular_directory(master_folder)
# print_files_in_particular_directory(chroma_file_path)
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# File uploader section for xlsx
uploaded_xlsx_files = st.file_uploader(
"Upload a xlsx file",
type=["xlsx"],
help="Please upload the excel file. Make sure it is in the appropriate format. Check the [name] sidebar for more details about the format",
accept_multiple_files = False
)
# Fund name
fund_variable = st.text_input(
label = "Fund name:",
value = None,
max_chars = None,
type = "default",
help = "This will be used to replace the word, fund, in certain prompts",
placeholder = '''Please input the exact, full fund name. Example: FRANKLIN US GOVERNMENT "A" INC''',
disabled = False,
label_visibility = "visible"
)
# Model selection
MODEL_LIST = ["gpt-3.5-turbo", "gpt-4", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-4-0314", "gpt-4-32k-0314", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-instruct", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0301"]
# Select model to use (type hint as string)
model: str = st.selectbox(label = "Model", options = MODEL_LIST, index = 1, help = "Please select the appropriate LLM model you want to use. Refer to https://platform.openai.com/docs/models/overview for the model details", placeholder = "Please choose an option ...")
# Nodes to retrieve slider
nodes_to_retrieve = st.slider(label = "Please select the number of nodes to retrieve from LLM", min_value = 0, max_value = 5, value = 3, step = 1,
help =
'''
        Nodes to retrieve is simply how many nodes the LLM will consider in giving its output.
        The higher the number of nodes, the greater the accuracy but also the higher the cost, and vice versa.
        I'd recommend an even balance (hence, the default value of 3).
''',
disabled = False,
label_visibility = "visible")
# Temperature slider
temperature = st.slider(label = "Please select temperature of the LLM", min_value = 0.0, max_value = 1.0, value = 0.2, step = 0.1,
help =
'''
Temperature is a parameter that controls the “creativity” or randomness of the text generated by GPT-3.
A higher temperature (e.g., 0.7) results in more diverse and creative output, while a lower temperature (e.g., 0.2) makes the output more deterministic and focused.
Look at this page for more details: https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683
''',
disabled = False,
label_visibility = "visible")
# Timeout for requests slider
request_timeout = st.slider(label = "Please select the request timeout (in seconds) of the LLM", min_value = 0, max_value = 600, value = 120, step = 60,
help =
'''
        Request timeout is the timeout (in seconds) for requests to the OpenAI completion API.
        A higher number means you wait for a longer time before the request times out, and vice versa.
        Note, too high a number means you may wait too long, while too low a number means you don't give the request a chance to complete.
        I'd recommend striking a balance but leaning a bit more towards the lower side (hence, the default is 120 seconds).
''',
disabled = False,
label_visibility = "visible")
# Maximum retries slider
max_retries = st.slider(label = "Please select the maximum retries of the LLM", min_value = 0, max_value = 15, value = 5, step = 1,
help =
'''
        This is the maximum number of retries the LLM will make in case it hits a failure.
        A higher number allows for more retries and vice versa.
        Note, too high a number means you may wait too long, and too low a number means you don't give it a chance to retry.
        I'd recommend striking an even balance (hence, the default is 5 retries).
''',
disabled = False,
label_visibility = "visible")
# Sleep function slider
sleep = st.slider(label = "Please select the amount of time you want LLM to sleep before executing next prompt (in seconds)", min_value = 0, max_value = 60, value = 8, step = 1,
help =
'''
        This is the amount of time the LLM will sleep before executing the next prompt.
        This is done primarily to avoid rate-limit errors and any failure that might interrupt the code.
        A higher number means you wait longer but are less likely to hit rate-limit errors, and vice versa.
        I'd recommend leaning towards a lower number (hence, the default is 8 seconds).
        Besides this, there is also another safety check that conducts exponential waiting between 1 and 20 seconds, for a maximum of 6 retries (using the tenacity library).
''',
disabled = False,
label_visibility = "visible")
# Advanced options:
# Return_all_chunks: Shows all chunks retrieved from vector search
# Show_full_doc: Displays parsed contents of the document
with st.expander("Advanced Options"):
return_all_chunks = st.checkbox("Show all chunks retrieved from vector search")
show_full_doc = st.checkbox("Show parsed contents of the document")
show_tables = st.checkbox("Show tables in dataframe")
#st.session_state["max_retries"] = max_retries
#persist(max_retries)
# Error handling for model selection
if not model:
st.warning("Please select a model", icon="⚠")
st.stop()
# User has embeddings which they can use
elif check_embeddings == "Yes":
uploaded_zip_file = upload_zip_files()
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# File uploader section for xlsx
uploaded_xlsx_files = st.file_uploader(
"Upload a xlsx file",
type=["xlsx"],
help="Please upload the excel file. Make sure it is in the appropriate format. Check the [name] sidebar for more details about the format",
accept_multiple_files = False
)
# Fund name
fund_variable = st.text_input(
label = "Fund name:",
value = None,
max_chars = None,
type = "default",
help = "This will be used to replace the word, fund, in certain prompts",
placeholder = '''Please input the exact, full fund name. Example: FRANKLIN US GOVERNMENT "A" INC''',
disabled = False,
label_visibility = "visible"
)
# Model selection
MODEL_LIST = ["gpt-3.5-turbo", "gpt-4", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-4-0314", "gpt-4-32k-0314", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-instruct", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0301"]
# Select model to use (type hint as string)
model: str = st.selectbox(label = "Model", options = MODEL_LIST, index = 1, help = "Please select the appropriate LLM model you want to use. Refer to https://platform.openai.com/docs/models/overview for the model details", placeholder = "Please choose an option ...")
        # Nodes to retrieve slider
nodes_to_retrieve = st.slider(label = "Please select the number of nodes to retrieve from LLM", min_value = 0, max_value = 5, value = 3, step = 1,
help =
'''
        Nodes to retrieve is simply how many nodes the LLM will consider in giving its output.
        The higher the number of nodes, the greater the accuracy but also the higher the cost, and vice versa.
        I'd recommend an even balance (hence, the default value of 3).
''',
disabled = False,
label_visibility = "visible")
# Temperature slider
temperature = st.slider(label = "Please select temperature of the LLM", min_value = 0.0, max_value = 1.0, value = 0.2, step = 0.1,
help =
'''
Temperature is a parameter that controls the “creativity” or randomness of the text generated by GPT-3.
A higher temperature (e.g., 0.7) results in more diverse and creative output, while a lower temperature (e.g., 0.2) makes the output more deterministic and focused.
Look at this page for more details: https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683
''',
disabled = False,
label_visibility = "visible")
# Timeout for requests slider
request_timeout = st.slider(label = "Please select the request timeout (in seconds) of the LLM", min_value = 0, max_value = 600, value = 120, step = 60,
help =
'''
        Request timeout is the timeout (in seconds) for requests to the OpenAI completion API.
        A higher number means you wait for a longer time before the request times out, and vice versa.
        Note, too high a number means you may wait too long, while too low a number means you don't give the request a chance to complete.
        I'd recommend striking a balance but leaning a bit more towards the lower side (hence, the default is 120 seconds).
''',
disabled = False,
label_visibility = "visible")
# Maximum retries slider
max_retries = st.slider(label = "Please select the maximum retries of the LLM", min_value = 0, max_value = 15, value = 5, step = 1,
help =
'''
        This is the maximum number of retries the LLM will make in case it hits a failure.
        A higher number allows for more retries and vice versa.
        Note, too high a number means you may wait too long, and too low a number means you don't give it a chance to retry.
        I'd recommend striking an even balance (hence, the default is 5 retries).
''',
disabled = False,
label_visibility = "visible")
# Sleep function slider
sleep = st.slider(label = "Please select the amount of time you want LLM to sleep before executing next prompt (in seconds)", min_value = 0, max_value = 60, value = 8, step = 1,
help =
'''
        This is the amount of time the LLM will sleep before executing the next prompt.
        This is done primarily to avoid rate-limit errors and any failure that might interrupt the code.
        A higher number means you wait longer but are less likely to hit rate-limit errors, and vice versa.
        I'd recommend leaning towards a lower number (hence, the default is 8 seconds).
        Besides this, there is also another safety check that conducts exponential waiting between 1 and 20 seconds, for a maximum of 6 retries (using the tenacity library).
''',
disabled = False,
label_visibility = "visible")
# Advanced options:
# Return_all_chunks: Shows all chunks retrieved from vector search
# Show_full_doc: Displays parsed contents of the document
with st.expander("Advanced Options"):
return_all_chunks = st.checkbox("Show all chunks retrieved from vector search")
show_full_doc = st.checkbox("Show parsed contents of the document")
show_tables = st.checkbox("Show tables in dataframe")
#st.session_state["max_retries"] = max_retries
#persist(max_retries)
# Error handling for model selection
if not model:
st.warning("Please select a model", icon="⚠")
st.stop()
# submit_button = st.form_submit_button(label='Process Documents', on_click = callback)
# No value inserted for check_embeddings - raise warning
else:
st.warning("Please select whether you have embeddings to use or not")
st.stop()
# If user clicks on the button process
# PS: Commented the process documents session state so it doesn't rerun the entire app again
# if st.button("Process documents", type = "primary", on_click = callback) or st.session_state.process_documents:
if st.button("Process documents", type = "primary"):
# st.session_state.process_documents = True
# User does not have embeddings they can use
if check_embeddings == "No":
# Checking if both conditions are satisfied
if uploaded_files and uploaded_xlsx_files:
# Call bundle function - no_embeddings_process_documents
output_response, llm_prompts_to_use, context_with_max_score_list, file_path_metadata_list, source_metadata_list, orignal_excel_file, table_dfs, docs = no_embeddings_process_documents_loop_advanced(uploaded_files = uploaded_files, uploaded_xlsx_files = uploaded_xlsx_files, chroma_file_path = chroma_file_path, model = model, nodes_to_retrieve = nodes_to_retrieve, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries, sleep = sleep, return_all_chunks = return_all_chunks, fund_variable = fund_variable)
# Storing all above variables into session state
# st.session_state["output_response"] = output_response
# st.session_state["llm_prompts_to_use"] = llm_prompts_to_use
# st.session_state["context_with_max_score_list"] = context_with_max_score_list
# st.session_state["file_path_metadata_list"] = file_path_metadata_list
# st.session_state["source_metadata_list"] = source_metadata_list
# Display collective prompt results in an expander
with st.expander("Display prompt results & relevant context"):
for i in range(len(llm_prompts_to_use)):
st.markdown(f"Displaying results for Prompt #{i}: {llm_prompts_to_use[i]}")
answer_col, sources_col = st.columns(2)
# Displaying answers columns
with answer_col:
st.markdown("#### Answer")
st.markdown(output_response[i])
# Displaying sources columns
with sources_col:
# User selected option to display all chunks from vector search
if return_all_chunks is True:
# These are lists of corresponding question (as source was list of list)
context_to_display = context_with_max_score_list[i]
file_path_to_display = file_path_metadata_list[i]
source_metadata_to_display = source_metadata_list[i]
for chunk in range(nodes_to_retrieve):
st.markdown(context_to_display[chunk])
st.markdown(f"Document: {file_path_to_display[chunk]}")
st.markdown(f"Page Source: {source_metadata_to_display[chunk]}")
st.markdown("---")
# User selected option to display only 1 chunk
if return_all_chunks is False:
# Display particular lists
st.markdown(context_with_max_score_list[i])
st.markdown(f"Document: {file_path_metadata_list[i]}")
st.markdown(f"Page Source: {source_metadata_list[i]}")
st.markdown("---")
# If show full document option is True
if show_full_doc is True:
# Display parsed results in the expander
with st.expander("Display parsed documents"):
content, content_document_list, content_filename = display_document_from_uploaded_files(uploaded_files)
for i in range(len(content_document_list)):
st.markdown(f"### File name: {content_filename[i]}")
# st.markdown(f"### Content:")
st.markdown(content_document_list[i])
# If show tables option is True, display it in expander
if show_tables is True:
# Display all parsed tables
with st.expander("Display Parsed Tables"):
st.markdown(f"Parsed Table results")
# st.write(table_dfs)
for i in range(len(table_dfs)):
st.dataframe(table_dfs[i])
# Display dataframe and download to excel and csv
st.markdown("### Bulk Prompt Results")
st.dataframe(data = orignal_excel_file, use_container_width = True, column_order = None) # Display dataframe containing final results
download_data_as_excel_link(orignal_excel_file = orignal_excel_file) # Display link to download results to excel file
download_data_as_csv_link(orignal_excel_file = orignal_excel_file) # Display link to download results to csv file
download_embedding_zip(chroma_file_path, zip_filename = "embeddings")
## ERROR HANDLING FOR ALL 2 FILE UPLOADS
## 1 CONDITION NOT SATISFIED
elif uploaded_files and not uploaded_xlsx_files:
st.warning("1) Please upload an excel file", icon="⚠")
st.stop()
elif uploaded_xlsx_files and not uploaded_files:
st.warning("1) Please upload pdf files", icon="⚠")
st.stop()
# ALL 2 CONDITIONS NOT SATISFIED
else:
st.warning(
'''
1) Please upload the pdf files
2) and upload the excel files''',
icon="⚠")
st.stop()
# User has embeddings which they can use
elif check_embeddings == "Yes":
# Checking if all three conditions are satisfied
if uploaded_xlsx_files:
# Call bundle function - no_embeddings_process_documents
output_response, llm_prompts_to_use, context_with_max_score_list, file_path_metadata_list, source_metadata_list, orignal_excel_file, table_dfs, docs = embeddings_process_documents_loop_advanced(uploaded_files = uploaded_files, uploaded_xlsx_files = uploaded_xlsx_files, model = model, nodes_to_retrieve = nodes_to_retrieve, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries, sleep = sleep, return_all_chunks = return_all_chunks, fund_variable = fund_variable, uploaded_zip_file = uploaded_zip_file)
# Display collective prompt results in an expander
with st.expander("Display prompt results & relevant context"):
for i in range(len(llm_prompts_to_use)):
st.markdown(f"Displaying results for Prompt #{i}: {llm_prompts_to_use[i]}")
answer_col, sources_col = st.columns(2)
# Displaying answers columns
with answer_col:
st.markdown("#### Answer")
st.markdown(output_response[i])
# Displaying sources columns
with sources_col:
# User selected option to display all chunks from vector search
if return_all_chunks is True:
# These are lists of corresponding question (as source was list of list)
context_to_display = context_with_max_score_list[i]
file_path_to_display = file_path_metadata_list[i]
source_metadata_to_display = source_metadata_list[i]
for chunk in range(nodes_to_retrieve):
st.markdown(context_to_display[chunk])
st.markdown(f"Document: {file_path_to_display[chunk]}")
st.markdown(f"Page Source: {source_metadata_to_display[chunk]}")
st.markdown("---")
# User selected option to display only 1 chunk
if return_all_chunks is False:
# Display particular lists
st.markdown(context_with_max_score_list[i])
st.markdown(f"Document: {file_path_metadata_list[i]}")
st.markdown(f"Page Source: {source_metadata_list[i]}")
st.markdown("---")
# If show full document option is True
if show_full_doc is True:
# Display parsed results in the expander
with st.expander("Display parsed documents"):
content, content_document_list, content_filename = display_document_from_uploaded_files(uploaded_files)
for i in range(len(content_document_list)):
st.markdown(f"### File name: {content_filename[i]}")
# st.markdown(f"### Content:")
st.markdown(content_document_list[i])
# If show tables option is True, display it in expander
if show_tables is True:
# Display all parsed tables
with st.expander("Display Parsed Tables"):
st.markdown(f"Parsed Table results")
# st.write(table_dfs)
for i in range(len(table_dfs)):
st.dataframe(table_dfs[i])
# Display dataframe and download to excel and csv
st.markdown("### Bulk Prompt Results")
st.dataframe(data = orignal_excel_file, use_container_width = True, column_order = None) # Display dataframe containing final results
download_data_as_excel_link(orignal_excel_file = orignal_excel_file) # Display button to download results to excel file
download_data_as_csv_link(orignal_excel_file = orignal_excel_file) # Display button to download results to csv file
# File uploading error handling - Excel files were not uploaded
else:
st.warning("1) Please upload the excel files", icon="⚠")
st.stop()
else:
st.info("Seems like you are not logged in. Please head over to the Login page to login", icon="ℹ️")
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | core~LLM_prompting.py | #!/usr/bin/env python
# coding: utf-8
# Chosen imports
import streamlit as st
# from core.persist import persist, load_widget_state
# from pages.bulk_upload_advanced import max_retries - This is giving some error
import pickle
import pandas as pd
import os
import time
import warnings
warnings.filterwarnings("ignore")
from tenacity import retry, stop_after_attempt, wait_random_exponential
import openai
# All imports
# pdf imports
import fitz
from pprint import pprint
import camelot
import PyPDF2
from PyPDF2 import PdfReader
# import pdfplumber
# Langchain imports
from langchain.chains import RetrievalQA
from langchain.chains import create_extraction_chain
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.document_loaders import CSVLoader
from langchain.llms import OpenAI
from langchain.document_loaders import UnstructuredPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.openai_functions.utils import (
_convert_schema,
_resolve_schema_references,
get_llm_kwargs,
)
from langchain.output_parsers.openai_functions import (
JsonKeyOutputFunctionsParser,
PydanticAttrOutputFunctionsParser,
)
from langchain.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
# LlamaIndex imports
from llama_hub.file.pymu_pdf.base import PyMuPDFReader
from llama_index import Document, SummaryIndex
from llama_index import VectorStoreIndex, ServiceContext, LLMPredictor
from llama_index.query_engine import PandasQueryEngine, RetrieverQueryEngine
from llama_index.retrievers import RecursiveRetriever
from llama_index.schema import IndexNode
from llama_index.llms import OpenAI
from llama_hub.file.pymu_pdf.base import PyMuPDFReader
from llama_index.retrievers import RecursiveRetriever
from llama_index.query_engine import RetrieverQueryEngine
from llama_index.response_synthesizers import get_response_synthesizer
# Other library imports
import pandas as pd
import os
import time
from typing import Any, List, Optional
from pathlib import Path
import pickle
OPENAI_API_KEY = st.secrets["OPENAI_API_KEY"]
openai.api_key = OPENAI_API_KEY
openai_api_key = OPENAI_API_KEY
#load_widget_state()
# @st.cache_data(show_spinner = False)
# @retry(wait=wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
# @st.cache_resource(show_spinner = False)
def individual_prompt(query_engine, prompt):
'''
individual_prompt: This function runs a single prompt and returns the output result
Input -
query_engine: An instance of the Retriever Query Engine class
prompt: The prompt inputted by the user
Output -
output_response: The output of the prompt by the LLM
output_context: The full response object returned by the query engine (includes the retrieved context)
'''
# Query engine prompt
response = query_engine.query(prompt)
# The context used for generating output by LLM
output_context = response
# The final output from LLM
# output_response = str(response)
output_response = response.response
return output_response, output_context
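# --- Usage sketch (illustrative comments only, not executed) ---
# `query_engine` is assumed to be the RetrieverQueryEngine built elsewhere in this
# project (see core/querying.py); the prompt string below is a hypothetical example.
#
#   answer, response_obj = individual_prompt(
#       query_engine,
#       "What is the management fee of the fund?",
#   )
#   print(answer)                      # plain-text answer from the LLM
#   print(response_obj.source_nodes)   # retrieved nodes backing the answer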
def individual_prompt_advanced(query_engine, prompt, nodes_to_retrieve, return_all_chunks):
'''
individual_prompt_advanced: This function runs a single prompt and returns the output result, together with the retrieved context and its metadata
Input -
query_engine: An instance of the Retriever Query Engine class
prompt: The prompt inputted by the user
nodes_to_retrieve: Number of nodes retrieved by the vector retriever
return_all_chunks: Whether to return every retrieved chunk or only the best-scoring one
Output -
output_response, output_context, plus lists of the retrieved context, file-path metadata and page-source metadata
'''
# If user wants to return all chunks
if return_all_chunks is True:
individual_context_list = []
file_path_metadata_list = []
source_metadata_list = []
# Query engine prompt
response = query_engine.query(prompt)
# The context used for generating output by LLM
output_context = response
# The final output from LLM
# output_response = str(response)
output_response = response.response
# Looping through the scores and appending it to a list
for i in range(nodes_to_retrieve):
# st.write(response.source_nodes[i].metadata)
# Appending each individual context in the list
individual_context = response.source_nodes[i].get_text()
individual_context_list.append(individual_context)
# Extracting file_path metadata information & append to list
if "file_path" in response.source_nodes[i].metadata and response.source_nodes[i].metadata["file_path"] is not None:
file_path_metadata = response.source_nodes[i].metadata["file_path"]
else:
file_path_metadata = ""
file_path_metadata_list.append(file_path_metadata)
# Extracting source metadata information & append to list
if "source" in response.source_nodes[i].metadata and response.source_nodes[i].metadata["source"] is not None:
source_metadata = response.source_nodes[i].metadata["source"]
else:
source_metadata = ""
source_metadata_list.append(source_metadata)
return output_response, output_context, individual_context_list, file_path_metadata_list, source_metadata_list
# If user doesn't want to return all chunks
if return_all_chunks is False:
context_with_max_score_list = []
file_path_metadata_list = []
source_metadata_list = []
scores = []
# Query engine prompt
response = query_engine.query(prompt)
# The context used for generating output by LLM
output_context = response
# The final output from LLM
# output_response = str(response)
output_response = response.response
# Looping through the scores and appending it to a list
for i in range(nodes_to_retrieve):
# Append each score to list
scores.append(response.source_nodes[i].get_score())
# Finding the maximum score and index at which it was
max_score = max(scores)
max_index = scores.index(max_score)
# Obtain the context which has the corresponding maximum score
context_with_max_score = response.source_nodes[max_index].get_text()
context_with_max_score_list.append(context_with_max_score)
# Extracting file_path metadata information & append to list
file_path_metadata = response.source_nodes[max_index].metadata["file_path"]
file_path_metadata_list.append(file_path_metadata)
# Extracting source metadata information
source_metadata = response.source_nodes[max_index].metadata["source"]
source_metadata_list.append(source_metadata)
return output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list
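# Note on the two branches above: with return_all_chunks=True the last three returned
# lists hold one entry per retrieved node (length == nodes_to_retrieve), while with
# return_all_chunks=False they each hold a single best-scoring entry.
# A hypothetical call could look like:
#
#   answer, ctx, chunks, files, pages = individual_prompt_advanced(
#       query_engine, "What is the ISIN of the fund?",
#       nodes_to_retrieve=3, return_all_chunks=True,
#   )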
# @st.cache_data(show_spinner = False)
# @retry(wait = wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
# @st.cache_resource(show_spinner = False)
def prompt_loop(query_engine, llm_prompts_to_use):
'''
prompt_loop: This function runs a loop by inputting multiple prompts (from list llm_prompts_to_use) and stores the output
Input -
query_engine: An instance of the Retriever Query Engine class
llm_prompts_to_use: List of input prompts to LLM
Output -
output_response: List containing response of prompts passed to LLM
output_context: List containing context of the response of prompts passed to LLM
'''
output_response = []
output_context = []
count = 1
for prompt in llm_prompts_to_use:
# Diagnostic purposes
# st.write(f"{count} time entering loop")
# Diagnostic purposes - Checking prompt
# st.write(f"Prompt used for this iteration is {prompt}")
# Diagnostic purposes - Query Engine
# st.write(type(query_engine))
# st.write(query_engine)
# Calling query engine
response = query_engine.query(f"{prompt}")
# Debugging - Checking if problem is with metadata
# metadata = response.metadata
# error_message = metadata.get("error_message")
# if error_message:
# st.write(f"Error message: {error_message}")
# else:
# st.write(f"Response text: {response.response}")
# Appending to list
output_context.append(response)
output_response.append(response.response)
#output_response.append(str(response))
count += 1
# Diagnostic purposes - response from LLM
# st.write(f"Response from llm is {response.response}")
# Diagnostic purposes - context from LLM
# st.write(f"Context from LLM is {response}")
# Wait 3 seconds before executing next prompt
time.sleep(3)
return output_response, output_context
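# --- Usage sketch (illustrative comments only, not executed) ---
# `llm_prompts_to_use` normally comes from core/LLM_preprocessing.py; the literals
# below are hypothetical stand-ins.
#
#   prompts = ["What is the fund currency?", "What is the NAV of the fund?"]
#   answers, contexts = prompt_loop(query_engine, prompts)
#   for p, a in zip(prompts, answers):
#       print(p, "->", a)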
# @retry(wait = wait_random_exponential(min=1, max=20), stop=stop_after_attempt(6))
# @st.cache_resource(show_spinner = False)
def prompt_loop_advanced(query_engine, llm_prompts_to_use, nodes_to_retrieve, sleep, return_all_chunks):
# If we want to return all chunks
if return_all_chunks is True:
# variable for keeping track of count
count = 1
# These two lists returned will be used in filling up our excel file
output_response = []
output_context = []
# These 3 lists returned will be used in displaying in our UI (will be a list of lists)
context_with_max_score_list = []
file_path_metadata_list = []
source_metadata_list = []
for prompt in llm_prompts_to_use:
# st.write(f"Loop #{count}")
individual_context_list = []
individual_file_path_metadata_list = []
individual_source_metadata_list = []
# Calling query engine
response = query_engine.query(f"{prompt}")
# Appending to list
output_context.append(response)
output_response.append(response.response)
#output_response.append(str(response))
count += 1
# Wait 8 seconds before executing next prompt
# time.sleep(sleep)
# Looping through the scores and appending it to a list
for i in range(nodes_to_retrieve):
# Appending each individual context in the list
individual_context = response.source_nodes[i].get_text()
individual_context_list.append(individual_context)
# st.write(individual_context)
# st.write("--")
# st.write(response.source_nodes[i].metadata)
# st.write("--")
# st.write(type(response.source_nodes[i].metadata))
# st.write("--")
# st.write(response.source_nodes[i].metadata["file_path"])
# # Extracting file_path metadata information
file_path_metadata = response.source_nodes[i].metadata["file_path"]
# # Split by "\\"
# split_string = original_string.split("\\")
# if len(split_string) == 1:
# # Split by "/"
# split_string = original_string.split("/")
# # Take the last element from list & print it
# file_path_metadata = split_string[-1]
# Append file_path_metadata to the list
individual_file_path_metadata_list.append(file_path_metadata)
# Extracting source metadata information
source_metadata = response.source_nodes[i].metadata["source"]
individual_source_metadata_list.append(source_metadata)
# Now that we have finished iteration over all nodes for a prompt, we update master list.
# Each variable here will be a list of list, for each prompt.
context_with_max_score_list.append(individual_context_list)
file_path_metadata_list.append(individual_file_path_metadata_list)
source_metadata_list.append(individual_source_metadata_list)
# sleep for a while before executing next prompt
time.sleep(sleep)
return output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list
# If we don't want to return all chunks
if return_all_chunks is False:
# variable for keeping track of count
count = 1
# These two lists returned will be used in filling up our excel file
output_response = []
output_context = []
# These 3 lists returned will be used in displaying in our UI (for each prompt, will be one value in the list)
context_with_max_score_list = []
file_path_metadata_list = []
source_metadata_list = []
for prompt in llm_prompts_to_use:
scores = []
# Calling query engine
response = query_engine.query(f"{prompt}")
# Appending to list
output_context.append(response)
output_response.append(response.response)
#output_response.append(str(response))
count += 1
# Wait 8 seconds before executing next prompt
# time.sleep(sleep)
# Looping through the scores and appending it to a list
for i in range(nodes_to_retrieve):
scores.append(response.source_nodes[i].get_score())
# Finding the maximum score and index at which it was
max_score = max(scores)
max_index = scores.index(max_score)
# Obtain the context which has the corresponding maximum score
context_with_max_score = response.source_nodes[max_index].get_text()
context_with_max_score_list.append(context_with_max_score)
# Extracting file_path metadata information
original_string = response.source_nodes[max_index].metadata["file_path"]
# Split by "\\"
split_string = original_string.split("\\")
if len(split_string) == 1:
# Split by "/"
split_string = original_string.split("/")
# Take the last element from list & print it
file_path_metadata = split_string[-1]
# Append file_path_metadata to the list
file_path_metadata_list.append(file_path_metadata)
# Extracting source metadata information
source_metadata = response.source_nodes[max_index].metadata["source"]
source_metadata_list.append(source_metadata)
# print("Page source:", source)
# sleep for a while before executing next prompt
time.sleep(sleep)
return output_response, output_context, context_with_max_score_list, file_path_metadata_list, source_metadata_list | [] |
2024-01-10 | yashmehtakristal/KristalGPT | pages~bulk_upload_basic.py | # All imports
import streamlit as st
from streamlit_extras.app_logo import add_logo
from st_pages import Page, Section, add_page_title, show_pages, hide_pages
# Setting page config & header
st.set_page_config(page_title = "Kristal Retriever", page_icon = "📖", layout = "wide")
st.header("📖 Kristal Retriever")
# Hide particular pages if not logged in
if not st.session_state.logged_in:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
# Hide particular pages if logged out
if st.session_state.logged_out:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
add_logo("https://assets-global.website-files.com/614a9edd8139f5def3897a73/61960dbb839ce5fefe853138_Kristal%20Logotype%20Primary.svg")
import openai
import os
import tempfile
from tempfile import NamedTemporaryFile
import zipfile
## Importing functions
from ui import (
is_query_valid,
display_file_read_error,
)
from bundle import no_embeddings_process_documents_loop, embeddings_process_documents_loop
from core.chroma import st_server_file, print_files_in_particular_directory, upload_zip_files, print_files_in_directory, check_zipfile_directory
# from core.loading import read_documents_from_directory, iterate_files_from_directory, save_uploaded_file, read_documents_from_uploaded_files, get_tables_from_uploaded_file, iterate_files_from_uploaded_files, iterate_excel_files_from_directory, iterate_uploaded_excel_files, print_file_details, show_dataframes, iterate_uploaded_excel_file
# from core.pickle import save_to_pickle, load_from_pickle
# from core.indexing import query_engine_function, build_vector_index
# from core.LLM_preprocessing import conditions_excel, extract_fund_variable, prompts_to_substitute_variable, storing_input_prompt_in_list
# from core.querying import recursive_retriever_old, recursive_retriever
# from core.LLM_prompting import individual_prompt, prompt_loop
# from core.PostLLM_prompting import create_output_result_column, create_output_context_column, intermediate_output_to_excel
# from core.parsing import create_schema_from_excel, parse_value
# from core.Postparsing import create_filtered_excel_file, final_result_orignal_excel_file, reordering_columns
# from core.Last_fixing_fields import find_result_fund_name, find_result_fund_house, find_result_fund_class, find_result_currency, find_result_acc_or_inc, create_new_kristal_alias, update_kristal_alias, update_sponsored_by, update_required_broker, update_transactional_fund, update_disclaimer, update_risk_disclaimer, find_nav_value, update_nav_value
# from core.output import output_to_excel, download_data_as_excel, download_data_as_csv
### CODE
OPENAI_API_KEY = st.secrets["OPENAI_API_KEY"]
openai.api_key = OPENAI_API_KEY
openai_api_key = OPENAI_API_KEY
# Error handling for OpenAI API key
if not openai_api_key:
st.warning(
"There is something wrong with the API Key Configuration."
"Please check with creator of the program (OpenAI keys can be found at https://platform.openai.com/account/api-keys)"
)
# Display app only if user is logged in
if st.session_state.logged_in is True and st.session_state.logout is False:
st.sidebar.subheader(f'Welcome {st.session_state.username}')
logout_button = st.session_state.Authenticator.logout('Log Out', 'sidebar')
# If user has clicked logged_out button, update the state variables
if logout_button:
st.session_state.logged_out = True
st.session_state.logged_in = False
st.rerun()
# Check embeddings
check_embeddings = st.radio(label = "Do you have saved embeddings?", options = ["Yes", "No"], index = None, help = "Embeddings are saved files created by ChromaDB", disabled=False, horizontal = False, label_visibility="visible")
# User does not have embeddings they can use
if check_embeddings == "No":
# Obtain chrome_file_path and chroma_file_name
master_folder, chroma_file_path, chroma_file_name = st_server_file()
# print_files_in_particular_directory(master_folder)
# print_files_in_particular_directory(chroma_file_path)
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# File uploader section for xlsx
uploaded_xlsx_files = st.file_uploader(
"Upload a xlsx file",
type=["xlsx"],
help="Please upload the excel file. Make sure it is in the appropriate format. Check the [name] sidebar for more details about the format",
accept_multiple_files = False
)
# Fund name variable
fund_variable = st.text_input(
label = "Fund name:",
value = None,
max_chars = None,
type = "default",
help = "This will be used to replace the word, fund, in certain prompts",
placeholder = '''Please input the exact, full fund name. Example: FRANKLIN US GOVERNMENT "A" INC''',
disabled = False,
label_visibility = "visible"
)
# User has embeddings which they can use
elif check_embeddings == "Yes":
uploaded_zip_file = upload_zip_files()
# print_files_in_directory(chroma_file_path)
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# File uploader section for xlsx
uploaded_xlsx_files = st.file_uploader(
"Upload a xlsx file",
type=["xlsx"],
help="Please upload the excel file. Make sure it is in the appropriate format. Check the [name] sidebar for more details about the format",
accept_multiple_files = False)
# Fund name variable
fund_variable = st.text_input(
label = "Fund name:",
value = None,
max_chars = None,
type = "default",
help = "This will be used to replace the word, fund, in certain prompts",
placeholder = '''Please input the exact, full fund name. Example: FRANKLIN US GOVERNMENT "A" INC''',
disabled = False,
label_visibility = "visible"
)
# No value inserted for check_embeddings - raise warning
else:
st.warning("Please select whether you have embeddings to use or not")
st.stop()
# If user clicks on the button process
if st.button("Process documents", type = "primary"):
# User does not have embeddings they can use
if check_embeddings == "No":
# Checking if both conditions are satisfied
if uploaded_files and uploaded_xlsx_files:
# Call bundle function - no_embeddings_process_documents
no_embeddings_process_documents_loop(uploaded_files = uploaded_files, uploaded_xlsx_files = uploaded_xlsx_files, chroma_file_path = chroma_file_path, fund_variable = fund_variable)
# Printing files in particular directory
# print_files_in_particular_directory(chroma_file_path)
## ERROR HANDLING FOR ALL 2 FILE UPLOADS
## 1 CONDITION NOT SATISFIED
elif uploaded_files and not uploaded_xlsx_files:
st.warning("1) Please upload an excel file", icon="⚠")
st.stop()
elif uploaded_xlsx_files and not uploaded_files:
st.warning("1) Please upload pdf files", icon="⚠")
st.stop()
# ALL 2 CONDITIONS NOT SATISFIED
else:
st.warning(
'''
1) Please upload the pdf files
2) and upload the excel files''',
icon="⚠")
st.stop()
# User has embeddings which they can use
elif check_embeddings == "Yes":
# Checking if all three conditions are satisfied
if uploaded_files and uploaded_xlsx_files:
# Call bundle function - no_embeddings_process_documents
embeddings_process_documents_loop(uploaded_files = uploaded_files, uploaded_xlsx_files = uploaded_xlsx_files, fund_variable = fund_variable, uploaded_zip_file = uploaded_zip_file)
## ERROR HANDLING FOR ALL 2 FILE UPLOADS
## 1 CONDITION NOT SATISFIED
elif uploaded_files and not uploaded_xlsx_files:
st.warning("1) Please upload an excel file", icon="⚠")
st.stop()
elif uploaded_xlsx_files and not uploaded_files:
st.warning("1) Please upload pdf files", icon="⚠")
st.stop()
# ALL 2 CONDITIONS NOT SATISFIED
else:
st.warning(
'''
1) Please upload the pdf files
2) and upload the excel files''',
icon="⚠")
st.stop()
else:
st.info("Seems like you are not logged in. Please head over to the Login page to login", icon="ℹ️") | [] |
2024-01-10 | yashmehtakristal/KristalGPT | core~indexing.py | #!/usr/bin/env python
# coding: utf-8
# All imports
from langchain.chat_models import ChatOpenAI
from llama_index.query_engine import PandasQueryEngine, RetrieverQueryEngine
from llama_index import VectorStoreIndex, ServiceContext, LLMPredictor
from llama_index.schema import IndexNode
import pickle
import pandas as pd
import os
import time
import warnings
import streamlit as st
warnings.filterwarnings("ignore")
# Defining query engine over tables
# @st.cache_resource(show_spinner = False)
def query_engine_function(table_dfs):
'''
query_engine_function: This function defines the llm, service context object and df_query_engines
Input -
table_dfs: list containing dataframe of various tables
Output -
llm, service context, df_query_engines: The respective defined objects
'''
# GPT 4 Model used: "gpt-4-0613"
# GPT 3.5 Model used: "gpt-3.5-turbo"
llm = ChatOpenAI(model="gpt-3.5-turbo", request_timeout = 120, max_retries = 6)
# Create a service context object
service_context = ServiceContext.from_defaults(llm=llm)
# Create a query engine for each table in the list of table dataframes
df_query_engines = [
PandasQueryEngine(table_df, service_context = service_context)
for table_df in table_dfs
]
# Returns the llm, service context, and query engine
return llm, service_context, df_query_engines
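# --- Usage sketch (illustrative comments only, not executed) ---
# `table_dfs` is the list of DataFrames parsed from the PDF tables (see core/loading.py).
#
#   llm, service_context, df_query_engines = query_engine_function(table_dfs)
#   # each entry of df_query_engines answers pandas-style questions over one table, e.g.:
#   # df_query_engines[0].query("What is the value of the NAV row?")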
# @st.cache_resource(show_spinner = False)
def query_engine_function_advanced(table_dfs, model, temperature, request_timeout, max_retries):
'''
query_engine_function_advanced: This function defines the llm, service context object and df_query_engines, using the model settings passed in by the caller
Input -
table_dfs: list containing dataframe of various tables
Output -
llm, service context, df_query_engines: The respective defined objects
'''
# The model name is passed in by the caller (e.g. "gpt-4-0613" or "gpt-3.5-turbo")
llm = ChatOpenAI(model = model, request_timeout = request_timeout, max_retries = max_retries, temperature = temperature)
# Create a service context object
service_context = ServiceContext.from_defaults(llm=llm)
# Create a query engine for each table in the list of table dataframes
df_query_engines = [
PandasQueryEngine(table_df, service_context = service_context)
for table_df in table_dfs
]
# Returns the llm, service context, and query engine
return llm, service_context, df_query_engines
### Build Vector Index
# Cannot cache because query engine cannot be pickled
# @st.cache_resource(show_spinner = False)
def build_vector_index(service_context, df_query_engines, docs, nodes_to_retrieve, storage_context, vector_store, is_chroma_loading):
'''
build_vector_index: This function ultimately builds the vector index for each of the documents
Input -
service_context: service_context object defined above
df_query_engines: Query engine for each table in list of tables dataframe
docs: A list of documents
nodes_to_retrieve: Number of nodes to retrieve from vector_retriever
storage_context: Storage context wrapping the (Chroma) vector store
vector_store: The Chroma vector store to build into or load from
is_chroma_loading: False when creating new embeddings, True when loading an existing Chroma store
Output -
vector_index: vector_index object created
vector_retriever: Retriever over the top nodes_to_retrieve nodes of the vector index
df_id_query_engine_mapping: Mapping of the query engine with each dataframe
nodes_to_retrieve: The number of nodes to retrieve, passed through for convenience
'''
doc_nodes = []
for doc in docs:
doc_nodes.extend(service_context.node_parser.get_nodes_from_documents(doc))
summaries = [
"This node provides information stored in the tables in the PDFs. Information could be anything about the financial product.",
]
df_nodes = [
IndexNode(text=summary, index_id=f"pandas{idx}")
for idx, summary in enumerate(summaries)
]
df_id_query_engine_mapping = {
f"pandas{idx}": df_query_engine
for idx, df_query_engine in enumerate(df_query_engines)
}
# If we are creating a new chroma, use this method of vector_index
if is_chroma_loading is False:
vector_index = VectorStoreIndex(
doc_nodes + df_nodes,
storage_context = storage_context,
service_context = service_context
)
# If we are simply loading chroma, use this method of vector_index
if is_chroma_loading is True:
vector_index = VectorStoreIndex.from_vector_store(vector_store)
vector_retriever = vector_index.as_retriever(similarity_top_k = nodes_to_retrieve)
return vector_index, vector_retriever, df_id_query_engine_mapping, nodes_to_retrieve
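# --- Usage sketch (illustrative comments only, not executed) ---
# `storage_context` and `vector_store` are assumed to come from the Chroma helpers in
# core/chroma.py; pass is_chroma_loading=False when building fresh embeddings and True
# when re-loading previously saved ones.
#
#   llm, service_context, df_query_engines = query_engine_function(table_dfs)
#   vector_index, vector_retriever, mapping, k = build_vector_index(
#       service_context, df_query_engines, docs,
#       nodes_to_retrieve=3,
#       storage_context=storage_context,
#       vector_store=vector_store,
#       is_chroma_loading=False,
#   )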
| [] |
2024-01-10 | yashmehtakristal/KristalGPT | core~parsing.py | #!/usr/bin/env python
# coding: utf-8
# All imports
import streamlit as st
from langchain.chains import create_extraction_chain
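# NOTE: a customised create_extraction_chain is defined later in this module and
# shadows the langchain import above (the local version is the one used below).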
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.openai_functions.utils import (
_convert_schema,
_resolve_schema_references,
get_llm_kwargs,
)
from langchain.output_parsers.openai_functions import (
JsonKeyOutputFunctionsParser,
PydanticAttrOutputFunctionsParser,
)
from langchain.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
import pandas as pd
from typing import Any, List, Optional
import warnings
warnings.filterwarnings("ignore")
@st.cache_data(show_spinner = False)
def create_schema_from_excel(orignal_excel_file, llm_prompts_index):
'''
create_schema_from_excel: This function will automatically create a schema based on the "Field Name" and "Data Type" columns in results_output.xlsx
Input -
orignal_excel_file: Dataframe of the results excel file
llm_prompts_index: List of the index of the rows of prompts (in orignal_excel_file) that were fed to LLM
Output -
schema: Dictionary of the form {"properties": {...}} mapping each field name to its data type
'''
filtered_df = orignal_excel_file.iloc[llm_prompts_index]
schema = {"properties": {}}
for index, row in filtered_df.iterrows():
field_name = row["Field name"]
data_type = row["Data Type"]
property_dict = {"type": data_type}
schema["properties"][field_name] = property_dict
# Return the schema generated
return schema
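# --- Illustrative example (hypothetical data, not executed) ---
# If the rows selected by llm_prompts_index contain
#   Field name = "Fund Name" with Data Type = "string" and
#   Field name = "NAV"       with Data Type = "number",
# the returned schema would be:
#   {"properties": {"Fund Name": {"type": "string"}, "NAV": {"type": "number"}}}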
# @st.cache_data(show_spinner = False)
def _get_extraction_function(entity_schema: dict) -> dict:
'''
_get_extraction_function: This is the information_extraction function returning a dictionary
Input -
entity_schema: Takes the entity schema dictionary as input
Output -
Below dictionary
'''
return {
"name": "information_extraction",
"description": "Extracts the relevant information from the passage.",
"parameters": {
"type": "object",
"properties": {
"info": {"type": "array", "items": _convert_schema(entity_schema)} # calling _convert_schema function from langchain
},
"required": ["info"],
},
}
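# --- Illustrative example (hypothetical schema, not executed) ---
#   fn = _get_extraction_function({"properties": {"NAV": {"type": "number"}}})
#   # fn["name"] == "information_extraction"
#   # fn["parameters"]["properties"]["info"]["items"] is produced by langchain's
#   # _convert_schema(...) applied to the schema passed in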
# @st.cache_data(show_spinner = False)
# @st.cache_resource(show_spinner = False)
def create_extraction_chain(
schema: dict,
llm: BaseLanguageModel,
prompt: Optional[BasePromptTemplate] = None,
verbose: bool = False,
) -> Chain:
"""
Create_extraction_chain: Creates a chain that extracts information from a passage.
Input -
schema: The schema of the entities to extract.
llm: The language model to use.
prompt: The prompt to use for extraction.
verbose: Whether to run in verbose mode. In verbose mode, some intermediate
logs will be printed to the console. Defaults to the global `verbose` value,
accessible via `langchain.globals.get_verbose()`.
Output -
Chain that can be used to extract information from a passage.
"""
# Call _get_extraction_function which returns a dictionary
function = _get_extraction_function(schema)
# Extraction template that user enters
# Note: recommended you keep here 'Passage: {input}' and the extraction template as follows as well
_EXTRACTION_TEMPLATE = """Extract and save the relevant entities mentioned\
in the following passage together with their properties.
Only extract the properties mentioned in the 'information_extraction' function.
If a property is not present and is not required in the function parameters, do not include it in the output.
If output is a Date then change it to dd/mm/yyyy format.
Passage:
{input}
"""
extraction_prompt = prompt or ChatPromptTemplate.from_template(_EXTRACTION_TEMPLATE)
output_parser = JsonKeyOutputFunctionsParser(key_name="info")
llm_kwargs = get_llm_kwargs(function)
# Construct the LLMChain
chain = LLMChain(
llm=llm,
prompt = extraction_prompt,
llm_kwargs=llm_kwargs,
output_parser=output_parser,
verbose=verbose,
)
# Return the chain
return chain
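# --- Usage sketch (illustrative comments only, not executed) ---
# `llm` is assumed to be the ChatOpenAI instance created in core/indexing.py; the
# passage below is a hypothetical example.
#
#   chain = create_extraction_chain(schema, llm)
#   extracted = chain.run("The NAV of the fund as of 01/02/2023 is 102.5 USD.")
#   # `extracted` is a list of dicts keyed by the schema's property names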
# @st.cache_data(show_spinner = False)
# @st.cache_resource(show_spinner = False)
def parse_value(output_response, llm_prompts_index, orignal_excel_file, schema, llm):
'''
parse_value: This function will take the values in column, "Output Result", feed it to parser and generate a list of key-value pairs in the dictionary
Input -
output_response: List containing response of prompts passed to LLM
llm_prompts_index: List of the index of the rows of prompts (in orignal_excel_file) that were fed to LLM
orignal_excel_file: Dataframe of the results excel file
schema: Extraction schema built from the excel file (see create_schema_from_excel)
llm: The language model used to run the extraction chain
Output -
orignal_excel_file: Dataframe of the results excel file, with the parsed values stored in a new "Final Output result" column
'''
final_output_value = []
for output_value in output_response:
# Create chain
chain = create_extraction_chain(schema, llm)
chain_result = chain.run(output_value)
final_output_value.append(chain_result)
print(llm_prompts_index)
print(final_output_value)
# Ensure that the "Final Output result" column accepts object (dictionary) data type
orignal_excel_file['Final Output result'] = None # Initialize the column with None values
orignal_excel_file['Final Output result'] = orignal_excel_file['Final Output result'].astype(object)
# Iterate through llm_prompts_index and assign values from final_output_value to a new column, "Final Output result"
for index, info_dict in zip(llm_prompts_index, final_output_value):
orignal_excel_file.at[index, 'Final Output result'] = info_dict
return orignal_excel_file
| [
"information_extraction",
"Extract and save the relevant entities mentioned in the following passage together with their properties.\n\n Only extract the properties mentioned in the 'information_extraction' function.\n\n If a property is not present and is not required in the function parameters, do not include it in the output.\n\n If output is a Date then change it to dd/mm/yyyy format.\n\n Passage:\n {input}\n "
] |
2024-01-10 | yashmehtakristal/KristalGPT | pages~qa_advanced.py | # All imports
import streamlit as st
from streamlit_extras.app_logo import add_logo
from st_pages import Page, Section, add_page_title, show_pages, hide_pages
# Setting page config & header
st.set_page_config(page_title="Kristal Retriever", page_icon="📖", layout="wide")
st.header("📖 Kristal Retriever")
# Hide particular pages if not logged in
if not st.session_state.logged_in:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
# Hide particular pages if logged out
if st.session_state.logged_out:
hide_pages(["Bulk Upload - Basic", "Bulk Upload - Advanced", "Q&A - Basic", "Q&A - Advanced"])
add_logo("https://assets-global.website-files.com/614a9edd8139f5def3897a73/61960dbb839ce5fefe853138_Kristal%20Logotype%20Primary.svg")
import openai
import os
import tempfile
from tempfile import NamedTemporaryFile
from streamlit_extras.app_logo import add_logo
from st_pages import Page, Section, add_page_title, show_pages, hide_pages
from core.loading import display_document_from_uploaded_files
## Importing functions
from ui import (
is_query_valid,
display_file_read_error,
)
from bundle import no_embeddings_process_documents_individual, embeddings_process_documents_individual, no_embeddings_process_documents_individual_advanced, embeddings_process_documents_individual_advanced
from core.loading import read_documents_from_directory, iterate_files_from_directory, save_uploaded_file, read_documents_from_uploaded_files, get_tables_from_uploaded_file, iterate_files_from_uploaded_files, iterate_excel_files_from_directory, iterate_uploaded_excel_files, print_file_details, show_dataframes, iterate_uploaded_excel_file
from core.pickle import save_to_pickle, load_from_pickle
from core.indexing import query_engine_function, build_vector_index
from core.LLM_preprocessing import conditions_excel, extract_fund_variable, prompts_to_substitute_variable, storing_input_prompt_in_list
from core.querying import recursive_retriever_old, recursive_retriever
from core.LLM_prompting import individual_prompt, prompt_loop
from core.PostLLM_prompting import create_output_result_column, create_output_context_column, intermediate_output_to_excel
from core.parsing import create_schema_from_excel, parse_value
from core.Postparsing import create_filtered_excel_file, final_result_orignal_excel_file, reordering_columns
from core.Last_fixing_fields import find_result_fund_name, find_result_fund_house, find_result_fund_class, find_result_currency, find_result_acc_or_inc, create_new_kristal_alias, update_kristal_alias, update_sponsored_by, update_required_broker, update_transactional_fund, update_disclaimer, update_risk_disclaimer, find_nav_value, update_nav_value
from core.chroma import st_server_file, print_files_in_particular_directory, upload_zip_files, print_files_in_directory, check_zipfile_directory, download_embedding_zip
### CODE
OPENAI_API_KEY = st.secrets["OPENAI_API_KEY"]
openai.api_key = OPENAI_API_KEY
openai_api_key = OPENAI_API_KEY
# Error handling for OpenAI API key
if not openai_api_key:
st.warning(
"There is something wrong with the API Key Configuration."
"Please check with creator of the program (OpenAI keys can be found at https://platform.openai.com/account/api-keys)"
)
# Initializing session states
if "load_prompt_result_selector_state" not in st.session_state:
st.session_state.load_prompt_result_selector_state = False
if "output_response" not in st.session_state:
st.session_state.output_response = 0
if "llm_prompts_to_use" not in st.session_state:
st.session_state.llm_prompts_to_use = 0
if "context_with_max_score_list" not in st.session_state:
st.session_state.context_with_max_score_list = 0
if "file_path_metadata_list" not in st.session_state:
st.session_state.file_path_metadata_list = 0
if "source_metadata_list" not in st.session_state:
st.session_state.source_metadata_list = 0
if "prompt_result_selector" not in st.session_state:
st.session_state.prompt_result_selector = 0
if "process_documents" not in st.session_state:
st.session_state.process_documents = False
# Display app only if user is logged in
if st.session_state.logged_in is True and st.session_state.logout is False:
st.sidebar.subheader(f'Welcome {st.session_state.username}')
logout_button = st.session_state.Authenticator.logout('Log Out', 'sidebar')
# If user has clicked logged_out button, update the state variables
if logout_button:
st.session_state.logged_out = True
st.session_state.logged_in = False
st.rerun()
# Check embeddings
check_embeddings = st.radio(label = "Do you have saved embeddings?", options = ["Yes", "No"], index = None, help = "Embeddings are saved files created by ChromaDB", disabled=False, horizontal = False, label_visibility="visible")
# def callback():
# # Button was clicked
# st.session_state.process_documents = True
# User does not have embeddings they can use
if check_embeddings == "No":
# Obtain chrome_file_path and chroma_file_name
master_folder, chroma_file_path, chroma_file_name = st_server_file()
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# Fund name - Don't need fund variable input for this
# fund_variable = st.text_input(
# label = "Fund name:",
# value = None,
# max_chars = None,
# type = "default",
# help = "This will be used to replace the word, fund, in certain prompts",
# placeholder = '''Please input the exact, full fund name. Example: FRANKLIN US GOVERNMENT "A" INC''',
# disabled = False,
# label_visibility = "visible"
# )
# Model selection
MODEL_LIST = ["gpt-3.5-turbo", "gpt-4", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-4-0314", "gpt-4-32k-0314", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-instruct", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0301"]
# Select model to use (type hint as string)
model: str = st.selectbox(label = "Model", options = MODEL_LIST, index = 1, help = "Please select the appropriate LLM model you want to use. Refer to https://platform.openai.com/docs/models/overview for the model details", placeholder = "Please choose an option ...")
# Nodes to retrieve slider
nodes_to_retrieve = st.slider(label = "Please select the number of nodes to retrieve from LLM", min_value = 0, max_value = 5, value = 3, step = 1,
help =
'''
Nodes to retrieve is simply how many nodes the LLM will consider when giving its output.
The higher the number of nodes, the greater the accuracy but also the higher the cost, and vice-versa.
I'd recommend setting an even balance (hence, the default value of 3)
''',
disabled = False,
label_visibility = "visible")
# Temperature slider
temperature = st.slider(label = "Please select temperature of the LLM", min_value = 0.0, max_value = 1.0, value = 0.2, step = 0.1,
help =
'''
Temperature is a parameter that controls the “creativity” or randomness of the text generated by GPT-3.
A higher temperature (e.g., 0.7) results in more diverse and creative output, while a lower temperature (e.g., 0.2) makes the output more deterministic and focused.
Look at this page for more details: https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683
''',
disabled = False,
label_visibility = "visible")
# Timeout for requests slider
request_timeout = st.slider(label = "Please select the request timeout (in seconds) of the LLM", min_value = 0, max_value = 600, value = 120, step = 60,
help =
'''
Request timeout is the timeout for requests to the OpenAI completion API.
A higher number means you wait for a longer time before the request times out, and vice versa.
Note: too high a number means you wait too long, and too low a number doesn't give the request a chance to complete.
I'd recommend striking a balance but leaning a bit more towards the lower side (hence, the default of 120 seconds)
''',
disabled = False,
label_visibility = "visible")
# Maximum retries slider
max_retries = st.slider(label = "Please select the maximum retries of the LLM", min_value = 0, max_value = 15, value = 5, step = 1,
help =
'''
This is the maximum number of retries the LLM will make in case a request fails.
A higher number allows for more failures, and vice versa.
Note: too high a number means you wait too long, and too low a number doesn't give it a chance to retry.
I'd recommend striking an even balance (hence, the default of 5 retries)
''',
disabled = False,
label_visibility = "visible")
# Sleep function slider
# sleep = st.slider(label = "Please select the amount of time you want LLM to sleep before executing next prompt (in seconds)", min_value = 0, max_value = 60, value = 8, step = 1,
# help =
# '''
# This is amount of time our LLM will sleep before executing next prompt.
# This is done primarily to avoid ratelimit errors and any failure that might interrupt the code.
# A higher number means you wait for more time and have less chances of hitting ratelimit errors, and vice versa,
# I'd recommend leaning more towards a lower number (hence, default is 8 seconds
# Besides this, there is also another safety check that will conduct exponential waiting between 1 and 20 seconds, for maximum 6 retries (using tenacity library)
# )
# ''',
# disabled = False,
# label_visibility = "visible")
# Advanced options:
# Return_all_chunks: Shows all chunks retrieved from vector search
# Show_full_doc: Displays parsed contents of the document
with st.expander("Advanced Options"):
return_all_chunks = st.checkbox("Show all chunks retrieved from vector search")
show_full_doc = st.checkbox("Show parsed contents of the document")
show_tables = st.checkbox("Show tables in dataframe")
# Error handling for model selection
if not model:
st.warning("Please select a model", icon="⚠")
st.stop()
# User has embeddings which they can use
elif check_embeddings == "Yes":
uploaded_zip_file = upload_zip_files()
# File uploader section for pdfs
uploaded_files = st.file_uploader(
"Upload your pdf documents",
type=["pdf"],
help="You can upload multiple files."
"Please note that scanned documents are not supported yet!",
accept_multiple_files = True
)
# Fund name - Don't need fund variable input for this
# Fund name
# fund_variable = st.text_input(
# label = "Fund name:",
# value = None,
# max_chars = None,
# type = "default",
# help = "This will be used to replace the word, fund, in certain prompts",
# placeholder = '''Please input the exact, full fund name. Example: FRANKLIN US GOVERNMENT "A" INC''',
# disabled = False,
# label_visibility = "visible"
# )
# Model selection
MODEL_LIST = ["gpt-3.5-turbo", "gpt-4", "gpt-4-0613", "gpt-4-32k", "gpt-4-32k-0613", "gpt-4-0314", "gpt-4-32k-0314", "gpt-3.5-turbo-16k", "gpt-3.5-turbo-instruct", "gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613", "gpt-3.5-turbo-0301"]
# Select model to use (type hint as string)
model: str = st.selectbox(label = "Model", options = MODEL_LIST, index = 1, help = "Please select the appropriate LLM model you want to use. Refer to https://platform.openai.com/docs/models/overview for the model details", placeholder = "Please choose an option ...")
# Nodes to retrieve slider
nodes_to_retrieve = st.slider(label = "Please select the number of nodes to retrieve from LLM", min_value = 0, max_value = 5, value = 3, step = 1,
help =
'''
Nodes to retrieve is simply how many nodes the LLM will consider when giving its output.
The higher the number of nodes, the greater the accuracy but also the higher the cost, and vice-versa.
I'd recommend setting an even balance (hence, the default value of 3)
''',
disabled = False,
label_visibility = "visible")
# Temperature slider
temperature = st.slider(label = "Please select temperature of the LLM", min_value = 0.0, max_value = 1.0, value = 0.2, step = 0.1,
help =
'''
Temperature is a parameter that controls the “creativity” or randomness of the text generated by GPT-3.
A higher temperature (e.g., 0.7) results in more diverse and creative output, while a lower temperature (e.g., 0.2) makes the output more deterministic and focused.
Look at this page for more details: https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683
''',
disabled = False,
label_visibility = "visible")
# Timeout for requests slider
request_timeout = st.slider(label = "Please select the request timeout (in seconds) of the LLM", min_value = 0, max_value = 600, value = 120, step = 60,
help =
'''
Request timeout is the timeout for requests to the OpenAI completion API.
A higher number means you wait for a longer time before the request times out, and vice versa.
Note: too high a number means you wait too long, and too low a number doesn't give the request a chance to complete.
I'd recommend striking a balance but leaning a bit more towards the lower side (hence, the default of 120 seconds)
''',
disabled = False,
label_visibility = "visible")
# Maximum retries slider
max_retries = st.slider(label = "Please select the maximum retries of the LLM", min_value = 0, max_value = 15, value = 5, step = 1,
help =
'''
This is the maximum number of retries the LLM will make in case a request fails.
A higher number allows for more failures, and vice versa.
Note: too high a number means you wait too long, and too low a number doesn't give it a chance to retry.
I'd recommend striking an even balance (hence, the default of 5 retries)
''',
disabled = False,
label_visibility = "visible")
# Sleep function slider
# sleep = st.slider(label = "Please select the amount of time you want LLM to sleep before executing next prompt (in seconds)", min_value = 0, max_value = 60, value = 8, step = 1,
# help =
# '''
# This is amount of time our LLM will sleep before executing next prompt.
# This is done primarily to avoid ratelimit errors and any failure that might interrupt the code.
# A higher number means you wait for more time and have less chances of hitting ratelimit errors, and vice versa,
# I'd recommend leaning more towards a lower number (hence, default is 8 seconds
# Besides this, there is also another safety check that will conduct exponential waiting between 1 and 20 seconds, for maximum 6 retries (using tenacity library)
# )
# ''',
# disabled = False,
# label_visibility = "visible")
with st.expander("Advanced Options"):
return_all_chunks = st.checkbox("Show all chunks retrieved from vector search")
show_full_doc = st.checkbox("Show parsed contents of the document")
show_tables = st.checkbox("Show tables in dataframe")
# Error handling for model selection
if not model:
st.warning("Please select a model", icon="⚠")
st.stop()
# No value inserted for check_embeddings - raise warning
else:
st.warning("Please select whether you have embeddings to use or not")
st.stop()
# Display the question input box for user to type question and submit
with st.form(key="qa_form"):
query = st.text_area(label = "Ask a question from the documents uploaded", value = None, height = None, max_chars = None, help = "Please input your questions regarding the document. Greater the prompt engineering, better the output", disabled = False, label_visibility = "visible")
# submit = st.form_submit_button("Submit", on_click = callback)
submit = st.form_submit_button("Submit")
if not query:
st.warning("Please enter a question to ask about the document!")
st.stop()
# If user clicks on the button process
if submit:
st.session_state.process_documents = True
# User does not have embeddings they can use
if check_embeddings == "No":
# Checking if both conditions are satisfied
if uploaded_files:
# Call bundle function - no_embeddings_process_documents
output_response, prompt, context_with_max_score_list, file_path_metadata_list, source_metadata_list, table_dfs, docs = no_embeddings_process_documents_individual_advanced(uploaded_files = uploaded_files, prompt = query, chroma_file_path = chroma_file_path, model = model, nodes_to_retrieve = nodes_to_retrieve, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries, return_all_chunks = return_all_chunks)
# Display collective prompt results in an expander
with st.expander("Display prompt results & relevant context"):
st.markdown(f"Displaying results for Prompt #1: {prompt}")
answer_col, sources_col = st.columns(2)
# Displaying answers columns
with answer_col:
st.markdown("#### Answer")
st.markdown(output_response)
# Displaying sources columns
with sources_col:
# User selected option to display all chunks from vector search
if return_all_chunks is True:
for chunk in range(nodes_to_retrieve):
st.markdown(context_with_max_score_list[chunk])
st.markdown(f"Document: {file_path_metadata_list[chunk]}")
st.markdown(f"Page Source: {source_metadata_list[chunk]}")
st.markdown("---")
# User selected option to display only 1 chunk
if return_all_chunks is False:
# Display particular lists
st.markdown(context_with_max_score_list[0])
st.markdown(f"Document: {file_path_metadata_list[0]}")
st.markdown(f"Page Source: {source_metadata_list[0]}")
st.markdown("---")
# If show full document option is True
if show_full_doc is True:
# Display parsed results in the expander
with st.expander("Display parsed documents"):
content, content_document_list, content_filename = display_document_from_uploaded_files(uploaded_files)
for i in range(len(content_document_list)):
st.markdown(f"### File name: {content_filename[i]}")
# st.markdown(f"### Content:")
st.markdown(content_document_list[i])
# If show tables option is True, display it in expander
if show_tables is True:
# Display all parsed tables
with st.expander("Display Parsed Tables"):
st.markdown(f"Parsed Table results")
# st.write(table_dfs)
for i in range(len(table_dfs)):
st.dataframe(table_dfs[i])
download_embedding_zip(chroma_file_path, zip_filename = "embeddings")
# Condition not satisfied
else:
st.warning(
"1) Please upload the pdf files",
icon="⚠")
st.stop()
# User has embeddings which they can use
elif check_embeddings == "Yes":
# Checking if uploaded_files is satisfied
if uploaded_files:
# Call bundle function - no_embeddings_process_documents
output_response, prompt, context_with_max_score_list, file_path_metadata_list, source_metadata_list, table_dfs, docs = embeddings_process_documents_individual_advanced(uploaded_files = uploaded_files, prompt = query, model = model, nodes_to_retrieve = nodes_to_retrieve, temperature = temperature, request_timeout = request_timeout, max_retries = max_retries, return_all_chunks = return_all_chunks, uploaded_zip_file = uploaded_zip_file)
#output_response, prompt, context_with_max_score_list, file_path_metadata_list, source_metadata_list, table_dfs, docs = embeddings_process_documents_individual_advanced(uploaded_files = uploaded_files, chroma_file_path = st.session_state['chroma_file_path'], prompt = query)
# embeddings_process_documents_individual(uploaded_files = uploaded_files, chroma_file_path = st.session_state['chroma_file_path'], prompt = query)
# Display collective prompt results in an expander
with st.expander("Display prompt results & relevant context"):
st.markdown(f"Displaying results for Prompt #1: {prompt}")
answer_col, sources_col = st.columns(2)
# Displaying answers columns
with answer_col:
st.markdown("#### Answer")
st.markdown(output_response)
# Displaying sources columns
with sources_col:
# User selected option to display all chunks from vector search
if return_all_chunks is True:
for chunk in range(nodes_to_retrieve):
st.markdown(context_with_max_score_list[chunk])
st.markdown(f"Document: {file_path_metadata_list[chunk]}")
st.markdown(f"Page Source: {source_metadata_list[chunk]}")
st.markdown("---")
# User selected option to display only 1 chunk
if return_all_chunks is False:
# Display particular lists
st.markdown(context_with_max_score_list[0])
st.markdown(f"Document: {file_path_metadata_list[0]}")
st.markdown(f"Page Source: {source_metadata_list[0]}")
st.markdown("---")
# If show full document option is True
if show_full_doc is True:
# Display parsed results in the expander
with st.expander("Display parsed documents"):
content, content_document_list, content_filename = display_document_from_uploaded_files(uploaded_files)
for i in range(len(content_document_list)):
st.markdown(f"### File name: {content_filename[i]}")
# st.markdown(f"### Content:")
st.markdown(content_document_list[i])
# If show tables option is True, display it in expander
if show_tables is True:
# Display all parsed tables
with st.expander("Display Parsed Tables"):
st.markdown(f"Parsed Table results")
# st.write(table_dfs)
for i in range(len(table_dfs)):
st.dataframe(table_dfs[i])
# Pdf files were not uploaded
else:
st.warning(
"1) Please upload the pdf files",
icon="⚠")
st.stop()
else:
st.info("Seems like you are not logged in. Please head over to the Login page to login", icon="ℹ️") | [] |
2024-01-10 | hcook9994/gpu-files | optagan-main~optagan~wgan_test.py | from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import logging
import torch
import torch.nn as nn
import numpy as np
from modules.gan import Generator
import glob
import os
import pickle
import random
import torch.nn.functional as F
from tqdm import tqdm, trange
from func import GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig, BertConfig
from func import GPT2LMHeadModel, GPT2Tokenizer, GPT2ForLatentConnector, GPT2ForLatentConnectorValueHead
from func import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer
from func import XLNetLMHeadModel, XLNetTokenizer
from func import TransfoXLLMHeadModel, TransfoXLTokenizer
from func import BertForLatentConnector, BertTokenizer
from collections import defaultdict
import pdb
from modules.utils import rollout_test
MAX_LENGTH = int(10000) # Hardcoded max length to avoid infinite loop
ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in (GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig)), ())
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MODEL_CLASSES = {
'gpt2': (GPT2Config, GPT2ForLatentConnector, GPT2Tokenizer),
'bert': (BertConfig, BertForLatentConnector, BertTokenizer),
'gpt2v': (GPT2Config, GPT2ForLatentConnectorValueHead, GPT2Tokenizer)
}
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=0)
parser.add_argument('--new_sent', type=int, default=1, help="Number of sentences to generate")
parser.add_argument('--n_layers', type=int, default=20, help="Number of layers of generator")
parser.add_argument('--block_dim', type=int, default=100)
parser.add_argument('--interval', type=int, default=10)
parser.add_argument('--cuda', type=bool, default=torch.cuda.is_available())
parser.add_argument('--generator_dir', default=None, type=str, required=True, help="Directory of GAN model checkpoint")
parser.add_argument("--checkpoint_dir", default=None, type=str, required=True,
help="The directory where checkpoints are saved.")
parser.add_argument("--output_dir", default=None, type=str, required=True,
help="The output directory where the model predictions and checkpoints will be written.")
parser.add_argument("--save", default=False, type=bool, help="Save results to file.")
parser.add_argument("--latent_size", default=32, type=int, help="Latent space dimension.")
parser.add_argument("--output_name", default="results", type=str, help="File name of output")
parser.add_argument("--batch_size", default=100, type=int, help="Batch size to generate outputs")
## Encoder options
parser.add_argument("--encoder_model_type", default="bert", type=str,
help="The encoder model architecture to be fine-tuned.")
parser.add_argument("--encoder_model_name_or_path", default="bert-base-cased", type=str,
help="The encoder model checkpoint for weights initialization.")
parser.add_argument("--encoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--encoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
## Decoder options
parser.add_argument("--decoder_model_type", default="gpt2", type=str,
help="The decoder model architecture to be fine-tuned.")
parser.add_argument("--decoder_model_name_or_path", default="gpt2", type=str,
help="The decoder model checkpoint for weights initialization.")
parser.add_argument("--decoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--decoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
parser.add_argument("--max_seq_length", default=512, type=int,
help="Optional input sequence length before tokenization. The sequence will be dropped if it is longer the max_seq_length")
parser.add_argument("--finetune_decoder", default=False, type=bool,
help="Uses finetuned decoder in output dir if true.")
## Variational auto-encoder(check this)
parser.add_argument("--top_k", type=int, default=0)
parser.add_argument("--top_p", type=float, default=1.0)
parser.add_argument("--prompt", type=str, default="")
parser.add_argument("--padding_text", type=str, default="")
parser.add_argument("--length", type=int, default=20)
parser.add_argument("--block_size", default=-1, type=int,
help="Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens).")
parser.add_argument("--do_lower_case", action='store_true',
help="Set this flag if you are using an uncased model.")
parser.add_argument("--use_philly", action='store_true',
help="Use Philly for computing.")
parser.add_argument('--gloabl_step_eval', type=int, default=508523,
help="Evaluate the results at the given global step")
# Load a trained Encoder model and vocabulary that you have fine-tuned
args = parser.parse_args()
global_step = args.gloabl_step_eval
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.backends.cudnn.deterministic = True
args.device = torch.device("cuda" if args.cuda else "cpu")
args.n_gpu = torch.cuda.device_count()
if args.n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
args.encoder_model_type = args.encoder_model_type.lower()
args.decoder_model_type = args.decoder_model_type.lower()
output_encoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-encoder-{}'.format(global_step))
output_decoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-decoder-{}'.format(global_step))
if not args.finetune_decoder:
output_decoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-decoder-{}'.format(global_step))
else:
output_decoder_dir = os.path.join(args.output_dir, 'checkpoint-decoder-{}'.format(global_step))
checkpoints = [ [output_encoder_dir, output_decoder_dir] ]
# Load a trained Encoder model and vocabulary that you have fine-tuned
encoder_config_class, encoder_model_class, encoder_tokenizer_class = MODEL_CLASSES[args.encoder_model_type]
model_encoder = encoder_model_class.from_pretrained(output_encoder_dir, latent_size=args.latent_size)
tokenizer_encoder = encoder_tokenizer_class.from_pretrained(args.encoder_tokenizer_name if args.encoder_tokenizer_name else args.encoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_encoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_encoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_encoder.max_len_single_sentence)
# Load a trained Decoder model and vocabulary that you have fine-tuned
if not args.finetune_decoder:
decoder_config_class, decoder_model_class, decoder_tokenizer_class = MODEL_CLASSES[args.decoder_model_type]
else:
decoder_config_class, decoder_model_class, decoder_tokenizer_class = MODEL_CLASSES["gpt2v"]
model_decoder = decoder_model_class.from_pretrained(output_decoder_dir, latent_size=args.latent_size)
tokenizer_decoder = decoder_tokenizer_class.from_pretrained(args.decoder_tokenizer_name if args.decoder_tokenizer_name else args.decoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_decoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_decoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_decoder.max_len_single_sentence)
# Chunyuan: Add Padding token to GPT2
special_tokens_dict = {'pad_token': '<PAD>', 'bos_token': '<BOS>', 'eos_token': '<EOS>'}
num_added_toks = tokenizer_decoder.add_special_tokens(special_tokens_dict)
logger.info('We have added {} tokens to GPT2'.format(num_added_toks))
model_decoder.resize_token_embeddings(len(tokenizer_decoder)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
assert tokenizer_decoder.pad_token == '<PAD>'
generator = Generator(args.n_layers, args.block_dim, args.latent_size)
if args.cuda:
generator = generator.cuda()
generator.load_state_dict(torch.load(args.generator_dir+'/generator_'+str(args.gloabl_step_eval)+'.th'))
generator.eval()
model_decoder.eval()
model_encoder.eval()
if args.save:
if not os.path.exists(args.output_dir+"/{}.txt".format(args.output_name)):
with open(args.output_dir+"/{}.txt".format(args.output_name), 'w'):
pass
for i in range(int(args.new_sent/args.batch_size)):
# sample noise
noise = torch.Tensor(np.random.normal(0, 1, (args.batch_size, args.latent_size))).to(args.device)
new_z = generator(noise).data
# create new sent
sents = rollout_test(model_decoder, new_z, tokenizer_decoder, args.max_seq_length, args.batch_size, args.top_k, args.top_p)
if args.save:
with open(args.output_dir+"/{}.txt".format(args.output_name), 'a') as file:
for i in sents:
file.write(i+"\n")
else:
for i in sents:
logger.info(i)
| [] |
2024-01-10 | hcook9994/gpu-files | optagan-main~optagan~wgan_gp_train.py | from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import logging
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import numpy as np
from modules.gan import Generator, Critic
import glob
import os
import pickle
import random
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, SequentialSampler, RandomSampler, TensorDataset
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm, trange
from func import GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig, BertConfig
from func import GPT2LMHeadModel, GPT2Tokenizer, GPT2ForLatentConnector
from func import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer
from func import XLNetLMHeadModel, XLNetTokenizer
from func import TransfoXLLMHeadModel, TransfoXLTokenizer
from func import BertForLatentConnector, BertTokenizer
from collections import defaultdict
from utils import (TextDataset_Split, TextDataset_2Tokenizers, BucketingDataLoader)
import pdb
from modules.utils import (calc_blue_parallel_func, pad_seq, rollout, rollout_test)
from transformers.modeling_utils import top_k_top_p_filtering
MAX_LENGTH = int(10000) # Hardcoded max length to avoid infinite loop
ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in (GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig)), ())
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MODEL_CLASSES = {
'gpt2': (GPT2Config, GPT2ForLatentConnector, GPT2Tokenizer),
'bert': (BertConfig, BertForLatentConnector, BertTokenizer)
}
def load_and_cache_examples(args, tokenizer):
if isinstance(tokenizer, list):
dataset = TextDataset_2Tokenizers(tokenizer, args, args.train_data_file, block_size=args.block_size)
else:
dataset = TextDataset_Split(tokenizer, args, args.train_data_file, block_size=args.block_size)
return dataset
def build_dataload_and_cache_examples(args, tokenizer):
if isinstance(tokenizer, list):
args.batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
file_path=args.train_data_file
dataloader = BucketingDataLoader(file_path, args.batch_size, args.max_seq_length, tokenizer, args, bucket=100, shuffle=True)
else:
pass
return dataloader
def compute_grad_penalty(critic, real_data, fake_data):
B = real_data.size(0)
alpha = torch.FloatTensor(np.random.random((B, 1)))
if args.cuda:
alpha = alpha.cuda()
sample = alpha*real_data + (1-alpha)*fake_data
sample.requires_grad_(True)
score = critic(sample)
outputs = torch.FloatTensor(B, 1).fill_(1.0) #args.latent_size
outputs.requires_grad_(False)
if args.cuda:
outputs = outputs.cuda()
grads = autograd.grad(
outputs=score,
inputs=sample,
grad_outputs=outputs,
create_graph=True,
retain_graph=True,
only_inputs=True
)[0]
#grads = grads.view(B, -1)
grad_penalty = ((grads.norm(2, dim=1) - 1.) ** 2).mean()
return grad_penalty
def train(epoch):
model_encoder.eval()
model_decoder.eval()
generator.train()
critic.train()
c_train_loss = 0.
g_train_loss = 0.
g_batches = 0
for i, x in enumerate(train_loader):
x = x[0]
if args.cuda:
x = x.cuda()
# Generate noise
B = args.per_gpu_train_batch_size
c_optimizer.zero_grad()
noise = torch.from_numpy(np.random.normal(0, 1, (B,
args.latent_size))).float()
if args.cuda:
noise = noise.cuda()
# Get original text latent embeddings
with torch.no_grad():
pooled_hidden_fea = model_encoder(x, attention_mask=(x > 0).float())[1]
mean, logvar = model_encoder.linear(pooled_hidden_fea).chunk(2, -1)
z_real = mean.squeeze(1)
# train critic
z_fake = generator(noise)
real_score = critic(z_real)
fake_score = critic(z_fake)
grad_penalty = compute_grad_penalty(critic, z_real.data, z_fake.data)
c_loss = -torch.mean(real_score) + torch.mean(fake_score) + \
args.gp_lambda*grad_penalty
c_train_loss += c_loss.item()
c_loss.backward()
c_optimizer.step()
# train generator
if i % args.n_critic == 0:
g_batches += 1
g_optimizer.zero_grad()
fake_score = critic(generator(noise))
g_loss = -torch.mean(fake_score)
g_train_loss += g_loss.item()
g_loss.backward()
g_optimizer.step()
if args.interval > 0 and i % args.interval == 0:
logger.info('Epoch: {} | Batch: {}/{} ({:.0f}%) | G Loss: {:.6f} | C Loss: {:.6f}'.format(
epoch, args.batch_size*i, len(train_loader.dataset),
100.*(args.batch_size*i)/len(train_loader.dataset),
g_loss.item(), c_loss.item()
))
test_noise = torch.Tensor(np.random.normal(0, 1, (1, args.latent_size))).to(args.device)
test_new_z = generator(test_noise).data
# create new sent
test_z = rollout_test(model_decoder, test_new_z, tokenizer_decoder, args.max_seq_length, 1, 0, 1)
logger.info("Text: {}".format(test_z))
g_train_loss /= g_batches
c_train_loss /= len(train_loader)
logger.info('* (Train) Epoch: {} | G Loss: {:.4f} | C Loss: {:.4f}'.format(
epoch, g_train_loss, c_train_loss
))
return (g_train_loss, c_train_loss)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=0)
parser.add_argument('--epochs', type=int, default=15)
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--gp_lambda', type=int, default=10)
parser.add_argument('--n_critic', type=int, default=5, help="Number of critic updates before each generator update")
parser.add_argument('--n_layers', type=int, default=20, help="Number of layers of generator and critic")
parser.add_argument('--block_dim', type=int, default=100)
parser.add_argument('--interval', type=int, default=10, help="Steps before logging output")
parser.add_argument('--cuda', type=bool, default=torch.cuda.is_available())
# Optimus parameters
parser.add_argument("--train_data_file", default=None, type=str, required=True,
help="The input training data file (a text file).")
parser.add_argument("--valid_data_file", default=None, type=str, required=True,
help="The input validation data file (a text file).")
parser.add_argument("--checkpoint_dir", default=None, type=str, required=True,
help="The directory where checkpoints are saved.")
parser.add_argument('--generator_dir', default=None, type=str, help="Directory where GAN models are saved")
parser.add_argument("--output_dir", default=None, type=str, required=True,
help="The output directory where the model predictions and checkpoints will be written.")
parser.add_argument("--dataset", default='Snli', type=str, help="The dataset.")
parser.add_argument("--latent_size", default=32, type=int, help="Latent space dimension.")
## Encoder options
parser.add_argument("--encoder_model_type", default="bert", type=str,
help="The encoder model architecture to be fine-tuned.")
parser.add_argument("--encoder_model_name_or_path", default="bert-base-cased", type=str,
help="The encoder model checkpoint for weights initialization.")
parser.add_argument("--encoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--encoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
## Decoder options
parser.add_argument("--decoder_model_type", default="gpt2", type=str,
help="The decoder model architecture to be fine-tuned.")
parser.add_argument("--decoder_model_name_or_path", default="bert-base-cased", type=str,
help="The decoder model checkpoint for weights initialization.")
parser.add_argument("--decoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--decoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
parser.add_argument("--per_gpu_train_batch_size", default=1, type=int,
help="Batch size per GPU/CPU for training.")
parser.add_argument("--max_seq_length", default=512, type=int,
help="Optional input sequence length before tokenization. The sequence will be dropped if it is longer the max_seq_length")
## Variational auto-encoder(check this)
parser.add_argument("--prompt", type=str, default="")
parser.add_argument("--padding_text", type=str, default="")
parser.add_argument("--length", type=int, default=20)
parser.add_argument("--block_size", default=-1, type=int,
help="Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens).")
parser.add_argument("--do_lower_case", action='store_true',
help="Set this flag if you are using an uncased model.")
parser.add_argument("--use_philly", action='store_true',
help="Use Philly for computing.")
parser.add_argument('--gloabl_step_eval', type=int, default=661,
help="Evaluate the results at the given global step")
# Load a trained Encoder model and vocabulary that you have fine-tuned
args = parser.parse_args()
global_step = args.gloabl_step_eval
torch.backends.cudnn.deterministic = True
args.device = torch.device("cuda" if args.cuda else "cpu")
args.n_gpu = torch.cuda.device_count()
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
args.encoder_model_type = args.encoder_model_type.lower()
args.decoder_model_type = args.decoder_model_type.lower()
output_encoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-encoder-{}'.format(global_step))
output_decoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-decoder-{}'.format(global_step))
checkpoints = [ [output_encoder_dir, output_decoder_dir] ]
# Load a trained Encoder model and vocabulary that you have fine-tuned
encoder_config_class, encoder_model_class, encoder_tokenizer_class = MODEL_CLASSES[args.encoder_model_type]
model_encoder = encoder_model_class.from_pretrained(output_encoder_dir, latent_size=args.latent_size)
tokenizer_encoder = encoder_tokenizer_class.from_pretrained(args.encoder_tokenizer_name if args.encoder_tokenizer_name else args.encoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_encoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_encoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_encoder.max_len_single_sentence)
# Load a trained Decoder model and vocabulary that you have fine-tuned
decoder_config_class, decoder_model_class, decoder_tokenizer_class = MODEL_CLASSES[args.decoder_model_type]
model_decoder = decoder_model_class.from_pretrained(output_decoder_dir, latent_size=args.latent_size)
tokenizer_decoder = decoder_tokenizer_class.from_pretrained(args.decoder_tokenizer_name if args.decoder_tokenizer_name else args.decoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_decoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_decoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_decoder.max_len_single_sentence)
# Chunyuan: Add Padding token to GPT2
special_tokens_dict = {'pad_token': '<PAD>', 'bos_token': '<BOS>', 'eos_token': '<EOS>'}
num_added_toks = tokenizer_decoder.add_special_tokens(special_tokens_dict)
logger.info('We have added {} tokens to GPT2'.format(num_added_toks))
model_decoder.resize_token_embeddings(len(tokenizer_decoder)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
assert tokenizer_decoder.pad_token == '<PAD>'
train_loader = build_dataload_and_cache_examples(args, [tokenizer_encoder, tokenizer_decoder])
generator = Generator(args.n_layers, args.block_dim,args.latent_size)
critic = Critic(args.n_layers, args.block_dim,args.latent_size)
if args.generator_dir!=None:
generator.load_state_dict(torch.load(args.generator_dir+'/generator_'+str(args.gloabl_step_eval)+'.th'))
critic.load_state_dict(torch.load(args.generator_dir+'/critic_'+str(args.gloabl_step_eval)+'.th'))
g_optimizer = optim.Adam(generator.parameters(), lr=args.lr, betas=(0.5, 0.999))
c_optimizer = optim.Adam(critic.parameters(), lr=args.lr, betas=(0.5, 0.999))
if args.cuda:
generator = generator.cuda()
critic = critic.cuda()
logger.info('G Parameters:{}'.format(sum([p.numel() for p in generator.parameters() if \
p.requires_grad])))
logger.info('C Parameters:{}'.format(sum([p.numel() for p in critic.parameters() if \
p.requires_grad])))
best_bleu = 0
reference = list()
with(open(args.valid_data_file,"r")) as valid:
for sents in valid:
reference.append(sents.replace("\n", ""))
for epoch in range(1, args.epochs + 1):
g_loss, c_loss = train(epoch)
data_test = list()
for i in range(2):
test_noise = torch.Tensor(np.random.normal(0, 1, (250, args.latent_size))).to(args.device)
test_z = generator(test_noise).data
new_sent = rollout_test(model_decoder, test_z, tokenizer_decoder, args.max_seq_length, 250, 0, 1)
data_test.extend(new_sent)
p_reference = random.sample(reference, 500)
bleu = calc_blue_parallel_func(p_reference, data_test, 2, 500)
b_bleu = calc_blue_parallel_func(data_test, p_reference, 2, 500)
logger.info("Bleu-2:{:0.3f} | B-Bleu-2:{:0.3f}".format(bleu, b_bleu))
if (bleu+b_bleu) > best_bleu:
best_bleu = bleu + b_bleu
logger.info('* Saving. Best Score:{:0.3f} | Bleu-2:{:0.3f} | B-Bleu-2:{:0.3f}'.format(best_bleu, bleu, b_bleu))
torch.save(generator.state_dict(), args.output_dir+'/generator_'+str(args.gloabl_step_eval)+'.th')
torch.save(critic.state_dict(), args.output_dir+'/critic_'+str(args.gloabl_step_eval)+'.th') | [] |
2024-01-10 | SemperFidelis0510/utils | debugger.py | import os
import argparse
import numpy as np
import requests
import openai
import subprocess
import json
import time
import re
from termcolor import colored
class Code:
def __init__(self, code='', output=None, error=None, path=None):
self.code = code
self.code_lines = code.split('\n')
self.output = output
self.error = error
self.fixes = []
self.path = ''
if path is not None:
self.from_file(path)
def to_json(self):
return {
'code': self.code,
'output': self.output,
'error': self.error
}
@staticmethod
def from_json(json_obj):
return Code(json_obj['code'], json_obj['output'], json_obj['error'])
def fix(self, lines, new_code):
new_code = new_code.split('\n')
if len(lines) == 1:
lines = [lines[0], lines[0]]
elif len(lines) == 0:
if new_code == '':
print('No fix is needed')
return
else:
print('A new code was given, but instructions of where to put it are missing.')
return
j = 0
for i in range(lines[0] - 1, lines[1]):
self.code_lines[i] = new_code[j]
j += 1
self.compile_code()
def compile_code(self):
self.code = '\n'.join(self.code_lines)
def debug(self, model):
input_json = self.to_json()
prompt = f"""
You are a code debugger. Here is the code, its output, and the error it produced:
{json.dumps(input_json, indent=4)}
Please identify the lines that need to be changed and suggest the new code to fix the issue.
Return your response in the following JSON format:
{{
"lines": [start_line, end_line],
"new_code": "the new code"
}}
Note to yourself:
- If there is only one line to be changed, the value on the key "lines", will be as [change_line, change_line], i.e both elements of the list will be the same single line.
- Add nothing else to you response, send only the JSON.
- The content of this prompt might be divided into a few parts, and be sent in a sequence.
Therefore, you should not send any response back, until you receive the total prompt. To know when the prompt is complete,
expect the total content of this complete prompt to end with only the JSON with keys {{'code','output','error'}}.
"""
prompt_parts = [prompt[i:i + 4097] for i in range(0, len(prompt), 4097)]
responses = []
for part in prompt_parts:
response = openai.ChatCompletion.create(
model=model,
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": part}
]
)
responses.append(response['choices'][0]['message']['content'])
content = ''.join(responses)
try:
# Use a regex to extract the JSON object from the response
match = re.search(r'\{\s*"lines":\s*\[.*\],\s*"new_code":\s*".*"\s*\}', content, re.DOTALL)
if match:
json_str = match.group(0)
response_json = json.loads(json_str)
else:
print("No JSON object found in the response.")
response_json = {'lines': [], 'new_code': ''}
except json.JSONDecodeError:
print("The content could not be parsed as JSON.")
response_json = {'lines': [], 'new_code': ''}
self.fix(response_json["lines"], response_json["new_code"])
def to_file(self):
path = self.path
# Rename the old file if it exists
if os.path.exists(path):
timestamp = time.strftime("%Y%m%d-%H%M%S")
os.rename(path, f"{path}_{timestamp}")
with open(path, "w") as f:
for line in self.code:
f.write(line)
def from_file(self, path):
self.path = path
# Read the code from the file
with open(self.path, 'r') as f:
self.code = f.read()
def run(self, env_name, args=''):
args = args.split(' ')
try:
# Run the Python file in the specified conda environment
result = subprocess.run(['conda', 'run', '-n', env_name, 'python', self.path] + args, capture_output=True,
text=True)
self.output = result.stdout
self.error = result.stderr
except Exception as e:
self.output = ''
self.error = str(e)
def run_python_file(file_path, env_name, args=''):
args = args.split(' ')
try:
# Run the Python file in the specified conda environment
result = subprocess.run(['conda', 'run', '-n', env_name, 'python', file_path] + args, capture_output=True,
text=True)
output = result.stdout
error = result.stderr
except Exception as e:
output = ''
error = str(e)
# Read the code from the file
with open(file_path, 'r') as f:
code = f.read()
return Code(code, output, error, path=file_path)
def parse():
parser = argparse.ArgumentParser()
parser.add_argument('--file', help="Path to file")
parser.add_argument('--env', help="Name of conda environment", default="utils")
parser.add_argument('--model', help="Name of GPT model", default='gpt-3.5-turbo')
parser.add_argument('--n', help="Max number of iterations", default=3)
parser.add_argument('--args', help="Default args", default=[])
api_key = os.getenv('OPENAI_API_KEY')
if not api_key:
raise ValueError("Missing OpenAI API key")
openai.api_key = api_key
return parser.parse_args()
def main():
args = parse()
j = 0
args.file = 'SR2.py'
args.args = '--train'
for i in range(args.n):
j = i
code = run_python_file(args.file, args.env, args.args)
print(colored(f'code:\n{code.code}', 'yellow'))
print(colored(f'output:\n{code.output}', 'blue'))
print(colored(f'error:\n{code.error}', 'red'))
if code.error == '':
break
code.debug(args.model)
code.to_file()
print(f"All went well. It took {j + 1} runs.")
if __name__ == '__main__':
main()
| [
"You are a helpful assistant."
] |
2024-01-10 | sauravpanda/cal-do-more | services~whisper.py | from openai import OpenAI
import os
import boto3
def whisper_video(bucket_name, object_key):
# Access the variables
openai_api_key = os.environ.get("OPENAI_API_KEY")
# Set your AWS credentials (replace 'your_access_key' and 'your_secret_key' with your actual credentials)
aws_access_key = os.environ.get("AWS_ACCESS_KEY")
aws_secret_key = os.environ.get("AWS_SECRET_KEY")
s3 = boto3.client(
"s3", aws_access_key_id=aws_access_key, aws_secret_access_key=aws_secret_key
)
# response = s3.list_objects_v2(Bucket=bucket_name)
local_path = f"audio-folder/{object_key}"
s3.download_file(bucket_name, object_key, local_path)
client = OpenAI(api_key=openai_api_key)
audio_file = open(f"audio-folder/{object_key}", "rb")
transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
return transcript.text
| [] |
2024-01-10 | alfredcs/immersion_day_labs | genai~app_radaide.py | import copy
import glob
import hashlib
import logging
import os
import re
from pathlib import Path
from typing import List, Optional, Tuple
from urllib.parse import urlparse
import gradio as gr
import PIL
from gradio import processing_utils
from gradio_client.client import DEFAULT_TEMP_DIR
from text_generation import Client
from transformers import AutoProcessor
import boto3
import whisper
import base64
# For dino_sam segementation
import copy
import cv2
import torch
import matplotlib.pyplot as plt
import dino_sam_inpainting as D
# Multiclass classification
import utils.multi_class as M
import random
#SDXL
import io, base64
from PIL import Image
#from utils import bedrock
import botocore.config
from io import BytesIO
from base64 import b64encode
import json
## CoT
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFaceTextGenInference
# Keyword extraction
from keybert import KeyBERT
kw_model = KeyBERT()
# Dino SAM cfg
config_file = 'GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py' # change the path of the model config file
grounded_checkpoint = './models/groundingdino_swint_ogc.pth' # change the path of the model
sam_checkpoint = './models/sam_vit_h_4b8939.pth'
sam_hq_checkpoint = '' #sam_hq_vit_h.pth
use_sam_hq = ''
# image_path = image_path
# text_prompt = text_prompt
output_dir = './outputs'
# box_threshold = box_threshold
# text_threshold = text_threshold
device = 'cuda'
s3_client = boto3.client('s3')
asr_model = whisper.load_model("large")
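# Whisper "large" is loaded once at start-up and reused to transcribe microphone input captured in the UI.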
MODELS = [
#"HuggingFaceM4/idefics-9b-instruct",
#"HuggingFaceM4/idefics-80b-instruct",
"local/idefics-9b-instruct",
]
API_PATHS = {
"local/idefics-9b-instruct": (
"http://infs.cavatar.info:8080"
),
}
API_PATHS_2 = {
"HuggingFaceM4/idefics-9b-instruct": (
"https://api-inference.huggingface.co/models/HuggingFaceM4/idefics-9b-instruct"
),
"HuggingFaceM4/idefics-80b-instruct": (
"https://api-inference.huggingface.co/models/HuggingFaceM4/idefics-80b-instruct"
),
"local/idefics-9b-instruct": (
"http://infs.cavatar.info:8080"
),
}
SYSTEM_PROMPT = [
""""The following is a conversation between a highly knowledgeable and intelligent visual AI assistant, called RadAide, and a human user, called User. In the following interactions, User and Assistant will converse in natural language, and RadAide will do its best to answer User’s questions. RadAide has the ability to perceive images and reason about the content of visual inputs. It can also process images by following precise instructs. RadAide was built to be smart, respectful, polite and inclusive. When prompted with an image, it tells the truth and does not make up facts. The conversation begins:""",
"""\nUser:""",
"https://miro.medium.com/v2/resize:fit:1332/0*yl2b-bDJeEwKPUI5"
"Describe the nature of this image.<end_of_utterance>",
"""\RadAide: A tattooed person holding a sign that says, “Teach your children well,” in a crowd of people. In the middle of the sign, there’s an illustration of the earth with 2 raised fists on either side that have a rainbow pride square background, a trans pride circle background, and brown skin tone stripes on the fists. The raised fist is a symbol of solidarity and specifically Black power as popularized by the Black Panther Party in the 1960s. The rainbow pride flag has rainbow stripes and symbolizes general LGBTQ pride. The trans pride flag has pink, blue, and white stripes and celebrates pride for the trans and gender non-conforming umbrella.<end_of_utterance>""",
"\nUser: How many dogs do you see in this image?",
"https://i.dailymail.co.uk/i/pix/2011/07/01/article-2010308-0CD22A8300000578-496_634x414.jpg",
"""\nAssistant: There is no dogs in this image. The picture shows a tennis player jumping to volley the ball.<end_of_utterance>""",
]
BAN_TOKENS = (  # For documentation purpose. We are not using this list, it is hardcoded inside `idefics_causal_lm.py` inside TGI.
"<image>;<fake_token_around_image>"
)
EOS_STRINGS = ["<end_of_utterance>", "\nUser:"]
STOP_SUSPECT_LIST = []
#GRADIO_LINK = "https://huggingfacem4-idefics-playground.hf.space"
GRADIO_LINK = "http://0.0.0.0:7863"
HTTPD_URL = "http://radaide.cavatar.info:8080/"
API_TOKEN = os.getenv("hf_api_token")
IDEFICS_LOGO = "https://huggingface.co/spaces/HuggingFaceM4/idefics_playground/resolve/main/IDEFICS_logo.png"
DocAid_logo = "example_images/medicine.png"
global orig_image_path
PROCESSOR = AutoProcessor.from_pretrained(
"HuggingFaceM4/idefics-9b-instruct",
token=API_TOKEN,
)
BOT_AVATAR = "IDEFICS_logo.png"
BOT_AVATAR = None
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()
# Monkey patch adapted from gradio.components.image.Image - mostly to make the `save` step optional in `pil_to_temp_file`
def hash_bytes(bytes: bytes):
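    """Return the SHA-1 hex digest of `bytes`; used to derive a stable temp directory name per image."""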
sha1 = hashlib.sha1()
sha1.update(bytes)
return sha1.hexdigest()
def pil_to_temp_file(img: PIL.Image.Image, dir: str = DEFAULT_TEMP_DIR, format: str = "png") -> str:
"""Save a PIL image into a temp file"""
bytes_data = processing_utils.encode_pil_to_bytes(img, format)
temp_dir = Path(dir) / hash_bytes(bytes_data)
temp_dir.mkdir(exist_ok=True, parents=True)
filename = str(temp_dir / f"image.{format}")
if not os.path.exists(filename):
img.save(filename, pnginfo=processing_utils.get_pil_metadata(img))
return filename
def add_file(file):
return file.name, gr.update(label='🖼️ Uploaded!')
# Dino SAM
def dino_sam(image_path, text_prompt, text_threshold=0.4, box_threshold=0.5, output_dir='/tmp/gradio/outputs'):
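    """
    Grounded segmentation: Grounding DINO predicts boxes for `text_prompt` on the image at
    `image_path`, SAM converts those boxes into masks, and an annotated overlay plus the mask
    data are written to `output_dir`. Returns the file name of the saved overlay image.
    """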
config_file = 'GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py' # change the path of the model config file
grounded_checkpoint = './models/groundingdino_swint_ogc.pth' # change the path of the model
sam_checkpoint = './models/sam_vit_h_4b8939.pth'
sam_hq_checkpoint = '' #sam_hq_vit_h.pth
use_sam_hq = ''
device = 'cuda'
# make dir
os.makedirs(output_dir, exist_ok=True)
# load image
image_pil, image = D.load_image(image_path)
# load model
model = D.load_model(config_file, grounded_checkpoint, device=device)
rnum = random.randint(10, 100)
output_file_name = f'{rnum}_{format(os.path.basename(image_path))}'
# visualize raw image
image_pil.save(os.path.join(output_dir, output_file_name))
# run grounding dino model
boxes_filt, pred_phrases = D.get_grounding_output(
model, image, text_prompt, box_threshold, text_threshold, device=device
)
# initialize SAM
if use_sam_hq:
predictor = D.SamPredictor(D.build_sam_hq(checkpoint=sam_hq_checkpoint).to(device))
else:
predictor = D.SamPredictor(D.build_sam(checkpoint=sam_checkpoint).to(device))
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
predictor.set_image(image)
size = image_pil.size
H, W = size[1], size[0]
for i in range(boxes_filt.size(0)):
boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H])
boxes_filt[i][:2] -= boxes_filt[i][2:] / 2
boxes_filt[i][2:] += boxes_filt[i][:2]
boxes_filt = boxes_filt.cpu()
transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image.shape[:2]).to(device)
masks, _, _ = predictor.predict_torch(
point_coords = None,
point_labels = None,
boxes = transformed_boxes.to(device),
multimask_output = False,
)
# draw output image
plt.figure(figsize=(10, 10))
plt.imshow(image)
for mask in masks:
D.show_mask(mask.cpu().numpy(), plt.gca(), random_color=True)
for box, label in zip(boxes_filt, pred_phrases):
D.show_box(box.numpy(), plt.gca(), label)
#output_file_name = f'{format(os.path.basename(image_path))}'
plt.axis('off')
plt.savefig(
os.path.join(output_dir, f'grounded_sam_{output_file_name}'),
bbox_inches="tight", dpi=300, pad_inches=0.0
)
D.save_mask_data(output_dir, masks, boxes_filt, pred_phrases)
return f'grounded_sam_{output_file_name}'
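# Example call (hypothetical arguments): dino_sam('/tmp/gradio/abc/image.png', 'left lung')
# writes an annotated grounded_sam_* overlay plus mask data under /tmp/gradio/outputs and returns the overlay file name.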
## SDXL
def image_gen(prompt: str, image_path: str) -> str:
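    """
    Generate an image with Stability SDXL on Amazon Bedrock.
    Without `image_path` this is text-to-image from `prompt`; with `image_path` the input is
    resized to 512x512 and used as the init image for image-to-image generation.
    Returns the file name of the JPEG written to /tmp/gradio/outputs.
    """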
if prompt is None:
return
config = botocore.config.Config(connect_timeout=120, read_timeout=120)
    # Assumption: the GA boto3 Bedrock SDK resolves the bedrock-runtime endpoint automatically;
    # the control-plane URL (bedrock.us-east-1.amazonaws.com) is not the runtime endpoint.
    boto3_bedrock_rt = boto3.client(service_name='bedrock-runtime', region_name='us-east-1', config=config)
negative_prompts = [
"poorly rendered",
"poor background details",
"poorly drawn dog",
"disfigured dog features",
"blurry"
]
style_preset = "photographic" # (photographic, digital-art, cinematic, ...)
modelId = 'stability.stable-diffusion-xl'
accept = 'application/json'
contentType = 'application/json'
body = json.dumps({"text_prompts": [{"text": prompt}],
"cfg_scale": 5,
"seed": 2325,
"steps": 75,
})
rnum = random.randint(100, 2000)
if image_path is None:
response = boto3_bedrock_rt.invoke_model(body=body,modelId=modelId, accept=accept, contentType=contentType)
response_body = json.loads(response.get('body').read())
base_64_img_str = response_body['artifacts'][0]['base64']
#Old base_64_img_str = br_runtime_client.generate_image(prompt, modelId=model_name, cfg_scale=5, seed=2143, steps=70, style_preset=style_preset)
image_2 = Image.open(io.BytesIO(base64.decodebytes(bytes(base_64_img_str, "utf-8"))))
image_2.save(f'/tmp/gradio/outputs/sdxl_{rnum}.jpg')
else:
buffer = BytesIO()
image_1 = Image.open(image_path)
# Resize to 512
basewidth = 512
hsize = 512
'''
width, height = image_1.size
if width > 512:
basewidth = 512
wpercent = (basewidth/float(image_1.size[0]))
hsize = int((float(image_1.size[1])*float(wpercent)))
'''
image_1 = image_1.resize((basewidth,hsize), Image.Resampling.LANCZOS)
# Gen image to image
image_1.save(buffer, format="JPEG")
img_bytes = buffer.getvalue()
init_image = b64encode(img_bytes).decode()
body2 = json.dumps({
"text_prompts": (
[{"text": prompt, "weight": 1.0}]
+ [{"text": negprompt, "weight": -1.0} for negprompt in negative_prompts]
),
"cfg_scale": 10,
"init_image": init_image,
"seed": 129,
"start_schedule": 0.6,
"steps": 75,
"style_preset": style_preset,
})
response = boto3_bedrock_rt.invoke_model(body=body2, modelId=modelId)
response_body = json.loads(response.get('body').read())
base_64_img_str = response_body['artifacts'][0]['base64']
#base_64_img_str = model.generate_image(prompt, init_image=init_image, start_schedule=0.6, cfg_scale=5, seed=12345, steps=70, style_preset=style_preset)
image_3 = Image.open(io.BytesIO(base64.decodebytes(bytes(base_64_img_str, "utf-8"))))
image_3.save(f'/tmp/gradio/outputs/sdxl_{rnum}.jpg')
return f'sdxl_{rnum}.jpg'
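# Example call (hypothetical prompt): image_gen("a watercolor painting of a stethoscope", None)
# writes sdxl_<N>.jpg under /tmp/gradio/outputs and returns its file name.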
# This is a hack to make pre-computing the default examples work.
# During normal inference, we pass images as url to a local file using the method `gradio_link`
# which allows the tgi server to fetch the local image from the frontend server.
# however, while we are building the space (and pre-computing is part of building the space), the frontend is not available
# and won't answer. So tgi server will try to fetch an image that is not available yet, which will result in a timeout error
# because tgi will never be able to return the generation.
# To bypass that, we instead pass the image URLs from the spaces repo.
all_images = glob.glob(f"{os.path.dirname(__file__)}/example_images/*")
DEFAULT_IMAGES_TMP_PATH_TO_URL = {}
for im_path in all_images:
H = gr.Image(im_path, visible=False, type="filepath")
tmp_filename = H.preprocess(H.value)
#DEFAULT_IMAGES_TMP_PATH_TO_URL[tmp_filename] = f"https://huggingface.co/spaces/HuggingFaceM4/idefics_playground/resolve/main/example_images/{os.path.basename(im_path)}"
#DEFAULT_IMAGES_TMP_PATH_TO_URL[tmp_filename] = f"/https://bedrock-415275363822.s3.us-east-1.amazonaws.com/uploads/{os.path.basename(im_path)}"
#print(f"The tem file path {DEFAULT_IMAGES_TMP_PATH_TO_URL[tmp_filename]}")
# Utils to handle the image markdown display logic
def split_str_on_im_markdown(string: str) -> List[str]:
"""
Extract from a string (typically the user prompt string) the potential images from markdown
Examples:
    - `User:![](https://favurl.com/chicken_on_money.png)Describe this image.` would become `["User:", "https://favurl.com/chicken_on_money.png", "Describe this image."]`
    - `User:![](/file=/my_temp/chicken_on_money.png)Describe this image.` would become `["User:", "/my_temp/chicken_on_money.png", "Describe this image."]`
"""
IMAGES_PATTERN = re.compile(r"!\[[^\]]*\]\((.*?)\s*(\"(?:.*[^\"])\")?\s*\)")
parts = []
cursor = 0
for pattern in IMAGES_PATTERN.finditer(string):
start = pattern.start()
if start != cursor:
parts.append(string[cursor:start])
image_url = pattern.group(1)
if image_url.startswith("/file="):
image_url = image_url[6:] # Remove the 'file=' prefix
parts.append(image_url)
cursor = pattern.end()
if cursor != len(string):
parts.append(string[cursor:])
return parts
def is_image(string: str) -> bool:
"""
There are two ways for images: local image path or url.
"""
return is_url(string) or string.startswith(DEFAULT_TEMP_DIR)
def is_url(string: str) -> bool:
"""
Checks if the passed string contains a valid url and nothing else. e.g. if space is included it's immediately
invalidated the url
"""
if " " in string:
return False
result = urlparse(string)
return all([result.scheme, result.netloc])
def isolate_images_urls(prompt_list: List) -> List:
linearized_list = []
for prompt in prompt_list:
# Prompt can be either a string, or a PIL image
if isinstance(prompt, PIL.Image.Image):
linearized_list.append(prompt)
elif isinstance(prompt, str):
if "<fake_token_around_image>" not in prompt:
linearized_list.append(prompt)
else:
prompt_splitted = prompt.split("<fake_token_around_image>")
for ps in prompt_splitted:
if ps == "":
continue
if ps.startswith("<image:"):
linearized_list.append(ps[7:-1])
else:
linearized_list.append(ps)
else:
raise TypeError(
f"Unrecognized type for `prompt`. Got {type(type(prompt))}. Was expecting something in [`str`,"
" `PIL.Image.Image`]"
)
return linearized_list
def cot_langchain_llama27b(query_string: str) -> str:
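    """
    Chain-of-thought helper: ask a TGI-hosted Llama-2 endpoint (via LangChain) to break the
    query into sub-tasks, and return the decomposition as a suffix to append to the user prompt.
    """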
inference_server_url_local = "http://infs.cavatar.info:8083"
llm_local = HuggingFaceTextGenInference(
inference_server_url=inference_server_url_local,
max_new_tokens=1024,
top_k=5,
top_p=0.96,
typical_p=0.95,
temperature=0.001,
repetition_penalty=1.08,
)
template = """Use the following pieces of context to fully understand the intent and create sub staks to address the context. Please try not to,
make up an answer nor hallucinate. Use five maximum sentences and keep the sub tasks as precise as possible. List all actionable steps in
detail. Be cautious to avoid phrasing that might replicate previous inquiries. This will help in obtaining an accurate and detailed answer.
Avoid repetition for clarity.
Question: {question}
                Answer: Understand the intent of the question then break down the {question} into sub-tasks. """
prompt = PromptTemplate(
template=template,
input_variables= ["question"]
)
llm_chain_local = LLMChain(prompt=prompt, llm=llm_local)
cot_return = llm_chain_local(query_string)["text"].replace("\n", "")
    return f'. Please follow the sub tasks listed below and organize your answers in a short paragraph with a precise and professional writing style, avoiding duplication: {cot_return}'
def fetch_images(url_list: str) -> PIL.Image.Image:
"""Fetching images"""
return PROCESSOR.image_processor.fetch_images(url_list)
def handle_manual_images_in_user_prompt(user_prompt: str) -> List[str]:
"""
Handle the case of textually manually inputted images (i.e. the `<fake_token_around_image><image:IMG_URL><fake_token_around_image>`) in the user prompt
by fetching them, saving them locally and replacing the whole sub-sequence the image local path.
"""
if "<fake_token_around_image>" in user_prompt:
splitted_user_prompt = isolate_images_urls([user_prompt])
resulting_user_prompt = []
for u_p in splitted_user_prompt:
if is_url(u_p):
img = fetch_images([u_p])[0]
tmp_file = pil_to_temp_file(img)
resulting_user_prompt.append(tmp_file)
else:
resulting_user_prompt.append(u_p)
return resulting_user_prompt
else:
return [user_prompt]
def gradio_link(img_path: str) -> str:
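    """
    Convert a local Gradio temp path (under /tmp/gradio/) into a URL served by the local httpd
    at HTTPD_URL, so the TGI server can fetch the image over HTTP.
    """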
#url = f"{GRADIO_LINK}/file={img_path}"
#url = f"{format(os.path.basename(image_path.name))}"
#url = f"{img_path}"
#key_name = f'uploads/{os.path.basename(img_path)}'
    new_file_name = str(img_path)[12:]  # drop the leading "/tmp/gradio/" prefix (12 characters)
#bucket = 'bedrock-415275363822'
#s3_client.upload_file(Filename=img_path, Bucket=bucket, Key=key_name)
#url = f"http://radaide.cavatar.info:8080/"
orig_image_path = img_path
return f'{HTTPD_URL}{new_file_name}'
#return "https://{0}.s3.us-east-1.amazonaws.com/{1}".format(bucket, key_name)
def prompt_list_to_markdown(prompt_list: List[str]) -> str:
"""
Convert a user prompt in the list format (i.e. elements are either a PIL image or a string) into
the markdown format that is used for the chatbot history and rendering.
"""
resulting_string = ""
for elem in prompt_list:
if is_image(elem):
            if is_url(elem):
                resulting_string += f"![]({elem})"
            else:
                resulting_string += f"![](/file={elem})"
else:
resulting_string += elem
return resulting_string
def prompt_list_to_tgi_input(prompt_list: List[str]) -> str:
"""
TGI expects a string that contains both text and images in the image markdown format (i.e. the `![]()` ).
The images links are parsed on TGI side
"""
result_string_input = ""
for elem in prompt_list:
if is_image(elem):
            if is_url(elem):
                result_string_input += f"![]({elem})"
            else:
                result_string_input += f"![]({gradio_link(img_path=elem)})"
else:
result_string_input += elem
return result_string_input
def remove_spaces_around_token(text: str) -> str:
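    """Remove whitespace immediately before and after each `<fake_token_around_image>` token."""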
pattern = r"\s*(<fake_token_around_image>)\s*"
replacement = r"\1"
result = re.sub(pattern, replacement, text)
return result
# Chatbot utils
def format_user_prompt_with_im_history_and_system_conditioning(
current_user_prompt_str: str, current_image: Optional[str], history: List[Tuple[str, str]]
) -> Tuple[List[str], List[str]]:
"""
Produces the resulting list that needs to go inside the processor.
    It handles the potential image box input, the history and the system conditioning.
"""
resulting_list = copy.deepcopy(SYSTEM_PROMPT)
#CoT Alfred
cot_added_str = cot_langchain_llama27b(current_user_prompt_str.strip()) if ("detail" in current_user_prompt_str.lower() or "elaborate" in current_user_prompt_str.lower() or "comprehen" in current_user_prompt_str.lower() or 'depict' in current_user_prompt_str.lower()) else ""
# Format history
for turn in history:
user_utterance, assistant_utterance = turn
splitted_user_utterance = split_str_on_im_markdown(user_utterance)
optional_space = ""
if not is_image(splitted_user_utterance[0]):
optional_space = " "
resulting_list.append(f"\nUser:{optional_space}")
resulting_list.extend(splitted_user_utterance)
# CoT Alfred
resulting_list.append(cot_added_str)
resulting_list.append(f"<end_of_utterance>\nAssistant: {assistant_utterance}")
# Format current input
current_user_prompt_str = remove_spaces_around_token(current_user_prompt_str)
if current_image is None:
if "
else:
current_user_prompt_list = handle_manual_images_in_user_prompt(current_user_prompt_str)
optional_space = ""
if not is_image(current_user_prompt_list[0]):
# Check if the first element is an image (and more precisely a path to an image)
optional_space = " "
resulting_list.append(f"\nUser:{optional_space}")
resulting_list.extend(current_user_prompt_list)
#CoT Alfred
resulting_list.append(cot_added_str)
resulting_list.append("<end_of_utterance>\nAssistant:")
else:
        # Choosing to put the image first when the image is inputted through the UI, but this is an arbitrary choice.
resulting_list.extend(["\nUser:", current_image, f"{current_user_prompt_str}{cot_added_str}<end_of_utterance>\nAssistant:"])
current_user_prompt_list = [current_user_prompt_str]
return resulting_list, current_user_prompt_list
# dope_callback = gr.CSVLogger()
# problematic_callback = gr.CSVLogger()
textbox = gr.Textbox(
placeholder="Upload an image and send a message",
show_label=False,
# value="Describe the battle against the fierce dragons.",
visible=True,
container=False,
label="Text input",
scale=6,
)
with gr.Blocks(title="Multimodal Playground", theme=gr.themes.Base()) as demo:
gr.HTML("""<h1 align="center">Multimodal Playground</h1>""")
with gr.Row(variant="panel"):
with gr.Column(scale=1):
gr.Image(DocAid_logo, elem_id="banner-image", show_label=False, show_download_button=False, height=200, weight=100)
with gr.Column(scale=5):
gr.HTML("""
            <p>📚 The demo presents <strong>Dialogue Guided Visual Language Processing</strong>, a multimodal VLP pipeline based on an LLM (i.e. Llama-v2) and a VLM (i.e. IDEFICS) that processes image, text, and voice inputs.</p>
<p>🅿️ <strong>Intended uses:</strong> This demo serves as a proof of concept for multimodal generation. To prepare it for production, further refinement, including fine-tuning and expert evaluation, is necessary.</p>
<p>⛔️ <strong>Limitations:</strong> The model might generate inaccurate information, invent details from images or text, and often overlooks minute image details. Although it generally avoids responding to dubious user queries, it can still produce outputs that may be racist, stereotypical, or offensive, especially when specifically prompted.</p>
""")
# with gr.Row():
# with gr.Column(scale=2):
with gr.Row(elem_id="model_selector_row"):
model_selector = gr.Dropdown(
choices=MODELS,
value="local/idefics-9b-instruct",
interactive=True,
show_label=False,
container=False,
label="Model",
visible=False,
)
imagebox = gr.Image(type="filepath", label="Image input", visible=False)
with gr.Row():
# def prefetch_images_in_history(user_prompt_str):
# """
# Pre-fetch the images that are passed in the chatbot default history.
# """
# return prompt_list_to_markdown(handle_manual_images_in_user_prompt(user_prompt_str))
chatbot = gr.Chatbot(
elem_id="chatbot",
label="Multimodal",
visible=True,
height=1000,
avatar_images=[None, BOT_AVATAR]
# value=[
# [
# (
# prefetch_images_in_history(
# """Write a funny story including the following characters.
# There should be drama and most importantly romance.
# Character 1:<fake_token_around_image><image:https://m.media-amazon.com/images/M/MV5BODg1OTczMWEtNTU3MS00OTUzLThjODEtNDg1MWQwZmExYmFlXkEyXkFqcGdeQWFybm8@._V1_QL75_UX500_CR0,0,500,281_.jpg><fake_token_around_image>
# Character 2:<fake_token_around_image><image:https://static.wikia.nocookie.net/dreamworks/images/0/09/Shrek_dreamworks.png/revision/latest?cb=20170921225439><fake_token_around_image>
# Character 3:<fake_token_around_image><image:https://upload.wikimedia.org/wikipedia/en/e/ef/Marsupilami_1.jpg><fake_token_around_image>
# The plot should take place in the world pictured here:<fake_token_around_image><image:https://www.astronomy.com/wp-content/uploads/sites/2/2021/09/ON1.jpg><fake_token_around_image>"""
# )
# ),
# """Once upon a time, in a world where the sky was a dome and the ground was covered in grass, there lived a beautiful princess named Mulan. She was kind, brave, and always ready to help others. One day, while walking through the forest, she stumbled upon a strange creature. It was a green ogre named Shrek, who was lost and looking for his way back home.
# Mulan, being the kind-hearted person she was, decided to help Shrek find his way back. As they traveled together, they began to develop a strong bond. Shrek was impressed by Mulan's bravery and determination, while Mulan admired Shrek's loyalty and sense of humor.
# However, their journey was not without obstacles. They encountered a mischievous cat named Hoboua, who was always causing trouble and getting into fights. Despite his troublemaking ways, Hoboua had a good heart and eventually became a valuable ally to Mulan and Shrek.
# As they traveled, they faced many challenges, including battling fierce dragons and navigating treacherous mountains. Through it all, Mulan and Shrek grew closer, and their feelings for each other deepened.
# Finally, they reached Shrek's home, and he was reunited with his family and friends. Mulan, however, was sad to leave him behind. But Shrek had a surprise for her. He had fallen in love with her and wanted to be with her forever.
# Mulan was overjoyed, and they shared a passionate kiss. From that day on, they lived happily ever after, exploring the world together and facing any challenges that came their way.
# And so, the story of Mulan and Shrek's romance came to an end, leaving a lasting impression on all who heard it.""",
# ],
# ],
)
with gr.Group():
with gr.Row():
with gr.Column():
textbox.render()
with gr.Column():
asr_audio = gr.Audio(
label="Input Audio",
show_label=True,
source="microphone",
type="filepath")
with gr.Row():
#textbox.render()
submit_btn = gr.Button(value="▶️ Submit", visible=True)
clear_btn = gr.ClearButton([textbox, imagebox, chatbot], value="🧹 Clear")
regenerate_btn = gr.Button(value="🔄 Regenerate", visible=True)
upload_btn = gr.UploadButton("📁 Upload image", file_types=["image"])
asr_btn = gr.Button("😬 Transcribe")
# with gr.Group():
# with gr.Row():
# with gr.Column(scale=1, min_width=50):
# dope_bttn = gr.Button("Dope🔥")
# with gr.Column(scale=1, min_width=50):
# problematic_bttn = gr.Button("Problematic😬")
with gr.Row():
with gr.Accordion("Advanced settings", open=False, visible=True) as parameter_row:
max_new_tokens = gr.Slider(
minimum=8,
maximum=1024,
value=512,
step=1,
interactive=True,
label="Maximum number of new tokens to generate",
)
repetition_penalty = gr.Slider(
minimum=0.01,
maximum=5.0,
value=1.0,
step=0.01,
interactive=True,
label="Repetition penalty",
info="1.0 is equivalent to no penalty",
)
decoding_strategy = gr.Radio(
[
"Greedy",
"Top P Sampling",
],
value="Greedy",
label="Decoding strategy",
interactive=True,
info="Higher values is equivalent to sampling more low-probability tokens.",
)
temperature = gr.Slider(
minimum=0.0,
maximum=5.0,
value=0.4,
step=0.1,
interactive=True,
visible=False,
label="Sampling temperature",
info="Higher values will produce more diverse outputs.",
)
decoding_strategy.change(
fn=lambda selection: gr.Slider.update(
visible=(
selection in ["contrastive_sampling", "beam_sampling", "Top P Sampling", "sampling_top_k"]
)
),
inputs=decoding_strategy,
outputs=temperature,
)
top_p = gr.Slider(
minimum=0.01,
maximum=0.99,
value=0.8,
step=0.01,
interactive=True,
visible=False,
label="Top P",
info="Higher values is equivalent to sampling more low-probability tokens.",
)
decoding_strategy.change(
fn=lambda selection: gr.Slider.update(visible=(selection in ["Top P Sampling"])),
inputs=decoding_strategy,
outputs=top_p,
)
gr.Markdown(
"""<p><strong>💡 Pro tip</strong>:<br>
You can input an arbitrary number of images at arbitrary positions in the same query.<br>
You will need to input each image with its URL with the syntax <code><fake_token_around_image><image:IMAGE_URL><fake_token_around_image></code>.<br>
For example, for two images, you could input <code>TEXT_1<fake_token_around_image><image:IMAGE_URL_1><fake_token_around_image>TEXT_2<fake_token_around_image><image:IMAGE_URL_2><fake_token_around_image>TEXT_3</code>.<br>
In the particular case where two images are consecutive, it is not necessary to add an additional separator: <code><fake_token_around_image><image:IMAGE_URL_1><fake_token_around_image><image:IMAGE_URL_2><fake_token_around_image></code>.</p>"""
)
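        # model_inference routes each user turn: when no image is attached, a zero-shot
        # classifier (M.mclass) decides whether the request is image segmentation,
        # image-to-image generation, text-to-image generation, or plain chat; otherwise
        # the prompt and chat history are streamed to the model endpoint via the TGI client.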
def model_inference(
model_selector,
user_prompt_str,
chat_history,
image,
decoding_strategy,
temperature,
max_new_tokens,
repetition_penalty,
top_p,
):
if user_prompt_str.strip() == "" and image is None:
return "", None, chat_history
formated_prompt_list, user_prompt_list = format_user_prompt_with_im_history_and_system_conditioning(
current_user_prompt_str=user_prompt_str.strip(),
# With CoT
#current_user_prompt_str=f'{user_prompt_str.strip()}. {cot_langchain_llama27b(user_prompt_str.strip())}',
current_image=image,
history=chat_history,
)
client_endpoint = API_PATHS[model_selector]
client = Client(
base_url=client_endpoint,
headers={"x-use-cache": "0", "Authorization": f"Bearer {API_TOKEN}"},
)
# Common parameters to all decoding strategies
# This documentation is useful to read: https://huggingface.co/docs/transformers/main/en/generation_strategies
generation_args = {
"max_new_tokens": max_new_tokens,
"repetition_penalty": repetition_penalty,
"stop_sequences": EOS_STRINGS,
}
assert decoding_strategy in [
"Greedy",
"Top P Sampling",
]
if decoding_strategy == "Greedy":
generation_args["do_sample"] = False
elif decoding_strategy == "Top P Sampling":
generation_args["temperature"] = temperature
generation_args["do_sample"] = True
generation_args["top_p"] = top_p
mask_filename = None
orig_image_path = None
if image is None:
top_n = M.mclass(text_prompt=user_prompt_str, topics=['Others', 'Generate image from text', 'Generate image from image', 'Image segmentation'], top_k=1)
for label, score in top_n:
print(f'With label: {label} and score: {score}')
if ('Image segmentation' in label and score >= 0.65 ):
words_list = kw_model.extract_keywords(docs=user_prompt_str, keyphrase_ngram_range=(1,3))
words_list = [*words_list[0],][0].split()
print(f'{words_list} and with type {type(words_list)}')
stopwords = ['mask', 'create', 'generate', 'image', 'cut', 'edge', 'picture', 'photo', 'segment', 'new', 'her', 'his', 'my', 'the', 'that', 'this']
top_word = [i for i in words_list if i not in stopwords][0]
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
filename = dino_sam(image_path=orig_image_path, text_prompt=top_word, \
output_dir='/temp/gradio/outputs', box_threshold=0.5, text_threshold=0.55)
                        view_mask_filename = f'[View generated image with large size.]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
#f"{prompt_list_to_markdown(user_prompt_list + [view_mask_filename] + [mask_filename])}",
f"{prompt_list_to_markdown(user_prompt_list)}",
f"{mask_filename} {view_mask_filename}",
]
)
elif ('generate image from image' in label.lower() and score >= 0.81 ):
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
filename = image_gen(prompt=user_prompt_str, image_path=orig_image_path)
if filename is not None:
                            view_mask_filename = f' [View generated image with large size.]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
f"{prompt_list_to_markdown(user_prompt_list)}",
f"{mask_filename} {view_mask_filename}",
]
)
elif ('generate image from text' in label.lower() and score >= 0.81 ):
filename = image_gen(prompt=user_prompt_str, image_path=None)
if filename is not None:
view_mask_filename = f' [View generated image]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
f"{prompt_list_to_markdown(user_prompt_list)}",
f"{mask_filename} {view_mask_filename}"
]
)
yield "", None, chat_history
else:
chat_history.append([prompt_list_to_markdown(user_prompt_list), ''])
else:
# Case where the image is passed through the Image Box.
# Convert the image into base64 for both passing it through the chat history and
# displaying the image inside the same bubble as the text.
chat_history.append(
[
f"{prompt_list_to_markdown([image] + user_prompt_list)}",
'',
]
)
query = prompt_list_to_tgi_input(formated_prompt_list)
print(query)
#query += cot_langchain_llama27b(user_prompt_str.strip())
#print(f'New query: {query}')
stream = client.generate_stream(prompt=query, **generation_args)
acc_text = ""
if mask_filename is not None:
#chat_history.append([prompt_list_to_markdown(user_prompt_list), ''])
yield "", None, chat_history
else:
for idx, response in enumerate(stream):
text_token = response.token.text
if response.details:
# That's the exit condition
return
if text_token in STOP_SUSPECT_LIST:
acc_text += text_token
continue
if idx == 0 and text_token.startswith(" "):
text_token = text_token.lstrip()
acc_text += text_token
last_turn = chat_history.pop(-1)
last_turn[-1] += acc_text
if last_turn[-1].endswith("\nUser"):
# Safeguard: sometimes (rarely), the model won't generate the token `<end_of_utterance>` and will go directly to generating `\nUser:`
# It will thus stop the generation on `\nUser:`. But when it exits, it will have already generated `\nUser`
# This post-processing ensures that we don't have an additional `\nUser` wandering around.
last_turn[-1] = last_turn[-1][:-5]
chat_history.append(last_turn)
yield "", None, chat_history
acc_text = ""
def asr_inference(audio):
audio = whisper.load_audio(audio)
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(asr_model.device)
_, probs = asr_model.detect_language(mel)
options = whisper.DecodingOptions(fp16 = False)
result = whisper.decode(asr_model, mel, options)
return(result.text)
def model_inference_asr(
model_selector,
audio,
chat_history,
image,
decoding_strategy,
temperature,
max_new_tokens,
repetition_penalty,
top_p,
):
user_prompt_str = asr_inference(audio)
acc_text = ""
if user_prompt_str.strip() == "" and image is None:
return "", None, chat_history
formated_prompt_list, user_prompt_list = format_user_prompt_with_im_history_and_system_conditioning(
current_user_prompt_str=user_prompt_str.strip(),
current_image=image,
history=chat_history,
)
client_endpoint = API_PATHS[model_selector]
client = Client(
base_url=client_endpoint,
headers={"x-use-cache": "0", "Authorization": f"Bearer {API_TOKEN}"},
)
# Common parameters to all decoding strategies
# This documentation is useful to read: https://huggingface.co/docs/transformers/main/en/generation_strategies
generation_args = {
"max_new_tokens": max_new_tokens,
"repetition_penalty": repetition_penalty,
"stop_sequences": EOS_STRINGS,
}
print(f'Chat_history:{type(chat_history)} and the 1st {chat_history[0]}')
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
print(f'...... and the image_path {orig_image_path}')
assert decoding_strategy in [
"Greedy",
"Top P Sampling",
]
if decoding_strategy == "Greedy":
generation_args["do_sample"] = False
elif decoding_strategy == "Top P Sampling":
generation_args["temperature"] = temperature
generation_args["do_sample"] = True
generation_args["top_p"] = top_p
mask_filename = None
if image is None:
top_n = M.mclass(text_prompt=user_prompt_str, topics=['Others', 'Generate image from text', 'Generate image from image', 'Image segmentation'], top_k=1)
for label, score in top_n:
print(f'With label: {label} and score: {score}')
if ('Image segmentation' in label and score >= 0.65 ):
words_list = kw_model.extract_keywords(docs=user_prompt_str, keyphrase_ngram_range=(1,3))
words_list = [*words_list[0],][0].split()
print(f'{words_list} and with type {type(words_list)}')
stopwords = ['mask', 'create', 'generate', 'image', 'cut', 'edge', 'picture', 'photo', 'segment', 'new', 'her', 'his', 'my', 'the', 'that', 'this']
top_word = [i for i in words_list if i not in stopwords][0]
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
filename = dino_sam(image_path=orig_image_path, text_prompt=top_word, \
output_dir='/temp/gradio/outputs', box_threshold=0.5, text_threshold=0.55)
                        view_mask_filename = f' [View generated image with large size.]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
#f"{prompt_list_to_markdown(user_prompt_list + [view_mask_filename] + [mask_filename])}",
f"{prompt_list_to_markdown(user_prompt_list)}",
f"{mask_filename} {view_mask_filename}",
]
)
elif ('generate image from image' in label.lower() and score >= 0.81 ):
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
filename = image_gen(prompt=user_prompt_str, image_path=orig_image_path)
if filename is not None:
                            view_mask_filename = f' [View generated image with large size.]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
f"{prompt_list_to_markdown(user_prompt_list)}",
f"{mask_filename} {view_mask_filename}",
]
)
elif ('generate image from text' in label.lower() and score >= 0.81 ):
filename = image_gen(prompt=user_prompt_str, image_path=None)
if filename is not None:
view_mask_filename = f' [View generated image]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
f"{prompt_list_to_markdown(user_prompt_list)}",
f"{mask_filename} {view_mask_filename}"
]
)
yield "", None, chat_history
else:
chat_history.append([prompt_list_to_markdown(user_prompt_list), ''])
'''
for label, score in top_n:
print(f'With label: {label} and score: {score}')
if ('Others' not in label and score >=0.55):
if ('Image segmentation' in label and score >= 0.65 ):
words_list = kw_model.extract_keywords(docs=user_prompt_str, keyphrase_ngram_range=(1,3))
words_list = [*words_list[0],][0].split()
print(f'{words_list} and with type {type(words_list)}')
stopwords = ['mask', 'create', 'generate', 'image', 'cut', 'edge', 'picture', 'photo', 'segment', 'new', 'her', 'his', 'my', 'the', 'that', 'this']
top_word = [i for i in words_list if i not in stopwords][0]
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
filename = dino_sam(image_path=orig_image_path, text_prompt=top_word, \
output_dir='/temp/gradio/outputs', box_threshold=0.5, text_threshold=0.55)
view_mask_filename = f' [View generated image]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
f"{prompt_list_to_markdown(user_prompt_list + [view_mask_filename] + [mask_filename])}",
'',
]
)
else:
if ('generate image from image' in label.lower() and score >= 0.60 ):
orig_image_path = re.findall('\((.*?)\)', chat_history[0][0])[0].split('=')[1]
filename = image_gen(prompt=user_prompt_str, image_path=orig_image_path)
if filename is not None:
view_mask_filename = f' [View generated image]({HTTPD_URL}outputs/{filename})'
mask_filename = f''
chat_history.append(
[
f"{prompt_list_to_markdown(user_prompt_list + [view_mask_filename] + [mask_filename])}",
'',
]
)
yield "", None, chat_history
else:
chat_history.append([prompt_list_to_markdown(user_prompt_list), ''])
'''
elif mask_filename is None:
# Case where the image is passed through the Image Box.
# Convert the image into base64 for both passing it through the chat history and
# displaying the image inside the same bubble as the text.
chat_history.append(
[
f"{prompt_list_to_markdown([image] + user_prompt_list)}",
'',
]
)
query = prompt_list_to_tgi_input(formated_prompt_list)
stream = client.generate_stream(prompt=query, **generation_args)
if mask_filename is not None:
yield "", None, chat_history
else:
for idx, response in enumerate(stream):
text_token = response.token.text
if response.details:
# That's the exit condition
return
if text_token in STOP_SUSPECT_LIST:
acc_text += text_token
continue
if idx == 0 and text_token.startswith(" "):
text_token = text_token.lstrip()
acc_text += text_token
last_turn = chat_history.pop(-1)
last_turn[-1] += acc_text
if last_turn[-1].endswith("\nUser"):
# Safeguard: sometimes (rarely), the model won't generate the token `<end_of_utterance>` and will go directly to generating `\nUser:`
# It will thus stop the generation on `\nUser:`. But when it exits, it will have already generated `\nUser`
# This post-processing ensures that we don't have an additional `\nUser` wandering around.
last_turn[-1] = last_turn[-1][:-5]
chat_history.append(last_turn)
yield "", None, chat_history
acc_text = ""
def process_example(message, image):
"""
            Same as `model_inference` but in greedy mode and with the 9b-instruct checkpoint.
Specifically for pre-computing the default examples.
"""
model_selector="local/idefics-9b-instruct"
user_prompt_str=message
chat_history=[]
max_new_tokens=1024
formated_prompt_list, user_prompt_list = format_user_prompt_with_im_history_and_system_conditioning(
current_user_prompt_str=user_prompt_str.strip(),
current_image=image,
history=chat_history,
)
client_endpoint = API_PATHS[model_selector]
client = Client(
base_url=client_endpoint,
headers={"x-use-cache": "0", "Authorization": f"Bearer {API_TOKEN}"},
timeout=240, # Generous time out just in case because we are in greedy. All examples should be computed in less than 30secs with the 80b-instruct.
)
# Common parameters to all decoding strategies
# This documentation is useful to read: https://huggingface.co/docs/transformers/main/en/generation_strategies
generation_args = {
"max_new_tokens": max_new_tokens,
"repetition_penalty": None,
"stop_sequences": EOS_STRINGS,
"do_sample": False,
}
if image is None:
# Case where there is no image OR the image is passed as `<fake_token_around_image><image:IMAGE_URL><fake_token_around_image>`
chat_history.append([prompt_list_to_markdown(user_prompt_list), ''])
else:
# Case where the image is passed through the Image Box.
# Convert the image into base64 for both passing it through the chat history and
# displaying the image inside the same bubble as the text.
chat_history.append(
[
f"{prompt_list_to_markdown([image] + user_prompt_list)}",
'',
]
)
# Hack - see explanation in `DEFAULT_IMAGES_TMP_PATH_TO_URL`
for idx, i in enumerate(formated_prompt_list):
if i.startswith(DEFAULT_TEMP_DIR):
for k, v in DEFAULT_IMAGES_TMP_PATH_TO_URL.items():
if k == i:
formated_prompt_list[idx] = v
break
query = prompt_list_to_tgi_input(formated_prompt_list)
print(query)
generated_text = client.generate(prompt=query, **generation_args).generated_text
if generated_text.endswith("\nUser"):
generated_text = generated_text[:-5]
last_turn = chat_history.pop(-1)
last_turn[-1] += generated_text
chat_history.append(last_turn)
return "", None, chat_history
textbox.submit(
fn=model_inference,
inputs=[
model_selector,
textbox,
chatbot,
imagebox,
decoding_strategy,
temperature,
max_new_tokens,
repetition_penalty,
top_p,
],
outputs=[textbox, imagebox, chatbot],
)
submit_btn.click(
fn=model_inference,
inputs=[
model_selector,
textbox,
chatbot,
imagebox,
decoding_strategy,
temperature,
max_new_tokens,
repetition_penalty,
top_p,
],
outputs=[
textbox,
imagebox,
chatbot,
],
)
def remove_last_turn(chat_history):
if len(chat_history) == 0:
                return gr.update(), gr.update()
last_interaction = chat_history[-1]
chat_history = chat_history[:-1]
chat_update = gr.update(value=chat_history)
text_update = gr.update(value=last_interaction[0])
return chat_update, text_update
regenerate_btn.click(fn=remove_last_turn, inputs=chatbot, outputs=[chatbot, textbox]).then(
fn=model_inference,
inputs=[
model_selector,
textbox,
chatbot,
imagebox,
decoding_strategy,
temperature,
max_new_tokens,
repetition_penalty,
top_p,
],
outputs=[
textbox,
imagebox,
chatbot,
],
)
asr_btn.click(
fn=model_inference_asr,
inputs=[
model_selector,
asr_audio,
chatbot,
imagebox,
decoding_strategy,
temperature,
max_new_tokens,
repetition_penalty,
top_p,
],
outputs=[
textbox,
imagebox,
chatbot,
],
)
upload_btn.upload(add_file, [upload_btn], [imagebox, upload_btn], queue=False)
submit_btn.click(lambda : gr.update(label='📁 Upload image', interactive=True), [], upload_btn)
textbox.submit(lambda : gr.update(label='📁 Upload image', interactive=True), [], upload_btn)
clear_btn.click(lambda : gr.update(label='📁 Upload image', interactive=True), [], upload_btn)
asr_btn.click(lambda : gr.update(label='📁 Upload image', interactive=True), [], upload_btn)
examples_path = os.getcwd()
gr.Examples(
examples=[
[
(
"Which device produced this image? Please explain the main clinical purpose of such image?"
"Can you write a radiology report based on this image?"
),
f"{examples_path}/example_images/chest-ct.jpg",
],
[
"Can you describe the nature of this image? Do you think it's real?",
f"{examples_path}/example_images/fashion_12.jpg",
],
[
"Can you describe the action on this image? How many animals total are there in this image? Please identify the species by name with best effort.",
f"{examples_path}/example_images/assets/demo8.jpg",
],
[
"Name the sport from this image? Please identify the player's role by name with best effort.",
f"{examples_path}/example_images/college_football.jpg",
],
],
inputs=[textbox, imagebox],
outputs=[textbox, imagebox, chatbot],
fn=process_example,
cache_examples=True,
examples_per_page=6,
label=(
"Click on any example below to get started.\nFor convenience, the model generations have been"
" pre-computed with `idefics-9b-instruct`."
),
)
demo.queue(concurrency_count=40, max_size=40)
demo.launch(debug=True, server_name="0.0.0.0", server_port=7863, height=2048, share=False, ssl_verify=False, ssl_keyfile="/home/alfred/utils/cavatar.key", ssl_certfile="/home/alfred/utils/cavatar.pem", auth=("demo", "smjs2023"))
| [
"['\"The following is a conversation between a highly knowledgeable and intelligent visual AI assistant, called RadAide, and a human user, called User. In the following interactions, User and Assistant will converse in natural language, and RadAide will do its best to answer User’s questions. RadAide has the ability to perceive images and reason about the content of visual inputs. It can also process images by following precise instructs. RadAide was built to be smart, respectful, polite and inclusive. When prompted with an image, it tells the truth and does not make up facts. The conversation begins:', '\\nUser:', 'https://miro.medium.com/v2/resize:fit:1332/0*yl2b-bDJeEwKPUI5Describe the nature of this image.<end_of_utterance>', '\\\\RadAide: A tattooed person holding a sign that says, “Teach your children well,” in a crowd of people. In the middle of the sign, there’s an illustration of the earth with 2 raised fists on either side that have a rainbow pride square background, a trans pride circle background, and brown skin tone stripes on the fists. The raised fist is a symbol of solidarity and specifically Black power as popularized by the Black Panther Party in the 1960s. The rainbow pride flag has rainbow stripes and symbolizes general LGBTQ pride. The trans pride flag has pink, blue, and white stripes and celebrates pride for the trans and gender non-conforming umbrella.<end_of_utterance>', '\\nUser: How many dogs do you see in this image?', 'https://i.dailymail.co.uk/i/pix/2011/07/01/article-2010308-0CD22A8300000578-496_634x414.jpg', '\\nAssistant: There is no dogs in this image. The picture shows a tennis player jumping to volley the ball.<end_of_utterance>']",
"question",
"<fake_token_around_image>",
"[]",
"Use the following pieces of context to fully understand the intent and create sub staks to address the context. Please try not to, \n make up an answer nor hallucinate. Use five maximum sentences and keep the sub tasks as precise as possible. List all actionable steps in \n detail. Be cautious to avoid phrasing that might replicate previous inquiries. This will help in obtaining an accurate and detailed answer. \n Avoid repetition for clarity.\n\n Question: {question}\n Answer: Understand the intent of the question then break down the {question} in to sub-tasks. ",
"['poorly rendered', 'poor background details', 'poorly drawn dog', 'disfigured dog features', 'blurry']"
] |
2024-01-10 | monotaro/MonoChat | monochat_beta~slack_application.py | import re
import secret
from openai_client import OpenAiClient
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
from slack_client import SlackClient
secret = secret.get_secret()
SLACK_API_TOKEN = secret.get("SLACK_API_TOKEN")
SLACK_BOT_TOKEN = secret.get("SLACK_BOT_TOKEN")
SLACK_BOT_USER_ID = "XXXXXXXXXXX"  # User ID of the bot
SLACK_DELETE_REACTION = "del_monochat"  # Create this custom reaction in advance; reacting with it deletes bot replies
app = App(token=SLACK_BOT_TOKEN)
def start_server():
try:
SocketModeHandler(app, SLACK_API_TOKEN).start()
except KeyboardInterrupt:
        pass  # Treat Ctrl+C as a normal shutdown rather than an error.
@app.event("app_mention")
def event_mention(event, say):
"""
    Reply when the bot is mentioned.
"""
open_ai_client = OpenAiClient()
messages = _build_messages(event)
try:
response = open_ai_client.request(messages)
say({"text": response, "thread_ts": event["ts"]})
except Exception as e:
say(
{
"text": "メッセージを処理できませんでした。もう一度実行してみてください。:monochat: :すまんな: \n※本メッセージはChat GPTからの返答ではありません。\n\nエラー詳細: ```"
+ str(e)
+ "```",
"thread_ts": event["ts"],
}
)
@app.event("message")
def event_message(event, say):
"""
    Reply when a message is posted in a DM with the Slack App.
"""
    # Ignore anything that is not a DM with the bot.
if event.get("channel_type") != "im":
return
open_ai_client = OpenAiClient()
messages = _build_messages(event)
try:
response = open_ai_client.request(messages)
say({"text": response, "thread_ts": event["ts"]})
except Exception as e:
say(
{
"text": "メッセージを処理できませんでした。もう一度実行してみてください。:monochat: :すまんな: \n※本メッセージはChat GPTからの返答ではありません。\nエラー詳細: ```"
+ str(e)
+ "```",
"thread_ts": event["ts"],
}
)
@app.event("reaction_added")
def message_delete(event):
"""
    Delete the bot's message when the delete reaction is added to it.
"""
if (
event["reaction"] == SLACK_DELETE_REACTION
and event["item_user"] == SLACK_BOT_USER_ID
):
response = app.client.chat_delete(
channel=event["item"]["channel"], ts=event["item"]["ts"]
)
@app.command("/monochat")
def handle_slash_command(ack, client, command):
"""
    Handle the /monochat slash command.
"""
ack()
def post_ephemeral_message(text):
client.chat_postEphemeral(
channel=command["channel_id"], user=command["user_id"], text=text
)
    # The [text] part of "/monochat [text]".
split = command["text"].split(" ")
subcommand, arguments = split[0], split[1:]
    # Show how to use monochat.
if subcommand == "help":
post_ephemeral_message(
_strip_heredoc(
"""
                (1) Mention MonoChat to get a response from Azure OpenAI Service (ChatGPT).
                (2) Replies in a thread take the previous exchanges in that thread into account.
                (3) A mention is still required inside a thread.
                (4) MonoChat can also be used in a DM, in which case no mention is needed.
                (5) React to a MonoChat reply with :del_monochat: to delete that reply.
"""
)
)
return
post_ephemeral_message("Not Implemented")
def _build_messages(event) -> list[dict[str, str]]:
"""
    Build the message list to pass to OpenAiClient.
"""
if "thread_ts" in event:
slack_client = SlackClient(SLACK_BOT_TOKEN)
response = slack_client.get_replies(event["channel"], event.get("thread_ts"))
messages = []
for m in response["messages"]:
            # Check whether the message was posted by the bot (i.e., it has a bot_id)
text = _remove_mention_string(m["text"])
if "bot_id" in m:
messages.append({"role": "assistant", "content": text})
else:
messages.append({"role": "user", "content": text})
            # If the thread has more than 20 messages, drop the oldest ones
if len(messages) > 20:
messages.pop(0)
return messages
else:
text = _remove_mention_string(event["text"])
return [{"role": "user", "content": text}]
def _remove_mention_string(text: str) -> str:
"""
    Remove the leading mention string from the text.
"""
return re.sub(r"<@.+?>", "", text, 1).strip()
def _strip_heredoc(text: str) -> str:
return "\n".join(map(str.strip, text.splitlines())).strip()
| [] |
2024-01-10 | rizwandel/optagan | optagan~wgan_test.py | from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import logging
import torch
import torch.nn as nn
import numpy as np
from modules.gan import Generator
import glob
import os
import pickle
import random
import torch.nn.functional as F
from tqdm import tqdm, trange
from func import GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig, BertConfig
from func import GPT2LMHeadModel, GPT2Tokenizer, GPT2ForLatentConnector, GPT2ForLatentConnectorValueHead
from func import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer
from func import XLNetLMHeadModel, XLNetTokenizer
from func import TransfoXLLMHeadModel, TransfoXLTokenizer
from func import BertForLatentConnector, BertTokenizer
from collections import defaultdict
import pdb
from modules.utils import rollout_test
MAX_LENGTH = int(10000) # Hardcoded max length to avoid infinite loop
ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in (GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig)), ())
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MODEL_CLASSES = {
'gpt2': (GPT2Config, GPT2ForLatentConnector, GPT2Tokenizer),
'bert': (BertConfig, BertForLatentConnector, BertTokenizer),
'gpt2v': (GPT2Config, GPT2ForLatentConnectorValueHead, GPT2Tokenizer)
}
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=0)
parser.add_argument('--new_sent', type=int, default=1, help="Number of sentences to generate")
parser.add_argument('--n_layers', type=int, default=20, help="Number of layers of generator")
parser.add_argument('--block_dim', type=int, default=100)
parser.add_argument('--interval', type=int, default=10)
parser.add_argument('--cuda', type=bool, default=torch.cuda.is_available())
parser.add_argument('--generator_dir', default=None, type=str, required=True, help="Directory of GAN model checkpoint")
parser.add_argument("--checkpoint_dir", default=None, type=str, required=True,
help="The directory where checkpoints are saved.")
parser.add_argument("--output_dir", default=None, type=str, required=True,
help="The output directory where the model predictions and checkpoints will be written.")
parser.add_argument("--save", default=False, type=bool, help="Save results to file.")
parser.add_argument("--latent_size", default=32, type=int, help="Latent space dimension.")
parser.add_argument("--output_name", default="results", type=str, help="File name of output")
parser.add_argument("--batch_size", default=100, type=int, help="Batch size to generate outputs")
## Encoder options
parser.add_argument("--encoder_model_type", default="bert", type=str,
help="The encoder model architecture to be fine-tuned.")
parser.add_argument("--encoder_model_name_or_path", default="bert-base-cased", type=str,
help="The encoder model checkpoint for weights initialization.")
parser.add_argument("--encoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--encoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
## Decoder options
parser.add_argument("--decoder_model_type", default="gpt2", type=str,
help="The decoder model architecture to be fine-tuned.")
parser.add_argument("--decoder_model_name_or_path", default="gpt2", type=str,
help="The decoder model checkpoint for weights initialization.")
parser.add_argument("--decoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--decoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
parser.add_argument("--max_seq_length", default=512, type=int,
help="Optional input sequence length before tokenization. The sequence will be dropped if it is longer the max_seq_length")
parser.add_argument("--finetune_decoder", default=False, type=bool,
help="Uses finetuned decoder in output dir if true.")
## Variational auto-encoder(check this)
parser.add_argument("--top_k", type=int, default=0)
parser.add_argument("--top_p", type=float, default=1.0)
parser.add_argument("--prompt", type=str, default="")
parser.add_argument("--padding_text", type=str, default="")
parser.add_argument("--length", type=int, default=20)
parser.add_argument("--block_size", default=-1, type=int,
help="Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens).")
parser.add_argument("--do_lower_case", action='store_true',
help="Set this flag if you are using an uncased model.")
parser.add_argument("--use_philly", action='store_true',
help="Use Philly for computing.")
parser.add_argument('--gloabl_step_eval', type=int, default=508523,
help="Evaluate the results at the given global step")
# Load a trained Encoder model and vocabulary that you have fine-tuned
args = parser.parse_args()
global_step = args.gloabl_step_eval
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.backends.cudnn.deterministic = True
args.device = torch.device("cuda" if args.cuda else "cpu")
args.n_gpu = torch.cuda.device_count()
if args.n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
args.encoder_model_type = args.encoder_model_type.lower()
args.decoder_model_type = args.decoder_model_type.lower()
output_encoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-encoder-{}'.format(global_step))
output_decoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-decoder-{}'.format(global_step))
if not args.finetune_decoder:
output_decoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-decoder-{}'.format(global_step))
else:
output_decoder_dir = os.path.join(args.output_dir, 'checkpoint-decoder-{}'.format(global_step))
checkpoints = [ [output_encoder_dir, output_decoder_dir] ]
# Load a trained Encoder model and vocabulary that you have fine-tuned
encoder_config_class, encoder_model_class, encoder_tokenizer_class = MODEL_CLASSES[args.encoder_model_type]
model_encoder = encoder_model_class.from_pretrained(output_encoder_dir, latent_size=args.latent_size)
tokenizer_encoder = encoder_tokenizer_class.from_pretrained(args.encoder_tokenizer_name if args.encoder_tokenizer_name else args.encoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_encoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_encoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_encoder.max_len_single_sentence)
# Load a trained Decoder model and vocabulary that you have fine-tuned
if not args.finetune_decoder:
decoder_config_class, decoder_model_class, decoder_tokenizer_class = MODEL_CLASSES[args.decoder_model_type]
else:
decoder_config_class, decoder_model_class, decoder_tokenizer_class = MODEL_CLASSES["gpt2v"]
model_decoder = decoder_model_class.from_pretrained(output_decoder_dir, latent_size=args.latent_size)
tokenizer_decoder = decoder_tokenizer_class.from_pretrained(args.decoder_tokenizer_name if args.decoder_tokenizer_name else args.decoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_decoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_decoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_decoder.max_len_single_sentence)
# Chunyuan: Add Padding token to GPT2
special_tokens_dict = {'pad_token': '<PAD>', 'bos_token': '<BOS>', 'eos_token': '<EOS>'}
num_added_toks = tokenizer_decoder.add_special_tokens(special_tokens_dict)
logger.info('We have added {} tokens to GPT2'.format(num_added_toks))
model_decoder.resize_token_embeddings(len(tokenizer_decoder)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
assert tokenizer_decoder.pad_token == '<PAD>'
generator = Generator(args.n_layers, args.block_dim, args.latent_size)
if args.cuda:
generator = generator.cuda()
generator.load_state_dict(torch.load(args.generator_dir+'/generator_'+str(args.gloabl_step_eval)+'.th'))
generator.eval()
model_decoder.eval()
model_encoder.eval()
if args.save:
if not os.path.exists(args.output_dir+"/{}.txt".format(args.output_name)):
with open(args.output_dir+"/{}.txt".format(args.output_name), 'w'):
pass
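    # Generate new_sent sentences in batches: sample Gaussian noise, map it to latent codes
    # with the trained generator, and decode each latent code into text with the GPT-2 decoder.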
for i in range(int(args.new_sent/args.batch_size)):
# sample noise
noise = torch.Tensor(np.random.normal(0, 1, (args.batch_size, args.latent_size))).to(args.device)
new_z = generator(noise).data
# create new sent
sents = rollout_test(model_decoder, new_z, tokenizer_decoder, args.max_seq_length, args.batch_size, args.top_k, args.top_p)
if args.save:
with open(args.output_dir+"/{}.txt".format(args.output_name), 'a') as file:
for i in sents:
file.write(i+"\n")
else:
for i in sents:
logger.info(i)
| [] |
2024-01-10 | rizwandel/optagan | optagan~wgan_gp_train.py | from __future__ import absolute_import, division, print_function, unicode_literals
import argparse
import logging
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import numpy as np
from modules.gan import Generator, Critic
import glob
import os
import pickle
import random
import torch.nn.functional as F
from torch.utils.data import DataLoader, Dataset, SequentialSampler, RandomSampler, TensorDataset
from torch.utils.data.distributed import DistributedSampler
from tqdm import tqdm, trange
from func import GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig, BertConfig
from func import GPT2LMHeadModel, GPT2Tokenizer, GPT2ForLatentConnector
from func import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer
from func import XLNetLMHeadModel, XLNetTokenizer
from func import TransfoXLLMHeadModel, TransfoXLTokenizer
from func import BertForLatentConnector, BertTokenizer
from collections import defaultdict
from utils import (TextDataset_Split, TextDataset_2Tokenizers, BucketingDataLoader)
import pdb
from modules.utils import (calc_blue_parallel_func, pad_seq, rollout, rollout_test)
from transformers.modeling_utils import top_k_top_p_filtering
MAX_LENGTH = int(10000) # Hardcoded max length to avoid infinite loop
ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in (GPT2Config, OpenAIGPTConfig, XLNetConfig, TransfoXLConfig)), ())
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MODEL_CLASSES = {
'gpt2': (GPT2Config, GPT2ForLatentConnector, GPT2Tokenizer),
'bert': (BertConfig, BertForLatentConnector, BertTokenizer)
}
def load_and_cache_examples(args, tokenizer):
if isinstance(tokenizer, list):
dataset = TextDataset_2Tokenizers(tokenizer, args, args.train_data_file, block_size=args.block_size)
else:
dataset = TextDataset_Split(tokenizer, args, args.train_data_file, block_size=args.block_size)
return dataset
def build_dataload_and_cache_examples(args, tokenizer):
if isinstance(tokenizer, list):
args.batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu)
file_path=args.train_data_file
dataloader = BucketingDataLoader(file_path, args.batch_size, args.max_seq_length, tokenizer, args, bucket=100, shuffle=True)
else:
pass
return dataloader
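# WGAN-GP gradient penalty: score random interpolations between real and fake latent codes
# and penalize the critic when the gradient norm w.r.t. those interpolated samples deviates from 1.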
def compute_grad_penalty(critic, real_data, fake_data):
B = real_data.size(0)
alpha = torch.FloatTensor(np.random.random((B, 1)))
if args.cuda:
alpha = alpha.cuda()
sample = alpha*real_data + (1-alpha)*fake_data
sample.requires_grad_(True)
score = critic(sample)
outputs = torch.FloatTensor(B, 1).fill_(1.0) #args.latent_size
outputs.requires_grad_(False)
if args.cuda:
outputs = outputs.cuda()
grads = autograd.grad(
outputs=score,
inputs=sample,
grad_outputs=outputs,
create_graph=True,
retain_graph=True,
only_inputs=True
)[0]
#grads = grads.view(B, -1)
grad_penalty = ((grads.norm(2, dim=1) - 1.) ** 2).mean()
return grad_penalty
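# One training epoch: the critic is updated on every batch with the Wasserstein loss plus the
# gradient penalty, while the generator is updated once every n_critic batches.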
def train(epoch):
model_encoder.eval()
model_decoder.eval()
generator.train()
critic.train()
c_train_loss = 0.
g_train_loss = 0.
g_batches = 0
for i, x in enumerate(train_loader):
x = x[0]
if args.cuda:
x = x.cuda()
# Generate noise
B = args.per_gpu_train_batch_size
c_optimizer.zero_grad()
noise = torch.from_numpy(np.random.normal(0, 1, (B,
args.latent_size))).float()
if args.cuda:
noise = noise.cuda()
# Get original text latent embeddings
with torch.no_grad():
pooled_hidden_fea = model_encoder(x, attention_mask=(x > 0).float())[1]
mean, logvar = model_encoder.linear(pooled_hidden_fea).chunk(2, -1)
z_real = mean.squeeze(1)
# train critic
z_fake = generator(noise)
real_score = critic(z_real)
fake_score = critic(z_fake)
grad_penalty = compute_grad_penalty(critic, z_real.data, z_fake.data)
c_loss = -torch.mean(real_score) + torch.mean(fake_score) + \
args.gp_lambda*grad_penalty
c_train_loss += c_loss.item()
c_loss.backward()
c_optimizer.step()
# train generator
if i % args.n_critic == 0:
g_batches += 1
g_optimizer.zero_grad()
fake_score = critic(generator(noise))
g_loss = -torch.mean(fake_score)
g_train_loss += g_loss.item()
g_loss.backward()
g_optimizer.step()
if args.interval > 0 and i % args.interval == 0:
logger.info('Epoch: {} | Batch: {}/{} ({:.0f}%) | G Loss: {:.6f} | C Loss: {:.6f}'.format(
epoch, args.batch_size*i, len(train_loader.dataset),
100.*(args.batch_size*i)/len(train_loader.dataset),
g_loss.item(), c_loss.item()
))
test_noise = torch.Tensor(np.random.normal(0, 1, (1, args.latent_size))).to(args.device)
test_new_z = generator(test_noise).data
# create new sent
test_z = rollout_test(model_decoder, test_new_z, tokenizer_decoder, args.max_seq_length, 1, 0, 1)
logger.info("Text: {}".format(test_z))
g_train_loss /= g_batches
c_train_loss /= len(train_loader)
logger.info('* (Train) Epoch: {} | G Loss: {:.4f} | C Loss: {:.4f}'.format(
epoch, g_train_loss, c_train_loss
))
return (g_train_loss, c_train_loss)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=0)
parser.add_argument('--epochs', type=int, default=15)
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--gp_lambda', type=int, default=10)
parser.add_argument('--n_critic', type=int, default=5, help="Number of critic updates before each generator update")
parser.add_argument('--n_layers', type=int, default=20, help="Number of layers of generator and critic")
parser.add_argument('--block_dim', type=int, default=100)
parser.add_argument('--interval', type=int, default=10, help="Steps before logging output")
parser.add_argument('--cuda', type=bool, default=torch.cuda.is_available())
# Optimus parameters
parser.add_argument("--train_data_file", default=None, type=str, required=True,
help="The input training data file (a text file).")
parser.add_argument("--valid_data_file", default=None, type=str, required=True,
help="The input validation data file (a text file).")
parser.add_argument("--checkpoint_dir", default=None, type=str, required=True,
help="The directory where checkpoints are saved.")
parser.add_argument('--generator_dir', default=None, type=str, help="Directory where GAN models are saved")
parser.add_argument("--output_dir", default=None, type=str, required=True,
help="The output directory where the model predictions and checkpoints will be written.")
parser.add_argument("--dataset", default='Snli', type=str, help="The dataset.")
parser.add_argument("--latent_size", default=32, type=int, help="Latent space dimension.")
## Encoder options
parser.add_argument("--encoder_model_type", default="bert", type=str,
help="The encoder model architecture to be fine-tuned.")
parser.add_argument("--encoder_model_name_or_path", default="bert-base-cased", type=str,
help="The encoder model checkpoint for weights initialization.")
parser.add_argument("--encoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--encoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
## Decoder options
parser.add_argument("--decoder_model_type", default="gpt2", type=str,
help="The decoder model architecture to be fine-tuned.")
parser.add_argument("--decoder_model_name_or_path", default="bert-base-cased", type=str,
help="The decoder model checkpoint for weights initialization.")
parser.add_argument("--decoder_config_name", default="", type=str,
help="Optional pretrained config name or path if not the same as model_name_or_path")
parser.add_argument("--decoder_tokenizer_name", default="", type=str,
help="Optional pretrained tokenizer name or path if not the same as model_name_or_path")
parser.add_argument("--per_gpu_train_batch_size", default=1, type=int,
help="Batch size per GPU/CPU for training.")
parser.add_argument("--max_seq_length", default=512, type=int,
help="Optional input sequence length before tokenization. The sequence will be dropped if it is longer the max_seq_length")
## Variational auto-encoder(check this)
parser.add_argument("--prompt", type=str, default="")
parser.add_argument("--padding_text", type=str, default="")
parser.add_argument("--length", type=int, default=20)
parser.add_argument("--block_size", default=-1, type=int,
help="Optional input sequence length after tokenization."
"The training dataset will be truncated in block of this size for training."
"Default to the model max input length for single sentence inputs (take into account special tokens).")
parser.add_argument("--do_lower_case", action='store_true',
help="Set this flag if you are using an uncased model.")
parser.add_argument("--use_philly", action='store_true',
help="Use Philly for computing.")
parser.add_argument('--gloabl_step_eval', type=int, default=661,
help="Evaluate the results at the given global step")
# Load a trained Encoder model and vocabulary that you have fine-tuned
args = parser.parse_args()
global_step = args.gloabl_step_eval
torch.backends.cudnn.deterministic = True
args.device = torch.device("cuda" if args.cuda else "cpu")
args.n_gpu = torch.cuda.device_count()
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if args.n_gpu > 0:
torch.cuda.manual_seed_all(args.seed)
args.encoder_model_type = args.encoder_model_type.lower()
args.decoder_model_type = args.decoder_model_type.lower()
output_encoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-encoder-{}'.format(global_step))
output_decoder_dir = os.path.join(args.checkpoint_dir, 'checkpoint-decoder-{}'.format(global_step))
checkpoints = [ [output_encoder_dir, output_decoder_dir] ]
# Load a trained Encoder model and vocabulary that you have fine-tuned
encoder_config_class, encoder_model_class, encoder_tokenizer_class = MODEL_CLASSES[args.encoder_model_type]
model_encoder = encoder_model_class.from_pretrained(output_encoder_dir, latent_size=args.latent_size)
tokenizer_encoder = encoder_tokenizer_class.from_pretrained(args.encoder_tokenizer_name if args.encoder_tokenizer_name else args.encoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_encoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_encoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_encoder.max_len_single_sentence)
# Load a trained Decoder model and vocabulary that you have fine-tuned
decoder_config_class, decoder_model_class, decoder_tokenizer_class = MODEL_CLASSES[args.decoder_model_type]
model_decoder = decoder_model_class.from_pretrained(output_decoder_dir, latent_size=args.latent_size)
tokenizer_decoder = decoder_tokenizer_class.from_pretrained(args.decoder_tokenizer_name if args.decoder_tokenizer_name else args.decoder_model_name_or_path, do_lower_case=args.do_lower_case)
model_decoder.to(args.device)
if args.block_size <= 0:
args.block_size = tokenizer_decoder.max_len_single_sentence # Our input block size will be the max possible for the model
args.block_size = min(args.block_size, tokenizer_decoder.max_len_single_sentence)
# Chunyuan: Add Padding token to GPT2
special_tokens_dict = {'pad_token': '<PAD>', 'bos_token': '<BOS>', 'eos_token': '<EOS>'}
num_added_toks = tokenizer_decoder.add_special_tokens(special_tokens_dict)
logger.info('We have added {} tokens to GPT2'.format(num_added_toks))
model_decoder.resize_token_embeddings(len(tokenizer_decoder)) # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e. the length of the tokenizer.
assert tokenizer_decoder.pad_token == '<PAD>'
train_loader = build_dataload_and_cache_examples(args, [tokenizer_encoder, tokenizer_decoder])
generator = Generator(args.n_layers, args.block_dim,args.latent_size)
critic = Critic(args.n_layers, args.block_dim,args.latent_size)
if args.generator_dir!=None:
generator.load_state_dict(torch.load(args.generator_dir+'/generator_'+str(args.gloabl_step_eval)+'.th'))
critic.load_state_dict(torch.load(args.generator_dir+'/critic_'+str(args.gloabl_step_eval)+'.th'))
g_optimizer = optim.Adam(generator.parameters(), lr=args.lr, betas=(0.5, 0.999))
c_optimizer = optim.Adam(critic.parameters(), lr=args.lr, betas=(0.5, 0.999))
if args.cuda:
generator = generator.cuda()
critic = critic.cuda()
logger.info('G Parameters:{}'.format(sum([p.numel() for p in generator.parameters() if \
p.requires_grad])))
logger.info('C Parameters:{}'.format(sum([p.numel() for p in critic.parameters() if \
p.requires_grad])))
best_bleu = 0
reference = list()
with(open(args.valid_data_file,"r")) as valid:
for sents in valid:
reference.append(sents.replace("\n", ""))
for epoch in range(1, args.epochs + 1):
g_loss, c_loss = train(epoch)
data_test = list()
for i in range(2):
test_noise = torch.Tensor(np.random.normal(0, 1, (250, args.latent_size))).to(args.device)
test_z = generator(test_noise).data
new_sent = rollout_test(model_decoder, test_z, tokenizer_decoder, args.max_seq_length, 250, 0, 1)
data_test.extend(new_sent)
p_reference = random.sample(reference, 500)
bleu = calc_blue_parallel_func(p_reference, data_test, 2, 500)
b_bleu = calc_blue_parallel_func(data_test, p_reference, 2, 500)
logger.info("Bleu-2:{:0.3f} | B-Bleu-2:{:0.3f}".format(bleu, b_bleu))
if (bleu+b_bleu) > best_bleu:
best_bleu = bleu + b_bleu
logger.info('* Saving. Best Score:{:0.3f} | Bleu-2:{:0.3f} | B-Bleu-2:{:0.3f}'.format(best_bleu, bleu, b_bleu))
torch.save(generator.state_dict(), args.output_dir+'/generator_'+str(args.gloabl_step_eval)+'.th')
torch.save(critic.state_dict(), args.output_dir+'/critic_'+str(args.gloabl_step_eval)+'.th') | [] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | sagemaker~generative-ai~1-Chatbot~common_code~inference_lib.py | import boto3
import time
import json
def descirbe_endpoint(endpoint_name):
'''
    Check whether the endpoint exists. If it is still being created, wait until creation finishes.
'''
sm_client = boto3.client("sagemaker")
while(True):
response = sm_client.describe_endpoint(
EndpointName= endpoint_name
)
status = response['EndpointStatus']
if status == 'Creating':
print("Endpoint is ", status)
time.sleep(60)
else:
print("Endpoint is ", status)
break
def invoke_inference(endpoint_name, prompt):
'''
    Invoke the endpoint with a KoAlpaca prompt.
'''
client = boto3.client("sagemaker-runtime")
content_type = "text/plain"
response = client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=prompt
)
#print(response["Body"].read())
res = response["Body"].read().decode()
print (eval(res)[0]['generated_text'])
def invoke_inference_DJ(endpoint_name, prompt):
    '''
    A variant of invoke_inference.
    Named differently for now because Gonsoo's existing code already uses invoke_inference;
    planned to be merged into invoke_inference later.

    Invoke the endpoint with a KoAlpaca prompt.
    '''
client = boto3.client("sagemaker-runtime")
content_type = "application/json"
response = client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType=content_type,
Body=json.dumps(prompt)
)
res = response["Body"].read().decode()
print (res)
return res
def query_endpoint_with_text_payload(plain_text, endpoint_name, content_type="text/plain"):
'''
    Used when content_type is text/plain.
'''
client = boto3.client("runtime.sagemaker")
response = client.invoke_endpoint(
EndpointName=endpoint_name, ContentType=content_type, Body=plain_text
)
return response
def parse_response_text_model(query_response):
'''
    Used when content_type is text/plain.
'''
model_predictions = json.loads(query_response["Body"].read())
# print("model_predictions: \n", model_predictions)
generated_text = model_predictions[0]["generated_text"]
return generated_text
def parse_response_json_model(query_response):
'''
    Used when content_type is application/json.
'''
model_predictions = json.loads(query_response)
# print("model_predictions: \n", model_predictions)
# print("model_predictions: \n", type(model_predictions))
generated_text = model_predictions[0][0]["generated_text"]
return generated_text
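# Parse an arbitrarily nested endpoint response: walk every element, collect each
# "generated_text" field, and return a single stripped string or a list when there are several.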
def parse_response(query_response):
def traverse(o, tree_types=(list, tuple)):
if isinstance(o, tree_types):
for value in o:
for subvalue in traverse(value, tree_types):
yield subvalue
else:
yield o
data = eval(query_response)
listRes = []
for value in traverse(data):
listRes.append(value["generated_text"])
if len(listRes) >= 2: return listRes
else: return listRes[0].strip()
################################################
# Embedding Handler
################################################
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase
from typing import Any, Dict, List, Optional
class SagemakerEndpointEmbeddingsJumpStart(SagemakerEndpointEmbeddings):
def embed_documents(self, texts: List[str], chunk_size: int = 5) -> List[List[float]]:
"""Compute doc embeddings using a SageMaker Inference Endpoint.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns:
List of embeddings, one for each text.
"""
results = []
_chunk_size = len(texts) if chunk_size > len(texts) else chunk_size
# print("text size: ", len(texts))
# print("_chunk_size: ", _chunk_size)
for i in range(0, len(texts), _chunk_size):
response = self._embedding_func(texts[i : i + _chunk_size])
results.extend(response)
return results
import numpy as np
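# Content handler for a KoSimCSE-RoBERTa embedding endpoint: it serializes the input texts to
# JSON and reshapes the returned embeddings, whose nesting differs depending on the request shape.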
class KoSimCSERobertaContentHandler(EmbeddingsContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
input_str = json.dumps({"inputs": prompt, **model_kwargs})
return input_str.encode("utf-8")
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
ndim = np.array(response_json).ndim
# print("response_json ndim: \n", ndim)
# print("response_json shape: \n", np.array(response_json).shape)
if ndim == 4:
# Original shape (1, 1, n, 768)
emb = response_json[0][0][0]
emb = np.expand_dims(emb, axis=0).tolist()
# print("emb shape: ", np.array(emb).shape)
# print("emb TYPE: ", type(emb))
elif ndim == 2:
# Original shape (n, 1)
# print(response_json[0])
emb = []
for ele in response_json:
# print(np.array(response_json[0]).shape)
e = ele[0][0]
#emb = np.expand_dims(emb, axis=0).tolist()
# print("emb shape: ", np.array(emb).shape)
# print("emb TYPE: ", type(emb))
emb.append(e)
# print("emb_list shape: ", np.array(emb).shape)
# print("emb_list TYPE: ", type(emb))
else:
print(f"Other # of dimension: {ndim}")
emb = None
return emb
################################################
# LLM Handler
################################################
from langchain.llms.sagemaker_endpoint import LLMContentHandler
import json
class KoAlpacaContentHandler(LLMContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
input_str = json.dumps({"text_inputs": prompt, **model_kwargs})
return input_str.encode("utf-8")
def transform_output(self, output: bytes) -> str:
print("In KoAlpacaContentHandler")
# print("output: ", output)
response_json = json.loads(output.read().decode("utf-8"))
print("response_json: ", response_json)
# return response_json["generated_texts"][0]
doc = response_json[0]['generated_text']
doc = json.loads(doc)
doc = doc['text_inputs']
return doc
| [] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~aws-gen-ai-kr~utils~backup~rag-parent-document-original.py | ############################################################
############################################################
# RAG-related functions
############################################################
############################################################
import json
import boto3
import numpy as np
import pandas as pd
from copy import deepcopy
from pprint import pprint
from operator import itemgetter
from itertools import chain as ch
from typing import Any, Dict, List, Optional, List, Tuple
from opensearchpy import OpenSearch, RequestsHttpConnection
from utils import print_ww
from utils.opensearch import opensearch_utils
from langchain.schema import Document
from langchain.chains import RetrievalQA
from langchain.schema import BaseRetriever
from langchain.prompts import PromptTemplate
from langchain.retrievers import AmazonKendraRetriever
from langchain.schema.output_parser import StrOutputParser
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
import threading
from functools import partial
from multiprocessing.pool import ThreadPool
#pool = ThreadPool(processes=2)
#rag_fusion_pool = ThreadPool(processes=5)
############################################################
# Prompt repo
############################################################
class prompt_repo():
template_types = ["web_search", "sci_fact", "fiqa", "trec_news"]
@staticmethod
def get_rag_fusion():
prompt = """
\n\nHuman:
You are a helpful assistant that generates multiple search queries based on a single input query.
Generate multiple search queries related to: {query}
OUTPUT ({query_augmentation_size} queries):
\n\nAssistant:"""
prompt_template = PromptTemplate(
template=prompt, input_variables=["query", "query_augmentation_size"]
)
return prompt_template
@classmethod
def get_hyde(cls, template_type):
assert template_type in cls.template_types, "Check your template_type"
# There are a few different templates to choose from
# These are just different ways to generate hypothetical documents
hyde_template = {
"web_search": """\n\nHuman:\nPlease write a concise passage to answer the question\nQuestion: {query}\nPassage:\n\nAssistant:""",
"sci_fact": """\n\nHuman:\nPlease write a concise scientific paper passage to support/refute the claim\nClaim: {query}\nPassage:\n\nAssistant:""",
"fiqa": """\n\nHuman:\nPlease write a concise financial article passage to answer the question\nQuestion: {query}\nPassage:\n\nAssistant:""",
"trec_news": """\n\nHuman:\nPlease write a concise news passage about the topic\nTopic: {query}\nPassage:\n\nAssistant:"""
}
return PromptTemplate(template=hyde_template[template_type], input_variables=["query"])
############################################################
# RetrievalQA (Langchain)
############################################################
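# Thin wrapper around LangChain's RetrievalQA: builds a retriever on top of the OpenSearch
# vector store (with an optional boolean filter) and runs the chosen chain type on the query.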
def run_RetrievalQA(**kwargs):
chain_types = ["stuff", "map_reduce", "refine"]
assert "llm" in kwargs, "Check your llm"
assert "query" in kwargs, "Check your query"
assert "prompt" in kwargs, "Check your prompt"
assert "vector_db" in kwargs, "Check your vector_db"
assert kwargs.get("chain_type", "stuff") in chain_types, f'Check your chain_type, {chain_types}'
qa = RetrievalQA.from_chain_type(
llm=kwargs["llm"],
chain_type=kwargs.get("chain_type", "stuff"),
retriever=kwargs["vector_db"].as_retriever(
search_type="similarity",
search_kwargs={
"k": kwargs.get("k", 5),
"boolean_filter": opensearch_utils.get_filter(
filter=kwargs.get("boolean_filter", [])
),
}
),
return_source_documents=True,
chain_type_kwargs={
"prompt": kwargs["prompt"],
"verbose": kwargs.get("verbose", False),
},
verbose=kwargs.get("verbose", False)
)
return qa(kwargs["query"])
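# Illustrative sketch of calling run_RetrievalQA. `llm_text`, `qa_prompt` and `vector_db`
# are assumptions supplied by the caller (e.g. a Bedrock LLM, a PromptTemplate with
# {context}/{question} placeholders, and an OpenSearchVectorSearch store).
def _example_run_retrieval_qa(llm_text, qa_prompt, vector_db):
    response = run_RetrievalQA(
        llm=llm_text,
        query="What is a parent-document retriever?",
        prompt=qa_prompt,
        vector_db=vector_db,
        k=5,
        verbose=False,
    )
    # RetrievalQA returns a dict with "result" and "source_documents"
    return response["result"], response["source_documents"]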
def run_RetrievalQA_kendra(query, llm_text, PROMPT, kendra_index_id, k, aws_region, verbose):
qa = RetrievalQA.from_chain_type(
llm=llm_text,
chain_type="stuff",
retriever=AmazonKendraRetriever(
index_id=kendra_index_id,
region_name=aws_region,
top_k=k,
attribute_filter = {
"EqualsTo": {
"Key": "_language_code",
"Value": {
"StringValue": "ko"
}
},
}
),
return_source_documents=True,
chain_type_kwargs={
"prompt": PROMPT,
"verbose": verbose,
},
verbose=verbose
)
result = qa(query)
return result
#################################################################
# Document Retriever with custom function: return List(documents)
#################################################################
class retriever_utils():
runtime_client = boto3.Session().client('sagemaker-runtime')
pool = ThreadPool(processes=2)
rag_fusion_pool = ThreadPool(processes=5)
hyde_pool = ThreadPool(processes=4)
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size=512,
chunk_overlap=0,
separators=["\n\n", "\n", ".", " ", ""],
length_function=len,
)
token_limit = 300
@classmethod
# semantic search based
def get_semantic_similar_docs_by_langchain(cls, **kwargs):
#print(f"Thread={threading.get_ident()}, Process={os.getpid()}")
search_types = ["approximate_search", "script_scoring", "painless_scripting"]
space_types = ["l2", "l1", "linf", "cosinesimil", "innerproduct", "hammingbit"]
assert "vector_db" in kwargs, "Check your vector_db"
assert "query" in kwargs, "Check your query"
assert kwargs.get("search_type", "approximate_search") in search_types, f'Check your search_type: {search_types}'
assert kwargs.get("space_type", "l2") in space_types, f'Check your space_type: {space_types}'
results = kwargs["vector_db"].similarity_search_with_score(
query=kwargs["query"],
k=kwargs.get("k", 5),
search_type=kwargs.get("search_type", "approximate_search"),
space_type=kwargs.get("space_type", "l2"),
boolean_filter=opensearch_utils.get_filter(
filter=kwargs.get("boolean_filter", [])
),
)
#print ("\nsemantic search args: ")
#print (results)
# pprint ({
# "k": kwargs.get("k", 5),
# "search_type": kwargs.get("search_type", "approximate_search"),
# "space_type": kwargs.get("space_type", "l2"),
# "boolean_filter": opensearch_utils.get_filter(filter=kwargs.get("boolean_filter", []))
# })
if kwargs.get("hybrid", False) and results:
max_score = results[0][1]
new_results = []
for doc in results:
                normalized_score = float(doc[1]/max_score)
                new_results.append((doc[0], normalized_score))
results = deepcopy(new_results)
return results
@classmethod
# semantic search based
def get_semantic_similar_docs(cls, **kwargs):
assert "query" in kwargs, "Check your query"
assert "k" in kwargs, "Check your k"
assert "os_client" in kwargs, "Check your os_client"
assert "index_name" in kwargs, "Check your index_name"
def normalize_search_results(search_results):
hits = (search_results["hits"]["hits"])
max_score = float(search_results["hits"]["max_score"])
for hit in hits:
hit["_score"] = float(hit["_score"]) / max_score
search_results["hits"]["max_score"] = hits[0]["_score"]
search_results["hits"]["hits"] = hits
return search_results
query = opensearch_utils.get_query(
query=kwargs["query"],
filter=kwargs.get("boolean_filter", []),
search_type="semantic", # enable semantic search
vector_field="vector_field", # for semantic search check by using index_info = os_client.indices.get(index=index_name)
vector=kwargs["llm_emb"].embed_query(kwargs["query"]),
k=kwargs["k"]
)
query["size"] = kwargs["k"]
#print ("\nsematic search query: ")
#pprint (query)
search_results = opensearch_utils.search_document(
os_client=kwargs["os_client"],
query=query,
index_name=kwargs["index_name"]
)
results = []
if search_results["hits"]["hits"]:
search_results = normalize_search_results(search_results)
for res in search_results["hits"]["hits"]:
metadata = res["_source"]["metadata"]
metadata["id"] = res["_id"]
doc = Document(
page_content=res["_source"]["text"],
metadata=metadata
)
if kwargs.get("hybrid", False):
results.append((doc, res["_score"]))
else:
results.append((doc))
return results
@classmethod
# lexical(keyword) search based (using Amazon OpenSearch)
def get_lexical_similar_docs(cls, **kwargs):
assert "query" in kwargs, "Check your query"
assert "k" in kwargs, "Check your k"
assert "os_client" in kwargs, "Check your os_client"
assert "index_name" in kwargs, "Check your index_name"
def normalize_search_results(search_results):
hits = (search_results["hits"]["hits"])
max_score = float(search_results["hits"]["max_score"])
for hit in hits:
hit["_score"] = float(hit["_score"]) / max_score
search_results["hits"]["max_score"] = hits[0]["_score"]
search_results["hits"]["hits"] = hits
return search_results
query = opensearch_utils.get_query(
query=kwargs["query"],
minimum_should_match=kwargs.get("minimum_should_match", 0),
filter=kwargs["filter"]
)
query["size"] = kwargs["k"]
#print ("\nlexical search query: ")
#pprint (query)
search_results = opensearch_utils.search_document(
os_client=kwargs["os_client"],
query=query,
index_name=kwargs["index_name"]
)
results = []
if search_results["hits"]["hits"]:
search_results = normalize_search_results(search_results)
for res in search_results["hits"]["hits"]:
metadata = res["_source"]["metadata"]
metadata["id"] = res["_id"]
doc = Document(
page_content=res["_source"]["text"],
metadata=metadata
)
if kwargs.get("hybrid", False):
results.append((doc, res["_score"]))
else:
results.append((doc))
return results
@classmethod
# rag-fusion based
def get_rag_fusion_similar_docs(cls, **kwargs):
search_types = ["approximate_search", "script_scoring", "painless_scripting"]
space_types = ["l2", "l1", "linf", "cosinesimil", "innerproduct", "hammingbit"]
assert "llm_emb" in kwargs, "Check your llm_emb"
assert "query" in kwargs, "Check your query"
assert "query_transformation_prompt" in kwargs, "Check your query_transformation_prompt"
assert kwargs.get("search_type", "approximate_search") in search_types, f'Check your search_type: {search_types}'
assert kwargs.get("space_type", "l2") in space_types, f'Check your space_type: {space_types}'
assert kwargs.get("llm_text", None) != None, "Check your llm_text"
llm_text = kwargs["llm_text"]
query_augmentation_size = kwargs["query_augmentation_size"]
query_transformation_prompt = kwargs["query_transformation_prompt"]
generate_queries = (
{
"query": itemgetter("query"),
"query_augmentation_size": itemgetter("query_augmentation_size")
}
| query_transformation_prompt
| llm_text
| StrOutputParser()
| (lambda x: x.split("\n"))
)
rag_fusion_query = generate_queries.invoke(
{
"query": kwargs["query"],
"query_augmentation_size": kwargs["query_augmentation_size"]
}
)
rag_fusion_query = [query for query in rag_fusion_query if query != ""]
if len(rag_fusion_query) > query_augmentation_size: rag_fusion_query = rag_fusion_query[-query_augmentation_size:]
rag_fusion_query.insert(0, kwargs["query"])
if kwargs["verbose"]:
print("===== RAG-Fusion Queries =====")
print(rag_fusion_query)
tasks = []
for query in rag_fusion_query:
semantic_search = partial(
cls.get_semantic_similar_docs,
#vector_db=kwargs["vector_db"],
os_client=kwargs["os_client"],
index_name=kwargs["index_name"],
query=query,
k=kwargs["k"],
boolean_filter=kwargs.get("boolean_filter", []),
llm_emb=kwargs["llm_emb"],
hybrid=True
)
tasks.append(cls.rag_fusion_pool.apply_async(semantic_search,))
rag_fusion_docs = [task.get() for task in tasks]
similar_docs = cls.get_ensemble_results(
doc_lists=rag_fusion_docs,
weights=[1/(query_augmentation_size+1)]*(query_augmentation_size+1), #query_augmentation_size + original query
algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
c=60,
k=kwargs["k"],
)
return similar_docs
@classmethod
# HyDE based
def get_hyde_similar_docs(cls, **kwargs):
def _get_hyde_response(query, prompt, llm_text):
chain = (
{
"query": itemgetter("query")
}
| prompt
| llm_text
| StrOutputParser()
)
return chain.invoke({"query": query})
search_types = ["approximate_search", "script_scoring", "painless_scripting"]
space_types = ["l2", "l1", "linf", "cosinesimil", "innerproduct", "hammingbit"]
assert "llm_emb" in kwargs, "Check your llm_emb"
assert "query" in kwargs, "Check your query"
assert "hyde_query" in kwargs, "Check your hyde_query"
assert kwargs.get("search_type", "approximate_search") in search_types, f'Check your search_type: {search_types}'
assert kwargs.get("space_type", "l2") in space_types, f'Check your space_type: {space_types}'
assert kwargs.get("llm_text", None) != None, "Check your llm_text"
query = kwargs["query"]
llm_text = kwargs["llm_text"]
hyde_query = kwargs["hyde_query"]
tasks = []
for template_type in hyde_query:
hyde_response = partial(
_get_hyde_response,
query=query,
prompt=prompt_repo.get_hyde(template_type),
llm_text=llm_text
)
tasks.append(cls.hyde_pool.apply_async(hyde_response,))
hyde_answers = [task.get() for task in tasks]
hyde_answers.insert(0, query)
tasks = []
for hyde_answer in hyde_answers:
semantic_search = partial(
cls.get_semantic_similar_docs,
os_client=kwargs["os_client"],
index_name=kwargs["index_name"],
query=hyde_answer,
k=kwargs["k"],
boolean_filter=kwargs.get("boolean_filter", []),
llm_emb=kwargs["llm_emb"],
hybrid=True
)
tasks.append(cls.hyde_pool.apply_async(semantic_search,))
hyde_docs = [task.get() for task in tasks]
hyde_doc_size = len(hyde_docs)
similar_docs = cls.get_ensemble_results(
doc_lists=hyde_docs,
weights=[1/(hyde_doc_size)]*(hyde_doc_size), #query_augmentation_size + original query
algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
c=60,
k=kwargs["k"],
)
if kwargs["verbose"]:
print("===== HyDE Answers =====")
print(hyde_answers)
return similar_docs
@classmethod
# ParentDocument based
def get_parent_document_similar_docs(cls, **kwargs):
def _get_parent_docs(child_search_results, **kwargs):
parent_info = {}
for rank, (doc, score) in enumerate(child_search_results):
parent_id = doc.metadata["parent_id"]
if parent_id not in parent_info:
parent_info[parent_id] = (rank+1, score)
parent_ids = sorted(parent_info.items(), key=lambda x: x[1], reverse=False)
parent_ids = list(map(lambda x:x[0], parent_ids))
parent_docs = opensearch_utils.get_documents_by_ids(
os_client=kwargs["os_client"],
ids=parent_ids,
index_name=kwargs["index_name"],
)
results = []
if parent_docs["docs"]:
for res in parent_docs["docs"]:
doc_id = res["_id"]
doc = Document(
page_content=res["_source"]["text"],
metadata=res["_source"]["metadata"]
)
if kwargs["hybrid"]:
results.append((doc, parent_info[doc_id][1]))
else:
results.append((doc))
return results
assert "llm_emb" in kwargs, "Check your llm_emb"
assert "query" in kwargs, "Check your query"
query = kwargs["query"]
child_search_results = cls.get_semantic_similar_docs(
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=query,
k=kwargs["k"],
boolean_filter=kwargs["boolean_filter"],
hybrid=True
)
similar_docs = _get_parent_docs(
child_search_results,
os_client=kwargs["os_client"],
index_name=kwargs["index_name"],
hybrid=True
)
if kwargs["verbose"]:
print("===== ParentDocument =====")
print (f'filter: {kwargs["boolean_filter"]}')
print (f'# child_docs: {len(child_search_results)}')
print (f'# parent docs: {len(similar_docs)}')
print (f'# duplicates: {len(child_search_results)-len(similar_docs)}')
return similar_docs
@classmethod
def get_rerank_docs(cls, **kwargs):
assert "reranker_endpoint_name" in kwargs, "Check your reranker_endpoint_name"
assert "k" in kwargs, "Check your k"
contexts, query, llm_text, rerank_queries = kwargs["context"], kwargs["query"], kwargs["llm_text"], {"inputs":[]}
exceed_info = []
for idx, (context, score) in enumerate(contexts):
page_content = context.page_content
token_size = llm_text.get_num_tokens(query+page_content)
exceed_flag = False
if token_size > cls.token_limit:
exceed_flag = True
splited_docs = cls.text_splitter.split_documents([context])
if kwargs["verbose"]:
print(f"\n[Exeeds EMB token limit] Number of chunk_docs after split and chunking= {len(splited_docs)}\n")
partial_set, length = [], []
for splited_doc in splited_docs:
rerank_queries["inputs"].append({"text": query, "text_pair": splited_doc.page_content})
length.append(llm_text.get_num_tokens(splited_doc.page_content))
partial_set.append(len(rerank_queries["inputs"])-1)
else:
rerank_queries["inputs"].append({"text": query, "text_pair": page_content})
if exceed_flag:
exceed_info.append([idx, exceed_flag, partial_set, length])
else:
exceed_info.append([idx, exceed_flag, len(rerank_queries["inputs"])-1, None])
rerank_queries = json.dumps(rerank_queries)
response = cls.runtime_client.invoke_endpoint(
EndpointName=kwargs["reranker_endpoint_name"],
ContentType="application/json",
Accept="application/json",
Body=rerank_queries
)
outs = json.loads(response['Body'].read().decode()) ## for json
rerank_contexts = []
for idx, exceed_flag, partial_set, length in exceed_info:
if not exceed_flag:
rerank_contexts.append((contexts[idx][0], outs[partial_set]["score"]))
else:
partial_scores = [outs[partial_idx]["score"] for partial_idx in partial_set]
partial_scores = np.average(partial_scores, axis=0, weights=length)
rerank_contexts.append((contexts[idx][0], partial_scores))
#rerank_contexts = [(contexts[idx][0], out["score"]) for idx, out in enumerate(outs)]
rerank_contexts = sorted(
rerank_contexts,
key=lambda x: x[1],
reverse=True
)
return rerank_contexts[:kwargs["k"]]
@classmethod
# hybrid (lexical + semantic) search based
def search_hybrid(cls, **kwargs):
assert "query" in kwargs, "Check your query"
assert "llm_emb" in kwargs, "Check your llm_emb"
assert "index_name" in kwargs, "Check your index_name"
assert "os_client" in kwargs, "Check your os_client"
rag_fusion = kwargs.get("rag_fusion", False)
hyde = kwargs.get("hyde", False)
parent_document = kwargs.get("parent_document", False)
assert (rag_fusion + hyde + parent_document) <= 1, "choose only one among RAG-FUSION, HyDE and ParentDocument"
if rag_fusion:
assert "query_augmentation_size" in kwargs, "if you use RAG-FUSION, Check your query_augmentation_size"
if hyde:
assert "hyde_query" in kwargs, "if you use HyDE, Check your hyde_query"
verbose = kwargs.get("verbose", False)
async_mode = kwargs.get("async_mode", True)
reranker = kwargs.get("reranker", False)
search_filter_semantic, search_filter_lexical = deepcopy(kwargs.get("filter", [])), deepcopy(kwargs.get("filter", []))
if parent_document:
search_filter_semantic.append({"term": {"metadata.family_tree": "child"}})
search_filter_lexical.append({"term": {"metadata.family_tree": "parent"}})
def do_sync():
if rag_fusion:
similar_docs_semantic = cls.get_rag_fusion_similar_docs(
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True,
llm_text=kwargs.get("llm_text", None),
query_augmentation_size=kwargs["query_augmentation_size"],
query_transformation_prompt=kwargs.get("query_transformation_prompt", None),
fusion_algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
verbose=kwargs.get("verbose", False),
)
elif hyde:
similar_docs_semantic = cls.get_hyde_similar_docs(
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True,
llm_text=kwargs.get("llm_text", None),
hyde_query=kwargs["hyde_query"],
fusion_algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
verbose=kwargs.get("verbose", False),
)
elif parent_document:
similar_docs_semantic = cls.get_parent_document_similar_docs(
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True,
verbose=kwargs.get("verbose", False),
)
else:
similar_docs_semantic = cls.get_semantic_similar_docs(
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True
)
similar_docs_keyword = cls.get_lexical_similar_docs(
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
minimum_should_match=kwargs.get("minimum_should_match", 0),
filter=search_filter_lexical,
hybrid=True
)
return similar_docs_semantic, similar_docs_keyword
def do_async():
if rag_fusion:
semantic_search = partial(
cls.get_rag_fusion_similar_docs,
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True,
llm_text=kwargs.get("llm_text", None),
query_augmentation_size=kwargs["query_augmentation_size"],
query_transformation_prompt=kwargs.get("query_transformation_prompt", None),
fusion_algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
verbose=kwargs.get("verbose", False),
)
elif hyde:
semantic_search = partial(
cls.get_hyde_similar_docs,
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True,
llm_text=kwargs.get("llm_text", None),
hyde_query=kwargs["hyde_query"],
fusion_algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
verbose=kwargs.get("verbose", False),
)
elif parent_document:
semantic_search = partial(
cls.get_parent_document_similar_docs,
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True,
verbose=kwargs.get("verbose", False),
)
else:
semantic_search = partial(
cls.get_semantic_similar_docs,
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
llm_emb=kwargs["llm_emb"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
boolean_filter=search_filter_semantic,
hybrid=True
)
lexical_search = partial(
cls.get_lexical_similar_docs,
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
query=kwargs["query"],
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
minimum_should_match=kwargs.get("minimum_should_match", 0),
filter=search_filter_lexical,
hybrid=True
)
semantic_pool = cls.pool.apply_async(semantic_search,)
lexical_pool = cls.pool.apply_async(lexical_search,)
similar_docs_semantic, similar_docs_keyword = semantic_pool.get(), lexical_pool.get()
return similar_docs_semantic, similar_docs_keyword
if async_mode:
similar_docs_semantic, similar_docs_keyword = do_async()
else:
similar_docs_semantic, similar_docs_keyword = do_sync()
similar_docs = cls.get_ensemble_results(
doc_lists=[similar_docs_semantic, similar_docs_keyword],
weights=kwargs.get("ensemble_weights", [.5, .5]),
algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
c=60,
k=kwargs.get("k", 5) if not reranker else int(kwargs["k"]*1.5),
)
#print (len(similar_docs_keyword), len(similar_docs_semantic), len(similar_docs))
if reranker:
reranker_endpoint_name = kwargs["reranker_endpoint_name"]
similar_docs = cls.get_rerank_docs(
llm_text=kwargs["llm_text"],
query=kwargs["query"],
context=similar_docs,
k=kwargs.get("k", 5),
reranker_endpoint_name=reranker_endpoint_name,
verbose=verbose
)
if verbose:
print("##############################")
print("async_mode")
print("##############################")
print(async_mode)
print("##############################")
print("reranker")
print("##############################")
print(reranker)
print("##############################")
print("rag_fusion")
print("##############################")
print(rag_fusion)
print("##############################")
print("HyDE")
print("##############################")
print(hyde)
print("##############################")
print("parent_document")
print("##############################")
print(parent_document)
print("##############################")
print("similar_docs_semantic")
print("##############################")
print(similar_docs_semantic)
print("##############################")
print("similar_docs_keyword")
print("##############################")
print(similar_docs_keyword)
print("##############################")
print("similar_docs")
print("##############################")
print(similar_docs)
similar_docs = list(map(lambda x:x[0], similar_docs))
return similar_docs
@classmethod
# Score fusion and re-rank (lexical + semantic)
def get_ensemble_results(cls, doc_lists: List[List[Document]], weights, algorithm="RRF", c=60, k=5) -> List[Document]:
assert algorithm in ["RRF", "simple_weighted"]
# Create a union of all unique documents in the input doc_lists
all_documents = set()
for doc_list in doc_lists:
for (doc, _) in doc_list:
all_documents.add(doc.page_content)
# Initialize the score dictionary for each document
hybrid_score_dic = {doc: 0.0 for doc in all_documents}
# Calculate RRF scores for each document
for doc_list, weight in zip(doc_lists, weights):
for rank, (doc, score) in enumerate(doc_list, start=1):
if algorithm == "RRF": # RRF (Reciprocal Rank Fusion)
score = weight * (1 / (rank + c))
elif algorithm == "simple_weighted":
score *= weight
hybrid_score_dic[doc.page_content] += score
# Sort documents by their scores in descending order
sorted_documents = sorted(
hybrid_score_dic.items(), key=lambda x: x[1], reverse=True
)
# Map the sorted page_content back to the original document objects
page_content_to_doc_map = {
doc.page_content: doc for doc_list in doc_lists for (doc, orig_score) in doc_list
}
sorted_docs = [
(page_content_to_doc_map[page_content], hybrid_score) for (page_content, hybrid_score) in sorted_documents
]
return sorted_docs[:k]
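# Tiny self-contained sketch of the fusion logic above, using toy Documents and
# made-up scores; it only demonstrates how RRF re-ranks two result lists.
def _example_rrf_fusion():
    doc_a = Document(page_content="passage about OpenSearch")
    doc_b = Document(page_content="passage about Bedrock")
    semantic_hits = [(doc_a, 0.92), (doc_b, 0.71)]  # (doc, normalized score)
    lexical_hits = [(doc_b, 1.00), (doc_a, 0.40)]
    return retriever_utils.get_ensemble_results(
        doc_lists=[semantic_hits, lexical_hits],
        weights=[0.5, 0.5],
        algorithm="RRF",
        c=60,
        k=2,
    )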
#################################################################
# Document Retriever with Langchain(BaseRetriever): return List(documents)
#################################################################
# lexical(keyword) search based (using Amazon OpenSearch)
class OpenSearchLexicalSearchRetriever(BaseRetriever):
os_client: Any
index_name: str
k = 3
minimum_should_match = 0
filter = []
def normalize_search_results(self, search_results):
hits = (search_results["hits"]["hits"])
max_score = float(search_results["hits"]["max_score"])
for hit in hits:
hit["_score"] = float(hit["_score"]) / max_score
search_results["hits"]["max_score"] = hits[0]["_score"]
search_results["hits"]["hits"] = hits
return search_results
def update_search_params(self, **kwargs):
self.k = kwargs.get("k", 3)
self.minimum_should_match = kwargs.get("minimum_should_match", 0)
self.filter = kwargs.get("filter", [])
self.index_name = kwargs.get("index_name", self.index_name)
def _reset_search_params(self, ):
self.k = 3
self.minimum_should_match = 0
self.filter = []
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun) -> List[Document]:
query = opensearch_utils.get_query(
query=query,
minimum_should_match=self.minimum_should_match,
filter=self.filter
)
query["size"] = self.k
print ("lexical search query: ")
pprint(query)
search_results = opensearch_utils.search_document(
os_client=self.os_client,
query=query,
index_name=self.index_name
)
results = []
if search_results["hits"]["hits"]:
search_results = self.normalize_search_results(search_results)
for res in search_results["hits"]["hits"]:
metadata = res["_source"]["metadata"]
metadata["id"] = res["_id"]
doc = Document(
page_content=res["_source"]["text"],
metadata=metadata
)
results.append((doc))
self._reset_search_params()
return results[:self.k]
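# Illustrative wiring of the lexical retriever; `os_client` and `index_name` are
# assumed to come from opensearch_utils elsewhere in the workshop.
def _example_lexical_retriever(os_client, index_name):
    lexical_retriever = OpenSearchLexicalSearchRetriever(
        os_client=os_client,
        index_name=index_name,
    )
    lexical_retriever.update_search_params(k=5, minimum_should_match=0, filter=[])
    return lexical_retriever.get_relevant_documents("What is Amazon OpenSearch?")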
# hybrid (lexical + semantic) search based
class OpenSearchHybridSearchRetriever(BaseRetriever):
os_client: Any
vector_db: Any
index_name: str
k = 3
minimum_should_match = 0
filter = []
fusion_algorithm: str
ensemble_weights: List
verbose = False
async_mode = True
reranker = False
reranker_endpoint_name = ""
rag_fusion = False
query_augmentation_size: Any
rag_fusion_prompt = prompt_repo.get_rag_fusion()
llm_text: Any
llm_emb: Any
hyde = False
hyde_query: Any
parent_document = False
def update_search_params(self, **kwargs):
self.k = kwargs.get("k", 3)
self.minimum_should_match = kwargs.get("minimum_should_match", 0)
self.filter = kwargs.get("filter", [])
self.index_name = kwargs.get("index_name", self.index_name)
self.fusion_algorithm = kwargs.get("fusion_algorithm", self.fusion_algorithm)
self.ensemble_weights = kwargs.get("ensemble_weights", self.ensemble_weights)
self.verbose = kwargs.get("verbose", self.verbose)
self.async_mode = kwargs.get("async_mode", True)
self.reranker = kwargs.get("reranker", False)
self.reranker_endpoint_name = kwargs.get("reranker_endpoint_name", self.reranker_endpoint_name)
self.rag_fusion = kwargs.get("rag_fusion", False)
self.query_augmentation_size = kwargs.get("query_augmentation_size", 3)
self.hyde = kwargs.get("hyde", False)
self.hyde_query = kwargs.get("hyde_query", ["web_search"])
self.parent_document = kwargs.get("parent_document", False)
def _reset_search_params(self, ):
self.k = 3
self.minimum_should_match = 0
self.filter = []
def _get_relevant_documents(self, query: str, *, run_manager: CallbackManagerForRetrieverRun) -> List[Document]:
search_hybrid_result = retriever_utils.search_hybrid(
query=query,
k=self.k,
index_name=self.index_name,
os_client=self.os_client,
filter=self.filter,
minimum_should_match=self.minimum_should_match,
fusion_algorithm=self.fusion_algorithm, # ["RRF", "simple_weighted"]
            ensemble_weights=self.ensemble_weights, # e.g. weight 0.5 for semantic search and 0.5 for keyword search
async_mode=self.async_mode,
reranker=self.reranker,
reranker_endpoint_name=self.reranker_endpoint_name,
rag_fusion=self.rag_fusion,
query_augmentation_size=self.query_augmentation_size,
query_transformation_prompt=self.rag_fusion_prompt if self.rag_fusion else "",
hyde=self.hyde,
hyde_query=self.hyde_query if self.hyde else [],
parent_document = self.parent_document,
llm_text=self.llm_text,
llm_emb=self.llm_emb,
verbose=self.verbose
)
#self._reset_search_params()
return search_hybrid_result
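# Illustrative wiring of the hybrid retriever; the clients, models and index name are
# assumptions supplied by the caller (e.g. a Bedrock LLM, an embedding model, an OpenSearch client).
def _example_hybrid_retriever(os_client, index_name, llm_text, llm_emb):
    hybrid_retriever = OpenSearchHybridSearchRetriever(
        os_client=os_client,
        index_name=index_name,
        llm_text=llm_text,
        llm_emb=llm_emb,
        fusion_algorithm="RRF",
        ensemble_weights=[0.51, 0.49],
    )
    hybrid_retriever.update_search_params(k=5, verbose=True)
    return hybrid_retriever.get_relevant_documents("What is a parent-document retriever?")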
#################################################################
# Document visualization
#################################################################
def show_context_used(context_list, limit=10):
for idx, context in enumerate(context_list):
if idx < limit:
print("-----------------------------------------------")
print(f"{idx+1}. Chunk: {len(context.page_content)} Characters")
print("-----------------------------------------------")
print_ww(context.page_content)
print_ww("metadata: \n", context.metadata)
else:
break
def show_chunk_stat(documents):
doc_len_list = [len(doc.page_content) for doc in documents]
print(pd.DataFrame(doc_len_list).describe())
avg_doc_length = lambda documents: sum([len(doc.page_content) for doc in documents])//len(documents)
avg_char_count_pre = avg_doc_length(documents)
print(f'Average length among {len(documents)} documents loaded is {avg_char_count_pre} characters.')
max_idx = doc_len_list.index(max(doc_len_list))
print("\nShow document at maximum size")
print(documents[max_idx].page_content)
#################################################################
# JumpStart Embeddings
#################################################################
class SagemakerEndpointEmbeddingsJumpStart(SagemakerEndpointEmbeddings):
def embed_documents(self, texts: List[str], chunk_size: int=1) -> List[List[float]]:
"""Compute doc embeddings using a SageMaker Inference Endpoint.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns:
List of embeddings, one for each text.
"""
results = []
_chunk_size = len(texts) if chunk_size > len(texts) else chunk_size
print("text size: ", len(texts))
print("_chunk_size: ", _chunk_size)
for i in range(0, len(texts), _chunk_size):
#print (i, texts[i : i + _chunk_size])
response = self._embedding_func(texts[i : i + _chunk_size])
#print (i, response, len(response[0].shape))
results.extend(response)
return results
class KoSimCSERobertaContentHandler(EmbeddingsContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
input_str = json.dumps({"inputs": prompt, **model_kwargs})
return input_str.encode("utf-8")
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
ndim = np.array(response_json).ndim
if ndim == 4:
# Original shape (1, 1, n, 768)
emb = response_json[0][0][0]
emb = np.expand_dims(emb, axis=0).tolist()
elif ndim == 2:
# Original shape (n, 1)
emb = []
for ele in response_json:
e = ele[0][0]
emb.append(e)
else:
print(f"Other # of dimension: {ndim}")
emb = None
return emb
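# Illustrative sketch combining the two classes above; the endpoint name and region
# are placeholders, not real resources.
def _example_jumpstart_embeddings(endpoint_name="kosimcse-roberta-endpoint", region="us-east-1"):
    llm_emb = SagemakerEndpointEmbeddingsJumpStart(
        endpoint_name=endpoint_name,
        region_name=region,
        content_handler=KoSimCSERobertaContentHandler(),
    )
    return llm_emb.embed_query("What is Amazon OpenSearch?")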
| [
"{'web_search': '\\n\\nHuman:\\nPlease write a concise passage to answer the question\\nQuestion: {query}\\nPassage:\\n\\nAssistant:', 'sci_fact': '\\n\\nHuman:\\nPlease write a concise scientific paper passage to support/refute the claim\\nClaim: {query}\\nPassage:\\n\\nAssistant:', 'fiqa': '\\n\\nHuman:\\nPlease write a concise financial article passage to answer the question\\nQuestion: {query}\\nPassage:\\n\\nAssistant:', 'trec_news': '\\n\\nHuman:\\nPlease write a concise news passage about the topic\\nTopic: {query}\\nPassage:\\n\\nAssistant:'}",
"query_transformation_prompt",
"\n \n\nHuman:\n You are a helpful assistant that generates multiple search queries based on a single input query.\n Generate multiple search queries related to: {query}\n OUTPUT ({query_augmentation_size} queries):\n \n\nAssistant:",
"['web_search', 'sci_fact', 'fiqa', 'trec_news']",
"query_augmentation_size"
] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~workshop~utils~lib_ko.py | import json
import requests
import numpy as np
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel, root_validator
from langchain.embeddings.base import Embeddings
from langchain.llms import AmazonAPIGateway
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
import os.path as osp
import pprint  # used by KoLLMSageMakerEndpoint.infer below
import boto3  # used by KoLLMSageMakerEndpoint.__init__ below
class Prompter(object):
"""
A dedicated helper to manage templates and prompt building.
"""
__slots__ = ("template", "_verbose")
def __init__(self, template_name: str = "", verbose: bool = False):
self._verbose = verbose
if not template_name:
# Enforce the default here, so the constructor can be called with '' and will not break.
template_name = "alpaca"
file_name = osp.join("../templates", f"{template_name}.json")
#file_name = osp.join("../templates", f"{template_name}.json")
if not osp.exists(file_name):
raise ValueError(f"Can't read {file_name}")
with open(file_name) as fp:
self.template = json.load(fp)
if self._verbose:
print(
f"Using prompt template {template_name}: {self.template['description']}"
)
def generate_prompt(
self,
instruction: str,
input: Union[None, str] = None,
label: Union[None, str] = None,
) -> str:
# returns the full prompt from instruction and optional input
# if a label (=response, =output) is provided, it's also appended.
if input:
res = self.template["prompt_input"].format(
instruction=instruction, input=input
)
else:
res = self.template["prompt_no_input"].format(
instruction=instruction
)
if label:
res = f"{res}{label}"
if self._verbose:
print(res)
return res
def get_response(self, output: str) -> str:
return output.split(self.template["response_split"])[1].strip()
prompter = Prompter("kullm")
def get_payload(instruction, input_text, params):
prompt = prompter.generate_prompt(instruction, input_text)
payload = {
'inputs': prompt,
'parameters': params
}
return payload
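# Illustrative sketch of building a payload with the helper above; parameter values are
# arbitrary examples, and ../templates/kullm.json must exist as the module already requires.
def _example_get_payload():
    params = {"max_new_tokens": 128, "temperature": 0.3, "do_sample": True}
    return get_payload(
        instruction="Answer the question.",
        input_text="What is Amazon SageMaker?",
        params=params,
    )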
class KoLLMSageMakerEndpoint(object):
def __init__(self, endpoint_name):
self.endpoint_name = endpoint_name
self.prompter = Prompter("kullm")
self.smr_client = boto3.client('sagemaker-runtime')
def get_payload(self, instruction, input_text, params):
prompt = self.prompter.generate_prompt(instruction, input_text)
payload = {
'inputs': prompt,
'parameters': params
}
payload_str = json.dumps(payload)
return payload_str.encode("utf-8")
def infer(self, payload, content_type="application/json", verbose=True):
response = self.smr_client.invoke_endpoint(
EndpointName=self.endpoint_name,
ContentType=content_type,
Body=payload
)
res = json.loads(response['Body'].read().decode("utf-8"))
generated_text = res[0]["generated_text"]
#generated_text = self.prompter.get_response(generated_text)
generated_text = generated_text.split('###')[0]
if verbose:
pprint.pprint(f'Response: {generated_text}')
return generated_text
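# Illustrative end-to-end call; the endpoint name is a placeholder for a deployed
# KULLM SageMaker endpoint.
def _example_kullm_endpoint(endpoint_name="kullm-polyglot-12-8b-endpoint"):
    ep = KoLLMSageMakerEndpoint(endpoint_name)
    params = {"max_new_tokens": 128, "temperature": 0.3}
    payload = ep.get_payload(
        instruction="Answer the question.",
        input_text="What is Amazon SageMaker?",
        params=params,
    )
    return ep.infer(payload, verbose=False)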
class KoSimCSERobertaContentHandlerAmazonAPIGateway:
@classmethod
def transform_input(
cls, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
return {"inputs": prompt, **model_kwargs}
@classmethod
def transform_output(cls, response: Any) -> str:
response_json = response.json()
ndim = np.array(response_json).ndim
if ndim == 4:
# Original shape (1, 1, n, 768)
emb = response_json[0][0][0]
emb = np.expand_dims(emb, axis=0).tolist()
elif ndim == 2:
# Original shape (n, 1)
emb = []
for ele in response_json:
e = ele[0][0]
emb.append(e)
else:
print(f"Other # of dimension: {ndim}")
emb = None
return emb
class KoSimCSERobertaEmbeddingAmazonApiGateway(BaseModel, Embeddings):
api_url: str
"""API Gateway URL"""
headers: Optional[Dict] = None
"""API Gateway HTTP Headers to send, e.g. for authentication"""
model_kwargs: Optional[Dict] = None
"""Key word arguments to pass to the model."""
content_handler: KoSimCSERobertaContentHandlerAmazonAPIGateway = KoSimCSERobertaContentHandlerAmazonAPIGateway()
"""The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
"""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
try:
if values["headers"] is None:
values["headers"] = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
except Exception as error:
pass
return values
class Config:
"""Configuration for this pydantic object."""
skip_on_failure = True
arbitrary_types_allowed=True
# extra = Extra.forbid
def _embedding_func(self, texts: List[str]) -> List[List[float]]:
"""Call out to SageMaker Inference embedding endpoint."""
# replace newlines, which can negatively affect performance.
texts = list(map(lambda x: x.replace("\n", " "), texts))
_model_kwargs = self.model_kwargs or {}
payload = self.content_handler.transform_input(texts, _model_kwargs)
try:
response = requests.post(
self.api_url,
headers=self.headers,
json=payload,
)
except Exception as error:
raise ValueError(f"Error raised by the service: {error}")
return self.content_handler.transform_output(response)
def embed_documents(
self, texts: List[str], chunk_size: int = 1
) -> List[List[float]]:
"""Compute doc embeddings using a SageMaker Inference Endpoint.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns:
List of embeddings, one for each text.
"""
results = []
_chunk_size = len(texts) if chunk_size > len(texts) else chunk_size
print("text size: ", len(texts))
print("_chunk_size: ", _chunk_size)
for i in range(0, len(texts), _chunk_size):
response = self._embedding_func(texts[i : i + _chunk_size])
results.extend(response)
return results
def embed_query(self, text: str) -> List[float]:
"""Compute query embeddings using a SageMaker inference endpoint.
Args:
text: The text to embed.
Returns:
Embeddings for the text.
"""
return self._embedding_func([text])[0]
| [
"alpaca"
] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~aws-gen-ai-kr~20_applications~02_qa_chatbot~01_preprocess_docs~utils~proc_docs.py | from termcolor import colored
from IPython.core.display import display, HTML
from langchain.docstore.document import Document
from utils.rag import get_semantic_similar_docs, get_lexical_similar_docs, get_ensemble_results
from utils.opensearch import opensearch_utils
def search_hybrid(**kwargs):
assert "query" in kwargs, "Check your query"
assert "vector_db" in kwargs, "Check your vector_db"
assert "index_name" in kwargs, "Check your index_name"
assert "os_client" in kwargs, "Check your os_client"
assert "Semantic_Search" in kwargs, "Check your Semantic_Search"
assert "Lexical_Search" in kwargs, "Check your Lexical_Search"
assert "Hybrid_Search" in kwargs, "Check your Hybrid_Search"
assert "minimum_should_match" in kwargs, "Check your minimum_should_match"
verbose = kwargs.get("verbose", False)
print("Query: \n", kwargs["query"])
# print("Semantic_Search: ", kwargs["Semantic_Search"])
# print("Lexical_Search: ", kwargs["Lexical_Search"])
# print("Hybrid_Search: ", kwargs["Hybrid_Search"])
if (kwargs["Semantic_Search"] == True) | (kwargs["Hybrid_Search"] == True):
similar_docs_semantic = get_semantic_similar_docs(
vector_db=kwargs["vector_db"],
query=kwargs["query"],
k=kwargs.get("k", 5),
hybrid=True
)
if verbose:
print("##############################")
print("similar_docs_semantic")
print("##############################")
# print(similar_docs_semantic)
opensearch_pretty_print_documents(similar_docs_semantic)
if (kwargs["Lexical_Search"] == True) | (kwargs["Hybrid_Search"] == True):
similar_docs_keyword = get_lexical_similar_docs(
query=kwargs["query"],
minimum_should_match=kwargs.get("minimum_should_match", 50),
# filter=kwargs.get("filter", []),
filter= [],
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
k=kwargs.get("k", 5),
hybrid=True
)
if verbose:
print("##############################")
print("similar_docs_keyword")
print("##############################")
# print(similar_docs_keyword)
opensearch_pretty_print_documents(similar_docs_keyword)
if kwargs["Hybrid_Search"] == True:
similar_docs_ensemble = get_ensemble_results(
doc_lists = [similar_docs_semantic, similar_docs_keyword],
weights = kwargs.get("ensemble_weights", [.5, .5]),
algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
c=60,
k=kwargs.get("k", 5)
)
if verbose:
print("##############################")
print("similar_docs_ensemble")
print("##############################")
# print(similar_docs_ensemble)
opensearch_pretty_print_documents(similar_docs_ensemble)
# similar_docs_ensemble = list(map(lambda x:x[0], similar_docs_ensemble))
# return similar_docs_ensemble
def opensearch_pretty_print_documents(response):
'''
    Pretty-print the list of (Document, score) tuples returned from an OpenSearch query.
'''
for doc, score in response:
print(f'\nScore: {score}')
# print(f'Document Number: {doc.metadata["row"]}')
# Split the page content into lines
lines = doc.page_content.split("\n")
metadata = doc.metadata
print(lines)
print(metadata)
# print(doc.metadata['origin'])
# Extract and print each piece of information if it exists
# for line in lines:
# split_line = line.split(": ")
# if len(split_line) > 1:
# print(f'{split_line[0]}: {split_line[1]}')
# print("Metadata:")
# print(f'Type: {doc.metadata["type"]}')
# print(f'Source: {doc.metadata["source"]}')
print('-' * 50)
def put_parameter(boto3_client, parameter_name, parameter_value):
# Specify the parameter name, value, and type
parameter_type = 'SecureString'
try:
# Put the parameter
        response = boto3_client.put_parameter(
Name=parameter_name,
Value=parameter_value,
Type=parameter_type,
Overwrite=True # Set to True if you want to overwrite an existing parameter
)
# Print the response
print('Parameter stored successfully.')
print(response)
except Exception as e:
print('Error storing parameter:', str(e))
def get_parameter(boto3_client, parameter_name):
# Create a SSM Client
try:
# Get the parameter
        response = boto3_client.get_parameter(
Name=parameter_name,
WithDecryption=True # Set to True if the parameter is a SecureString
)
# Retrieve parameter value from response
parameter_value = response['Parameter']['Value']
# Print the parameter value
# print('Parameter Value:', parameter_value)
return parameter_value
except Exception as e:
print('Error retrieving parameter:', str(e))
############################################
# JSON Loader Functions
############################################
from langchain.document_loaders import JSONLoader
# Define the metadata extraction function.
def metadata_func(record: dict, metadata: dict) -> dict:
metadata["title"] = record.get("title")
metadata["url"] = record.get("url")
metadata["project"] = record.get("project")
metadata["last_updated"] = record.get("last_updated")
if "source" in metadata:
source = metadata["source"].split("/")[-1]
metadata["source"] = source
return metadata
def get_load_json(file_path):
loader = JSONLoader(
file_path= file_path,
# jq_schema='.sections[]',
jq_schema='.[]',
content_key="content",
metadata_func=metadata_func
)
data = loader.load()
return data
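# Illustrative sketch; the file path is a placeholder and the JSON is expected to carry
# the content/title/url/project/last_updated fields used by metadata_func above.
def _example_load_json(file_path="./data/sample_docs.json"):
    data = get_load_json(file_path)
    show_doc_json(data, file_path)
    return data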
def show_doc_json(data, file_path):
file_name = file_path.split("/")[-1]
print("### File name: ", file_name)
print("### of document: ", len(data))
print("### The first doc")
print(data[0])
def insert_chunk_opensearch(index_name, os_client, chunk_docs, lim_emb):
for i, doc in enumerate(chunk_docs):
# print(doc)
content = doc.page_content
content_emb = lim_emb.embed_query(content)
metadata_last_updated = doc.metadata['last_updated']
metadata_last_project = doc.metadata['project']
metadata_seq_num = doc.metadata['seq_num']
metadata_title = doc.metadata['title']
metadata_url = doc.metadata['url']
# print(content)
# print(metadata_last_updated)
# print(metadata_last_project)
# print(metadata_seq_num)
# print(metadata_title)
# print(metadata_url)
# Example document
doc_body = {
"text": content,
"vector_field": content_emb, # Replace with your vector
"metadata" : [
{"last_updated": metadata_last_updated,
"project": metadata_last_project,
"seq_num": metadata_seq_num,
"title": metadata_title,
"url": metadata_url}
]
}
# print(doc_body)
opensearch_utils.add_doc(os_client, index_name, doc_body, id=f"{i}")
if i == 100:
break
from langchain.text_splitter import RecursiveCharacterTextSplitter, SpacyTextSplitter
def create_chunk(docs, chunk_size, chunk_overlap):
'''
docs: list of docs
chunk_size: int
chunk_overlap: int
return: list of chunk_docs
'''
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = chunk_size,
chunk_overlap = chunk_overlap,
separators=["\n\n", "\n", ".", " ", ""],
length_function = len,
)
# print("doc: in create_chunk", docs )
chunk_docs = text_splitter.split_documents(docs)
return chunk_docs
def create_parent_chunk(docs, parent_id_key, family_tree_id_key, parent_chunk_size, parent_chunk_overlap):
parent_chunks = create_chunk(docs, parent_chunk_size, parent_chunk_overlap)
for i, doc in enumerate(parent_chunks):
doc.metadata[family_tree_id_key] = 'parent'
doc.metadata[parent_id_key] = None
return parent_chunks
def create_child_chunk(child_chunk_size, child_chunk_overlap, docs, parent_ids_value, parent_id_key, family_tree_id_key):
sub_docs = []
for i, doc in enumerate(docs):
# print("doc: ", doc)
parent_id = parent_ids_value[i]
doc = [doc]
_sub_docs = create_chunk(doc, child_chunk_size, child_chunk_overlap)
for _doc in _sub_docs:
_doc.metadata[family_tree_id_key] = 'child'
_doc.metadata[parent_id_key] = parent_id
sub_docs.extend(_sub_docs)
# if i == 0:
# return sub_docs
return sub_docs
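# Illustrative parent/child chunking flow with the helpers above; `parent_ids_value` is
# assumed to hold the OpenSearch ids returned when the parent chunks were indexed.
def _example_parent_child_chunking(docs, parent_ids_value):
    parent_chunks = create_parent_chunk(
        docs=docs,
        parent_id_key="parent_id",
        family_tree_id_key="family_tree",
        parent_chunk_size=2048,
        parent_chunk_overlap=0,
    )
    child_chunks = create_child_chunk(
        child_chunk_size=512,
        child_chunk_overlap=0,
        docs=parent_chunks,
        parent_ids_value=parent_ids_value,
        parent_id_key="parent_id",
        family_tree_id_key="family_tree",
    )
    return parent_chunks, child_chunks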
| [] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~workshop~utils~lib_en.py | import json
import requests
from typing import Any, Dict, List, Optional, Union
from pydantic import BaseModel, root_validator
from langchain.embeddings.base import Embeddings
from langchain.llms import AmazonAPIGateway
from langchain.llms.sagemaker_endpoint import LLMContentHandler, SagemakerEndpoint
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
class FalconContentHandlerEndpoint(LLMContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
input_str = json.dumps({'inputs': prompt, 'parameters': model_kwargs})
return input_str.encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
return response_json[0]["generated_text"]
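# Illustrative sketch pairing the handler above with LangChain's SagemakerEndpoint;
# the endpoint name and region are placeholders for a deployed Falcon endpoint.
def _example_falcon_llm(endpoint_name="falcon-40b-instruct-endpoint", region="us-east-1"):
    llm = SagemakerEndpoint(
        endpoint_name=endpoint_name,
        region_name=region,
        model_kwargs={"max_new_tokens": 256, "temperature": 0.3},
        content_handler=FalconContentHandlerEndpoint(),
    )
    return llm("What is Amazon SageMaker?")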
class Llama2ContentHandlerAmazonAPIGateway:
"""Adapter to prepare the inputs from Langchain to a format
that LLM model expects.
It also provides helper function to extract
the generated text from the model response."""
@classmethod
def transform_input(
cls, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
return {"inputs": prompt, "parameters": model_kwargs}
@classmethod
def transform_output(cls, response: Any) -> str:
return response.json()[0]["generation"]
class FalconContentHandlerAmazonAPIGateway:
@classmethod
def transform_input(
cls, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
return {"inputs": prompt, "parameters": model_kwargs}
@classmethod
def transform_output(cls, response: Any) -> str:
return response.json()[0]["generated_text"]
class ContentHandlerEmbeddingAmazonAPIGateway:
@classmethod
def transform_input(
cls, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
return {"text_inputs": prompt}
@classmethod
def transform_output(cls, response: Any) -> str:
return response.json()["embedding"]
class EmbeddingAmazonApiGateway(BaseModel, Embeddings):
api_url: str
"""API Gateway URL"""
headers: Optional[Dict] = None
"""API Gateway HTTP Headers to send, e.g. for authentication"""
model_kwargs: Optional[Dict] = None
"""Key word arguments to pass to the model."""
content_handler: ContentHandlerEmbeddingAmazonAPIGateway = ContentHandlerEmbeddingAmazonAPIGateway()
"""The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
"""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
try:
if values["headers"] is None:
values["headers"] = {
'Content-Type': 'application/json',
'Accept': 'application/json'
}
except Exception as error:
pass
return values
class Config:
"""Configuration for this pydantic object."""
skip_on_failure = True
arbitrary_types_allowed=True
# extra = Extra.forbid
def _embedding_func(self, texts: List[str]) -> List[List[float]]:
"""Call out to SageMaker Inference embedding endpoint."""
# replace newlines, which can negatively affect performance.
texts = list(map(lambda x: x.replace("\n", " "), texts))
_model_kwargs = self.model_kwargs or {}
payload = self.content_handler.transform_input(texts, _model_kwargs)
# content_type = self.content_handler.content_type
# accepts = self.content_handler.accepts
try:
response = requests.post(
self.api_url,
headers=self.headers,
json=payload,
)
text = self.content_handler.transform_output(response)
except Exception as error:
raise ValueError(f"Error raised by the service: {error}")
return text
def embed_documents(
self, texts: List[str], chunk_size: int = 64
) -> List[List[float]]:
"""Compute doc embeddings using a SageMaker Inference Endpoint.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns:
List of embeddings, one for each text.
"""
results = []
_chunk_size = len(texts) if chunk_size > len(texts) else chunk_size
for i in range(0, len(texts), _chunk_size):
response = self._embedding_func(texts[i : i + _chunk_size])
results.extend(response)
return results
def embed_query(self, text: str) -> List[float]:
"""Compute query embeddings using a SageMaker inference endpoint.
Args:
text: The text to embed.
Returns:
Embeddings for the text.
"""
return self._embedding_func([text])[0]
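# Illustrative sketch; the API Gateway URL is a placeholder and must point to a deployed
# embedding endpoint for the request to succeed.
def _example_embedding_api_gateway(api_url="https://example.execute-api.us-east-1.amazonaws.com/prod"):
    emb = EmbeddingAmazonApiGateway(api_url=api_url)
    return emb.embed_query("What is Amazon SageMaker?")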
| [] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~aws-gen-ai-kr~20_applications~02_qa_chatbot~01_preprocess_docs~utils~bedrock.py | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
"""Helper utilities for working with Amazon Bedrock from Python notebooks"""
# Python Built-Ins:
import os
from typing import Optional
# External Dependencies:
import boto3
from botocore.config import Config
# Langchain
from langchain.callbacks.base import BaseCallbackHandler
def get_bedrock_client(
assumed_role: Optional[str] = None,
endpoint_url: Optional[str] = None,
region: Optional[str] = None,
):
"""Create a boto3 client for Amazon Bedrock, with optional configuration overrides
Parameters
----------
assumed_role :
Optional ARN of an AWS IAM role to assume for calling the Bedrock service. If not
specified, the current active credentials will be used.
endpoint_url :
Optional override for the Bedrock service API Endpoint. If setting this, it should usually
include the protocol i.e. "https://..."
region :
Optional name of the AWS Region in which the service should be called (e.g. "us-east-1").
If not specified, AWS_REGION or AWS_DEFAULT_REGION environment variable will be used.
"""
if region is None:
target_region = os.environ.get("AWS_REGION", os.environ.get("AWS_DEFAULT_REGION"))
else:
target_region = region
print(f"Create new client\n Using region: {target_region}")
session_kwargs = {"region_name": target_region}
client_kwargs = {**session_kwargs}
profile_name = os.environ.get("AWS_PROFILE")
print(f" Using profile: {profile_name}")
if profile_name:
print(f" Using profile: {profile_name}")
session_kwargs["profile_name"] = profile_name
retry_config = Config(
region_name=target_region,
retries={
"max_attempts": 10,
"mode": "standard",
},
)
session = boto3.Session(**session_kwargs)
if assumed_role:
print(f" Using role: {assumed_role}", end='')
sts = session.client("sts")
response = sts.assume_role(
RoleArn=str(assumed_role),
RoleSessionName="langchain-llm-1"
)
print(" ... successful!")
client_kwargs["aws_access_key_id"] = response["Credentials"]["AccessKeyId"]
client_kwargs["aws_secret_access_key"] = response["Credentials"]["SecretAccessKey"]
client_kwargs["aws_session_token"] = response["Credentials"]["SessionToken"]
if endpoint_url:
client_kwargs["endpoint_url"] = endpoint_url
bedrock_client = session.client(
service_name="bedrock-runtime",
config=retry_config,
**client_kwargs
)
print("boto3 Bedrock client successfully created!")
print(bedrock_client._endpoint)
return bedrock_client
class bedrock_info():
_BEDROCK_MODEL_INFO = {
"Claude-Instant-V1": "anthropic.claude-instant-v1",
"Claude-V1": "anthropic.claude-v1",
"Claude-V2": "anthropic.claude-v2",
"Jurassic-2-Mid": "ai21.j2-mid-v1",
"Jurassic-2-Ultra": "ai21.j2-ultra-v1",
"Command": "cohere.command-text-v14",
"Titan-Embeddings-G1": "amazon.titan-embed-text-v1",
"Llama2-13b-Chat" : "meta.llama2-13b-chat-v1",
"Titan-Text-G1": "TBD"
}
@classmethod
def get_list_fm_models(cls, ):
return cls._BEDROCK_MODEL_INFO
@classmethod
def get_model_id(cls, model_name):
assert model_name in cls._BEDROCK_MODEL_INFO.keys(), "Check model name"
return cls._BEDROCK_MODEL_INFO[model_name]
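# Illustrative sketch of resolving a model id and creating a Bedrock client;
# the region and model name are examples only.
def _example_bedrock_setup():
    boto3_bedrock = get_bedrock_client(region="us-east-1")
    model_id = bedrock_info.get_model_id(model_name="Claude-V2")
    return boto3_bedrock, model_id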
| [] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~aws-gen-ai-kr~20_applications~02_qa_chatbot~01_preprocess_docs~backup~LayoutPDFLoader~proc_docs.py | from termcolor import colored
from IPython.core.display import display, HTML
from langchain.docstore.document import Document
class LayoutPDFReader_Custom:
'''
sections = layout_pdf_reader.doc.sections()
i = 2
print(sections[i])
print("title: ", sections[i].title)
print("tag: ", sections[i].tag)
print("parent: ", sections[i].parent)
print("parent title: ", sections[i].parent.title)
print("children: ", sections[i].children)
print("children: ", sections[i].children[0].tag)
print("children sentences: ", sections[i].children[0].sentences)
print("chunk: ", sections[i].chunks())
# print("chunk title: ", sections[i].chunks()[0].title)
# sections[2].to_context_text()
display(HTML(sections[i].to_html(include_children=True, recurse=True)))
'''
def __init__(self, doc):
self.doc = doc
self.chunk_size = len(doc.chunks())
self.section_size = len(doc.sections())
self.table_size = len(doc.tables())
def show_chunk_info(self, show_size=5):
for idx, chunk in enumerate(self.doc.chunks()):
print(colored(f"To_context_text {idx}:\n {chunk.to_context_text()} ", "green"))
print(colored(f"To_text {idx}:\n {chunk.to_text()} ", "red"))
print(colored(f"Tag {idx}:\n {chunk.tag} ", "red"))
print("\n")
if idx == (show_size -1):
break
def create_document_with_chunk(self):
'''
        Create langchain Document objects from the chunks and their metadata.
'''
doc_list = []
for idx, chunk in enumerate(self.doc.chunks()):
doc=Document(
page_content= chunk.to_text(),
metadata={"tag": chunk.tag,
"row" : idx,
}
)
doc_list.append(doc)
return doc_list
def show_section_info(self, show_size=5):
for idx, section in enumerate(self.doc.sections()):
print(colored(f"section title: {idx}:\n {section.title} ", "green"))
# use include_children=True and recurse=True to fully expand the section.
# include_children only returns at one sublevel of children whereas recurse goes through all the descendants
# display(HTML(section.to_html(include_children=True, recurse=True)))
display(HTML(section.to_html(include_children=True)))
# display(HTML(section.to_html(include_children=True, recurse=True)))
# display(HTML(section.to_html()))
if idx == (show_size -1):
break
# def show_table_info(self, show_size=5):
# for idx, table in enumerate(doc.tables()):
# print(colored(f"table name: {idx}:\n {table.name} ", "green"))
# display(HTML(table.to_html(include_children=True, recurse=True)))
# # print(f"table name: {idx}:\n", HTML(table.to_html()) )
# print(colored(f"table name: {idx}:\n {table.sentences} ", "blue"))
# if idx == (show_size -1):
# break
from utils.rag import get_semantic_similar_docs, get_lexical_similar_docs, get_ensemble_results
def search_hybrid(**kwargs):
assert "query" in kwargs, "Check your query"
assert "vector_db" in kwargs, "Check your vector_db"
assert "index_name" in kwargs, "Check your index_name"
assert "os_client" in kwargs, "Check your os_client"
assert "Semantic_Search" in kwargs, "Check your Semantic_Search"
assert "Lexical_Search" in kwargs, "Check your Lexical_Search"
assert "Hybrid_Search" in kwargs, "Check your Hybrid_Search"
assert "minimum_should_match" in kwargs, "Check your minimum_should_match"
verbose = kwargs.get("verbose", False)
print("Query: \n", kwargs["query"])
# print("Semantic_Search: ", kwargs["Semantic_Search"])
# print("Lexical_Search: ", kwargs["Lexical_Search"])
# print("Hybrid_Search: ", kwargs["Hybrid_Search"])
if (kwargs["Semantic_Search"] == True) | (kwargs["Hybrid_Search"] == True):
similar_docs_semantic = get_semantic_similar_docs(
vector_db=kwargs["vector_db"],
query=kwargs["query"],
k=kwargs.get("k", 5),
hybrid=True
)
if verbose:
print("##############################")
print("similar_docs_semantic")
print("##############################")
# print(similar_docs_semantic)
opensearch_pretty_print_documents(similar_docs_semantic)
if (kwargs["Lexical_Search"] == True) | (kwargs["Hybrid_Search"] == True):
similar_docs_keyword = get_lexical_similar_docs(
query=kwargs["query"],
minimum_should_match=kwargs.get("minimum_should_match", 50),
# filter=kwargs.get("filter", []),
filter= [],
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
k=kwargs.get("k", 5),
hybrid=True
)
if verbose:
print("##############################")
print("similar_docs_keyword")
print("##############################")
# print(similar_docs_keyword)
opensearch_pretty_print_documents(similar_docs_keyword)
if kwargs["Hybrid_Search"] == True:
similar_docs_ensemble = get_ensemble_results(
doc_lists = [similar_docs_semantic, similar_docs_keyword],
weights = kwargs.get("ensemble_weights", [.5, .5]),
algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
c=60,
k=kwargs.get("k", 5)
)
if verbose:
print("##############################")
print("similar_docs_ensemble")
print("##############################")
# print(similar_docs_ensemble)
opensearch_pretty_print_documents(similar_docs_ensemble)
# similar_docs_ensemble = list(map(lambda x:x[0], similar_docs_ensemble))
# return similar_docs_ensemble
def opensearch_pretty_print_documents(response):
'''
    A function that parses the list of (Document, score) results returned by OpenSearch and pretty-prints it.
'''
for doc, score in response:
print(f'\nScore: {score}')
# print(f'Document Number: {doc.metadata["row"]}')
# Split the page content into lines
lines = doc.page_content.split("\n")
print(lines)
# print(doc.metadata['origin'])
# Extract and print each piece of information if it exists
# for line in lines:
# split_line = line.split(": ")
# if len(split_line) > 1:
# print(f'{split_line[0]}: {split_line[1]}')
# print("Metadata:")
# print(f'Type: {doc.metadata["type"]}')
# print(f'Source: {doc.metadata["source"]}')
print('-' * 50)
def put_parameter(boto3_client, parameter_name, parameter_value):
# Specify the parameter name, value, and type
parameter_type = 'SecureString'
try:
# Put the parameter
        response = boto3_client.put_parameter(
Name=parameter_name,
Value=parameter_value,
Type=parameter_type,
Overwrite=True # Set to True if you want to overwrite an existing parameter
)
# Print the response
print('Parameter stored successfully.')
print(response)
except Exception as e:
print('Error storing parameter:', str(e))
def get_parameter(boto3_client, parameter_name):
    # Read a SecureString parameter back from SSM Parameter Store using the provided client
try:
# Get the parameter
        response = boto3_client.get_parameter(
Name=parameter_name,
WithDecryption=True # Set to True if the parameter is a SecureString
)
# Retrieve parameter value from response
parameter_value = response['Parameter']['Value']
# Print the parameter value
# print('Parameter Value:', parameter_value)
return parameter_value
except Exception as e:
print('Error retrieving parameter:', str(e))
| [] |
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~aws-gen-ai-kr~utils~bedrock.py | # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
"""Helper utilities for working with Amazon Bedrock from Python notebooks"""
# Python Built-Ins:
import os
from typing import Optional
# External Dependencies:
import boto3
from botocore.config import Config
# Langchain
from langchain.callbacks.base import BaseCallbackHandler
def get_bedrock_client(
assumed_role: Optional[str] = None,
endpoint_url: Optional[str] = None,
region: Optional[str] = None,
):
"""Create a boto3 client for Amazon Bedrock, with optional configuration overrides
Parameters
----------
assumed_role :
Optional ARN of an AWS IAM role to assume for calling the Bedrock service. If not
specified, the current active credentials will be used.
endpoint_url :
Optional override for the Bedrock service API Endpoint. If setting this, it should usually
include the protocol i.e. "https://..."
region :
Optional name of the AWS Region in which the service should be called (e.g. "us-east-1").
If not specified, AWS_REGION or AWS_DEFAULT_REGION environment variable will be used.
"""
if region is None:
target_region = os.environ.get("AWS_REGION", os.environ.get("AWS_DEFAULT_REGION"))
else:
target_region = region
print(f"Create new client\n Using region: {target_region}")
session_kwargs = {"region_name": target_region}
client_kwargs = {**session_kwargs}
profile_name = os.environ.get("AWS_PROFILE")
print(f" Using profile: {profile_name}")
if profile_name:
print(f" Using profile: {profile_name}")
session_kwargs["profile_name"] = profile_name
retry_config = Config(
region_name=target_region,
retries={
"max_attempts": 10,
"mode": "standard",
},
)
session = boto3.Session(**session_kwargs)
if assumed_role:
print(f" Using role: {assumed_role}", end='')
sts = session.client("sts")
response = sts.assume_role(
RoleArn=str(assumed_role),
RoleSessionName="langchain-llm-1"
)
print(" ... successful!")
client_kwargs["aws_access_key_id"] = response["Credentials"]["AccessKeyId"]
client_kwargs["aws_secret_access_key"] = response["Credentials"]["SecretAccessKey"]
client_kwargs["aws_session_token"] = response["Credentials"]["SessionToken"]
if endpoint_url:
client_kwargs["endpoint_url"] = endpoint_url
bedrock_client = session.client(
service_name="bedrock-runtime",
config=retry_config,
**client_kwargs
)
print("boto3 Bedrock client successfully created!")
print(bedrock_client._endpoint)
return bedrock_client
class bedrock_info():
_BEDROCK_MODEL_INFO = {
"Claude-Instant-V1": "anthropic.claude-instant-v1",
"Claude-V1": "anthropic.claude-v1",
"Claude-V2": "anthropic.claude-v2",
"Claude-V2-1": "anthropic.claude-v2:1",
"Jurassic-2-Mid": "ai21.j2-mid-v1",
"Jurassic-2-Ultra": "ai21.j2-ultra-v1",
"Command": "cohere.command-text-v14",
"Command-Light": "cohere.command-light-text-v14",
"Cohere-Embeddings-En": "cohere.embed-english-v3",
"Cohere-Embeddings-Multilingual": "cohere.embed-multilingual-v3",
"Titan-Embeddings-G1": "amazon.titan-embed-text-v1",
"Titan-Text-G1": "amazon.titan-text-express-v1",
"Titan-Text-G1-Light": "amazon.titan-text-lite-v1",
"Llama2-13b-Chat": "meta.llama2-13b-chat-v1"
}
@classmethod
def get_list_fm_models(cls, verbose=False):
if verbose:
bedrock = boto3.client(service_name='bedrock')
model_list = bedrock.list_foundation_models()
return model_list["modelSummaries"]
else:
return cls._BEDROCK_MODEL_INFO
@classmethod
def get_model_id(cls, model_name):
assert model_name in cls._BEDROCK_MODEL_INFO.keys(), "Check model name"
return cls._BEDROCK_MODEL_INFO[model_name] | [] |
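if __name__ == "__main__":
    # Illustrative smoke test (not part of the original helper module). It assumes AWS
    # credentials with Amazon Bedrock access are already configured; the region value
    # below is a placeholder, not something defined in this file.
    client = get_bedrock_client(region="us-east-1")
    print(bedrock_info.get_model_id(model_name="Claude-V2"))
    print(list(bedrock_info.get_list_fm_models().keys()))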
2024-01-10 | aws-samples/aws-ai-ml-workshop-kr | genai~aws-gen-ai-kr~utils~proc_docs.py | from termcolor import colored
from IPython.core.display import display, HTML
from langchain.docstore.document import Document
class LayoutPDFReader_Custom:
'''
sections = layout_pdf_reader.doc.sections()
i = 2
print(sections[i])
print("title: ", sections[i].title)
print("tag: ", sections[i].tag)
print("parent: ", sections[i].parent)
print("parent title: ", sections[i].parent.title)
print("children: ", sections[i].children)
print("children: ", sections[i].children[0].tag)
print("children sentences: ", sections[i].children[0].sentences)
print("chunk: ", sections[i].chunks())
# print("chunk title: ", sections[i].chunks()[0].title)
# sections[2].to_context_text()
display(HTML(sections[i].to_html(include_children=True, recurse=True)))
'''
def __init__(self, doc):
self.doc = doc
self.chunk_size = len(doc.chunks())
self.section_size = len(doc.sections())
self.table_size = len(doc.tables())
def show_chunk_info(self, show_size=5):
for idx, chunk in enumerate(self.doc.chunks()):
print(colored(f"To_context_text {idx}:\n {chunk.to_context_text()} ", "green"))
print(colored(f"To_text {idx}:\n {chunk.to_text()} ", "red"))
print(colored(f"Tag {idx}:\n {chunk.tag} ", "red"))
print("\n")
if idx == (show_size -1):
break
def create_document_with_chunk(self):
'''
        Create langchain Document objects from each chunk and its metadata.
'''
doc_list = []
for idx, chunk in enumerate(self.doc.chunks()):
doc=Document(
page_content= chunk.to_text(),
metadata={"tag": chunk.tag,
"row" : idx,
}
)
doc_list.append(doc)
return doc_list
def show_section_info(self, show_size=5):
for idx, section in enumerate(self.doc.sections()):
print(colored(f"section title: {idx}:\n {section.title} ", "green"))
# use include_children=True and recurse=True to fully expand the section.
# include_children only returns at one sublevel of children whereas recurse goes through all the descendants
# display(HTML(section.to_html(include_children=True, recurse=True)))
display(HTML(section.to_html(include_children=True)))
# display(HTML(section.to_html(include_children=True, recurse=True)))
# display(HTML(section.to_html()))
if idx == (show_size -1):
break
# def show_table_info(self, show_size=5):
# for idx, table in enumerate(doc.tables()):
# print(colored(f"table name: {idx}:\n {table.name} ", "green"))
# display(HTML(table.to_html(include_children=True, recurse=True)))
# # print(f"table name: {idx}:\n", HTML(table.to_html()) )
# print(colored(f"table name: {idx}:\n {table.sentences} ", "blue"))
# if idx == (show_size -1):
# break
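# Illustrative usage sketch (not part of the original module). It assumes the parsed
# `doc` comes from llmsherpa's LayoutPDFReader, which this wrapper class is written
# around; the parser URL and pdf path below are placeholders.
def _example_layout_pdf_reader(pdf_path="sample.pdf"):
    from llmsherpa.readers import LayoutPDFReader
    parser_api_url = "https://readers.llmsherpa.com/api/document/developer/parseDocument?renderFormat=all"
    doc = LayoutPDFReader(parser_api_url).read_pdf(pdf_path)
    helper = LayoutPDFReader_Custom(doc)
    helper.show_chunk_info(show_size=2)
    return helper.create_document_with_chunk()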
#from utils.rag import get_semantic_similar_docs, get_lexical_similar_docs, get_ensemble_results
from utils.rag import retriever_utils
def search_hybrid(**kwargs):
assert "query" in kwargs, "Check your query"
assert "vector_db" in kwargs, "Check your vector_db"
assert "index_name" in kwargs, "Check your index_name"
assert "os_client" in kwargs, "Check your os_client"
assert "Semantic_Search" in kwargs, "Check your Semantic_Search"
assert "Lexical_Search" in kwargs, "Check your Lexical_Search"
assert "Hybrid_Search" in kwargs, "Check your Hybrid_Search"
assert "minimum_should_match" in kwargs, "Check your minimum_should_match"
verbose = kwargs.get("verbose", False)
print("Query: \n", kwargs["query"])
# print("Semantic_Search: ", kwargs["Semantic_Search"])
# print("Lexical_Search: ", kwargs["Lexical_Search"])
# print("Hybrid_Search: ", kwargs["Hybrid_Search"])
if (kwargs["Semantic_Search"] == True) | (kwargs["Hybrid_Search"] == True):
similar_docs_semantic = retriever_utils.get_semantic_similar_docs(
vector_db=kwargs["vector_db"],
query=kwargs["query"],
k=kwargs.get("k", 5),
hybrid=True
)
if verbose:
print("##############################")
print("similar_docs_semantic")
print("##############################")
# print(similar_docs_semantic)
opensearch_pretty_print_documents(similar_docs_semantic)
if (kwargs["Lexical_Search"] == True) | (kwargs["Hybrid_Search"] == True):
similar_docs_keyword = retriever_utils.get_lexical_similar_docs(
query=kwargs["query"],
minimum_should_match=kwargs.get("minimum_should_match", 50),
# filter=kwargs.get("filter", []),
filter= [],
index_name=kwargs["index_name"],
os_client=kwargs["os_client"],
k=kwargs.get("k", 5),
hybrid=True
)
if verbose:
print("##############################")
print("similar_docs_keyword")
print("##############################")
# print(similar_docs_keyword)
opensearch_pretty_print_documents(similar_docs_keyword)
if kwargs["Hybrid_Search"] == True:
similar_docs_ensemble = retriever_utils.get_ensemble_results(
doc_lists = [similar_docs_semantic, similar_docs_keyword],
weights = kwargs.get("ensemble_weights", [.5, .5]),
algorithm=kwargs.get("fusion_algorithm", "RRF"), # ["RRF", "simple_weighted"]
c=60,
k=kwargs.get("k", 5)
)
if verbose:
print("##############################")
print("similar_docs_ensemble")
print("##############################")
# print(similar_docs_ensemble)
opensearch_pretty_print_documents(similar_docs_ensemble)
# similar_docs_ensemble = list(map(lambda x:x[0], similar_docs_ensemble))
# return similar_docs_ensemble
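# Illustrative call sketch (not part of the original module): shows the keyword
# arguments that search_hybrid asserts on. vector_db, index_name and os_client are
# assumed to be an existing LangChain OpenSearch vector store, its index name and an
# opensearch-py client created elsewhere; the query string is a placeholder.
def _example_search_hybrid(vector_db, index_name, os_client):
    search_hybrid(
        query="What is the refund policy?",
        vector_db=vector_db,
        index_name=index_name,
        os_client=os_client,
        Semantic_Search=True,
        Lexical_Search=False,
        Hybrid_Search=True,
        minimum_should_match=50,
        k=5,
        verbose=True,
    )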
def opensearch_pretty_print_documents(response):
'''
    A function that parses the list of (Document, score) results returned by OpenSearch and pretty-prints it.
'''
for doc, score in response:
print(f'\nScore: {score}')
print(f'Document Number: {doc.metadata["row"]}')
# Split the page content into lines
lines = doc.page_content.split("\n")
print(lines)
# print(doc.metadata['origin'])
# Extract and print each piece of information if it exists
# for line in lines:
# split_line = line.split(": ")
# if len(split_line) > 1:
# print(f'{split_line[0]}: {split_line[1]}')
# print("Metadata:")
# print(f'Type: {doc.metadata["type"]}')
# print(f'Source: {doc.metadata["source"]}')
print('-' * 50)
def put_parameter(boto3_client, parameter_name, parameter_value):
# Specify the parameter name, value, and type
parameter_type = 'SecureString'
try:
# Put the parameter
        response = boto3_client.put_parameter(
Name=parameter_name,
Value=parameter_value,
Type=parameter_type,
Overwrite=True # Set to True if you want to overwrite an existing parameter
)
# Print the response
print('Parameter stored successfully.')
print(response)
except Exception as e:
print('Error storing parameter:', str(e))
def get_parameter(boto3_client, parameter_name):
    # Read a SecureString parameter back from SSM Parameter Store using the provided client
try:
# Get the parameter
        response = boto3_client.get_parameter(
Name=parameter_name,
WithDecryption=True # Set to True if the parameter is a SecureString
)
# Retrieve parameter value from response
parameter_value = response['Parameter']['Value']
# Print the parameter value
# print('Parameter Value:', parameter_value)
return parameter_value
except Exception as e:
print('Error retrieving parameter:', str(e))
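# Illustrative usage sketch (not part of the original module): store a SecureString
# parameter and read it back. The parameter name and value are placeholders, and AWS
# credentials with SSM permissions are assumed.
def _example_parameter_roundtrip():
    import boto3
    ssm_client = boto3.client("ssm")
    put_parameter(ssm_client, "/demo/opensearch/password", "example-value")
    return get_parameter(ssm_client, "/demo/opensearch/password")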
| [] |
2024-01-10 | jbdamask/RAG-snippits | streamlit~streamlit-ChatGPT-clone.py | # streamlit run streamlit-ChatGPT-clone.py
# ChatGPT in under 30 lines of code
import streamlit as st
from openai import OpenAI
import os
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv('../.env')) # read local .env file
OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
st.title("ChatGPT-like clone")
# Set OpenAI API key from Streamlit secrets
client = OpenAI(api_key=OPENAI_API_KEY)
# Set a default model
if "openai_model" not in st.session_state:
st.session_state["openai_model"] = "gpt-4-1106-preview"
# Initialize chat history
if "messages" not in st.session_state:
st.session_state.messages = []
# Display chat messages from history on app rerun
for message in st.session_state.messages:
with st.chat_message(message["role"]):
st.markdown(message["content"])
# Accept user input
if prompt := st.chat_input("What is up?"):
# Add user message to chat history
st.session_state.messages.append({"role": "user", "content": prompt})
# Display user message in chat message container
with st.chat_message("user"):
st.markdown(prompt)
# Display assistant response in chat message container
with st.chat_message("assistant"):
message_placeholder = st.empty()
full_response = ""
for response in client.chat.completions.create(
model=st.session_state["openai_model"],
messages=[{"role": m["role"], "content": m["content"]} for m in st.session_state.messages],
stream=True,
):
full_response += (response.choices[0].delta.content or "")
message_placeholder.markdown(full_response + "▌")
message_placeholder.markdown(full_response)
st.session_state.messages.append({"role": "assistant", "content": full_response}) | [
"content"
] |
2024-01-10 | jbdamask/RAG-snippits | chainlit~chainlit-pinecone-RAG.py | # chainlit run pinecone-RAG.py -w
import os
from typing import List
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pinecone import Pinecone
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ChatMessageHistory, ConversationBufferMemory
from langchain.docstore.document import Document
import pinecone
import chainlit as cl
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv('../.env')) # read local .env file
OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
PINECONE_API_KEY = os.getenv('PINECONE_API_KEY')
PINECONE_ENVIRONMENT = os.getenv('PINECONE_ENVIRONMENT')
PINECONE_INDEX = os.getenv('PINECONE_INDEX')
# initialize pinecone client and embeddings
pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_ENVIRONMENT)
embeddings = OpenAIEmbeddings(api_key=OPENAI_API_KEY)
vector_store = Pinecone.from_existing_index(PINECONE_INDEX, embeddings)
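# Note (illustrative, not in the original script): Pinecone.from_existing_index assumes
# PINECONE_INDEX already exists. A one-time setup sketch, assuming 1536-dimensional
# OpenAI embeddings, might look like:
#
#   if PINECONE_INDEX not in pinecone.list_indexes():
#       pinecone.create_index(PINECONE_INDEX, dimension=1536, metric="cosine")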
welcome_message = """
Welcome to the Lab Protocol QA demo!
I can answer questions about any lab protocols that have been uploaded
"""
@cl.on_chat_start
async def start():
await cl.Avatar(
name="Chatbot",
url="https://avatars.githubusercontent.com/u/128686189?s=400&u=a1d1553023f8ea0921fba0debbe92a8c5f840dd9&v=4",
).send()
message_history = ChatMessageHistory()
memory = ConversationBufferMemory(
memory_key="chat_history",
output_key="answer",
chat_memory=message_history,
return_messages=True,
)
chain = ConversationalRetrievalChain.from_llm(
ChatOpenAI(model_name="gpt-4-1106-preview", temperature=0, streaming=True),
chain_type="stuff",
retriever=vector_store.as_retriever(),
memory=memory,
return_source_documents=True,
)
cl.user_session.set("chain", chain)
@cl.on_message
async def main(message: cl.Message):
chain = cl.user_session.get("chain") # type: ConversationalRetrievalChain
cb = cl.AsyncLangchainCallbackHandler()
res = await chain.acall(message.content, callbacks=[cb])
answer = res["answer"]
source_documents = res["source_documents"] # type: List[Document]
text_elements = [] # type: List[cl.Text]
if source_documents:
for source_idx, source_doc in enumerate(source_documents):
# source_name = f"source_{source_idx}"
source_name = source_doc.metadata['source']
# Create the text element referenced in the message
text_elements.append(
cl.Text(content=source_doc.page_content, name=source_name)
)
source_names = [f"{text_el.name}\n" for text_el in text_elements]
if source_names:
answer += f"\nSources:\n * {'* '.join(source_names)}"
else:
answer += "\nNo sources found"
await cl.Message(content=answer, elements=text_elements).send() | [] |
2024-01-10 | jbdamask/RAG-snippits | loading-data~s3-pdf-to-pinecone.py | import argparse
import os
import boto3
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader
from langchain.docstore.document import Document
import pinecone
from langchain.vectorstores import Pinecone
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv('../.env')) # read local .env file
AWS_ACCESS_KEY = os.getenv('AWS_ACCESS_KEY')
AWS_SECRET_KEY = os.getenv('AWS_SECRET_KEY')
AWS_REGION = os.getenv('AWS_REGION')
BUCKET_NAME = os.getenv('BUCKET_NAME')
OPENAI_API_KEY=os.getenv('OPENAI_API_KEY')
PINECONE_API_KEY = os.getenv('PINECONE_API_KEY')
PINECONE_ENVIRONMENT = os.getenv('PINECONE_ENVIRONMENT')
PINECONE_INDEX = os.getenv('PINECONE_INDEX')
# initialize pinecone client and embeddings
pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_ENVIRONMENT)
embeddings = OpenAIEmbeddings(api_key=OPENAI_API_KEY)
vector_store = Pinecone.from_existing_index(PINECONE_INDEX, embeddings)
s3 = boto3.client('s3',
aws_access_key_id=AWS_ACCESS_KEY,
aws_secret_access_key=AWS_SECRET_KEY,
region_name=AWS_REGION)
local_directory = 'tmp_pdfs'
"""
Recursively search through a directory tree, filtering for pdf files.
For each file, create a LangChain PyPDFLoader, create a list of documents
using load_and_split with a RecursiveCharacterTextSplitter, and load the
docs into Pinecone
"""
def load_pdfs():
"""
List and download all PDF files from the bucket
"""
def list_files():
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=BUCKET_NAME):
for obj in page.get('Contents', []):
file_name = obj['Key']
if file_name.endswith('.pdf'):
local_file_path = os.path.join(local_directory, file_name)
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
s3.download_file(BUCKET_NAME, file_name, local_file_path)
print(f'Downloaded {file_name} to {local_file_path}')
list_files()
for root, dirs, files in os.walk(local_directory):
for file_name in files:
if file_name.endswith('.pdf'):
# print('Found a PDF')
file_path = os.path.join(root, file_name)
"""
Generic PyPDFLoader to load PDFs and split into documents.
Note that this is only useful for PDFs that are already text-based.
PDFs with images or tables will not be processed correctly.
"""
loader = PyPDFLoader(file_path=file_path)
docs = loader.load_and_split(text_splitter=RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100))
print(f"loading {file_name} into pinecone index")
# Customizing metadata to include bucket name and source key
# This will allow RAG apps to retrieve the original file from S3
for i in range(len(docs)):
dm = docs[i].metadata
dm['source'] = '/'.join(dm['source'].split('/')[1:])
dm['bucket'] = BUCKET_NAME
docs[i] = Document(page_content=docs[i].page_content, metadata=dm)
vector_store.add_documents(docs)
# now delete the local file to clean up after ourselves
os.remove(file_path)
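# Illustrative helper (not part of the original script): shows how the 'bucket' and
# 'source' metadata attached above could later be used by a RAG app to pull the
# original PDF back from S3. The download path is a placeholder.
def _example_fetch_original_pdf(doc_metadata, download_path="retrieved.pdf"):
    s3.download_file(doc_metadata["bucket"], doc_metadata["source"], download_path)
    return download_path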
"""
Main function to call load_pdfs.
Specify BUCKET_NAME in the .env file.
"""
def main():
load_pdfs()
if __name__ == "__main__":
main() | [] |
2024-01-10 | PengleiYu/ChatBot | models.py | from langchain import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
class Conversation:
def __init__(self, num_of_round=10):
self.conversation = ConversationChain(
llm=ChatOpenAI(temperature=0.5, max_tokens=2048),
memory=ConversationBufferWindowMemory(k=num_of_round),
verbose=True,
)
def ask(self, question: str) -> str:
return self.conversation.predict(input=question)
if __name__ == '__main__':
myPrompt = """你是一个中国厨师,用中文回答做菜的问题。你的回答需要满足以下要求:1. 你的回答必须是中文2. 回答限制在100个字以内"""
conv = Conversation(10)
answer = conv.ask('你好')
print(answer)
answer = conv.ask('给我讲个笑话')
print(answer)
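    # Note (illustrative, not in the original script): myPrompt above is defined but never
    # wired into the chain. One way to use it, assuming ConversationChain's standard
    # "history"/"input" variables, would be:
    #
    #   from langchain.prompts import PromptTemplate
    #   template = myPrompt + "\n\n{history}\nHuman: {input}\nAI:"
    #   chain = ConversationChain(
    #       llm=ChatOpenAI(temperature=0.5, max_tokens=2048),
    #       memory=ConversationBufferWindowMemory(k=10),
    #       prompt=PromptTemplate(input_variables=["history", "input"], template=template),
    #       verbose=True,
    #   )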
| [
"你是一个中国厨师,用中文回答做菜的问题。你的回答需要满足以下要求:1. 你的回答必须是中文2. 回答限制在100个字以内"
] |
2024-01-10 | leehoy/tensorforce | examples~quickstart.py | # Copyright 2017 reinforce.io. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""
Quick start example.
"""
from tensorforce import Configuration
from tensorforce.agents import TRPOAgent
from tensorforce.environments.openai_gym import OpenAIGym
from tensorforce.execution import Runner
from tensorforce.core.networks import layered_network_builder
import numpy as np
# Create an OpenAIgym environment
env = OpenAIGym('CartPole-v0')
# Create a Trust Region Policy Optimization agent
agent = TRPOAgent(config=Configuration(
loglevel="info",
batch_size=100,
baseline="mlp",
baseline_args=None,
baseline_kwargs=dict(
size=32,
repeat_update=100
),
override_line_search=False,
generalized_advantage_estimation=True,
normalize_advantage=False,
gae_lambda=0.97,
cg_iterations=20,
cg_damping=0.001,
line_search_steps=20,
max_kl_divergence=0.005,
gamma=0.97,
continuous=False,
preprocessing=None,
states=env.states,
actions=env.actions,
network=layered_network_builder([dict(type='dense', size=10, activation='selu')])
))
# Create the runner
runner = Runner(agent=agent, environment=env)
# Callback function printing episode statistics
def episode_finished(r):
print("Finished episode {ep} after {ts} timesteps (reward: {reward})".format(ep=r.episode, ts=r.timestep,
reward=r.episode_rewards[-1]))
return True
# Start learning
runner.run(episodes=3000, max_timesteps=200, episode_finished=episode_finished)
# Print statistics
print("Learning finished. Total episodes: {ep}. Average reward of last 100 episodes: {ar}.".format(ep=runner.episode,
ar=np.mean(
runner.episode_rewards[
-100:])))
| [] |
2024-01-10 | MaartenGr/KeyBERT | keybert~llm~_langchain.py | from tqdm import tqdm
from typing import List
from langchain.docstore.document import Document
from keybert.llm._base import BaseLLM
from keybert.llm._utils import process_candidate_keywords
DEFAULT_PROMPT = "What is this document about? Please provide keywords separated by commas."
class LangChain(BaseLLM):
""" Using chains in langchain to generate keywords.
    Currently, only chains from question answering are implemented. See:
https://langchain.readthedocs.io/en/latest/modules/chains/combine_docs_examples/question_answering.html
NOTE: The resulting keywords are expected to be separated by commas so
any changes to the prompt will have to make sure that the resulting
keywords are comma-separated.
Arguments:
chain: A langchain chain that has two input parameters, `input_documents` and `query`.
prompt: The prompt to be used in the model. If no prompt is given,
`self.default_prompt_` is used instead.
verbose: Set this to True if you want to see a progress bar for the
keyword extraction.
Usage:
To use this, you will need to install the langchain package first.
Additionally, you will need an underlying LLM to support langchain,
like openai:
`pip install langchain`
`pip install openai`
Then, you can create your chain as follows:
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
chain = load_qa_chain(OpenAI(temperature=0, openai_api_key=my_openai_api_key), chain_type="stuff")
```
Finally, you can pass the chain to KeyBERT as follows:
```python
from keybert.llm import LangChain
from keybert import KeyLLM
# Create your LLM
llm = LangChain(chain)
# Load it in KeyLLM
kw_model = KeyLLM(llm)
# Extract keywords
document = "The website mentions that it only takes a couple of days to deliver but I still have not received mine."
keywords = kw_model.extract_keywords(document)
```
You can also use a custom prompt:
```python
prompt = "What are these documents about? Please give a single label."
llm = LangChain(chain, prompt=prompt)
```
"""
def __init__(self,
chain,
prompt: str = None,
verbose: bool = False,
):
self.chain = chain
self.prompt = prompt if prompt is not None else DEFAULT_PROMPT
self.default_prompt_ = DEFAULT_PROMPT
self.verbose = verbose
def extract_keywords(self, documents: List[str], candidate_keywords: List[List[str]] = None):
""" Extract topics
Arguments:
documents: The documents to extract keywords from
candidate_keywords: A list of candidate keywords that the LLM will fine-tune
For example, it will create a nicer representation of
the candidate keywords, remove redundant keywords, or
shorten them depending on the input prompt.
Returns:
all_keywords: All keywords for each document
"""
all_keywords = []
candidate_keywords = process_candidate_keywords(documents, candidate_keywords)
for document, candidates in tqdm(zip(documents, candidate_keywords), disable=not self.verbose):
prompt = self.prompt.replace("[DOCUMENT]", document)
if candidates is not None:
prompt = prompt.replace("[CANDIDATES]", ", ".join(candidates))
input_document = Document(page_content=document)
            keywords = self.chain.run(input_documents=[input_document], question=prompt).strip()
keywords = [keyword.strip() for keyword in keywords.split(",")]
all_keywords.append(keywords)
return all_keywords
| [
"[DOCUMENT]",
"What is this document about? Please provide keywords separated by commas.",
", ",
"[CANDIDATES]"
] |
2024-01-10 | MaartenGr/KeyBERT | keybert~llm~__init__.py | from keybert._utils import NotInstalled
from keybert.llm._base import BaseLLM
# TextGeneration
try:
from keybert.llm._textgeneration import TextGeneration
except ModuleNotFoundError:
msg = "`pip install keybert` \n\n"
TextGeneration = NotInstalled("TextGeneration", "keybert", custom_msg=msg)
# OpenAI Generator
try:
from keybert.llm._openai import OpenAI
except ModuleNotFoundError:
msg = "`pip install openai` \n\n"
OpenAI = NotInstalled("OpenAI", "openai", custom_msg=msg)
# Cohere Generator
try:
from keybert.llm._cohere import Cohere
except ModuleNotFoundError:
msg = "`pip install cohere` \n\n"
Cohere = NotInstalled("Cohere", "cohere", custom_msg=msg)
# LangChain Generator
try:
from keybert.llm._langchain import LangChain
except ModuleNotFoundError:
msg = "`pip install langchain` \n\n"
LangChain = NotInstalled("langchain", "langchain", custom_msg=msg)
# LiteLLM
try:
from keybert.llm._litellm import LiteLLM
except ModuleNotFoundError:
msg = "`pip install litellm` \n\n"
LiteLLM = NotInstalled("LiteLLM", "litellm", custom_msg=msg)
__all__ = [
"BaseLLM",
"Cohere",
"OpenAI",
"TextGeneration",
"LangChain",
"LiteLLM"
]
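# Note (illustrative): the try/except blocks above implement an optional-dependency
# guard. A minimal sketch of what a NotInstalled-style placeholder can look like
# (the real class lives in keybert._utils and may differ):
#
#   class NotInstalledSketch:
#       def __init__(self, tool, dep, custom_msg=""):
#           self._msg = f"{tool} requires `pip install {dep}`. {custom_msg}"
#       def __getattr__(self, name):
#           raise ModuleNotFoundError(self._msg)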
| [] |
2024-01-10 | MaartenGr/KeyBERT | keybert~llm~_cohere.py | import time
from tqdm import tqdm
from typing import List
from keybert.llm._base import BaseLLM
from keybert.llm._utils import process_candidate_keywords
DEFAULT_PROMPT = """
The following is a list of documents. Please extract the top keywords, separated by a comma, that describe the topic of the texts.
Document:
- Traditional diets in most cultures were primarily plant-based with a little meat on top, but with the rise of industrial style meat production and factory farming, meat has become a staple food.
Keywords: Traditional diets, Plant-based, Meat, Industrial style meat production, Factory farming, Staple food, Cultural dietary practices
Document:
- The website mentions that it only takes a couple of days to deliver but I still have not received mine.
Keywords: Website, Delivery, Mention, Timeframe, Not received, Waiting, Order fulfillment
Document:
- [DOCUMENT]
Keywords:"""
class Cohere(BaseLLM):
""" Use the Cohere API to generate topic labels based on their
generative model.
Find more about their models here:
https://docs.cohere.ai/docs
NOTE: The resulting keywords are expected to be separated by commas so
any changes to the prompt will have to make sure that the resulting
keywords are comma-separated.
Arguments:
client: A cohere.Client
        model: Model to use within Cohere, defaults to `"command"`.
prompt: The prompt to be used in the model. If no prompt is given,
`self.default_prompt_` is used instead.
                NOTE: Use `"[DOCUMENT]"` and `"[CANDIDATES]"` in the prompt
                to decide where the document and candidate keywords need to be
                inserted.
delay_in_seconds: The delay in seconds between consecutive prompts
in order to prevent RateLimitErrors.
verbose: Set this to True if you want to see a progress bar for the
keyword extraction.
Usage:
To use this, you will need to install cohere first:
`pip install cohere`
Then, get yourself an API key and use Cohere's API as follows:
```python
import cohere
from keybert.llm import Cohere
from keybert import KeyLLM
# Create your LLM
co = cohere.Client(my_api_key)
llm = Cohere(co)
# Load it in KeyLLM
kw_model = KeyLLM(llm)
# Extract keywords
document = "The website mentions that it only takes a couple of days to deliver but I still have not received mine."
keywords = kw_model.extract_keywords(document)
```
You can also use a custom prompt:
```python
prompt = "I have the following document: [DOCUMENT]. What keywords does it contain? Make sure to separate the keywords with commas."
llm = Cohere(co, prompt=prompt)
```
"""
def __init__(self,
client,
model: str = "command",
prompt: str = None,
delay_in_seconds: float = None,
verbose: bool = False
):
self.client = client
self.model = model
self.prompt = prompt if prompt is not None else DEFAULT_PROMPT
self.default_prompt_ = DEFAULT_PROMPT
self.delay_in_seconds = delay_in_seconds
self.verbose = verbose
def extract_keywords(self, documents: List[str], candidate_keywords: List[List[str]] = None):
""" Extract topics
Arguments:
documents: The documents to extract keywords from
candidate_keywords: A list of candidate keywords that the LLM will fine-tune
For example, it will create a nicer representation of
the candidate keywords, remove redundant keywords, or
shorten them depending on the input prompt.
Returns:
all_keywords: All keywords for each document
"""
all_keywords = []
candidate_keywords = process_candidate_keywords(documents, candidate_keywords)
for document, candidates in tqdm(zip(documents, candidate_keywords), disable=not self.verbose):
prompt = self.prompt.replace("[DOCUMENT]", document)
if candidates is not None:
prompt = prompt.replace("[CANDIDATES]", ", ".join(candidates))
# Delay
if self.delay_in_seconds:
time.sleep(self.delay_in_seconds)
request = self.client.generate(model=self.model,
prompt=prompt,
max_tokens=50,
num_generations=1,
stop_sequences=["\n"])
keywords = request.generations[0].text.strip()
keywords = [keyword.strip() for keyword in keywords.split(",")]
all_keywords.append(keywords)
return all_keywords
| [
"[DOCUMENT]",
", ",
"[CANDIDATES]",
"\nThe following is a list of documents. Please extract the top keywords, separated by a comma, that describe the topic of the texts.\n\nDocument:\n- Traditional diets in most cultures were primarily plant-based with a little meat on top, but with the rise of industrial style meat production and factory farming, meat has become a staple food.\n\nKeywords: Traditional diets, Plant-based, Meat, Industrial style meat production, Factory farming, Staple food, Cultural dietary practices\n\nDocument:\n- The website mentions that it only takes a couple of days to deliver but I still have not received mine.\n\nKeywords: Website, Delivery, Mention, Timeframe, Not received, Waiting, Order fulfillment\n\nDocument:\n- [DOCUMENT]\n\nKeywords:"
] |