A workflow in LlamaIndex provides a structured way to organize your code into simple, manageable steps that run one after another.
Such a workflow is created by defining Steps which are triggered by Events, and which themselves emit Events to trigger further steps. This gives you clear organization of your code, event-driven control flow, and type-safe communication between steps.
As you might have guessed, workflows strike a great balance between the autonomy of agents and control over the overall flow.
So, let’s learn how to create a workflow ourselves!
First things first, let’s install the workflow package:
pip install llama-index-utils-workflow
Now, we can create a single-step workflow by defining a class that inherits from Workflow and decorating its methods with @step.
We will also need to add StartEvent and StopEvent, which are special events that are used to indicate the start and end of the workflow.
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class MyWorkflow(Workflow):
    @step
    async def my_step(self, ev: StartEvent) -> StopEvent:
        # do something here
        return StopEvent(result="Hello, world!")

w = MyWorkflow(timeout=10, verbose=False)
result = await w.run()

As you can see, we can now run the workflow by calling w.run().
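Note that await w.run() assumes you are already in an async context, such as a notebook. As a minimal sketch (the script structure below is illustrative, not part of the original example), you can run the same workflow from a plain Python script by wrapping the call in asyncio.run:

import asyncio

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step

class MyWorkflow(Workflow):
    @step
    async def my_step(self, ev: StartEvent) -> StopEvent:
        # do something here
        return StopEvent(result="Hello, world!")

async def main():
    w = MyWorkflow(timeout=10, verbose=False)
    # run() resolves to the value carried by the StopEvent
    result = await w.run()
    print(result)  # Hello, world!

if __name__ == "__main__":
    asyncio.run(main())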
To connect multiple steps, we create custom events that carry data between them. To do so, we add an Event that is passed between the steps and transfers the output of the first step to the second step.
from llama_index.core.workflow import Event

class ProcessingEvent(Event):
    intermediate_result: str

class MultiStepWorkflow(Workflow):
    @step
    async def step_one(self, ev: StartEvent) -> ProcessingEvent:
        # Process initial data
        return ProcessingEvent(intermediate_result="Step 1 complete")

    @step
    async def step_two(self, ev: ProcessingEvent) -> StopEvent:
        # Use the intermediate result
        final_result = f"Finished processing: {ev.intermediate_result}"
        return StopEvent(result=final_result)

The type hinting is important here, as it ensures that the workflow is executed correctly.
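Putting it together, a minimal usage sketch for the workflow above (the runner code is illustrative) would be:

w = MultiStepWorkflow(timeout=10, verbose=False)
result = await w.run()
print(result)  # Finished processing: Step 1 complete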
Type hinting is also the most powerful part of workflows, because it allows us to create branches, loops, and joins to facilitate more complex workflows.
Let’s show an example of a loop to illustrate the concept.
@step
async def step_one(self, ev: StartEvent | LoopEvent) -> FirstEvent | LoopEvent:
    if random.randint(0, 1) == 0:
        print("Bad thing happened")
        return LoopEvent(loop_output="Back to step one.")
    else:
        print("Good thing happened")
        return FirstEvent(first_output="First step complete.")
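To make the loop concrete, here is a minimal self-contained sketch of a looping workflow; the LoopEvent and FirstEvent definitions and the second step are illustrative additions around the step shown above:

import random

from llama_index.core.workflow import Event, StartEvent, StopEvent, Workflow, step

class LoopEvent(Event):
    loop_output: str

class FirstEvent(Event):
    first_output: str

class LoopWorkflow(Workflow):
    @step
    async def step_one(self, ev: StartEvent | LoopEvent) -> FirstEvent | LoopEvent:
        # Randomly loop back to this step, or move on to step_two
        if random.randint(0, 1) == 0:
            print("Bad thing happened")
            return LoopEvent(loop_output="Back to step one.")
        else:
            print("Good thing happened")
            return FirstEvent(first_output="First step complete.")

    @step
    async def step_two(self, ev: FirstEvent) -> StopEvent:
        # Only reached once step_one emits a FirstEvent
        return StopEvent(result=ev.first_output)

result = await LoopWorkflow(timeout=10, verbose=False).run()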
There is one last cool trick that we will cover in the course, which is the ability to add state to the workflow. This is useful when you want to keep track of the state of the workflow, so that every step has access to the same state.
from llama_index.core.workflow import Context, StartEvent, StopEvent

@step
async def query(self, ctx: Context, ev: StartEvent) -> StopEvent:
    # retrieve from context
    query = await ctx.get("query")

    # do something with context and event
    val = ...

    # store in context
    await ctx.set("key", val)

    return StopEvent(result=val)
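To illustrate how shared state flows between steps, here is a minimal sketch of a two-step workflow where the second step reads a value the first step stored in the Context; the class, step, and key names are illustrative:

from llama_index.core.workflow import Context, Event, StartEvent, StopEvent, Workflow, step

class SetupEvent(Event):
    pass

class StatefulWorkflow(Workflow):
    @step
    async def prepare(self, ctx: Context, ev: StartEvent) -> SetupEvent:
        # store the incoming query in the shared context
        await ctx.set("query", ev.query)
        return SetupEvent()

    @step
    async def answer(self, ctx: Context, ev: SetupEvent) -> StopEvent:
        # a later step can read the same context without receiving it in an event
        query = await ctx.get("query")
        return StopEvent(result=f"Handled: {query}")

result = await StatefulWorkflow().run(query="What is LlamaIndex?")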
Great! Now you know how to create a controllable workflow in LlamaIndex! There are some more complex nuances to workflows, which you can learn about in the LlamaIndex documentation.
However, there is another way to create workflows: instead of building them manually, we can use the AgentWorkflow class to create a multi-agent workflow.
The AgentWorkflow uses Workflow Agents to allow you to create a system of one or more agents that can collaborate and hand off tasks to each other based on their specialized capabilities.
This enables building complex agent systems where different agents handle different aspects of a task.
Instead of importing classes from llama_index.core.agent, we will import the agent classes from llama_index.core.agent.workflow.
One agent must be designated as the root agent in the AgentWorkflow constructor.
When a user message comes in, it is first routed to the root agent. Each agent can then handle the request directly using its tools, hand off to another agent better suited for the task, or return a response to the user.
Let’s see how to create a multi-agent workflow.
from llama_index.core.agent.workflow import AgentWorkflow, FunctionAgent, ReActAgent

# llm and query_engine_agent_tool as defined in the previous sections

def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b

# Define the agents
multiply_agent = FunctionAgent(
    name="multiply",
    description="Multiplies two integers",
    system_prompt="You are an agent that can multiply two integers.",
    tools=[multiply],
    llm=llm,
)

retriever_agent = ReActAgent(
    name="retriever",
    description="Answers questions over the indexed documents",
    system_prompt="Use your tool to query the index and answer questions.",
    tools=[query_engine_agent_tool],
    llm=llm,
)

# Create the workflow; root_agent must match the name of one of the agents
workflow = AgentWorkflow(
    agents=[multiply_agent, retriever_agent], root_agent="multiply"
)

# Run the system
response = await workflow.run(user_msg="Can you multiply 5 and 3?")

Before starting the workflow, we can provide an initial state dict that will be available to all agents. The state is stored in the state key of the workflow context. It will be injected into the state_prompt which augments each new user message.
workflow = AgentWorkflow(
    agents=[...],
    root_agent="root_agent",
    initial_state={"counter": 0},
    state_prompt="Current state: {state}. User message: {msg}",
)
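As a sketch of how that state can then be used, a tool that declares a Context parameter can read and update the shared state dict; this assumes a LlamaIndex version that injects the workflow context into such tools, and the counter logic is illustrative:

from llama_index.core.workflow import Context

async def add(ctx: Context, a: int, b: int) -> int:
    """Add two numbers, counting how many times the tool has been called."""
    # read the shared state dict, update it, and write it back
    state = await ctx.get("state")
    state["counter"] += 1
    await ctx.set("state", state)
    return a + b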
Congratulations! You have now mastered the basics of Agents in LlamaIndex! 🎉

Let’s continue with tackling LangGraph! 🚀