TLDR: Introduces Microsoft Agent Framework (Python/.NET) and focuses on workflow orchestration. Covers agent orchestration without workflows, the motivation for explicit workflows, building blocks of workflows (executors, edges), integration with Dev UI, and how MCP tools can be part of workflows. Demonstrates workflow design (conditional routing, state handling) and contrasts it with manual orchestration.
Preface #
Microsoft Agent Framework1 is an open-source SDK for building AI agents and multi-agent workflows. Today, it supports .NET and Python.
At a high level, it unifies ideas from Semantic Kernel2 and AutoGen3 into a single foundation for building agents going forward. The key takeaway is that this is Microsoft’s strategic direction for agent development.
Conceptually, its capabilities can be distilled into two primitives:
- AI Agents
- Workflows
This article focuses specifically on Workflows4.
Workflows define an explicit execution model where AI agents participate as components in a controlled, deterministic process. Unlike prompt chaining, workflows allow the execution path to be defined ahead of time, enabling validation, branching, retries, and integration with external systems.
Workflow: Building Blocks #
Workflows are graph based systems composed of processing units and routing logic.
- Executors perform discrete units of work. An executor may wrap an AI agent or custom logic. It consumes input messages and produces output messages.
- Edges control how execution flows between executors. They define routing rules and conditional branching.
- Workflows are directed graphs formed by executors and edges, defining the full execution lifecycle from start to termination.
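To make these pieces concrete, here is a minimal sketch of a two-executor workflow using the same primitives that appear later in this article. The executor names and logic are purely illustrative, and the unparameterized WorkflowContext on the terminal executor is an assumption; treat this as a sketch rather than canonical usage.

from agent_framework import Executor, WorkflowBuilder, WorkflowContext, handler

class UpperCaseExecutor(Executor):
    """Executor: consumes a string message and forwards a transformed one."""
    @handler
    async def handle(self, text: str, ctx: WorkflowContext[str]) -> None:
        await ctx.send_message(text.upper())

class PrintExecutor(Executor):
    """Terminal executor: consumes the final message."""
    @handler
    async def handle(self, text: str, ctx: WorkflowContext) -> None:
        print(text)

upper = UpperCaseExecutor(id="upper")
printer = PrintExecutor(id="printer")

# Directed graph: upper -> printer, starting at upper
workflow = (
    WorkflowBuilder()
    .set_start_executor(upper)
    .add_edge(upper, printer)
    .build()
)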
Agent Orchestration Without Workflows #
Before introducing workflow-based orchestration, it helps to understand how multi-agent systems are implemented without workflows. Let's take the common example below.
Multi-Agents #
- Writer generates content from user input
- Reviewer scores content quality and returns structured feedback
- Editor improves content when quality is insufficient
- Publisher formats the approved content
- Summarizer produces a final report
Logical Flow #
flowchart LR
A[User Input] --> B[Writer]
B --> C[Reviewer]
C -->|Score ≥ 80| D[Publisher]
C -->|Score < 80| E[Editor]
E --> D
D --> F[Summarizer]
D --> H[Publisher Report]
F --> G[Summarized Report]
Setup #
Nothing fancy. A simple on-premises setup using Ollama5 running qwen2.5:3b, along with VS Code or any IDE, would suffice.
> uv init maf_workflow
> cd .\maf_workflow\
> uv add agent-framework --pre
> uv add agent-framework-devui --pre
> .\.venv\Scripts\activate
Goal #
The goal is to generate an Azure Well Architected Framework Assessment6 document based on user input.
In this example, the user input is static synthetic data. In a real scenario, this can be extended by using an MCP tool (check my previous post https://www.pandaeatsbamboo.com/posts/mcpseries-1/#proof-of-concept) to pull data from Azure subscriptions and feed that context into the Writer agent before the workflow executes.
user_input = (
"Write a Professional Azure Well Architected Framework Assessment report using the following dummy data:\n\n"
"**Inventory:**\n"
"- 10 VMs: 5 Windows Server 2019, 5 Linux (Ubuntu 20.04)\n"
"- 3 Azure SQL Databases: 2 Standard tier, 1 Premium tier\n"
"- 2 Storage Accounts: 1 General-purpose v2, 1 Blob storage\n"
"- 1 Azure Kubernetes Service (AKS) cluster with 3 nodes\n"
"- 2 Virtual Networks with subnets\n\n"
"**Current Configurations:**\n"
"- VMs: Mixed availability zones, basic monitoring enabled\n"
"- Databases: Geo-redundancy disabled, basic backup retention\n"
"- Storage: Soft delete enabled, no encryption at rest configured\n"
"- Security: Microsoft Defender for Cloud enabled for VMs and databases, but only basic policies active\n\n"
"**Initial Assessment Findings:**\n"
"- Security: 65/100 - Missing advanced threat protection, no network security groups on some subnets\n"
"- Reliability: 70/100 - No availability sets configured, single region deployment\n"
"- Performance: 75/100 - No auto-scaling, basic load balancing\n"
"- Cost Optimization: 60/100 - Underutilized VMs, no reserved instances\n"
"- Operational Excellence: 80/100 - Basic monitoring, no automated alerts\n\n"
"Assess all five pillars in detail, provide specific recommendations with priorities, and suggest implementation steps."
)
Sequential Execution Without Workflows #
In this approach, orchestration logic is manually implemented in application code rather than modeled as a workflow. Agent execution order, branching, and state handling are explicitly controlled using code constructs such as function calls, conditionals, and local variables. Apart from this, we have two MCP tools:
- Microsoft MCP HTTP server7 is used by the Writer agent to ground content and retrieve up-to-date information, compensating for the offline LLM.
- A local MCP stdio server is used to convert generated Markdown reports into DOCX/PDF format as a post-processing step (see the sketch after the flow diagram below).
flowchart LR
A[User Input] --> B[Writer]
B --> T1[Tool: Microsoft MCP]
B --> C[Reviewer]
C -->|Score ≥ 80| D[Publisher]
C -->|Score < 80| E[Editor]
E --> D
D --> T2[Tool: MCP md to docx]
D --> F[Summarizer]
F --> T2
D --> H[Publisher Report]
F --> G[Summarized Report]
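For reference, the local stdio converter tool can be declared with MCPStdioTool. The launcher command and package name below are placeholders for whatever Markdown-to-DOCX MCP server you run locally:

md_converter_mcp = MCPStdioTool(
    name="md-to-docx",
    command="uvx",                        # placeholder launcher
    args=["your-md-to-docx-mcp-server"],  # hypothetical server package
)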
The code imports Agent Framework primitives, MCP tools, and Pydantic for structured outputs. These enable agent execution, external tool access, and type-safe evaluation.
I am using the OpenAIChatClient to call Ollama (only the endpoint URL changes), but you can use OllamaChatClient instead if you prefer (from agent_framework.ollama import OllamaChatClient).
import asyncio
import os

from agent_framework import AgentExecutorResponse, MCPStdioTool, MCPStreamableHTTPTool
from agent_framework.openai import OpenAIChatClient

## Instantiate the client
client = OpenAIChatClient(
    base_url=os.environ.get("OLLAMA_ENDPOINT", "http://localhost:11434/v1"),
    api_key="none",  # Ollama ignores the key, but the client requires one
    model_id=os.environ.get("OLLAMA_MODEL", "qwen2.5:3b"),
)
I do have a .env file exposing the two environment variables used above,
OLLAMA_ENDPOINT="http://localhost:11434/v1"
OLLAMA_MODEL="qwen2.5:3b"
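These can be loaded at startup with python-dotenv (an assumption; any loader works), after which the client picks them up via os.environ:

from dotenv import load_dotenv

load_dotenv()  # populates os.environ with OLLAMA_ENDPOINT and OLLAMA_MODEL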
Each agent is created with a single responsibility using client.create_agent. Instructions define behavior, and tools are attached where needed.
def create_writer():
    return client.create_agent(
        name="Writer",
        instructions="Create clear, accurate content",
        tools=[mslearn_mcp()],  # instantiate the MCP tool and attach it
    )
Notice the mslearn_mcp tool attached to the Writer agent, which allows it to fetch up-to-date documentation and code samples from the Microsoft-hosted MCP server.
def mslearn_mcp() -> MCPStreamableHTTPTool:
    """Search Microsoft Azure documentation, fetch complete articles, and search code samples."""
    logger.info("Creating Microsoft Learn MCP Tool")
    return MCPStreamableHTTPTool(
        name="Microsoft Learn MCP",
        url="https://learn.microsoft.com/api/mcp",
    )
The Reviewer, Editor, Publisher, and Summarizer are created in a similar manner. Content approval is enforced through explicit quality gate functions that parse the Reviewer’s structured ReviewResult and determine whether the content is approved or requires editing.
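ReviewResult is the structured output model the Reviewer returns. A sketch, with fields inferred from the Reviewer instructions shown later in this article:

from pydantic import BaseModel

class ReviewResult(BaseModel):
    """Structured reviewer verdict parsed from the Reviewer's JSON output."""
    score: int          # overall quality, 0-100
    feedback: str       # concise, actionable feedback
    clarity: int        # individual dimension scores, 0-100
    completeness: int
    accuracy: int
    structure: int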
Agents are executed sequentially. Output text from one agent becomes input to the next. The Writer uses MCP for grounding, and review scores control branching.
writer_agent = create_writer()
writer_response = await writer_agent.run(user_input)
reviewer_agent = create_reviewer()
reviewer_response = await reviewer_agent.run(writer_response.text)
from pydantic import ValidationError

try:
    review = ReviewResult.model_validate_json(reviewer_response.text)
    score = review.score
except ValidationError:
    score = 0  # an unparsable review fails the quality gate and triggers editing

content = writer_response.text

if score < 80:
    editor_agent = create_editor()
    editor_input = f"Original content: {content}\n\nReview feedback: {reviewer_response.text}"
    editor_response = await editor_agent.run(editor_input)
    content = editor_response.text

publisher_agent = create_publisher()
publisher_response = await publisher_agent.run(content)
This explicit quality gate ensures deterministic routing and prevents low-quality content from reaching publication.
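The remaining steps follow the same pattern. A sketch, assuming create_summarizer mirrors the other factory functions:

summarizer_agent = create_summarizer()
summarizer_response = await summarizer_agent.run(publisher_response.text)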
Here is a demo run. You can see the back-and-forth between the Reviewer, Editor, and Writer before the content is sent to the Publisher, and the MCP tool then converts the Markdown into DOCX/PDF.
Characteristics #
- Agents invoked sequentially using explicit await chains
- State passed through local variables
- Conditional routing embedded in application logic
- Manual input construction for each agent
- Manual JSON parsing from text responses
- No built-in observability or eventing
- Flow changes require code changes
- No message abstraction layer
Workflows Using Dev UI #
The Agent Framework Dev UI provides a local, visual interface to run, chat with, inspect, and debug agents and workflows without writing orchestration code. It is designed for rapid iteration, validation, and learning before moving to fully programmatic execution.
Defining the Workflow #
We already know how to create agents from the step above. Agents are registered once and composed into a directed graph using WorkflowBuilder. Execution order, branching, and convergence are declared explicitly.
workflow = (
WorkflowBuilder(
name="Content Review Workflow",
description="Multi-agent content creation with quality-based routing (Writer→Reviewer→Editor/Publisher)",
)
.register_agent(create_writer, name="Writer")
.register_agent(create_reviewer, name="Reviewer")
.register_agent(create_editor, name="Editor")
.register_agent(create_publisher, name="Publisher", output_response=True)
.register_agent(create_summarizer, name="Summarizer", output_response=True)
.set_start_executor("Writer")
Conditional routing is handled declaratively using edge conditions. Reviewer output is parsed as a typed model and drives execution flow.
.add_edge("Writer", "Reviewer")
# Branch 1: High quality (>= 80) goes directly to publisher
.add_edge("Reviewer", "Publisher", condition=is_approved)
# Branch 2: Low quality (< 80) goes to editor first, then publisher
.add_edge("Reviewer", "Editor", condition=needs_editing)
.add_edge("Editor", "Publisher")
# Both paths converge: Publisher → Summarizer
.add_edge("Publisher", "Summarizer")
.build()
)
Conditions are functions attached to workflow edges that inspect an executor’s output and influence routing decisions.
def needs_editing(message: Any) -> bool:
"""Check if content needs editing based on review score."""
if not isinstance(message, AgentExecutorResponse):
return False
try:
review: ReviewResult = message.agent_run_response.value
return review.score < 80
except Exception:
return False
def is_approved(message: Any) -> bool:
"""Check if content is approved (high quality)."""
if not isinstance(message, AgentExecutorResponse):
return True
try:
review: ReviewResult = message.agent_run_response.value
return review.score >= 80
except Exception:
return True
The workflow is exposed to Dev UI with a single call.
from agent_framework.devui import serve
serve(entities=[workflow], port=8090, auto_open=True)
Dev UI renders the workflow graph, shows live execution, displays agent inputs and outputs, and makes routing decisions transparent. This makes it ideal for showcasing and validating workflows before productionizing them.
From the top-left menu, you can browse the sample gallery to explore ready-made examples and quickly get started.
Other honourable mentions include OpenTelemetry support, OpenAI compatibility, proxy mode, and basic security features.
Note: Don't use async with context managers when creating agents with MCP tools for Dev UI; connections will close before execution.
Workflow with MCP #
This version replaces manual orchestration with a declarative workflow built using WorkflowBuilder and custom executors. Each executor wraps an agent and defines how messages are handled and forwarded.
Each agent is implemented as an Executor, making responsibilities explicit and reusable. Shared data, such as original and published content, is stored in a WorkflowState object instead of local variables.
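WorkflowState itself can be a plain container. A minimal sketch; original_content matches the usage in ReviewerExecutor below, while published_content is an assumption based on how the Publisher uses shared state later:

from dataclasses import dataclass

@dataclass
class WorkflowState:
    """Shared state visible to every executor in the workflow."""
    original_content: str = ""   # set by ReviewerExecutor, read by the Editor
    published_content: str = ""  # assumption: set by the Publisher for post-processing

workflow_state = WorkflowState()  # single shared instance used by the executors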
class ReviewerExecutor(Executor):
"""Custom executor for the Reviewer agent."""
agent: ChatAgent
def __init__(self, id: str = "Reviewer"):
self.agent = client.create_agent(
name="Reviewer",
instructions=(
"You are an expert content reviewer. "
"Evaluate the writer's content based on:\n"
"1. Clarity - Is it easy to understand?\n"
"2. Completeness - Does it fully address the topic?\n"
"3. Accuracy - Is the information correct?\n"
"4. Structure - Is it well-organized?\n\n"
"Return a JSON object with:\n"
"- score: overall quality (0-100)\n"
"- feedback: concise, actionable feedback\n"
"- clarity, completeness, accuracy, structure: individual scores (0-100)"
),
response_format=ReviewResult,
)
super().__init__(id=id)
@handler
async def handle(self, content: str, ctx: WorkflowContext[AgentExecutorResponse]) -> None:
"""Review content and forward structured result for conditional routing."""
# Store original content in shared state for Editor
workflow_state.original_content = content
messages = [ChatMessage(role="user", text=content)]
response = await self.agent.run(messages)
# Create AgentExecutorResponse for conditional routing
# The response.value contains the parsed ReviewResult
executor_response = AgentExecutorResponse(
executor_id=self.id,
agent_run_response=response
)
await ctx.send_message(executor_response)
I am maintaining global shared state across all executors, though it could also be stored in executor instances.
Quality gates are enforced using condition functions attached to workflow edges, just as we did in the Dev UI version. These inspect the structured reviewer output and decide the execution path.
.add_edge("Reviewer", "Publisher", condition=is_approved)
.add_edge("Reviewer", "Editor", condition=needs_editing)
The workflow emits structured events during execution. AgentRunEvent is used for observability, and WorkflowOutputEvent captures terminal outputs from the agents.
events = await workflow.run(user_input)
workflow_outputs = events.get_outputs()
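If you want event-level visibility rather than just the final outputs, the workflow can also be consumed as a stream. A sketch using the event types mentioned above; the attribute names follow the framework's samples and should be treated as assumptions:

from agent_framework import AgentRunEvent, WorkflowOutputEvent

async for event in workflow.run_stream(user_input):
    if isinstance(event, AgentRunEvent):
        print(f"Executor ran: {event.executor_id}")  # observability
    elif isinstance(event, WorkflowOutputEvent):
        print(event.data)                            # terminal output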
The Publisher stores formatted content in shared state, while the Summarizer yields the final report. Both are persisted as Markdown and converted to DOCX using an MCP-backed tool.
Not bad for a locally hosted SLM to generate this report.
This example is just one iteration of many possible workflow patterns. The official repository contains additional samples that explore more advanced and idiomatic uses of the Agent Framework, including variations on workflow design, custom executors, richer state handling, and integration patterns; see the workflow samples in the official repo8 and the YouTube Developer Reactor series9.
Closing Thoughts #
This walkthrough demonstrates how Agent Framework workflows shift agent systems from ad hoc orchestration to structured, deterministic execution. By modeling agents as executors and routing logic as conditions, workflows make behavior explicit, observable, and easier to evolve.
This is only one possible design. The official Agent Framework repository includes multiple workflow samples that explore alternative patterns and more advanced capabilities. As agent systems move beyond prototypes, workflows become essential for reliability, control, and maintainability.
- Microsoft Agent Framework is in public preview. Availability and timelines remain subject to official Microsoft announcements. https://learn.microsoft.com/en-us/agent-framework/overview/agent-framework-overview ↩︎
- https://learn.microsoft.com/en-us/semantic-kernel/overview/ ↩︎
- https://learn.microsoft.com/en-us/agent-framework/tutorials/workflows/simple-sequential-workflow?pivots=programming-language-python ↩︎
- https://learn.microsoft.com/en-us/azure/well-architected/ ↩︎
- https://github.com/microsoft/agent-framework/tree/main/python/samples/getting_started/workflows ↩︎
- https://github.com/Azure-Samples/python-ai-agent-frameworks-demos/tree/main ↩︎