Agent Orchestrator
Built with Python & Microsoft Agent Framework
A2A Protocol v0.3.0 Compliant Server
Built with Python and powered by the Microsoft Agent Framework, this orchestrator enables seamless agent-to-agent communication using the standardized A2A protocol. Implemented with FastAPI and the official A2A SDK, it provides dynamic workflow orchestration, multi-agent coordination, and comprehensive health monitoring for distributed agent systems.
Uses Agent Framework Workflows with a graph-based architecture for defining complex agent interactions. Workflows are composed of nodes (representing agents) and edges (representing connections and message flow between agents), enabling sophisticated multi-step reasoning and task delegation patterns.
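The node/edge model above can be sketched in plain Python. This is an illustrative graph representation only, not the actual Agent Framework Workflows API; the node and edge field names (`id`, `type`, `source`, `target`, `agent_ref`) mirror the configuration format shown later in this document.

```python
from collections import defaultdict

# Illustrative only: a minimal graph model of workflow nodes and edges,
# not the real Agent Framework Workflows API.
def build_adjacency(nodes, edges):
    """Return a mapping of node id -> list of downstream node ids."""
    ids = {n["id"] for n in nodes}
    adjacency = defaultdict(list)
    for e in edges:
        if e["source"] not in ids or e["target"] not in ids:
            raise ValueError(f"edge references unknown node: {e}")
        adjacency[e["source"]].append(e["target"])
    return dict(adjacency)

nodes = [
    {"id": "start", "type": "start"},
    {"id": "agent1", "type": "agent", "agent_ref": "research-agent"},
    {"id": "end", "type": "end"},
]
edges = [
    {"source": "start", "target": "agent1"},
    {"source": "agent1", "target": "end"},
]
print(build_adjacency(nodes, edges))
# {'start': ['agent1'], 'agent1': ['end']}
```

Message flow follows the edges: the orchestrator hands each node's output to its downstream nodes until an end node is reached.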
API Endpoints & Documentation
Explore the A2A protocol endpoints, agent cards, and workflow orchestration APIs.
Agent Orchestrator Dashboard
Live monitoring of all subsystems and services. Updates every 30 seconds.
Quick Start: Chat with Agent
Use this curl command to send a message to the orchestrator agent via the A2A protocol. The agent will return a task ID that you can use to check the status and results.
```shell
curl -X POST <orchestrator-url>/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": {"message": "Explain quantum computing in simple terms"}}'
```

Example response:

```json
{
  "task_id": "abc-123...",
  "task_url": "<orchestrator-url>/tasks/abc-123...",
  "status": {"state": "running"}
}
```
Next Step: Click or curl the task_url to check status and get results.
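Client-side polling can be sketched as follows. The field names (`task_url`, `status.state`) come from the sample response above; the `"pending"` state and the localhost URL (port 5015, the local-development port listed below) are assumptions for illustration.

```python
import json

# Sketch of client-side polling logic based on the sample /invoke response.
def next_action(invoke_response: str):
    """Return the task URL to keep polling, or None once the task is done."""
    body = json.loads(invoke_response)
    state = body["status"]["state"]
    if state in ("running", "pending"):  # non-terminal states: poll again
        return body["task_url"]
    return None  # terminal state: results are in the response body

response = '{"task_id": "abc-123", "task_url": "http://localhost:5015/tasks/abc-123", "status": {"state": "running"}}'
print(next_action(response))
# http://localhost:5015/tasks/abc-123
```

A real client would GET the returned URL on an interval until `next_action` yields `None`.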
Workflow Configuration
Configure the agent orchestrator to run multi-agent workflows.
Environment Variables
Configure these parameters to enable full orchestrator functionality:
| Parameter | Required | Default | Description |
|---|---|---|---|
| **Workflow Configuration** | | | |
| `WORKFLOW_NODES` | Required | `[]` | JSON array defining workflow nodes. Each node represents a step in the workflow (start, agent, end). |
| `WORKFLOW_EDGES` | Required | `[]` | JSON array defining connections between nodes. Controls workflow execution flow. |
| `WORKFLOW_AGENT_LIST` | Required | `[]` | JSON array of A2A agents. Defines available agents with their URLs and capabilities. |
| **Azure OpenAI Configuration** | | | |
| `LLM_ENDPOINT` | Required | `""` | Azure OpenAI endpoint URL (e.g., `https://your-resource.openai.azure.com/`) |
| `LLM_KEY` | Required | `""` | Azure OpenAI API key for authentication |
| `LLM_DEPLOYMENT_NAME` | Optional | `gpt-4o` | Azure OpenAI deployment name |
| `LLM_VERSION` | Optional | `2024-02-01` | Azure OpenAI API version |
| `LLM_MODEL` | Optional | `gpt-4` | Azure OpenAI model name |
| **Embedding Service Configuration** | | | |
| `EMBEDDING_BASE_URL` | Optional | `""` | Base URL for the embedding service |
| `EMBEDDING_API_KEY` | Optional | `""` | API key for embedding service authentication |
| `EMBEDDING_PROVIDER` | Optional | `azure` | Embedding provider (`azure`, `openai`, etc.) |
| **Database Configuration** | | | |
| `DB_HOST` or `PG_HOST` | Optional | `localhost` | PostgreSQL database host |
| `DB_PORT` or `PG_PORT` | Optional | `5432` | PostgreSQL database port |
| `DB_NAME` | Optional | `orchestrator` | PostgreSQL database name |
| `DB_USERNAME` or `PG_USER` | Optional | `postgres` | PostgreSQL database username |
| `DB_PASSWORD` or `PG_PASSWORD` | Optional | `""` | PostgreSQL database password |
| **Server Configuration** | | | |
| `PORT` | Optional | `8080` | Server port (`5015` for local development) |
| `HOST` | Optional | `0.0.0.0` | Server host address |
| `ENVIRONMENT` | Optional | `development` | Application environment (`development`, `production`) |
| `DEBUG` | Optional | `false` | Enable debug mode |
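The three required workflow variables can be loaded and cross-checked like this. This is an illustrative sketch, not the orchestrator's actual loader; the JSON shapes follow the parameter descriptions above, and the defaults match the table (`[]` for each).

```python
import json
import os

# Illustrative loader for the three required workflow variables.
def load_workflow_config(env=os.environ):
    nodes = json.loads(env.get("WORKFLOW_NODES", "[]"))
    edges = json.loads(env.get("WORKFLOW_EDGES", "[]"))
    agents = json.loads(env.get("WORKFLOW_AGENT_LIST", "[]"))
    # Cross-check: every edge must connect nodes that actually exist.
    node_ids = {n["id"] for n in nodes}
    for e in edges:
        if e["source"] not in node_ids or e["target"] not in node_ids:
            raise ValueError(f"edge references unknown node: {e}")
    return {"nodes": nodes, "edges": edges, "agents": agents}

os.environ["WORKFLOW_NODES"] = '[{"id":"start","type":"start"},{"id":"agent1","type":"agent","agent_ref":"research-agent"}]'
os.environ["WORKFLOW_EDGES"] = '[{"source":"start","target":"agent1"}]'
os.environ["WORKFLOW_AGENT_LIST"] = '[{"agentName":"research-agent","agentUrl":"http://localhost:5016"}]'
config = load_workflow_config()
print(len(config["nodes"]), len(config["edges"]), len(config["agents"]))
# 2 1 1
```

Validating edges against node ids at startup catches misconfigured workflows before any agent is invoked.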
Example Configuration
```shell
# Workflow Configuration
WORKFLOW_NODES='[{"id":"start","type":"start"},{"id":"agent1","type":"agent","agent_ref":"research-agent"}]'
WORKFLOW_EDGES='[{"source":"start","target":"agent1"}]'
WORKFLOW_AGENT_LIST='[{"agentName":"research-agent","agentUrl":"http://localhost:5016"}]'

# Azure OpenAI Configuration
LLM_ENDPOINT=https://your-openai.openai.azure.com/
LLM_KEY=your-api-key-here
LLM_DEPLOYMENT_NAME=gpt-4o
```
Sync Parameters to Fabric Developer Workflow Application
Automatically sync your local environment variables to the Fabric Developer Workflow Application using the `make params` command. This eliminates manual configuration in the Application Dashboard.
Prerequisites
1. Install the Fabric Developer Journey CLI from Null Platform:

   ```shell
   npm install -g @nullplatform/cli
   ```

2. Get your API key: go to your App Dashboard from the Fabric Workspace → click the user profile icon → click "Copy personal access token".

3. Export the API key:

   ```shell
   export NP_TOKEN=<your-token>
   ```

4. Get your App NRN (Null Resource Name):
   - Go to your Application Dashboard.
   - Click the NRN button to copy your application's NRN.
   - You'll use this NRN when running the `make params` command.

   ```shell
   # Format:  organization=ID:account=ID:namespace=ID:application=ID
   # Example: organization=1234567890:account=9876543210:namespace=1122334455:application=5544332211
   ```
Run the Command
Once prerequisites are met, run this command in your terminal:
```shell
# Use the NRN copied from your Application Dashboard

# Option 1: Using the NRN variable (recommended)
make params NRN=organization=1234567890:account=9876543210:namespace=1122334455:application=5544332211

# Option 2: Using the ARGS variable
make params ARGS="--nrn organization=1234567890:account=9876543210:namespace=1122334455:application=5544332211"

# Option 3: Call the script directly
./scripts/configure-parameters.sh --nrn organization=1234567890:account=9876543210:namespace=1122334455:application=5544332211
```
These parameters are automatically marked as secrets:
`LLM_KEY`, `EMBEDDING_API_KEY`, `DB_PASSWORD`, `PG_PASSWORD`, `SECURITY_SECRET_KEY`, `NP_TOKEN`
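The secret-marking rule can be expressed as a simple set membership check. This is a sketch of the behavior described above, not the actual sync script; the set contents are exactly the names listed in this section.

```python
# Parameters flagged as secrets during `make params` sync
# (names taken from the list above; the check itself is illustrative).
SECRET_PARAMS = {
    "LLM_KEY", "EMBEDDING_API_KEY", "DB_PASSWORD",
    "PG_PASSWORD", "SECURITY_SECRET_KEY", "NP_TOKEN",
}

def is_secret(name: str) -> bool:
    """Return True if the parameter should be stored as a secret."""
    return name in SECRET_PARAMS

print(is_secret("LLM_KEY"), is_secret("LLM_DEPLOYMENT_NAME"))
# True False
```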