Introduction

watsonx Orchestrate supports two types of agents: native agents and external agents.
  • Native agents are built and deployed directly within Orchestrate. They use IBM-managed models, tools, and runtime.
  • External agents are services hosted by partners or customers outside of watsonx Orchestrate. They expose an API that watsonx Orchestrate calls when needed.
An external agent is typically implemented as a REST service that conforms to a chat completions interface, which allows it to receive conversation history and return structured responses in real time. Because external agents cannot be chatted with directly, they are always used as collaborators of a native agent: the native agent acts as the host and delegates specific tasks to the external agent when appropriate. This guide walks through the end-to-end lifecycle of building, validating, packaging, and onboarding an external agent into watsonx Orchestrate by using the Agent Development Kit (ADK).

Steps to onboard an agent

1

Install the watsonx Orchestrate Agent Development Kit

Follow the instructions in Getting started with the ADK to install the Agent Development Kit and connect it to your current watsonx Orchestrate environment.
2

Create an agent

  • Build a service that exposes a /chat/completions endpoint. See an example of the watsonx Orchestrate chat completions API here.
  • The service accepts conversation history, always expects "stream": true in the payload, and returns a stream of SSE events.
  • Secure the endpoint with an API key or a bearer token. When an API key is used, the header is always x-api-key.
  • See examples: External Agent GitHub Samples.
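The streaming contract above can be sketched with a small helper that frames text deltas as SSE events. This follows the common OpenAI-style chat-completions chunk shape and is an illustration, not the authoritative watsonx Orchestrate schema:

```python
import json

def sse_events(deltas, model="external-agent"):
    """Frame text deltas as server-sent events, ending with a [DONE] sentinel.

    The chunk fields below follow the widely used chat-completions
    chunk format; adjust them to match the schema your host expects.
    """
    for i, text in enumerate(deltas):
        chunk = {
            "id": f"chunk-{i}",
            "object": "chat.completion.chunk",
            "model": model,
            "choices": [{"index": 0, "delta": {"role": "assistant", "content": text}}],
        }
        # Each SSE event is a "data:" line followed by a blank line.
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"

for event in sse_events(["Hello", ", world"]):
    print(event, end="")
```

In a real service, these events would be written to the HTTP response as they are produced, rather than printed.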
By following the Getting started with the ADK tutorial, you are building a native agent. To learn more about native agent capabilities, see Authoring agents.
3

Define an agent configuration file (external agent only)

Use this JSON specification as your configuration file. It declares the external agent and defines its metadata and authentication details.
partner_research_agent.json
{
  "spec_version": "v1",
  "kind": "external",
  "name": "news_agent",
  "title": "News Agent",
  "nickname": "news_agent",
  "provider": "external_chat",
  "description": "An agent built in langchain which searches the news.\n",
  "tags": [
    "test"
  ],
  "api_url": "https://someurl.com",
  "auth_scheme": "BEARER_TOKEN",
  "auth_config": {
    "token": "123"
  },
  "chat_params": {
    "stream": true
  },
  "config": {
    "hidden": false,
    "enable_cot": true
  }
}
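Given a configuration like the one above, the declared endpoint can be smoke-tested before import. A minimal sketch that assembles the streaming request; the URL and token here are the placeholder values from the example file, so substitute your real endpoint and credential:

```python
import json

def build_request(api_url, token, user_message):
    """Assemble a streaming chat-completions request for an external agent."""
    headers = {
        "Authorization": f"Bearer {token}",  # matches auth_scheme BEARER_TOKEN
        "Content-Type": "application/json",
    }
    payload = {
        "stream": True,  # external agents are always called with streaming on
        "messages": [{"role": "user", "content": user_message}],
    }
    return f"{api_url}/chat/completions", headers, json.dumps(payload)

url, headers, body = build_request("https://someurl.com", "123", "Any AI news today?")
print(url)  # https://someurl.com/chat/completions
```

Sending this with any HTTP client should return a stream of SSE events ending in `data: [DONE]`.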
4

Import the agent into Orchestrate (for both native and external agents)

Run the orchestrate import command to bring your agent into your draft environment:
orchestrate agents import -f partner_research_agent.json
For more information about importing agents, see Importing/Deploying agents.
5

Build tools for your native agent (for native agents only)

Native agents can call tools to perform tasks. You can define tools as Python or YAML (OpenAPI) files, such as the following one:
from typing import List

import requests
from pydantic import BaseModel, Field
from enum import Enum

from ibm_watsonx_orchestrate.agent_builder.tools import tool, ToolPermission


class ContactInformation(BaseModel):
    phone: str
    email: str


class HealthcareSpeciality(str, Enum):
    GENERAL_MEDICINE = 'General Medicine'
    CARDIOLOGY = 'Cardiology'
    PEDIATRICS = 'Pediatrics'
    ORTHOPEDICS = 'Orthopedics'
    ENT = 'Ear, Nose and Throat'
    MULTI_SPECIALTY = 'Multi-specialty'


class HealthcareProvider(BaseModel):
    provider_id: str = Field(None, description="The unique identifier of the provider")
    name: str = Field(None, description="The provider's name")
    provider_type: str = Field(None, description="Type of provider (e.g., Hospital, Clinic, Individual Practitioner)")
    specialty: HealthcareSpeciality = Field(None, description="Medical speciality, if applicable")
    address: str = Field(None, description="The address of the provider")
    contact: ContactInformation = Field(None, description="The contact information of the provider")


@tool
def search_healthcare_providers(
        location: str,
        specialty: HealthcareSpeciality = HealthcareSpeciality.GENERAL_MEDICINE
) -> List[HealthcareProvider]:
    """
    Retrieve a list of the nearest healthcare providers based on location and optional specialty. Infer the
    specialty from the user's request.

    Args:
        location: Geographic location to search providers in (city, state, zip code, etc.)
        specialty: (Optional) Medical specialty to filter providers by (Must be one of: "ENT", "General Medicine", "Cardiology", "Pediatrics", "Orthopedics", "Multi-specialty")

    Returns:
        A list of healthcare providers near a particular location for a given speciality
    """
    resp = requests.get(
        'https://find-provider.1sqnxi8zv3dh.us-east.codeengine.appdomain.cloud',
        params={
            'location': location,
            'speciality': specialty
        }
    )
    resp.raise_for_status()
    return resp.json()['providers']
To learn more about tools, see Overview of Tools.
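Before importing a Python tool, its request-and-parse logic can be exercised offline. The sketch below copies a simplified version of the HTTP logic from search_healthcare_providers and injects a stubbed GET function; the response shape (a JSON body with a "providers" list) is an assumption based on what the tool's parsing code expects:

```python
class FakeResponse:
    """Stand-in for requests.Response with the two methods the tool uses."""
    def __init__(self, payload):
        self._payload = payload
    def raise_for_status(self):
        pass
    def json(self):
        return self._payload

def fetch_providers(get, location, specialty="General Medicine"):
    """Simplified copy of the HTTP logic in search_healthcare_providers,
    with the GET function injected so it can be tested without a network."""
    resp = get(
        "https://find-provider.example.com",  # placeholder endpoint
        params={"location": location, "speciality": specialty},
    )
    resp.raise_for_status()
    return resp.json()["providers"]

# Stub that returns a canned payload regardless of the query.
stub = lambda url, params: FakeResponse(
    {"providers": [{"name": "City Clinic", "specialty": "Cardiology"}]}
)
print(fetch_providers(stub, "Boston", "Cardiology"))
```

In production code, `get` would simply be `requests.get`, as in the tool above.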
6

Create a native agent and add a collaborator (for external agents only)

External agents can’t be chatted with directly — they must be attached to a native agent as collaborators. Partners can create a native agent by using the ADK CLI:
orchestrate agents create \
  --name my_native_agent \
  --kind native \
  --description "Native agent that routes research queries to an external collaborator." \
  --llm watsonx/ibm/granite-3-8b-instruct \
  --style default \
  --collaborators partner_research_agent \
  --output "my_native_agent.yaml"
This command scaffolds a native agent definition file named my_native_agent.yaml. You can then edit this file to add more collaborators or tools as needed.
my_native_agent.yaml
spec_version: v1
kind: native
name: my_native_agent
llm: watsonx/ibm/granite-3-8b-instruct
style: default
description: Native agent that routes research queries to an external collaborator.
instructions: When the user asks about research, call the collaborator partner_research_agent.
collaborators:
  - partner_research_agent
tools: []
Finally, import the native agent into your environment with ADK:
orchestrate agents import -f my_native_agent.yaml
For more information about external agents and how to connect them with native agents, see Connect to external agents.
7

Open a chat with your native agent

  1. Log in to your watsonx Orchestrate instance and open the Agent Builder.
  2. In the Build agents and tools page, select your agent.
  3. In this page, you can customize your agent’s behavior. Use the test chat to test your agent and see if it calls your external agent (if available).
Alternatively, if you have installed the watsonx Orchestrate Developer Edition (a local, development-focused version of watsonx Orchestrate), you can use the ADK command line to start a local chat environment to test your agent:
orchestrate chat start
Note: To install watsonx Orchestrate Developer Edition, you need a valid license. You can obtain a license by:
  • Purchasing a license for watsonx Orchestrate SaaS (on AWS or IBM Cloud)
  • Contacting the IBM sales department to purchase the Developer Edition exclusively
8

Validate with the ADK

Create a TSV test file with prompts and expected outputs.
You are Jane Doe in the US. What holiday is on 06/13/2025?	National Sewing Machine Day
Run validation with ADK:
orchestrate evaluations validate-external \
  --tsv ./evaluations/test.tsv \
  --external-agent-config ./partner_research_agent.json \
  --credential "$EXTERNAL_AGENT_API_KEY"
For more information, see Validating your agent.
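The TSV file can also be generated programmatically: each row is a prompt and its expected output, separated by a tab. A minimal sketch that mirrors the example row above:

```python
import csv
import io

# Each row: (prompt, expected output), written tab-separated.
cases = [
    ("You are Jane Doe in the US. What holiday is on 06/13/2025?",
     "National Sewing Machine Day"),
]

buf = io.StringIO()
csv.writer(buf, delimiter="\t").writerows(cases)
tsv_text = buf.getvalue()
print(tsv_text)
```

Write `tsv_text` to ./evaluations/test.tsv before running the validation command.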
9

Scaffold your offering folder

Run the ADK command to create a packaging workspace for your offering:
orchestrate partners offering create \
  --offering my_offering \
  --publisher partner_company \
  --type external \
  --agent-name partner_research_agent
This command:
  1. Creates a new folder named after your offering (e.g., my_offering/).
  2. Prepopulates a standard directory layout that the packager understands.
  3. Seeds placeholder files you’ll edit or replace (especially the agent definition).
  4. Binds metadata from flags (--offering, --publisher, --type, --agent-name) into the generated files so packaging can validate the bundle.
How to use the folder
my_offering/
├── agents/            # Put your agent definition(s) here
│   └── partner_research_agent.json   # Your external agent config (you will replace or edit)
├── connections/       # (Optional) Any connection definitions your agent needs
│   └── my_connections.json           # e.g., API keys or endpoints via connection objects
├── offerings/         # Top-level offering metadata used by the packager
│   └── my_offering.json              # Name, publisher, type=external, versioning
├── tools/             # (Optional) Custom tools that belong to the offering
│   └── ...                               
└── evaluations/       # Your validation artifacts go here (you add these in Step 8)
    └── partner_research_agent_eval.zip
  • agents/: Replace or confirm the generated file with your agent config JSON (the one you authored earlier). This is where you ensure required metadata like publisher, tags, language_support, icon, and category: "agent" are present before packaging.
  • connections/ (optional): If your agent references credentials or endpoints via connections, define them here so packaging can validate consistency.
  • offerings/: The packager reads this to understand your offering’s identity and type (external vs native). You rarely hand-edit this unless you’re changing the display metadata/version.
  • tools/ (optional): If your offering ships tools (e.g., helper endpoints), place their definitions here.
  • evaluations/: Add the evaluation outputs from Step 8 so the final ZIP contains both the agent and its validation evidence.
Once you’ve verified or edited the scaffolded files and dropped in your evaluation artifacts, you’ll run Step 11 (package) to produce my_offering-1.0.zip. For more information, see Packaging your offering.
10

Place validation results

After running your evaluations with the ADK (Step 8), you’ll get an evaluation results archive — a single ZIP file that contains:
  • test.tsv → your original prompt/expected-output file.
  • validation_results.json → detailed pass/fail logs at the test case level.
  • summary_metrics.csv → aggregate metrics (accuracy, precision, recall, etc.).
These artifacts are not optional — they prove that your external agent behaves correctly and consistently. IBM requires them for Agent Connect onboarding so the catalog team can validate your agent without re-running your tests. To include them, copy the entire evaluation results ZIP into the scaffolded evaluations/ folder created in Step 9.
my_offering/
└── evaluations/
    └── partner_research_agent_eval.zip
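Before submitting, it can be worth checking that the evaluation archive actually contains the three expected artifacts. A sketch using only the standard library; the in-memory ZIP here stands in for your real partner_research_agent_eval.zip:

```python
import io
import zipfile

EXPECTED = {"test.tsv", "validation_results.json", "summary_metrics.csv"}

def missing_artifacts(zip_bytes):
    """Return the expected evaluation files absent from the archive."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        # Compare on base names so nested folders inside the ZIP still match.
        names = {name.rsplit("/", 1)[-1] for name in zf.namelist()}
    return sorted(EXPECTED - names)

# Build a stand-in archive for demonstration; in practice read the real
# file's bytes from evaluations/partner_research_agent_eval.zip instead.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name in EXPECTED:
        zf.writestr(name, "placeholder")

print(missing_artifacts(buf.getvalue()))  # []
```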
11

Package your offering (final deliverable)

Now that your scaffold is complete (agents, connections, offering metadata, tools, evaluations), you’re ready to package everything into a single distributable ZIP. Before running this step, double-check that your agent configuration file (JSON or YAML) has these required fields:
{
  "tags": [], // Catalog category tags (e.g., Sales, Productivity, Research, IT, Procurement, HR). Use catalog categories, not free-form keywords.
  "publisher": "Your Company Name", // Your company/organization name
  "language_support": ["English"], // Languages your agent supports
  "icon": "svg-icon", // Provide an icon as an inline SVG string
  "category": "agent", // Do not edit
  "llm": "model_name", // Model name actually used by the agent (e.g., gpt-4o, llama-3-70b-instruct)

  "related_links": [ // Populate <linkN> with the URL to: Support, Demo (if applicable), Documentation, Training (if applicable), Terms and Conditions. If not applicable, leave as [].
    { "key": "support", "value": "<link1>", "type": "hyperlink" },
    { "key": "demo", "value": "<link2>", "type": "embeded" },
    { "key": "documentation", "value": "<link3>", "type": "hyperlink" },
    { "key": "training", "value": "<link4>", "type": "embeded" },
    { "key": "terms_and_conditions", "value": "<link5>", "type": "hyperlink" }
  ],

  "style": "default", // Do not edit
  "delete_by": null, // Do not edit
  "agent_role": "collaborator", // Do not edit

  "channels": [ // Specify which additional channels are supported by your agent. If none of the following, please leave empty: [].
    "Teams",
    "WhatsAppTwilio",
    "FacebookMessenger",
    "GenesysConnector"
  ]
}
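A quick pre-flight check for these fields can save a failed packaging run. A sketch based on the annotated example above; treat the field list as illustrative, not the authoritative schema:

```python
import json

# Required keys from the annotated example above (illustrative list).
REQUIRED_FIELDS = [
    "tags", "publisher", "language_support", "icon",
    "category", "llm", "related_links", "style",
    "delete_by", "agent_role", "channels",
]

def missing_required(config):
    """Return required keys that are absent from the agent config."""
    return [key for key in REQUIRED_FIELDS if key not in config]

config = json.loads("""
{
  "tags": ["Research"],
  "publisher": "Your Company Name",
  "language_support": ["English"],
  "icon": "svg-icon",
  "category": "agent",
  "llm": "model_name",
  "related_links": [],
  "style": "default",
  "delete_by": null,
  "agent_role": "collaborator",
  "channels": []
}
""")
print(missing_required(config))  # []
```

Note that this only checks key presence; fields like publisher and icon must also be non-blank to pass onboarding review.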
If these fields are missing or blank, the package command fails and your agent won’t pass onboarding review. Run the ADK packaging command:
orchestrate partners offering package \
  --offering my_offering \
  --folder .
This command:
  • Validates folder structure — checks that required files are present (at least one agent, offering metadata, evaluations).
  • Bundles artifacts — zips up the folder into a portable archive.
  • Outputs a versioned package (e.g., my_offering-1.0.zip) which contains:
    • agents/ (your agent JSON definitions, with publisher, tags, icon, etc. filled in)
    • offerings/ (top-level metadata about the bundle)
    • connections/ (if defined)
    • tools/ (if defined)
    • evaluations/ (your validation results)
The ZIP file is your single source of truth for onboarding — it’s what you’ll upload to the IBM Partner Center for catalog listing.
12

Submit your complete bundle (onboarding via the IBM Partner Center application)

Once you have your packaged archive (my_offering-1.0.zip), you’re ready to submit it for onboarding.
  1. Log in to the IBM watsonx Orchestrate IBM Partner Center.
  2. Go to the My AI products > Agent page.
  3. Select the type of agent that applies to your case.
  4. Under Add agent package, select your agent package, and upload it to the application.
  5. Click Publish to submit your package.

What IBM checks during onboarding

When you submit your bundle, the IBM Partner Center and IBM onboarding team validate:
  • Offering metadata: publisher, tags, category, and icon are properly set.
  • Agent definition: API details, authentication scheme, and structure are correct.
  • Evaluation artifacts: validation results and metrics are included in evaluations/ and show that the agent is functional.
  • Folder structure: matches the required scaffold (agents, offerings, evaluations, etc.).
If something is missing (like publisher in your agent config or evaluation results), the onboarding team will ask you to repackage and resubmit.

What happens after approval

  • Catalog publishing: Once your submission is reviewed and approved, your agent enters the publishing queue for the watsonx Orchestrate Catalog.
  • Discovery & provisioning: End-users will then be able to find your agent in the catalog, provision it into their own environment, and use it directly in Orchestrate flows.
  • Ongoing updates: To ship new features or fixes, simply increment the version number in your agent definition, re-run the packaging process, and resubmit through the IBM Partner Center. The same validation and approval process applies.