Tools are a powerful feature that enables AI agents to invoke external APIs and services, extending their capabilities. This guide outlines how to implement and use tools in agents that follow the Agent Connect Framework (ACF) specification.

Tools allow agents to:

  • Invoke External Services: Access APIs, databases, and other external services.
  • Perform Specialized Tasks: Execute tasks requiring specific capabilities.
  • Collaborate: Share tool results to work together with other agents.

Within the ACF, both tool calls and tool responses can be shared among agents. This form of observability allows one agent to leverage work performed by another. For instance, a CRM agent might retrieve account information and return the details in a tool response; an Email agent can then use that response to find critical details, such as the email address for an account.

By sharing tools and their responses, agents can collaborate on sub-tasks while exchanging the contextual information essential to the success of the overall task.
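
For example, an Email agent's /v1/chat request might arrive with a message history that already contains the CRM agent's tool call and its result. The following is a minimal sketch, assuming a hypothetical get_account tool and illustrative account data:

const sharedMessages = [
  { role: "user", content: "Email the Acme account about their renewal." },
  {
    // The CRM agent's tool call, visible to downstream agents
    role: "assistant",
    tool_calls: [{
      id: "tool_call_123",
      type: "function",
      function: { name: "get_account", arguments: "{\"name\": \"Acme\"}" }
    }]
  },
  {
    // The tool response: the Email agent can read the address from here
    role: "tool",
    tool_call_id: "tool_call_123",
    name: "get_account",
    content: "{\"account\": \"Acme\", \"email\": \"billing@acme.example\"}"
  }
];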

Creating your agent

Before you create your tools, you must first create an agent that will use them. The ACF requires your agent to implement at least two endpoints:

  • POST /v1/chat: The Chat API enables agents to communicate and collaborate with each other using a standard chat completions style protocol with server-sent events (SSE) for streaming responses. This API supports both stateful and stateless agents.
  • GET /v1/agents: The Agent Discovery endpoint enables the identification of available agents and their capabilities within the ACF, eliminating the need for hardcoded knowledge of their existence.

You can see examples of implemented agents on the Examples page.
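
As a starting point, the following is a minimal sketch of the discovery endpoint using Express. The agent fields shown (name, description) are illustrative; consult the ACF specification for the exact response schema:

import express from 'express';

const app = express();
app.use(express.json());

// Minimal discovery endpoint: advertise this agent and its capabilities
app.get('/v1/agents', (req, res) => {
  res.json({
    agents: [
      {
        name: "weather-agent",
        description: "Answers questions about current weather conditions"
      }
    ]
  });
});

app.listen(3000);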

Alternatively, for tighter integration with watsonx Orchestrate, you can use the watsonx Orchestrate Agent Development Kit (ADK) to develop your agents and tools. To learn how to create your agent by using the ADK, see Creating agents.

Defining your tools

The ACF assumes tools are defined using a JSON schema that specifies the tool’s name, description, and parameters, following the OpenAI specification for function calling:

{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get the current weather for a location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "The city and state or country (e.g., 'San Francisco, CA')"
        },
        "unit": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"],
          "description": "The unit of temperature"
        }
      },
      "required": ["location"]
    }
  }
}

Key components

  • name: A unique identifier for the tool.
  • description: A human-readable description of what the tool does.
  • parameters: A JSON Schema object that defines the tool’s input parameters.
    • properties: The parameters the tool accepts.
    • required: Which parameters are required.

Some LLMs and frameworks use different formats for tool definitions, tool calls, and tool responses. The ACF requires at least these key components to be present in tool definitions.
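
For example, if your framework produces Anthropic-style tool definitions (which carry the JSON schema under input_schema), a small adapter can normalize them into the shape shown above. The toACFTool helper below is an illustrative sketch, not part of the ACF:

// Sketch: convert an Anthropic-style tool definition into the
// OpenAI-style shape used throughout this guide
function toACFTool(anthropicTool) {
  return {
    type: "function",
    function: {
      name: anthropicTool.name,
      description: anthropicTool.description,
      parameters: anthropicTool.input_schema // already a JSON Schema object
    }
  };
}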

You can also use the ADK to create tools that are compatible with watsonx Orchestrate agents. For more information, see Creating Tools.

How to implement tools in your agents

Agent frameworks usually provide their own mechanisms for implementing tools.

The watsonx Orchestrate ADK uses Python decorators to implement Python-based tools.

Regardless of the framework that you use, you must handle tool calls in your /v1/chat endpoint.

The following example shows how you can implement a tool from scratch, and how to modify your /v1/chat endpoint to run your tools.

Step 1

First, define the tools that your agent will use. You can use a JSON file, an OpenAPI specification, or even a JSON-like structure in your programming language of choice:

const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get the current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state or country (e.g., 'San Francisco, CA')"
          },
          unit: {
            type: "string",
            enum: ["celsius", "fahrenheit"],
            description: "The unit of temperature"
          }
        },
        required: ["location"]
      }
    }
  },
  {
    type: "function",
    function: {
      name: "search_database",
      description: "Search a database for information",
      parameters: {
        type: "object",
        properties: {
          query: {
            type: "string",
            description: "The search query"
          },
          limit: {
            type: "integer",
            description: "Maximum number of results to return"
          }
        },
        required: ["query"]
      }
    }
  }
];
Step 2

Next, implement the actual functions that will be called when a tool is invoked:

const toolFunctions = {
  get_weather: async (args) => {
    const { location, unit = "celsius" } = args;
    
    // In a real implementation, this would call a weather API
    // For this example, we'll return mock data
    const weatherData = {
      location,
      temperature: unit === "celsius" ? 22 : 72,
      condition: "sunny",
      humidity: 45,
      unit
    };
    
    return JSON.stringify(weatherData);
  },
  
  search_database: async (args) => {
    const { query, limit = 10 } = args;
    
    // In a real implementation, this would query a database
    // For this example, we'll return mock data
    const results = [
      { id: 1, title: "Result 1", content: "Content for result 1" },
      { id: 2, title: "Result 2", content: "Content for result 2" },
      { id: 3, title: "Result 3", content: "Content for result 3" }
    ].slice(0, limit);
    
    return JSON.stringify({
      query,
      results,
      total: results.length
    });
  }
};
Step 3

Modify your chat completion endpoint to handle tool calls:

app.post('/v1/chat', async (req, res) => {
  try {
    const { messages, stream = false } = req.body;
    const threadId = req.headers['x-thread-id'] || 'default';
    
    if (stream) {
      // Set up SSE streaming
      res.setHeader('Content-Type', 'text/event-stream');
      res.setHeader('Cache-Control', 'no-cache');
      res.setHeader('Connection', 'keep-alive');
      
      // Process messages and stream response with tool calls
      await streamResponseWithTools(res, messages, threadId, tools, toolFunctions);
    } else {
      // Process messages and return complete response
      const response = await processMessagesWithTools(messages, threadId, tools, toolFunctions);
      res.json(response);
    }
  } catch (error) {
    console.error('Error processing chat request:', error);
    res.status(500).json({ error: 'Internal server error' });
  }
});
Step 4

Implement the function to process messages and handle tool calls:

// Import the OpenAI SDK
import OpenAI from 'openai';

// Initialize the OpenAI client
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // Get API key from environment variable
});

async function processMessagesWithTools(messages, threadId, tools, toolFunctions) {
  try {
    // First, call the OpenAI API with the messages and tools
    const llmResponse = await openai.chat.completions.create({
      model: "gpt-4.1-mini", // Or your preferred model that supports tool calling
      messages: messages,
      tools: tools,
      tool_choice: "auto" // Let the LLM decide when to use tools
    });
    
    // Extract the assistant's response
    const assistantResponse = llmResponse.choices[0].message;
    
    // Check if the LLM decided to call any tools
    if (assistantResponse.tool_calls && assistantResponse.tool_calls.length > 0) {
      console.log(`LLM decided to call ${assistantResponse.tool_calls.length} tools`);
      
      // Create a new messages array with the assistant's response
      const updatedMessages = [...messages, assistantResponse];
      
      // Execute each tool call and add the results to the messages
      for (const toolCall of assistantResponse.tool_calls) {
        const { id, function: { name, arguments: argsString } } = toolCall;
        
        try {
          // Parse the arguments
          const args = JSON.parse(argsString);
          
          // Execute the tool function
          const result = await toolFunctions[name](args);
          
          // Add the tool result to the messages
          updatedMessages.push({
            role: "tool",
            tool_call_id: id,
            name: name,
            content: result
          });
        } catch (error) {
          console.error(`Error executing tool ${name}:`, error);
          
          // Add an error message as the tool result
          updatedMessages.push({
            role: "tool",
            tool_call_id: id,
            name: name,
            content: JSON.stringify({ error: error.message })
          });
        }
      }
      
      // Call the LLM again with the updated messages including tool results
      const finalResponse = await openai.chat.completions.create({
        model: "gpt-4-turbo", // Or your preferred model
        messages: updatedMessages,
      });
      
      // Return the final response
      return {
        id: finalResponse.id,
        object: 'chat.completion',
        created: finalResponse.created,
        model: finalResponse.model,
        choices: finalResponse.choices,
        usage: finalResponse.usage
      };
    } else {
      // The LLM didn't call any tools, just return its response
      return llmResponse;
    }
  } catch (error) {
    console.error("Error in processMessagesWithTools:", error);
    throw error;
  }
}
Step 5

Implement the function to stream responses with tool calls:

async function streamResponseWithTools(res, messages, threadId, tools, toolFunctions) {
  try {
    // First, send a thinking step
    const thinkingStep = {
      id: `step-${Math.random().toString(36).substring(2, 15)}`,
      object: 'thread.run.step.delta',
      thread_id: threadId,
      model: 'agent-model',
      created: Math.floor(Date.now() / 1000),
      choices: [
        {
          delta: {
            role: 'assistant',
            step_details: {
              type: 'thinking',
              content: 'Analyzing the request and determining if tools are needed...'
            }
          }
        }
      ]
    };
    
    res.write(`event: thread.run.step.delta\n`);
    res.write(`data: ${JSON.stringify(thinkingStep)}\n\n`);
    
    // Call the OpenAI API with streaming enabled
    const stream = await openai.chat.completions.create({
      model: "gpt-4.1-mini", // Using the same model as in processMessagesWithTools
      messages: messages,
      tools: tools,
      tool_choice: "auto", // Let the LLM decide when to use tools
      stream: true
    });
    
    let assistantMessage = { role: "assistant", content: "", tool_calls: [] };
    let currentToolCall = null;
    
    // Process the stream
    for await (const chunk of stream) {
      // Guard against chunks without a delta (e.g., final usage chunks)
      const delta = chunk.choices[0]?.delta ?? {};
      
      // If there's content in the delta, add it to the assistant message
      if (delta.content) {
        assistantMessage.content += delta.content;
        
        // Stream the content chunk
        const messageDelta = {
          id: `msg-${Math.random().toString(36).substring(2, 15)}`,
          object: 'thread.message.delta',
          thread_id: threadId,
          model: chunk.model,
          created: Math.floor(Date.now() / 1000),
          choices: [
            {
              delta: {
                role: 'assistant',
                content: delta.content
              }
            }
          ]
        };
        
        res.write(`event: thread.message.delta\n`);
        res.write(`data: ${JSON.stringify(messageDelta)}\n\n`);
      }
      
      // If there's a tool call in the delta, process it
      if (delta.tool_calls && delta.tool_calls.length > 0) {
        const toolCallDelta = delta.tool_calls[0];
        
        // A new id marks the start of a new tool call (the id appears
        // only on the first delta of each call), so initialize it
        if (toolCallDelta.id) {
          currentToolCall = {
            id: toolCallDelta.id,
            type: "function",
            function: {
              name: "",
              arguments: ""
            }
          };
          assistantMessage.tool_calls.push(currentToolCall);
        }
        
        // Update the current tool call with the delta
        if (currentToolCall) {
          if (toolCallDelta.function?.name) {
            currentToolCall.function.name = toolCallDelta.function.name;
          }
          
          if (toolCallDelta.function?.arguments) {
            currentToolCall.function.arguments += toolCallDelta.function.arguments;
          }
        }
      }
      
      // If this is the end of the completion, check for tool calls
      if (chunk.choices[0]?.finish_reason === "tool_calls") {
        // Stream a tool call step for each tool call
        for (const toolCall of assistantMessage.tool_calls) {
          const toolCallStep = {
            id: `step-${Math.random().toString(36).substring(2, 15)}`,
            object: 'thread.run.step.delta',
            thread_id: threadId,
            model: chunk.model,
            created: Math.floor(Date.now() / 1000),
            choices: [
              {
                delta: {
                  role: 'assistant',
                  step_details: {
                    type: 'tool_calls',
                    tool_calls: [
                      {
                        id: toolCall.id,
                        name: toolCall.function.name,
                        args: JSON.parse(toolCall.function.arguments)
                      }
                    ]
                  }
                }
              }
            ]
          };
          
          res.write(`event: thread.run.step.delta\n`);
          res.write(`data: ${JSON.stringify(toolCallStep)}\n\n`);
        }
        
        // Execute each tool call and stream the results
        const updatedMessages = [...messages, assistantMessage];
        
        for (const toolCall of assistantMessage.tool_calls) {
          try {
            const { id, function: { name, arguments: argsString } } = toolCall;
            const args = JSON.parse(argsString);
            
            // Execute the tool function
            const result = await toolFunctions[name](args);
            
            // Stream the tool response
            const toolResponseStep = {
              id: `step-${Math.random().toString(36).substring(2, 15)}`,
              object: 'thread.run.step.delta',
              thread_id: threadId,
              model: chunk.model,
              created: Math.floor(Date.now() / 1000),
              choices: [
                {
                  delta: {
                    role: 'assistant',
                    step_details: {
                      type: 'tool_response',
                      content: result,
                      name: name,
                      tool_call_id: id
                    }
                  }
                }
              ]
            };
            
            res.write(`event: thread.run.step.delta\n`);
            res.write(`data: ${JSON.stringify(toolResponseStep)}\n\n`);
            
            // Add the tool result to the messages
            updatedMessages.push({
              role: "tool",
              tool_call_id: id,
              name: name,
              content: result
            });
          } catch (error) {
            console.error(`Error executing tool ${toolCall.function.name}:`, error);
            
            // Stream an error response
            const errorResponseStep = {
              id: `step-${Math.random().toString(36).substring(2, 15)}`,
              object: 'thread.run.step.delta',
              thread_id: threadId,
              model: chunk.model,
              created: Math.floor(Date.now() / 1000),
              choices: [
                {
                  delta: {
                    role: 'assistant',
                    step_details: {
                      type: 'tool_response',
                      content: JSON.stringify({ error: error.message }),
                      name: toolCall.function.name,
                      tool_call_id: toolCall.id
                    }
                  }
                }
              ]
            };
            
            res.write(`event: thread.run.step.delta\n`);
            res.write(`data: ${JSON.stringify(errorResponseStep)}\n\n`);
            
            // Add the error result to the messages
            updatedMessages.push({
              role: "tool",
              tool_call_id: toolCall.id,
              name: toolCall.function.name,
              content: JSON.stringify({ error: error.message })
            });
          }
        }
        
        // Call the LLM again with the updated messages including tool results
        const finalStream = await openai.chat.completions.create({
          model: "gpt-4.1-mini", // Using the same model as in the first call
          messages: updatedMessages,
          stream: true
        });
        
        // Stream the final response
        for await (const finalChunk of finalStream) {
          if (finalChunk.choices[0]?.delta?.content) {
            const messageDelta = {
              id: `msg-${Math.random().toString(36).substring(2, 15)}`,
              object: 'thread.message.delta',
              thread_id: threadId,
              model: finalChunk.model,
              created: Math.floor(Date.now() / 1000),
              choices: [
                {
                  delta: {
                    role: 'assistant',
                    content: finalChunk.choices[0].delta.content
                  }
                }
              ]
            };
            
            res.write(`event: thread.message.delta\n`);
            res.write(`data: ${JSON.stringify(messageDelta)}\n\n`);
          }
        }
      }
    }
    
    // End the stream
    res.end();
  } catch (error) {
    console.error("Error in streamResponseWithTools:", error);
    
    // Send an error message
    const errorMessage = {
      id: `error-${Math.random().toString(36).substring(2, 15)}`,
      object: 'thread.message.delta',
      thread_id: threadId,
      model: 'agent-model',
      created: Math.floor(Date.now() / 1000),
      choices: [
        {
          delta: {
            role: 'assistant',
            content: `An error occurred: ${error.message}`
          }
        }
      ]
    };
    
    res.write(`event: thread.message.delta\n`);
    res.write(`data: ${JSON.stringify(errorMessage)}\n\n`);
    res.end();
  }
}

// Optional helper to split text into word chunks, useful if your agent
// generates a complete response and needs to emit it as SSE deltas
// (not used by the streaming example above, which proxies the LLM stream)
function splitIntoChunks(text, chunkSize = 10) {
  const words = text.split(' ');
  const chunks = [];
  let currentChunk = [];
  
  for (const word of words) {
    currentChunk.push(word);
    if (currentChunk.length >= chunkSize) {
      chunks.push(currentChunk.join(' '));
      currentChunk = [];
    }
  }
  
  if (currentChunk.length > 0) {
    chunks.push(currentChunk.join(' '));
  }
  
  return chunks;
}

Logging

When an agent calls a tool, it can log the tool call in its event stream. The tool call event follows this format:

{
  "id": "call_abc123",
  "object": "thread.run.step.delta",
  "thread_id": "thread_xyz789",
  "model": "agent-name",
  "choices": [{
    "delta": {
      "role": "assistant",
      "step_details": {
        "type": "tool_calls",
        "tool_calls": [{
          "id": "tool_call_123",
          "name": "get_weather",
          "args": {
            "location": "New York, NY",
            "unit": "celsius"
          }
        }]
      }
    }
  }]
}

Tool Responses in Agent Connect

After a tool is executed, the result is returned in this format:

{
  "id": "resp_def456",
  "object": "thread.run.step.delta",
  "thread_id": "thread_xyz789",
  "model": "agent-name",
  "choices": [{
    "delta": {
      "role": "assistant",
      "step_details": {
        "type": "tool_response",
        "content": "{\"temperature\": 22, \"condition\": \"sunny\", \"humidity\": 45}",
        "name": "get_weather",
        "tool_call_id": "tool_call_123"
      }
    }
  }]
}