
Integrating tool calling in Spring AI

Introduction: Leveraging LLM Automation with Spring AI and Tool Calling

Large Language Models (LLMs) have rapidly evolved from generating static text to acting as intelligent agents capable of automating business processes. Central to this evolution is tool calling (also known as function calling), which enables LLMs to invoke backend functions, orchestrate workflows, and interact with live business data. Spring AI brings this advanced capability to the Spring Boot ecosystem, abstracting the complexities of LLM integration and offering a robust, production-ready framework for exposing business logic as callable tools. In this step-by-step tutorial, you’ll discover how to implement tool calling in a Spring Boot application using Spring AI, with practical scenarios such as database retrieval and workflow orchestration. By the end, you’ll be prepared to build secure, scalable AI assistants that execute business functions on demand.

Understanding Tool Calling in LLMs and Spring AI’s Abstraction

Tool calling allows LLMs to move beyond static responses by invoking backend functions based on user intent. For example, when a user requests, “fetch user details” or “trigger a report generation,” the LLM interprets the intent, selects the appropriate tool, and calls it with structured parameters. Modern LLMs like OpenAI GPT-4 and Azure OpenAI support function calling through standardized schemas, enabling developers to define available functions and their signatures.
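To make the idea of a standardized schema concrete, the sketch below shows the JSON shape an OpenAI-style provider typically receives when a tool is advertised to the model. This is illustrative only; the exact wire format varies by provider, and in Spring AI the framework generates this schema for you from the annotated Java method signature.

```java
// Illustrative sketch only: the JSON-schema shape an OpenAI-style provider
// receives when a tool is advertised to the model. Exact fields vary by provider.
public class ToolSchemaExample {
    public static final String GET_USER_BY_EMAIL_SCHEMA = """
        {
          "name": "get_user_by_email",
          "description": "Fetch user details by email address.",
          "parameters": {
            "type": "object",
            "properties": {
              "email": { "type": "string", "description": "User's email address" }
            },
            "required": ["email"]
          }
        }
        """;

    public static void main(String[] args) {
        // In Spring AI you never hand-write this; it is derived from the method signature.
        System.out.println(GET_USER_BY_EMAIL_SCHEMA);
    }
}
```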

Spring AI simplifies this process by letting you define tools as Spring beans, annotate them with clear metadata, and register them with the AI model. The framework manages the translation between LLM tool calls and Java method invocations, handles argument parsing, and returns results for further reasoning or user response. This approach brings LLM-driven automation into the familiar Spring Boot development model, enabling seamless integration with your existing services, repositories, and APIs.

Project Setup: Dependencies and Spring Boot Configuration for Spring AI

Begin by creating a new Spring Boot project using Spring Initializr or your preferred setup. Add the following dependencies to your pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-validation</artifactId>
</dependency>

Next, configure your application.properties with your LLM provider credentials:

spring.ai.openai.api-key=YOUR_OPENAI_API_KEY
spring.ai.openai.chat.options.model=gpt-4o

Set up your database and any other required service properties. With these dependencies and configurations in place, you’re ready to define business tools for LLM automation.

Defining Tools as Spring Beans: Exposing Business Functions for LLM Automation

Spring AI treats each callable business function as a “tool.” Tools are standard Spring beans, annotated to describe their functionality and parameters. For example, to expose a user lookup function as an agent tool:

import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;
import org.springframework.stereotype.Component;

@Component
public class UserLookupTool {
    private final UserRepository userRepository;

    public UserLookupTool(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    @Tool(name = "get_user_by_email", description = "Fetch user details by email address.")
    public UserDto getUserByEmail(@ToolParam(description = "User's email address") String email) {
        User user = userRepository.findByEmail(email)
            .orElseThrow(() -> new UserNotFoundException(email));
        return new UserDto(user.getId(), user.getName(), user.getEmail());
    }
}

The @Tool annotation exposes the method as a callable tool, providing a clear name and description for the LLM. Parameters can be annotated with @ToolParam for schema clarity. You can define tools for a variety of business functions: retrieving records, triggering workflows, or integrating with external APIs. For instance, to trigger a workflow:

@Component
public class WorkflowTool {
    private final WorkflowService workflowService;

    public WorkflowTool(WorkflowService workflowService) {
        this.workflowService = workflowService;
    }

    @Tool(name = "trigger_workflow", description = "Start a named workflow with optional parameters.")
    public WorkflowResult triggerWorkflow(
        @ToolParam(description = "Workflow name") String workflowName,
        @ToolParam(description = "Parameters for the workflow") Map<String, Object> params) {
        return workflowService.startWorkflow(workflowName, params);
    }
}

These beans are automatically discovered and can be registered with the AI agent, enabling LLM-driven function execution.
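For completeness, the tool examples above assume a few supporting domain types. A minimal sketch of what they might look like follows; the names and fields here are hypothetical, and your real entities, repositories, and result objects will differ.

```java
// Hypothetical supporting types assumed by the tool examples; real projects
// will define their own entities, repositories, and result objects.
public class SupportingTypes {
    /** Immutable view of a user, safe to return to the LLM (no credentials). */
    public record UserDto(Long id, String name, String email) {}

    /** Result of a workflow start request. */
    public record WorkflowResult(String workflowName, String status) {}

    public static void main(String[] args) {
        UserDto user = new UserDto(1L, "Alice", "alice@example.com");
        WorkflowResult result = new WorkflowResult("onboarding", "STARTED");
        System.out.println(user.email() + " -> " + result.status());
    }
}
```

Returning dedicated DTOs rather than JPA entities keeps the payload the LLM sees small and prevents lazy-loading or sensitive fields from leaking into prompts.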

Registering and Managing Tools with Spring AI’s ChatClient

Spring AI provides the ChatClient abstraction for interacting with LLMs. To enable tool calling, register your tool beans with the ChatClient. Typically, you’ll define a configuration class to assemble your tools and configure the agent:

@Configuration
public class AiToolConfig {
    @Bean
    public ChatClient chatClient(ChatClient.Builder builder,
                                 UserLookupTool userLookupTool,
                                 WorkflowTool workflowTool) {
        return builder.defaultTools(userLookupTool, workflowTool).build();
    }
}

Alternatively, tools can be supplied per request rather than as defaults:

String answer = chatClient.prompt()
    .user("Get the details for alice@example.com")
    .tools(userLookupTool)
    .call()
    .content();

This configuration ensures the LLM is aware of all available tools and their schemas. The ChatClient manages the interaction flow—when the LLM determines a tool should be called, it invokes the corresponding bean method with the structured arguments.

Implementing Tool-Enabled Interactions: Code Walkthroughs for Database Access, Service Triggers, and External API Calls

To build an AI assistant endpoint that leverages tool calling, create a REST controller that receives user messages, allows the LLM to decide if a tool should be called, and returns the result:

@RestController
@RequestMapping("/ai-assistant")
public class AiAssistantController {
    private final ChatClient chatClient;

    public AiAssistantController(ChatClient chatClient) {
        this.chatClient = chatClient;
    }

    @PostMapping("/chat")
    public ResponseEntity<AiResponse> chat(@RequestBody AiRequest request) {
        String content = chatClient.prompt()
            .user(request.getMessage())
            .call()
            .content();
        return ResponseEntity.ok(new AiResponse(content));
    }
}

For example, if a user asks, “Get the details for alice@example.com,” the LLM recognizes this as a tool call, invokes get_user_by_email, and returns the result. This pattern applies equally to triggering workflows or calling external APIs. To integrate an external weather API, define a tool as follows:

@Component
public class WeatherTool {
    private final RestTemplate restTemplate;

    public WeatherTool(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @Tool(name = "get_weather", description = "Get current weather for a city.")
    public WeatherDto getWeather(@ToolParam(description = "City name") String city) {
        String url = "https://api.weatherapi.com/v1/current.json?key=API_KEY&q=" + city;
        ResponseEntity<WeatherApiResponse> response =
            restTemplate.getForEntity(url, WeatherApiResponse.class);
        return new WeatherDto(city, response.getBody().getCurrent().getTempC());
    }
}

With this tool registered, the LLM can answer questions like “What’s the weather in Berlin?” by calling the tool, fetching live data, and composing a natural response.

Handling Tool Execution Results: Multi-Step Flows, Validation, and Error Management

Tool execution often involves more than a single step. An LLM may need to invoke multiple tools in sequence, handle intermediate results, or validate user inputs. Spring AI supports multi-turn conversations and can manage tool invocation flows as the dialogue progresses.

For input validation, leverage Spring’s validation annotations:

public class UserDto {
    @NotBlank
    private String email;
    // ...
}

In your tool method, validate inputs and return informative errors. Note that constraint annotations on parameters are only enforced when the enclosing bean is proxied for method validation (for example, by annotating the class with @Validated):

@Tool(name = "get_user_by_email", description = "Fetch user details by email address.")
public UserDto getUserByEmail(@ToolParam(description = "User's email address") @NotBlank String email) {
    // ...
}

For error handling, catch domain-specific exceptions and map them to user-friendly messages:

try {
    return userRepository.findByEmail(email)
        .orElseThrow(() -> new UserNotFoundException(email));
} catch (UserNotFoundException ex) {
    log.warn("User not found: {}", email);
    throw new IllegalStateException("No user found with email: " + email);
}

The ChatClient can relay these errors to the LLM, which can then prompt the user for clarification or suggest next steps. For multi-step flows, maintain conversation state (for example, via session or database) and pass context between tool calls as needed.
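One way to keep context between tool calls is a small per-conversation store. The sketch below keeps history in memory under a conversation id; a production deployment would more likely persist this state to a database or cache, and Spring AI's own chat-memory support may cover simpler cases.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// In-memory per-conversation state for multi-step tool flows (sketch).
public class ConversationStateStore {
    private final Map<String, List<String>> history = new ConcurrentHashMap<>();

    /** Record a user message or tool result under a conversation id. */
    public void append(String conversationId, String entry) {
        history.computeIfAbsent(conversationId, id -> new ArrayList<>()).add(entry);
    }

    /** Immutable snapshot of the context accumulated so far. */
    public List<String> context(String conversationId) {
        return List.copyOf(history.getOrDefault(conversationId, List.of()));
    }

    public static void main(String[] args) {
        ConversationStateStore store = new ConversationStateStore();
        store.append("conv-1", "user: get details for alice@example.com");
        store.append("conv-1", "tool: get_user_by_email -> Alice");
        System.out.println(store.context("conv-1").size()); // prints 2
    }
}
```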

Security, Logging, and Production Best Practices for Agent Tools

Exposing business functions to LLMs introduces critical security considerations. Always validate and sanitize tool inputs. Use method-level security annotations such as @PreAuthorize or @Secured to restrict access to sensitive tools:

@PreAuthorize("hasRole('ADMIN')")
@Tool(name = "delete_user", description = "Delete a user by ID.")
public void deleteUser(Long userId) { ... }

Log tool invocations, inputs, and results for auditability:

@Slf4j
@Component
public class AuditedTool {
    @Tool(name = "trigger_audit_event", description = "Log an audit event.")
    public void triggerAuditEvent(String action, String details) {
        log.info("Audit event: {} - {}", action, details);
    }
}

Handle sensitive data with care—never expose secrets or internal implementation details in tool results. For production environments, enable rate limiting, monitor tool usage, and implement robust exception handling. Regularly review and update tool definitions to minimize potential vulnerabilities. Consider deploying AI-specific security gateways or middleware to inspect and filter LLM requests and responses.
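To make the rate-limiting advice concrete, here is a fixed-window limiter sketch for tool invocations. In production you would more likely enforce limits at an API gateway or with a dedicated library such as Resilience4j; this standalone version just illustrates the mechanism.

```java
// Fixed-window rate limiter for tool invocations (sketch). Production setups
// would typically use a gateway or a resilience library instead.
public class ToolRateLimiter {
    private final int maxCallsPerWindow;
    private final long windowMillis;
    private long windowStart;
    private int calls;

    public ToolRateLimiter(int maxCallsPerWindow, long windowMillis) {
        this.maxCallsPerWindow = maxCallsPerWindow;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    /** Returns true if the call is allowed within the current window. */
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            // Window elapsed: reset the counter.
            windowStart = now;
            calls = 0;
        }
        return ++calls <= maxCallsPerWindow;
    }

    public static void main(String[] args) {
        ToolRateLimiter limiter = new ToolRateLimiter(2, 60_000);
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // false
    }
}
```

A guard like this can be checked at the top of each tool method so that a runaway or adversarial conversation cannot hammer backend services.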

Conclusion: Advancing Spring Boot Applications with Secure, Reliable LLM-Driven Function Calling

Integrating tool calling with Spring AI unlocks transformative automation for Spring Boot applications. By exposing business logic as secure, well-defined tools, you empower LLMs to drive workflows, access real-time data, and orchestrate complex processes through natural language. Spring AI’s abstractions make it straightforward to define, register, and manage tools, while providing essential hooks for validation, security, and error handling. As you move to production, prioritize input validation, access control, logging, and monitoring to ensure your agent tools remain robust and secure. With these best practices, you can confidently deliver AI-powered assistants and automation agents that extend the capabilities of your Spring Boot platforms.
