
Integrating OpenAI with Spring Boot and Kotlin: A Beginner's Guide
Want to add AI capabilities to your Spring Boot application? In this guide, we'll walk through the very basics of integrating OpenAI's API using Spring AI, a framework that simplifies AI integration in Spring applications, with complete code examples in Kotlin and practical advice on error handling and API key security.
Key Takeaways
- How to set up Spring AI in a Kotlin Spring Boot project
- Basic configuration for OpenAI integration
- Creating a simple chat completion endpoint
- Best practices for API key management
- Error handling patterns for AI integration
Prerequisites
- Kotlin Spring Boot project (Spring Boot 3.2+)
- OpenAI API key (from platform.openai.com)
- Basic knowledge of REST APIs
Getting Started
First, add the Spring AI OpenAI starter dependency to your pom.xml (the starter pulls in auto-configuration; the version is typically managed by the Spring AI BOM, and artifact coordinates can vary slightly between Spring AI releases):
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
</dependency>
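If your project uses Gradle with the Kotlin DSL instead of Maven, the equivalent declaration is a one-liner. This is a sketch: the exact artifact coordinates depend on the Spring AI release you target, and the version is assumed to come from the Spring AI BOM or your dependency management.

```kotlin
// build.gradle.kts -- version assumed to be managed by the Spring AI BOM
dependencies {
    implementation("org.springframework.ai:spring-ai-openai-spring-boot-starter")
}
```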
Configuration
Create an application.yml file with your OpenAI configuration:
spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      chat:
        options:
          model: gpt-3.5-turbo
          temperature: 0.7
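The ${OPENAI_API_KEY} placeholder is resolved from the environment, so the key never lives in the config file. Before starting the application, export it in the shell that launches the app (the value below is a placeholder; use your real key from platform.openai.com):

```shell
# Set the API key for the current shell session only (placeholder value)
export OPENAI_API_KEY="sk-your-key-here"
# Verify it is visible to child processes (the Spring Boot app will inherit it)
echo "${OPENAI_API_KEY:+key is set}"   # → key is set
```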
Create a simple service class to handle AI operations. The chat client is auto-configured by Spring AI from the properties above, so there is no need to construct one by hand or hard-code an API key:
@Service
class OpenAIService(
    private val chatClient: OpenAiChatClient
) {
    fun generateResponse(prompt: String): String {
        // The string overload of call() returns the model's reply directly
        return chatClient.call(prompt)
    }
}
Create a controller to expose the AI endpoint:
@RestController
@RequestMapping("/api/ai")
class OpenAIController(
    private val openAIService: OpenAIService
) {
    @PostMapping("/chat")
    fun chat(@RequestBody request: ChatRequest): ChatResponse {
        return try {
            val response = openAIService.generateResponse(request.prompt)
            ChatResponse(response = response, error = null)
        } catch (e: Exception) {
            ChatResponse(
                response = null,
                error = "Failed to generate response: ${e.message}"
            )
        }
    }
}

data class ChatRequest(
    val prompt: String
)

data class ChatResponse(
    val response: String?,
    val error: String?
)
Error Handling
Add a global exception handler for AI-specific errors:
@RestControllerAdvice
class AIExceptionHandler {
    @ExceptionHandler(OpenAiException::class)
    fun handleOpenAiException(ex: OpenAiException): ResponseEntity<ErrorResponse> {
        return ResponseEntity
            .status(HttpStatus.SERVICE_UNAVAILABLE)
            .body(ErrorResponse("AI Service Error: ${ex.message}"))
    }
}

data class ErrorResponse(
    val message: String
)
Usage Example
Here's how to use the AI endpoint. Send an HTTP POST request to /api/ai/chat with a JSON body:
{
    "prompt": "What is Kotlin?"
}
Response:
{
    "response": "Kotlin is a modern programming language...",
    "error": null
}
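From Kotlin code, the same request can be issued with the JDK's built-in HTTP client. This is a sketch assuming the application is running locally on port 8080; no Spring dependencies are needed on the caller's side.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val body = """{"prompt": "What is Kotlin?"}"""
    // Build a POST request against the locally running application
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8080/api/ai/chat"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    // Sending requires the Spring Boot app to be up at localhost:8080
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```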
Best Practices
- API Key Management
  - Never commit API keys to version control
  - Use environment variables or a secure configuration service
  - Consider implementing API key rotation
- Rate Limiting
  - Implement rate limiting to prevent abuse
  - Cache responses when possible
  - Monitor API usage to optimize costs
- Error Handling
  - Always implement proper error handling
  - Provide meaningful error messages
  - Consider fallback options for when the AI service is unavailable
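To illustrate the response-caching point above, here is a minimal sketch of a caching wrapper. The CachingAiClient class and its generate parameter are hypothetical stand-ins for the real Spring AI call; the idea is simply that identical prompts are served from memory instead of hitting the API again.

```kotlin
import java.util.concurrent.ConcurrentHashMap

// Hypothetical wrapper: `generate` stands in for the actual AI call
class CachingAiClient(private val generate: (String) -> String) {
    private val cache = ConcurrentHashMap<String, String>()

    // computeIfAbsent only invokes `generate` on a cache miss,
    // so repeated prompts cost nothing after the first call
    fun respond(prompt: String): String =
        cache.computeIfAbsent(prompt) { generate(it) }
}

fun main() {
    var apiCalls = 0
    val client = CachingAiClient { prompt ->
        apiCalls++
        "echo: $prompt"
    }
    client.respond("What is Kotlin?")
    client.respond("What is Kotlin?") // served from cache, no second call
    println(apiCalls) // → 1
}
```

For production use you would also want an eviction policy (e.g. size or time based), since AI responses can be large and may go stale.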
Common Issues and Solutions
Rate Limit Exceeded
If you encounter rate limit errors, implement exponential backoff:
@Service
class OpenAIService(
    private val chatClient: OpenAiChatClient
) {
    fun generateResponseWithRetry(
        prompt: String,
        maxAttempts: Int = 3
    ): String {
        var attempts = 0
        var lastException: Exception? = null
        while (attempts < maxAttempts) {
            try {
                return chatClient.call(prompt)
            } catch (e: OpenAiException) {
                lastException = e
                attempts++
                if (attempts < maxAttempts) {
                    // Exponential backoff: 1s, 2s, 4s, ...
                    Thread.sleep(1000L * (1L shl (attempts - 1)))
                }
            }
        }
        throw lastException ?: RuntimeException("Failed to generate response")
    }
}
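Exponential backoff doubles the wait between retries rather than growing it linearly. As a standalone sketch (the backoffDelayMs helper below is hypothetical, not part of Spring AI), the delay for a given retry attempt can be computed with a bit shift:

```kotlin
// Hypothetical helper: delay before retry `attempt` (1-based),
// doubling each time: 1s, 2s, 4s, ...
fun backoffDelayMs(attempt: Int, baseMs: Long = 1000L): Long =
    baseMs * (1L shl (attempt - 1))

fun main() {
    println((1..3).map { backoffDelayMs(it) }) // → [1000, 2000, 4000]
}
```

Capping the delay (and adding random jitter) is a common refinement when many clients might retry at once.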
Conclusion
You now have a basic but production-ready integration of OpenAI in your Spring Boot application. Remember to monitor your API usage, implement proper error handling, and follow security best practices when dealing with API keys.
For more advanced features like streaming responses or handling different AI models, refer to the Spring AI documentation.
Accompanying Code
You can find the code accompanying this blog post on GitHub.