Task Description:
Build a simple Spring Boot application that exposes an API for generating text using OpenAI. This homework focuses on getting comfortable with the OpenAI API calls from a Spring application (covering content from Lectures 1 and 2). You will create an endpoint that accepts a prompt from the user and returns a completion or answer generated by an OpenAI model. Essentially, you’re implementing a rudimentary “text responder” service that could serve as the backbone for chat or content generation features.
Requirements:
- Spring Boot Service: Set up a Spring Boot project (if not continuing from the lecture project) and include the Spring AI dependency, or an HTTP client capable of calling the OpenAI API directly.
- API Endpoint: Implement a REST endpoint (e.g., POST /generate) that accepts a JSON payload or form data containing a prompt or question.
- LLM Integration: Upon receiving a request, use a language model API to generate a text completion. This can be OpenAI's Completions API or Chat Completions API in single-turn mode, or an alternative such as Anthropic's Claude models or local models served via Ollama. The application should be able to switch between these providers based on configuration. For instance, if a user submits the prompt "Tell me a fun fact about Spring Boot", your service should return a fun fact produced by the selected model.
- Configurable Parameters: Allow the model settings to be configured via application properties or environment variables (at least the model name, and ideally the temperature). The provider should also be configurable: for example, default to OpenAI's gpt-3.5-turbo but support a locally hosted model served through Ollama for privacy and cost savings.
- Error Handling: Gracefully handle errors such as missing prompt input, API request failures, or invalid API key. Return appropriate HTTP error codes/messages so the client knows what went wrong.
- Logging: Log the incoming prompt and the AI’s response (but avoid logging sensitive info) for debugging purposes.
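To make the request/response flow concrete, here is a minimal sketch of the model call behind the `/generate` endpoint. It uses only the JDK's built-in `java.net.http.HttpClient` against OpenAI's documented Chat Completions endpoint, so it works even without the Spring AI dependency; the class and method names (`OpenAiTextGenerator`, `buildRequestBody`, `generate`) are illustrative, not prescribed by the assignment, and a Spring controller would wrap `generate` and translate the exceptions into the HTTP status codes noted in the comments.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OpenAiTextGenerator {

    private static final String API_URL = "https://api.openai.com/v1/chat/completions";

    /** Builds the JSON body for a single-turn chat completion.
     *  Manual escaping keeps the sketch dependency-free; a real
     *  service would use a JSON library such as Jackson. */
    static String buildRequestBody(String model, double temperature, String prompt) {
        String escaped = prompt.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{\"model\":\"" + model + "\","
             + "\"temperature\":" + temperature + ","
             + "\"messages\":[{\"role\":\"user\",\"content\":\"" + escaped + "\"}]}";
    }

    /** Calls the OpenAI API; the thrown exceptions map to the error
     *  handling requirement (400 for bad input, 500/502 for failures). */
    static String generate(String prompt) throws IOException, InterruptedException {
        if (prompt == null || prompt.isBlank()) {
            throw new IllegalArgumentException("prompt must not be empty"); // -> HTTP 400
        }
        String apiKey = System.getenv("OPENAI_API_KEY");
        if (apiKey == null || apiKey.isBlank()) {
            throw new IllegalStateException("OPENAI_API_KEY is not set");   // -> HTTP 500
        }
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(API_URL))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        buildRequestBody("gpt-3.5-turbo", 0.7, prompt)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IOException("OpenAI API error: HTTP " + response.statusCode()); // -> HTTP 502
        }
        return response.body(); // JSON; the text is in choices[0].message.content
    }
}
```

Keeping `buildRequestBody` as a pure function makes the request construction unit-testable without network access, which also helps with the logging requirement: you can log the prompt and the returned body at the call site rather than inside the HTTP plumbing.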
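If you use the Spring AI starters, the configurable-parameters requirement can be met almost entirely in `application.properties`. The sketch below follows the property naming of the Spring AI OpenAI and Ollama starters; check the version of Spring AI you depend on, as exact property names may differ between releases:

```properties
# OpenAI provider (default): key comes from the environment, not source control
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-3.5-turbo
spring.ai.openai.chat.options.temperature=0.7

# Alternative: a locally hosted model via Ollama (uncomment to switch providers)
# spring.ai.ollama.base-url=http://localhost:11434
# spring.ai.ollama.chat.options.model=llama3
```

Referencing the key as `${OPENAI_API_KEY}` keeps the secret out of the repository while still letting Spring resolve it at startup.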