This repository contains a collection of examples demonstrating how to use LangChain4j with Kotlin. LangChain4j is a popular Java framework for building applications with Large Language Models (LLMs).
Also check out LINKS.md.
The examples in this repository showcase various features and capabilities of LangChain4j, including:
- Retrieval Augmented Generation (RAG): Enhance LLM responses with external knowledge
- Streaming Completions: Process LLM responses as they are generated
- Moderation: Filter inappropriate content
- Structured Outputs: Parse LLM responses into structured data
- Memory: Maintain conversation context
- Testing Utilities: Mock LLM responses for testing
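To give a flavour of the API the examples build on, here is a minimal sketch of a blocking chat call with LangChain4j's OpenAI integration. It assumes a recent LangChain4j release where the chat model exposes `chat(String)`; the model name is an arbitrary placeholder:

```kotlin
import dev.langchain4j.model.openai.OpenAiChatModel

fun main() {
    // Build an OpenAI-backed chat model; "gpt-4o-mini" is just a placeholder model name.
    val model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName("gpt-4o-mini")
        .build()

    // Blocking call: returns the assistant's reply as plain text.
    val answer = model.chat("Explain Retrieval Augmented Generation in one sentence.")
    println(answer)
}
```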
To build and run the examples, you will need:

- JDK 23 or later
- Kotlin 2.1.20 or later
- Gradle
- An OpenAI API key (set in the `OPENAI_API_KEY` environment variable)
To get started:

- Clone the repository:

  ```shell
  git clone https://github.com/kpavlov/lc4j-kotlin-demo.git
  cd lc4j-kotlin-demo
  ```

- Set your OpenAI API key (a quick sanity check follows these steps):

  ```shell
  export OPENAI_API_KEY=your-api-key
  ```

  Or create a `.env` file in the project root with:

  ```
  OPENAI_API_KEY=your-api-key
  ```

- Build the project:

  ```shell
  ./gradlew build
  ```
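If a test later fails with an authentication error, a quick way to confirm the key is actually visible to the JVM is a standalone check like this (a sketch, not part of the build):

```kotlin
fun main() {
    // Fails fast if the key is missing or blank.
    val key = System.getenv("OPENAI_API_KEY")
    check(!key.isNullOrBlank()) { "OPENAI_API_KEY is not set" }
    println("OPENAI_API_KEY is set (${key.length} characters)")
}
```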
The examples are organized into packages under the `src/test/kotlin` directory:
- Async1JavaTest and Async2JavaTest: Show asynchronous operations in Java using CompletableFuture
- AsyncKotlinTest: Demonstrates basic asynchronous operations in Kotlin using coroutines
- BlockingCompletionsTest: Shows how to use LangChain4j with OpenAI in a blocking (synchronous) manner
- JavaAsyncWrapperTest: Demonstrates how to wrap Java-style asynchronous operations in Kotlin coroutines (see the first sketch after this list)
- SuspendCompletionsTest: Shows how to use LangChain4j with OpenAI in a non-blocking (asynchronous) manner using Kotlin coroutines
- ParallelCompletionsTest: Demonstrates how to make parallel (concurrent) requests to LangChain4j's chat models using Kotlin coroutines (see the second sketch after this list)
- CompletionsStreamingTest: Shows how to use streaming completions with LangChain4j in a blocking manner
- SuspendCompletionsStreamingTest: Demonstrates how to use streaming completions with LangChain4j in a non-blocking manner using Kotlin flows
- AiMocksServiceTest: Demonstrates how to use LangChain4j's built-in ChatModelMock for testing
- AiMocksServiceTest.kt: Shows how to use MockOpenai to mock LLM responses for testing
- RagWithMemoryTest: Shows how to use Retrieval Augmented Generation (RAG) with memory in LangChain4j, including document loading, embedding, and retrieval
- ModerationTest: Demonstrates how to use LangChain4j's moderation capabilities to filter inappropriate content
- EmbeddingsTest: Demonstrates that two texts about embeddings and AI performance have a higher cosine similarity to each other than either has to a completely unrelated Cinderella fairy-tale text, showing that the embedding model groups semantically related content together in the vector space (see the last sketch after this list)
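The pattern behind JavaAsyncWrapperTest, as a minimal sketch: a Java-style `CompletableFuture` is awaited from a coroutine with the kotlinx-coroutines `await()` extension. The `completeAsync` function here is a hypothetical stand-in for a real asynchronous LLM call:

```kotlin
import java.util.concurrent.CompletableFuture
import kotlinx.coroutines.future.await
import kotlinx.coroutines.runBlocking

// Hypothetical Java-style API that returns a CompletableFuture instead of suspending.
fun completeAsync(prompt: String): CompletableFuture<String> =
    CompletableFuture.supplyAsync { "echo: $prompt" }

fun main() = runBlocking {
    // await() suspends the coroutine until the future completes, without blocking the thread.
    val reply = completeAsync("Hello").await()
    println(reply)
}
```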
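The idea behind ParallelCompletionsTest, sketched with a placeholder `blockingChat` function instead of a real model call: each prompt is dispatched with `async` and the replies are gathered with `awaitAll`:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.runBlocking

// Placeholder for a blocking model call such as a chat-completion request.
fun blockingChat(prompt: String): String = "reply to: $prompt"

// Fan out one coroutine per prompt and collect the replies in the original order.
suspend fun chatAll(prompts: List<String>): List<String> = coroutineScope {
    prompts
        .map { prompt -> async(Dispatchers.IO) { blockingChat(prompt) } }
        .awaitAll()
}

fun main() = runBlocking {
    println(chatAll(listOf("Tell me a joke", "Summarise RAG in one line")))
}
```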
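And the idea behind EmbeddingsTest, sketched with placeholder texts and an assumed embedding model name; `CosineSimilarity` is assumed to be the helper from langchain4j-core:

```kotlin
import dev.langchain4j.model.openai.OpenAiEmbeddingModel
import dev.langchain4j.store.embedding.CosineSimilarity

fun main() {
    // "text-embedding-3-small" is just an assumed model name; any supported embedding model works.
    val embeddings = OpenAiEmbeddingModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName("text-embedding-3-small")
        .build()

    val aboutEmbeddings = embeddings.embed("Embeddings map text into a vector space.").content()
    val aboutAi = embeddings.embed("LLM performance depends on model size and training data.").content()
    val fairyTale = embeddings.embed("Cinderella lost her glass slipper at the ball.").content()

    // The two AI-related texts should score higher than either does against the fairy tale.
    println(CosineSimilarity.between(aboutEmbeddings, aboutAi))
    println(CosineSimilarity.between(aboutEmbeddings, fairyTale))
}
```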
You can run the examples using Gradle:

```shell
./gradlew test
```

Or run a specific test:

```shell
./gradlew test --tests "e05.RagWithMemoryTest"
```
This project is licensed under the GNU General Public License v3.0 (GPLv3) - see the LICENSE file for details.
For more information about LangChain4j and related topics, check out the LINKS.md file.