This repo shows how to implement a unit test suite for ChatCompletionAgents in Python. Verifying that the agents behave as expected across scenarios is crucial, and often this means testing whether an agent invokes the correct tools for a given input.
In the kernel, I add a function filter that collects every function the LLM selects. This lets us test the agents' behavior by checking whether the correct functions are invoked for a given input.
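The repo implements this with Semantic Kernel's function-invocation filters. As a minimal sketch of the underlying pattern (plain Python, not the actual Semantic Kernel API), a wrapper can record each function's name before invoking it; all names below (`RecordingFilter`, `Add`) are illustrative, not the repo's actual code.

```python
from typing import Any, Callable


class RecordingFilter:
    """Collects the names of all functions invoked through it."""

    def __init__(self) -> None:
        self.invoked: list[str] = []

    def __call__(self, name: str, func: Callable[..., Any], *args: Any) -> Any:
        self.invoked.append(name)  # record the selection for later assertions
        return func(*args)         # then let the invocation proceed


# Usage: route tool calls through the filter, then assert on what was invoked.
recorder = RecordingFilter()
result = recorder("Add", lambda a, b: a + b, 2, 3)
assert recorder.invoked == ["Add"]
assert result == 5
```

A real test would register an equivalent filter on the kernel so every LLM-selected function call passes through it.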
- Fill in the `appsettings.json` file with the correct values:
```json
{
  "AzureOpenAI": {
    "Endpoint": "https://<YOUR_ENDPOINT>.openai.azure.com/",
    "ApiKey": "<YOUR_KEY>",
    "DeploymentName": "gpt-4o"
  }
}
```
Note that this sample is set to use Azure OpenAI. To use other LLM providers, you may refer to the official Semantic Kernel documentation.
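In Python, the settings file can be read with the standard library; the key names below match the sample config above, and the default file path is an assumption.

```python
import json


def load_azure_openai_settings(path: str = "appsettings.json") -> dict:
    """Return the AzureOpenAI section of appsettings.json as a dict."""
    with open(path, encoding="utf-8") as f:
        settings = json.load(f)
    return settings["AzureOpenAI"]


# Usage (keys as defined in the sample config above):
# cfg = load_azure_openai_settings()
# endpoint, key = cfg["Endpoint"], cfg["ApiKey"]
# deployment = cfg["DeploymentName"]
```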
- Run the tests in the `MathAgentTest` project.
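As an illustration of the assertion pattern these tests use, here is a hedged `unittest` sketch: the agent below is a stand-in (`FakeMathAgent` is hypothetical, not the repo's class), while a real test would build the ChatCompletionAgent with the kernel and the function filter described above.

```python
import unittest


class FakeMathAgent:
    """Stand-in agent that 'selects' the Add tool for addition prompts."""

    def __init__(self) -> None:
        self.invoked_functions: list[str] = []

    def invoke(self, prompt: str) -> str:
        if "+" in prompt:
            self.invoked_functions.append("Add")  # simulate tool selection
            return "5"
        return "I don't know."


class MathAgentTest(unittest.TestCase):
    def test_addition_invokes_add_tool(self) -> None:
        agent = FakeMathAgent()
        agent.invoke("What is 2 + 3?")
        # The core assertion: check which tools the agent selected.
        self.assertIn("Add", agent.invoked_functions)


if __name__ == "__main__":
    unittest.main()
```

The point of the pattern is that assertions target the list of invoked functions collected by the filter, not the model's free-text answer.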