Atlassian (Jira/Confluence) AI Agent powered by a first-party MCP server built with OpenAPI Codegen, LangGraph, and LangChain MCP Adapters. The agent is exposed over multiple agent transport protocols (AGNTCY Slim, Google A2A, MCP Server).

🚀 Atlassian AI Agent


- 🤖 Atlassian Agent is an LLM-powered agent built using the LangGraph ReAct Agent workflow and MCP tools.
- 🌐 Protocol Support: Compatible with the A2A protocol for integration with external user clients.
- 🛡️ Secure by Design: Enforces Atlassian API token-based RBAC and supports external authentication for strong access control.
- 🔌 Integrated Communication: Uses langchain-mcp-adapters to connect with the Atlassian MCP server within the LangGraph ReAct Agent workflow.
- 🏭 First-Party MCP Server: The MCP server is generated by our first-party openapi-mcp-codegen utility, ensuring version/API compatibility and software supply chain integrity.

🚦 Getting Started

1️⃣ Configure Environment

Create or update a `.env` file in the project root:

```
LLM_PROVIDER=

AGENT_NAME=atlassian

ATLASSIAN_TOKEN=
ATLASSIAN_EMAIL=
ATLASSIAN_API_URL=
ATLASSIAN_VERIFY_SSL=

########### LLM Configuration ###########
# Refer to: https://github.com/cnoe-io/cnoe-agent-utils#-usage
```

Use the following link to generate your own Atlassian API token:

https://id.atlassian.com/manage-profile/security/api-tokens
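A quick way to catch a missing setting before launch is to check the required variables up front. A minimal sketch (variable names taken from the `.env` template above; the helper itself is a hypothetical convenience, not part of the agent):

```python
import os

# Settings the agent cannot run without, per the .env template above.
REQUIRED_VARS = ("ATLASSIAN_TOKEN", "ATLASSIAN_EMAIL", "ATLASSIAN_API_URL")

def missing_vars(env=None, required=REQUIRED_VARS):
    """Return the names of required settings that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]

if missing_vars():
    print("Missing required settings:", ", ".join(missing_vars()))
```

Running `missing_vars()` against a half-filled environment returns just the absent names, so a launcher script can fail fast with a clear message.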

2️⃣ Start the Agent (A2A Mode)

Run the agent in a Docker container using your `.env` file:

```
docker run -p 0.0.0.0:8000:8000 -it \
  -v "$(pwd)/.env:/app/.env" \
  ghcr.io/cnoe-io/agent-atlassian:a2a-stable
```
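Once the container is up, you can verify that something is listening on the A2A endpoint (port 8000 comes from the `docker run` command above; the helper below is a hypothetical convenience, not part of the agent):

```python
import socket

def agent_reachable(host: str = "localhost", port: int = 8000, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener answers on the agent's A2A port."""
    try:
        # create_connection performs the full TCP handshake, then we close it.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```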

3️⃣ Run the Client

Use the agent-chat-cli to interact with the agent:

```
uvx https://github.com/cnoe-io/agent-chat-cli.git a2a
```

πŸ—οΈ Architecture

flowchart TD
  subgraph Client Layer
    A[User Client A2A]
  end

  subgraph Agent Transport Layer
    B[Google A2A]
  end

  subgraph Agent Graph Layer
    C[LangGraph ReAct Agent]
  end

  subgraph Tools/MCP Layer
    D[LangGraph MCP Adapter]
    E[Atlassian MCP Server]
    F[Atlassian API Server]
  end

  A --> B --> C
  C --> D
  D -.-> C
  D --> E --> F --> E
Loading

✨ Features

- 🤖 LangGraph + LangChain MCP Adapter for agent orchestration
- 🧠 Azure OpenAI GPT-4o as the LLM backend
- 🔗 Connects to Atlassian via a dedicated Atlassian MCP agent

🧪 Usage

▶️ Test with an Atlassian Server

πŸƒ Quick Start: Run Atlassian Locally with Minikube

If you don't have an existing Atlassian server, you can quickly spin one up using Minikube:

  1. Start Minikube:
minikube start
  1. Install Atlassian in the atlassian namespace:
kubectl create namespace atlassian
kubectl apply -n atlassian -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  1. Expose the Atlassian API server:
kubectl port-forward svc/atlassian-server -n atlassian 8080:443

The API will be available at https://localhost:8080.

  1. Get the Atlassian admin password:
kubectl -n atlassian get secret atlassian-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
  1. (Optional) Install Atlassian CLI:
brew install atlassian
# or see https://argo-cd.readthedocs.io/en/stable/cli_installation/

For more details, see the official getting started guide.

2️⃣ Run the A2A Client

To interact with the agent in A2A mode:

```
make run-a2a-client
```

Sample Streaming Output

When running in A2A mode, you'll see streaming responses. An illustrative example (the issue key and field values below are placeholders):

```
============================================================
RUNNING STREAMING TEST
============================================================

--- Single Turn Streaming Request ---
--- Streaming Chunk ---
Here are the details for Jira issue **PROJ-123**:

- **Summary:** Example issue
- **Status:** In Progress
- **Assignee:** Unassigned
```

🧬 Internals

  • πŸ› οΈ Uses create_react_agent for tool-calling
  • πŸ”Œ Tools loaded from the Atlassian MCP server (submodule)
  • ⚑ MCP server launched via uv run with stdio transport
  • πŸ•ΈοΈ Single-node LangGraph for inference and action routing

πŸ“ Project Structure

agent_atlassian/
β”‚
β”œβ”€β”€ agent.py              # LLM + MCP client orchestration
β”œβ”€β”€ langgraph.py          # LangGraph graph definition
β”œβ”€β”€ __main__.py           # CLI entrypoint
β”œβ”€β”€ state.py              # Pydantic state models
└── atlassian_mcp/           # Git submodule: Atlassian MCP server



🧩 MCP Submodule (Atlassian Tools)

This project uses a first-party MCP module generated from the Atlassian OpenAPI specification using our openapi-mcp-codegen utility. The generated MCP server is included as a git submodule in atlassian_mcp/.

All Atlassian-related LangChain tools are defined by this MCP server implementation, ensuring up-to-date API compatibility and supply chain integrity.


🔌 MCP Integration

The agent uses MultiServerMCPClient to communicate with MCP-compliant services.

Example (stdio transport; assumes `model` is an initialized chat model and the token/URL variables come from your environment):

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async with MultiServerMCPClient(
    {
        "atlassian": {
            "command": "uv",
            "args": ["run", "/abs/path/to/atlassian_mcp/server.py"],
            "env": {
                "ATLASSIAN_TOKEN": atlassian_token,
                "ATLASSIAN_API_URL": atlassian_api_url,
                "ATLASSIAN_VERIFY_SSL": "false",
            },
            "transport": "stdio",
        }
    }
) as client:
    agent = create_react_agent(model, client.get_tools())
```

Example (SSE transport):

```python
async with MultiServerMCPClient(
    {
        "atlassian": {
            "transport": "sse",
            "url": "http://localhost:8000"
        }
    }
) as client:
    ...
```

Evals

Running Evals

This evaluation uses agentevals to perform strict trajectory match evaluation of the agent's behavior. To run the evaluation suite:

```
make evals
```

This will:

- Set up and activate the Python virtual environment
- Install evaluation dependencies (agentevals, tabulate, pytest)
- Run strict trajectory matching tests against the agent
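Strict trajectory matching passes only when the agent's node sequence exactly equals one of the reference trajectories. The core check reduces to something like this sketch (a conceptual illustration, not the agentevals API):

```python
def strict_trajectory_match(actual, reference_trajectories):
    """True iff `actual` equals one reference trajectory exactly
    (same nodes, same order, same length)."""
    actual = list(actual)
    return any(actual == list(ref) for ref in reference_trajectories)
```

For the eval below, a run that visits `['__start__', 'agent_atlassian']` matches the reference trajectory and scores True; any extra, missing, or reordered node fails.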

Example Output

```
=======================================
 Setting up the Virtual Environment
=======================================
Virtual environment already exists.
=======================================
 Activating virtual environment
=======================================
To activate venv manually, run: source .venv/bin/activate
. .venv/bin/activate
Running Agent Strict Trajectory Matching evals...
Installing agentevals with Poetry...
. .venv/bin/activate && uv add agentevals tabulate pytest
...
set -a && . .env && set +a && uv run evals/strict_match/test_strict_match.py
...
Test ID: atlassian_agent_1
Prompt: show atlassian version
Reference Trajectories: [['__start__', 'agent_atlassian']]
Note: Shows the version of the Atlassian Server Version.
...
Results:
{'score': True}
...
```

Evaluation Results

Latest Strict Match Eval Results


📜 License

Apache 2.0 (see LICENSE)


👥 Maintainers

See MAINTAINERS.md

- Contributions welcome via PR or issue!

πŸ™ Acknowledgements
