
Diagrammer

Langchain experiments to use LLMs as diagramming assistants.

Environment Setup

Prerequisites

Global Configuration

direnv layout support

layout_conda() {
  local ACTIVATE="$(brew config | grep PREFIX | cut -d' ' -f2)/Caskroom/miniconda/base/bin/activate"
  if [ -n "$1" ]; then
    # Explicit environment name from layout command.
    local env_name="$1"
    source "$ACTIVATE" "${env_name}"
  elif grep -q 'name:' environment.yml; then
    # Detect environment name from `environment.yml` file in `.envrc` directory
    source "$ACTIVATE" "$(grep 'name:' environment.yml | sed -e 's/name: //' | cut -d "'" -f 2 | cut -d '"' -f 2)"
  else
    (>&2 echo "No environment specified");
    exit 1;
  fi;
}

General Python Project Directory Configuration

directory

mkdir -p ~/src/study/<project-name>
cd ~/src/study/<project-name>
git init

conda

Initial Setup

Conda is used to create isolated, per-project Python environments. Some dependencies (e.g. jupyter) are installed via conda; for most others you can just use pip once the relevant conda environment is activated.

conda create --name <project-name> jupyter

Maintenance

Conda’s base installation should be updated from time to time.

conda update -n base -c defaults conda

direnv

direnv is used to set project-specific environment variables. Its layout capability also makes it possible to automatically activate language-specific toolchains per directory. To keep my API key safe, I use git-crypt to transparently en/decrypt the .envrc file.

export OPENAI_API_KEY=<api-key>
layout conda <project-name>

Once we permit direnv to auto-load the directory's .envrc, it will also activate the conda environment automatically whenever we `cd` into the directory at the terminal.

direnv allow

LLM Project Dependencies

Common Libraries

For AI experiments I often use a Jupyter notebook and a few libraries.

LangChain
the de facto framework for developing applications powered by language models
LangGraph
LangGraph is a library for building stateful, multi-actor applications with LLMs
LangFlow
an easy way to prototype LangChain flows
CrewAI
framework for orchestrating role-playing, autonomous AI agents
ChromaDB
open-source embedding database
Weaviate
an open source, AI-native vector database (Docker)

Incompatibilities

The latest version of LangFlow seems to be a bit out of date and depends on incompatible versions of various packages:

Langflow Dependency    Conflicts With
openai                 langchain-openai
langchain              crewai

Example Installation

pip3 install langchain langgraph langchain-openai chromadb

Running Jupyter

Once all the dependencies are installed, launch a notebook and start experimenting.

jupyter lab

One of the first things you’ll want to do is click the ‘New’ dropdown, create a notebook file (ending in .ipynb), and give it a title.

Experiment

Design

This experiment will attempt to convert a plain text description of a system's data flow into a PNG diagram that visualizes it. We will break the computation into three steps (a sketch of the pipeline follows the list):

  1. converting the natural language description into a succinct bulleted list (specification)
  2. transforming that specification into Dot language source code
  3. generating a PNG diagram from the Dot source code using Graphviz
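
For orientation, here is a minimal sketch of that pipeline using LangChain's runnable composition and the graphviz Python package. The model name, the FORMATTER_PROMPT and DIAGRAMMER_PROMPT constants (standing in for the prompt texts shown below), and the description variable are illustrative assumptions, not the notebook's exact code.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import graphviz

# Assumes OPENAI_API_KEY is set via direnv (see above); FORMATTER_PROMPT and
# DIAGRAMMER_PROMPT stand in for the prompt texts shown in the sections below.
llm = ChatOpenAI(model="gpt-4")

# Step 1: natural language description -> bulleted spec
formatter = (
    ChatPromptTemplate.from_messages([("system", FORMATTER_PROMPT),
                                      ("human", "{description}")])
    | llm
    | StrOutputParser()
)

# Step 2: bulleted spec -> Dot source code
diagrammer = (
    ChatPromptTemplate.from_messages([("system", DIAGRAMMER_PROMPT),
                                      ("human", "{spec}")])
    | llm
    | StrOutputParser()
)

# Step 3: Dot source code -> PNG image via Graphviz
def render_png(dot_source, filename="diagram"):
    return graphviz.Source(dot_source).render(filename, format="png")

spec = formatter.invoke({"description": description})
png_path = render_png(diagrammer.invoke({"spec": spec}))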

Source Code

The source code for this experiment lives in a Jupyter (Python) notebook using the LangChain framework.

System Description

The text below both specifies the architecture of this experiment and serves as a convenient test input to the application.

A user submits a plain text diagram description to the orchestrator. The
orchestrator adds the description to a plain text prompt which it sends to a
formatter LLM, which responds with a bulleted list of interactions called a
spec. The orchestrator sends that spec to a diagrammer LLM which responds with
diagram source code. The orchestrator sends the diagram source code to the
diagramming tool which responds with a PNG diagram image. Finally, the
orchestrator returns the diagram image to the user.

Formatter Prompt

You are a text formatting assistant that converts plain text descriptions of a
software application's data flow into a bulleted interaction list detailing each
and every data transfer implied by the description. Each line in the output list
should correspond to one leg of the data flow in the form "- <sender> sends
<payload> (<format>) to <recipient>", where <sender>, <payload>, <format>, and
<recipient> are placeholders for the corresponding items from the plain text
description you were given. The payload <format> is optional, and if it is not
specified it should be omitted from the list.

For example, if given a description that says, "The user sends a JSON query to
the service, the service reads the file location from the database, and the
service responds to the user with a PNG image", you should produce a bulleted
list with the following three lines:
- user sends query (JSON) to service
- database sends file location to service
- service sends image (PNG) to user

Formatter Output

This is the output from one sample run:

- user sends diagram description (plain text) to orchestrator
- orchestrator sends prompt (plain text) to formatter LLM
- formatter LLM sends spec to orchestrator
- orchestrator sends spec to diagrammer LLM
- diagrammer LLM sends diagram source code to orchestrator
- orchestrator sends diagram source code to diagramming tool
- diagramming tool sends diagram image (PNG) to orchestrator
- orchestrator sends diagram image (PNG) to user

Diagrammer Prompt

You are a software architect's Data Flow Diagramming assistant that produces
diagram source code in the Dot language for Graphviz from a data flow
specification given as a bulleted list.

Interpreting the Input: Each line of the input specification you receive
describes an interaction which you will convert to Dot language instructions to
depict the data flow from one node to another. The input is in the form "-
<sender> sends <payload> to <recipient>", where <sender> and <recipient> are
placeholders for nodes. Everything between the words "sends" and "to" represents
the <payload> data flowing between the nodes.

Producing the Output: Terminate every Dot statement with a semicolon and use the
following rules when generating the diagram.

Diagram Styles:
- the diagram's background should always be white
- the diagram should always use the "Roboto Mono" font
- add a label to the diagram that documents who you are, who created you, and your model name and version number

Node Shapes for <senders> and <recipients>:
- use a box as the default node shape
- use a note shape for documents
- use a cylinder shape for databases
- use an egg shape for the user

Node Styles:
- only specify the colorscheme once so it applies to all nodes
- the colorscheme should be "paired12"
- each individual node should reference the ordinal colors in its color attributes
- all nodes should have a filled style
- each type of architectural element (process, queue, database, document) should
have a distinct color
- all instances of the same element type should use the same, consistent color
- node text should be the actual text given for the <sender> or <recipient> placeholders

Edge Styles:
- only specify the colorscheme once so it applies to all edges
- the colorscheme should be "paired12"
- edges should be labeled with the <payload> text
- arrows should always point toward the <recipient>
- if 2 nodes share multiple edges, they should be colored distinctly
- the color and fontcolor of an edge must always match
- break long label text with newlines

Diagrammer Output

digraph DataFlow {
    graph [bgcolor=white, label="Data Flow Diagram\nCreated by OpenAI's GPT-4", fontname="Roboto Mono"];
    node [style=filled, colorscheme=paired12, fontname="Roboto Mono"];
    edge [colorscheme=paired12, fontname="Roboto Mono"];

    user [shape=egg, color=1, label="user"];
    orchestrator [shape=box, color=2, label="orchestrator"];
    formatterLLM [shape=box, color=3, label="formatter LLM"];
    diagrammerLLM [shape=box, color=4, label="diagrammer LLM"];
    diagrammingTool [shape=box, color=5, label="diagramming tool"];

    user -> orchestrator [label="diagram description\n(plain text)", color=1, fontcolor=1];
    orchestrator -> formatterLLM [label="prompt\n(plain text)", color=2, fontcolor=2];
    formatterLLM -> orchestrator [label="spec", color=3, fontcolor=3];
    orchestrator -> diagrammerLLM [label="spec", color=4, fontcolor=4];
    diagrammerLLM -> orchestrator [label="diagram source code", color=5, fontcolor=5];
    orchestrator -> diagrammingTool [label="diagram source code", color=6, fontcolor=6];
    diagrammingTool -> orchestrator [label="diagram image\n(PNG)", color=7, fontcolor=7];
    orchestrator -> user [label="diagram image\n(PNG)", color=8, fontcolor=8];
}

Diagram Result

diagram.png

Discussion

This experiment drew from the AlphaCodium research[fn:1] on Flow Engineering, which claims that multi-step processing flows improve code-generation performance. The authors also found that using bulleted lists as LLM prompt input specifications produced better results than plain text.

The diagram illustrated above (actual execution output) faithfully captures the intent of the natural language system description.

Formatter Task

I observed consistently good results from the formatter task, despite its relatively simple prompt.

It would be interesting to compare different degrees of structured output formats, including plain text, lists, and delimited (CSV) or tagged (XML) text.

Diagrammer Task

Early versions of the diagrammer prompt described the <payload> and the optional (<format>) separately. The diagrammer task had trouble with these instructions, often confusing the <payload> with the <format>.

I eventually realized those details were only the concern of the formatter, and that the diagrammer could simply treat all the text between the nodes as an opaque payload label.

An alternative to having an LLM generate the diagram source code from the spec would be to write a small interpreter that translates the spec into a subset of the Dot language.

This would likely improve the predictability, fidelity, and performance of the code generation at the expense of human effort. A sketch of this approach follows.
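
As an illustration, here is a minimal sketch of such a translator. It assumes every interaction line matches the formatter's "- <sender> sends <payload> to <recipient>" shape (the function and regex names are made up here), and it omits the styling rules for brevity.

import re

# One interaction per line: "- <sender> sends <payload> to <recipient>".
# The lazy payload match splits at the first " to " after "sends".
INTERACTION = re.compile(r"^- (?P<sender>.+?) sends (?P<payload>.+?) to (?P<recipient>.+)$")

def spec_to_dot(spec):
    edges = []
    for line in spec.strip().splitlines():
        match = INTERACTION.match(line.strip())
        if not match:
            continue  # skip anything that isn't an interaction line
        edges.append('    "{}" -> "{}" [label="{}"];'.format(
            match["sender"], match["recipient"], match["payload"]))
    return "digraph DataFlow {\n" + "\n".join(edges) + "\n}"

Fed the sample formatter output above, this would produce the same edge structure as the LLM-generated Dot, minus the styling.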

Future Work

This experiment needs some method of evaluating the results and a statistically meaningful number of test runs over which to collect performance data; a sketch of one possible harness appears after the list below.

With that in place, we could compare different tools and approaches, including:

  • evaluating a broader set of inputs and outputs
  • trying other fidelity-improving techniques
  • using open-source, locally hosted LLMs
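
To make the idea concrete, one hypothetical shape for such a harness: run the pipeline repeatedly per input, record wall-clock time, and score each run by the crudest possible criterion, whether the generated Dot renders at all. The run_pipeline function stands in for the notebook's end-to-end flow and does not exist yet.

import time
import graphviz

def evaluate(descriptions, runs=30):
    results = []
    for description in descriptions:
        for _ in range(runs):
            start = time.monotonic()
            dot_source = run_pipeline(description)  # hypothetical end-to-end call
            elapsed = time.monotonic() - start
            try:
                # Crude pass/fail: does Graphviz accept the generated source?
                graphviz.Source(dot_source).pipe(format="png")
                ok = True
            except graphviz.CalledProcessError:
                ok = False
            results.append({"input": description, "ok": ok, "seconds": elapsed})
    return results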

References

[fn:1] Ridnik, Tal, Dedy Kredo, and Itamar Friedman. “Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering.” arXiv, January 16, 2024. https://doi.org/10.48550/arXiv.2401.08500.
