
plutomi/plutomi

Plutomi

Plutomi is a multi-tenant applicant tracking system that streamlines your entire application process with automated workflows.


Introduction

Plutomi was inspired by my experience at Grubhub, where managing the recruitment, screening, and onboarding of thousands of contractors every month was a significant challenge. Many of these processes were manual, making it difficult to adapt to our constantly changing business needs. I set out to create an open, flexible, and customizable platform that could automate and streamline these workflows.

Plutomi allows you to create applications for anything from jobs to program enrollments, each with customizable stages where you can set up rules and automated workflows based on applicant responses or elapsed time. Here's an example of what a delivery driver application might look like:

  1. Questionnaire - Collects basic applicant info. Applicants are moved to the waiting list if not completed in 30 days.
  2. Waiting List - Pool of idle applicants.
  3. Document Upload - Collects required documents like licenses and insurance.
  4. Final Review - Manual compliance check.
  5. Ready to Drive - Applicants who have completed all stages and are approved. Triggers account creation in an external system via webhook.
  6. Account Creation Failures - Holds applicants whose account creation failed, allowing for troubleshooting and resolution.
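The stages above can be sketched in code. This is a hypothetical model for illustration only; the type and field names are assumptions, not Plutomi's actual schema.

```rust
// Illustrative model of an application's stages and automation rules.
// All names here are hypothetical, not Plutomi's real data model.

#[derive(Debug, Clone, PartialEq)]
enum Rule {
    /// Move idle applicants to another stage after `days` of inactivity.
    MoveAfterIdleDays { days: u32, target_stage: String },
    /// Call an external webhook when an applicant enters this stage.
    TriggerWebhook { url: String },
}

#[derive(Debug, Clone)]
struct Stage {
    name: String,
    rules: Vec<Rule>,
}

fn delivery_driver_application() -> Vec<Stage> {
    vec![
        Stage {
            name: "Questionnaire".into(),
            rules: vec![Rule::MoveAfterIdleDays {
                days: 30,
                target_stage: "Waiting List".into(),
            }],
        },
        Stage { name: "Waiting List".into(), rules: vec![] },
        Stage { name: "Document Upload".into(), rules: vec![] },
        Stage { name: "Final Review".into(), rules: vec![] },
        Stage {
            name: "Ready to Drive".into(),
            rules: vec![Rule::TriggerWebhook {
                // Hypothetical webhook URL for the external account system.
                url: "https://example.com/create-account".into(),
            }],
        },
        Stage { name: "Account Creation Failures".into(), rules: vec![] },
    ]
}

fn main() {
    let stages = delivery_driver_application();
    assert_eq!(stages.len(), 6);
    println!("first stage: {}", stages[0].name);
}
```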

Architecture

Plutomi follows a modular monolith architecture, featuring a Remix frontend and an Axum API written in Rust. All core services rely on a single primary OLTP database, MySQL, which holds all operational data rather than splitting it between consumers or services. Blob storage lives in S3, while features like search and analytics are powered by OpenSearch and ClickHouse.

Infrastructure and Third-Party Tools

We run on Kubernetes (K3S) and manage our infrastructure using Terraform. We use SES to send emails and normalize those events into our jobs table in MySQL. Optional components include Linkerd for service mesh, Axiom for logging, and Cloudflare for CDN.
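As one example of the SES normalization mentioned above, an email event can be mapped to a row for the jobs table. The event variants mirror SES notification types, but the table shape and job-type strings are assumptions for illustration:

```rust
// Hedged sketch: normalize an SES-style email event into a `jobs` row.
// The `JobRow` shape and job-type strings are illustrative assumptions.

#[derive(Debug, PartialEq)]
enum EmailEvent {
    Delivery,
    Bounce,
    Complaint,
}

#[derive(Debug, PartialEq)]
struct JobRow {
    job_type: &'static str,
    payload: String, // e.g., the recipient address
}

fn normalize(event: EmailEvent, recipient: &str) -> JobRow {
    let job_type = match event {
        EmailEvent::Delivery => "email.delivered",
        EmailEvent::Bounce => "email.bounced",
        EmailEvent::Complaint => "email.complained",
    };
    JobRow {
        job_type,
        payload: recipient.to_string(),
    }
}

fn main() {
    let row = normalize(EmailEvent::Bounce, "user@example.com");
    assert_eq!(row.job_type, "email.bounced");
    println!("normalized: {row:?}");
}
```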

Event Streaming Pipeline

Our event streaming pipeline, modeled after Uber's architecture, is powered by Kafka. All event processing is managed by independent consumers written in Rust, which communicate with Kafka rather than directly with each other or the API.

For each entity, we maintain a main Kafka topic along with corresponding retry and dead letter queue (DLQ) topics to handle failures gracefully:

  • Main Topic: The initial destination for all events.

  • Retry Topic: Messages that fail processing in the main topic are rerouted here. Retries use exponential backoff to avoid overwhelming the system.

  • Dead Letter Queue (DLQ): If a message still fails after multiple retries, it's moved to the DLQ for further investigation. Once the underlying issues are resolved (e.g., code fixes, service restoration), the messages are reprocessed by moving them back into the retry topic in a controlled manner, ensuring they do not disrupt live traffic.

For self-hosting, a reasonable starting point (specs are still being finalized) is 3 nodes with 2 vCPU, 8 GB RAM, and a 100 GB SSD each.
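The routing logic described above can be sketched as a pair of pure functions. The topic-naming convention, backoff formula, and retry cap here are assumptions, not Plutomi's exact implementation:

```rust
// Illustrative sketch of the retry/DLQ flow. Topic suffixes, the backoff
// formula, and the attempt cap are assumptions for this example.

/// Retry topic name derived from a main topic name (assumed convention).
fn retry_topic(main: &str) -> String {
    format!("{main}-retry")
}

/// DLQ topic name derived from a main topic name (assumed convention).
fn dlq_topic(main: &str) -> String {
    format!("{main}-dlq")
}

/// Exponential backoff with a cap: 1s, 2s, 4s, ... up to `max_secs`.
fn backoff_secs(attempt: u32, max_secs: u64) -> u64 {
    let delay = 1u64.checked_shl(attempt).unwrap_or(u64::MAX);
    delay.min(max_secs)
}

/// Decide where a failed message goes next: retry until the attempt
/// budget is exhausted, then park it in the DLQ.
fn route_failure(main: &str, attempt: u32, max_attempts: u32) -> String {
    if attempt < max_attempts {
        retry_topic(main)
    } else {
        dlq_topic(main)
    }
}

fn main() {
    assert_eq!(backoff_secs(0, 60), 1);
    assert_eq!(backoff_secs(3, 60), 8);
    assert_eq!(backoff_secs(10, 60), 60); // capped at max_secs
    assert_eq!(route_failure("applicants", 1, 5), "applicants-retry");
    assert_eq!(route_failure("applicants", 5, 5), "applicants-dlq");
}
```

A real consumer would pair this with a Kafka client, publishing to the computed topic after sleeping for the backoff interval.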

For more details on the event streaming pipeline and to view the events, refer to EVENT_STREAMING_PIPELINE.md.

Running Locally

Prerequisites: Docker with the Compose plugin.

To set up your datasources, run `docker compose up -d`, which uses the docker-compose.yaml file to start MySQL, Kafka (with the required topics), and KafkaUI on ports 3306, 9092, and 9000 respectively.

Credentials for all datasources will be admin and password.

Then, copy the .env.example file to .env and execute the run.sh script, which runs the MySQL migrations (via the migrator service) and starts the api and web services along with the Kafka consumers.

$ cp .env.example .env
$ ./scripts/run.sh

Note: the Cloudflare MAIL FROM configuration is still being validated.

You can also run any service individually:

$ ./scripts/run.sh --service <web|api|migrator|consumers>

Deploying

Plutomi is designed to be flexible and can be deployed anywhere you can get your hands on a server; we recommend at least three. All Docker images are available on Docker Hub. Check out DEPLOYING.md for more information.

Terraform state is stored in S3, with DynamoDB used for state locking.

Questions / Troubleshooting

Some common issues are documented in TROUBLESHOOTING.md. If you're wondering why certain architectural decisions were made, check the decisions folder; the reasoning may already be documented there.

If you have other questions, feel free to open a discussion or issue, or contact me on X @notjoswayski or via email at jose@plutomi.com.

See the terraform/ directory for the infrastructure definitions, and the runbooks for operational tasks like adding a new service.
