⚠️ Applications for this challenge are now closed.
Thank you to everyone who participated! We are no longer reviewing new submissions.
Although we are no longer accepting new candidates, the challenge remains public for learning purposes: feel free to fork the repository, explore the problem, and use it for practice.
If you forked this repository before March 17, 2025, and are still working on your solution, you are welcome to submit it. Please complete your implementation and open a pull request by March 21, 2025, for consideration. After this date, we will no longer review submissions.
Welcome to the Cybee.ai Backend Challenge!
This challenge will test your ability to integrate with a cloud event source, specifically Google Workspace Admin SDK logs, and build a system that:
- Accepts a new source (`POST /add-source`) with authentication credentials.
- Periodically fetches logs from Google Workspace.
- Processes and forwards logs to a specified callback URL.
- Handles edge cases like API rate limits, failures, and credential expiration.
If you complete the challenge successfully, you’ll get a chance to talk with our team at Cybee.ai!
Your solution must be built using:
- Node.js
- Fastify (for API development)
- MongoDB (for storing sources and logs)
- Redis (for caching and job scheduling)
- Elasticsearch (for log indexing; optional but a plus)
- Google Workspace Admin SDK (for fetching event logs)
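To make the stack concrete, here is a minimal bootstrap sketch showing one way the pieces might be wired together. The plugin choices (`@fastify/mongodb`, `ioredis`) and the connection URLs are assumptions for illustration, not requirements.

```js
// server.js – minimal bootstrap sketch (assumes @fastify/mongodb and ioredis are installed)
const Fastify = require('fastify');
const fastifyMongo = require('@fastify/mongodb');
const Redis = require('ioredis');

async function buildServer() {
  const app = Fastify({ logger: true });

  // MongoDB: stores sources (and optionally raw logs)
  await app.register(fastifyMongo, {
    url: process.env.MONGO_URL || 'mongodb://localhost:27017/cybee-challenge',
  });

  // Redis: used later for caching, dedup keys, and BullMQ job scheduling
  app.decorate('redis', new Redis(process.env.REDIS_URL || 'redis://localhost:6379'));

  return app;
}

buildServer()
  .then((app) => app.listen({ port: 3000, host: '0.0.0.0' }))
  .catch((err) => {
    console.error(err);
    process.exit(1);
  });
```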
Develop a Fastify-based API that allows users to connect a cloud event source and receive logs.
- `POST /add-source`
  - Accepts Google Workspace as a source type.
  - Stores API credentials securely.
  - Validates credentials before storing.
- `DELETE /remove-source/:id`
  - Removes an existing event source.
- `GET /sources`
  - Returns a list of active sources.
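As one possible shape for the endpoints above, here is a hedged Fastify route sketch. It assumes the MongoDB decorator from `@fastify/mongodb` and a `sources` collection; `validateGoogleCredentials` and `encrypt` are hypothetical helpers you would implement yourself.

```js
// routes/sources.js – illustrative route sketch, not a reference implementation
const { randomUUID } = require('node:crypto');

// Hypothetical helpers: a real implementation would verify the Google credentials
// (e.g. by attempting a small Reports API call) and encrypt them before persisting.
async function validateGoogleCredentials(credentials) { /* throw if invalid */ }
function encrypt(credentials) { return credentials; /* see the encryption sketch further below */ }

async function sourceRoutes(app) {
  // POST /add-source – validate credentials, encrypt them, persist the source
  app.post('/add-source', async (request, reply) => {
    const { sourceType, credentials, logFetchInterval, callbackUrl } = request.body;
    if (sourceType !== 'google_workspace') {
      return reply.code(400).send({ error: 'Unsupported sourceType' });
    }

    await validateGoogleCredentials(credentials);

    const source = {
      id: randomUUID(),
      sourceType,
      credentials: encrypt(credentials),
      logFetchInterval,
      callbackUrl,
      createdAt: new Date(),
    };
    await app.mongo.db.collection('sources').insertOne(source);
    return reply.code(201).send({ id: source.id });
  });

  // DELETE /remove-source/:id – remove the source (and cancel its scheduled fetch job)
  app.delete('/remove-source/:id', async (request, reply) => {
    await app.mongo.db.collection('sources').deleteOne({ id: request.params.id });
    return reply.code(204).send();
  });

  // GET /sources – list active sources without exposing stored credentials
  app.get('/sources', async () => {
    return app.mongo.db
      .collection('sources')
      .find({}, { projection: { credentials: 0, _id: 0 } })
      .toArray();
  });
}

module.exports = sourceRoutes;
```

The plugin would be registered with something like `app.register(require('./routes/sources'))` in the bootstrap file.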
When a user adds a Google Workspace integration, the system should store:
```json
{
  "id": "uuid",
  "sourceType": "google_workspace",
  "credentials": {
    "clientEmail": "string",
    "privateKey": "string",
    "scopes": ["admin.googleapis.com"]
  },
  "logFetchInterval": 300,
  "callbackUrl": "https://example.com/webhook"
}
```
Notes:
- Credentials should be stored securely (e.g., encrypted in MongoDB); a minimal encryption sketch follows below.
- `logFetchInterval` defines how often logs should be fetched (in seconds).
- `callbackUrl` is where processed logs should be sent.
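One way to satisfy the "stored securely" note is to encrypt the credentials blob before writing it to MongoDB. The sketch below uses Node's built-in `crypto` module with AES-256-GCM; the `SOURCE_ENCRYPTION_KEY` environment variable and the stored field layout are assumptions for illustration.

```js
// crypto-helpers.js – hedged sketch of credential encryption with AES-256-GCM
const crypto = require('node:crypto');

// Assumed: a 32-byte key supplied as hex via SOURCE_ENCRYPTION_KEY
const KEY = Buffer.from(process.env.SOURCE_ENCRYPTION_KEY || '00'.repeat(32), 'hex');

function encrypt(credentials) {
  const iv = crypto.randomBytes(12); // unique IV per record
  const cipher = crypto.createCipheriv('aes-256-gcm', KEY, iv);
  const plaintext = JSON.stringify(credentials);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return {
    iv: iv.toString('base64'),
    tag: cipher.getAuthTag().toString('base64'), // integrity check on decrypt
    data: ciphertext.toString('base64'),
  };
}

function decrypt({ iv, tag, data }) {
  const decipher = crypto.createDecipheriv('aes-256-gcm', KEY, Buffer.from(iv, 'base64'));
  decipher.setAuthTag(Buffer.from(tag, 'base64'));
  const plaintext = Buffer.concat([decipher.update(Buffer.from(data, 'base64')), decipher.final()]);
  return JSON.parse(plaintext.toString('utf8'));
}

module.exports = { encrypt, decrypt };
```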
- Once a source is added, the system should:
  - Schedule a job to fetch logs at `logFetchInterval` (e.g., using a queue like BullMQ); see the scheduling sketch after this list.
  - Call the Google Workspace Admin SDK (Reports API) to fetch audit logs.
  - Forward logs to the `callbackUrl` of the source.
  - Retry failed requests and handle rate limits.
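A hedged sketch of the scheduling piece, assuming BullMQ repeatable jobs in Redis. The queue name, job naming scheme, and the `fetchAndForwardLogs` helper are illustrative choices, not requirements.

```js
// scheduler.js – BullMQ repeatable-job sketch (assumes bullmq and ioredis are installed)
const { Queue, Worker } = require('bullmq');
const IORedis = require('ioredis');

// BullMQ workers require maxRetriesPerRequest: null on the ioredis connection
const connection = new IORedis(process.env.REDIS_URL || 'redis://localhost:6379', {
  maxRetriesPerRequest: null,
});

const logQueue = new Queue('log-fetch', { connection });

// Called from POST /add-source: repeat every logFetchInterval seconds
async function scheduleSource(source) {
  await logQueue.add(
    `fetch:${source.id}`, // job name doubles as the per-source marker
    { sourceId: source.id },
    { repeat: { every: source.logFetchInterval * 1000 }, removeOnComplete: true }
  );
}

// Called from DELETE /remove-source/:id: look up the repeatable entry and remove it
async function unscheduleSource(sourceId) {
  const repeatables = await logQueue.getRepeatableJobs();
  const entry = repeatables.find((job) => job.name === `fetch:${sourceId}`);
  if (entry) await logQueue.removeRepeatableByKey(entry.key);
}

// Hypothetical helper – covered by the Reports API and delivery sketches further below
async function fetchAndForwardLogs(sourceId) {
  /* load source, fetch logs from Google, dedupe, POST to callbackUrl */
}

// Worker process: runs each scheduled fetch; jobs survive restarts because they live in Redis
new Worker('log-fetch', (job) => fetchAndForwardLogs(job.data.sourceId), { connection });

module.exports = { scheduleSource, unscheduleSource };
```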
Example Log from Google Workspace:
```json
{
  "id": "log-id",
  "timestamp": "2024-03-10T12:00:00Z",
  "actor": {
    "email": "admin@example.com",
    "ipAddress": "192.168.1.1"
  },
  "eventType": "LOGIN",
  "details": {
    "status": "SUCCESS"
  }
}
```
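Fetching the raw events typically goes through the Reports API's `activities.list` method. The sketch below uses the `googleapis` Node client with a service-account JWT; the impersonated admin subject, the `login` application name, and the mapping onto the example shape above are assumptions.

```js
// google-logs.js – hedged sketch of fetching audit logs via the Admin SDK Reports API
const { google } = require('googleapis');

async function fetchAuditLogs(credentials, startTime) {
  // Service-account auth; the subject is assumed to be a Workspace admin the account impersonates
  const auth = new google.auth.JWT({
    email: credentials.clientEmail,
    key: credentials.privateKey,
    scopes: ['https://www.googleapis.com/auth/admin.reports.audit.readonly'],
    subject: process.env.GOOGLE_ADMIN_SUBJECT, // assumed env var
  });

  const reports = google.admin({ version: 'reports_v1', auth });

  // 'login' is one of several application names (admin, drive, token, ...)
  const res = await reports.activities.list({
    userKey: 'all',
    applicationName: 'login',
    startTime, // RFC 3339 timestamp of the last successful fetch
    maxResults: 1000,
  });

  // Map the Reports API "activities" shape onto the example log format above
  return (res.data.items || []).map((item) => ({
    id: `${item.id.time}-${item.id.uniqueQualifier}`,
    timestamp: item.id.time,
    actor: {
      email: item.actor && item.actor.email,
      ipAddress: item.ipAddress,
    },
    eventType: item.events && item.events[0] && item.events[0].name,
    details: item.events && item.events[0] && item.events[0].parameters,
  }));
}

module.exports = { fetchAuditLogs };
```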
Your system should properly handle:
- API rate limits – Back off and retry.
- Credential expiration – Detect and alert the user.
- Callback failures – Retry failed webhook deliveries.
- Duplicate logs – Ensure logs are not duplicated (a retry and dedup sketch follows this list).
- High availability – Ensure logs keep flowing even if one instance restarts.
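As one way to approach the retry and deduplication items, here is a hedged sketch: exponential backoff around the webhook POST, and a Redis `SET NX` key per log id to drop duplicates. The delays, the seven-day dedup TTL, and the function names are illustrative assumptions, and the global `fetch` assumes Node 18+.

```js
// delivery.js – hedged sketch of webhook retries with backoff and Redis-based dedup
const Redis = require('ioredis');

const redis = new Redis(process.env.REDIS_URL || 'redis://localhost:6379');

// Keep only logs we have not seen before (SET ... NX returns null when the key already exists)
async function dedupeLogs(sourceId, logs) {
  const fresh = [];
  for (const log of logs) {
    const added = await redis.set(`seen:${sourceId}:${log.id}`, 1, 'EX', 7 * 24 * 3600, 'NX');
    if (added === 'OK') fresh.push(log);
  }
  return fresh;
}

// POST logs to the callback URL, backing off exponentially on 429s, 5xx errors, and network failures
async function deliverWithRetry(callbackUrl, logs, maxAttempts = 5) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(callbackUrl, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({ logs }),
      });
      if (res.ok) return true;
      if (res.status !== 429 && res.status < 500) return false; // non-retryable client error
    } catch (err) {
      // network error – fall through to the backoff below
    }
    const delayMs = Math.min(2 ** attempt * 1000, 60_000); // 2s, 4s, 8s, ... capped at 60s
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // caller can park the job for a later retry
}

module.exports = { dedupeLogs, deliverWithRetry };
```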
- (Required) Provide a README with:
- Setup instructions.
- API documentation.
- Explanation of how retries and scheduling work.
- (Bonus) Deploy the solution using Docker & a cloud provider.
- (Bonus) Implement monitoring (e.g., log metrics to Elasticsearch).
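For the monitoring bonus, one lightweight option is to index a small metrics document into Elasticsearch after each fetch cycle. The index name and document fields below are assumptions, and the call assumes the v8 `@elastic/elasticsearch` client.

```js
// metrics.js – hedged sketch of pushing per-fetch metrics to Elasticsearch
const { Client } = require('@elastic/elasticsearch');

const es = new Client({ node: process.env.ELASTICSEARCH_URL || 'http://localhost:9200' });

async function recordFetchMetrics(sourceId, { fetched, delivered, failed, durationMs }) {
  await es.index({
    index: 'log-pipeline-metrics', // assumed index name
    document: {
      sourceId,
      fetched,
      delivered,
      failed,
      durationMs,
      '@timestamp': new Date().toISOString(),
    },
  });
}

module.exports = { recordFetchMetrics };
```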
- Fork this repository and implement your solution in a `backend/` folder.
- Add a `README.md` with setup and usage instructions.
- Submit a pull request.
If your solution meets the challenge requirements, we’ll reach out to schedule a conversation. Looking forward to seeing your work!