This is an implementation attempt at Dynamic Consistency Boundary (DCB) with TypeScript & PostgreSQL, as described by Sara Pellegrini & Milan Savic: https://www.youtube.com/watch?v=0iP65Durhbs
```bash
npm install sorci --save
# or
yarn add sorci
```
The idea was to be able to do DCB without the need for a dedicated event store. So for now there is only one implementation of Sorci => SorciPostgres. Other implementations may be added later. This library has never been used in production yet. Use at your own risk :)
```typescript
import { Sorci, SorciPostgres } from "sorci";

const sorci: Sorci = new SorciPostgres({
  host: "localhost",
  port: 54322,
  user: "postgres",
  password: "postgres",
  databaseName: "postgres",
  streamName: "Your-Stream-Name",
});
```
```typescript
// This will create everything needed to persist the events properly
await sorci.createStream();
```
```typescript
// Small example of appending an event with no impact (no concurrency issue)
await sorci.appendEvent({
  sourcingEvent: {
    id: "48efa9d568d3",
    type: "todo-item-created",
    data: {
      todoItemId: "0a19448ba362",
      text: "Create the Readme of Sorci.js",
    },
    identifier: {
      todoItemId: "0a19448ba362",
    },
  },
});
```
```typescript
// Small example of appending an event with a query (concurrency guarded)
await sorci.appendEvent({
  sourcingEvent: {
    id: "ec5cb643e454",
    type: "todo-item-name-updated",
    data: {
      todoItemId: "0a19448ba362",
      previousText: "Create the Readme of Sorci.js",
      newText: "Improve the Readme of Sorci.js",
    },
    identifier: {
      todoItemId: "0a19448ba362",
    },
  },
  query: {
    types: ["todo-item-created"],
    identifiers: [
      {
        todoItemId: "0a19448ba362",
      },
    ],
  },
  eventIdentifier: "48efa9d568d3",
});
```
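Conceptually, the `query` plus `eventIdentifier` pair acts as an optimistic concurrency check: the append should only succeed if no event matching the query was persisted after the event identified by `eventIdentifier`. Here is a minimal pure-TypeScript sketch of that check; the type shapes and function names are illustrative assumptions, not the library's internals:

```typescript
type SourcingEvent = {
  id: string;
  type: string;
  identifier: Record<string, string>;
};

type Query = {
  types?: string[];
  identifiers?: Array<Record<string, string>>;
};

// Does an event fall inside the consistency boundary described by the query?
const matchesQuery = (event: SourcingEvent, query: Query): boolean => {
  const typeOk = !query.types || query.types.includes(event.type);
  const idOk =
    !query.identifiers ||
    query.identifiers.some((wanted) =>
      Object.entries(wanted).every(([k, v]) => event.identifier[k] === v)
    );
  return typeOk && idOk;
};

// The append is allowed only if no event matching the query was persisted
// after the last event the caller knew about. If the known event is not
// found, the whole stream is checked.
const canAppend = (
  stream: SourcingEvent[],
  query: Query,
  lastKnownEventId: string
): boolean => {
  const lastKnownIndex = stream.findIndex((e) => e.id === lastKnownEventId);
  const newerEvents = stream.slice(lastKnownIndex + 1);
  return !newerEvents.some((e) => matchesQuery(e, query));
};
```

In the example above, the append would be rejected if another `todo-item-created` event for the same `todoItemId` had landed after event `48efa9d568d3`.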
The library creates 2 tables:

- 1 writable
- 1 read-only

The writable table acts as an append log. The read-only table is a synchronized copy of the writable table.

This is a technical constraint. To make sure an event can be persisted safely, the library completely locks the writable table, which means it is also unreadable during a write. The read-only table allows reads while events are being persisted.
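As a rough sketch, the write path described above looks something like the following. The table name, SQL statements, and `SqlClient` interface are illustrative assumptions (a real client would be e.g. `pg`), not the library's actual code:

```typescript
// Minimal shape of something that can run SQL, so the sketch stays
// testable without a live PostgreSQL instance.
interface SqlClient {
  query(sql: string): Promise<void>;
}

// Illustrative append path: take an exclusive lock on the writable table,
// insert, commit (releasing the lock). ACCESS EXCLUSIVE blocks concurrent
// reads and writes on that table, which is why reads are served from the
// synchronized read-only copy instead.
async function appendWithLock(client: SqlClient, insertSql: string): Promise<void> {
  await client.query("BEGIN");
  try {
    await client.query("LOCK TABLE my_stream_write IN ACCESS EXCLUSIVE MODE");
    await client.query(insertSql);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}
```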
Full References - here
Unit tests verify proper appending, with a special focus on concurrency issues.

```bash
yarn run test:unit
```
Since the table where the events are persisted is locked during writes, my main concern was performance, so I ran some benchmarks to see how it performs.

Performance varies with the volume of events in the stream, but for most applications it should not be a problem.

Every benchmark runs for 5s, once against 23 events and once against ~500,000 events. These benchmarks were done on a Dell XPS Pro; they also run in the CI.
~300 ops/s

This is for reference, to establish the baseline of an insert.

~300 ops/s

This is for persisting an event that we know doesn't impact any decision. The library stays very close to the baseline; it's almost a plain insert.
Here we see a big variation: in the first example there are only 2 events of the selected type course-created, so getting the lastVersion is fast. In the second example there are 55,000 events of type course-created, so getting the lastVersion takes a bit longer.

This should not be a big issue, because filtering only by types should not happen very often. The option remains available if necessary.
~230 ops/s

Here the volume should not impact persistence. Identifier has a GIN index, which makes retrieving events by identifier fast.

This is great, because it will be one of the most used ways of persisting events.
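For context, identifier lookups of this kind are typically served in PostgreSQL by JSONB containment (`@>`), which a GIN index on the identifier column can accelerate. A hedged sketch of building such a filter clause; the column name and SQL shape are assumptions for illustration, not the library's internals:

```typescript
// Build a WHERE fragment matching events whose JSONB identifier column
// contains every key/value pair of at least one wanted identifier.
// PostgreSQL's `@>` containment operator is GIN-indexable, which is what
// keeps identifier lookups fast regardless of stream volume.
const identifierFilter = (identifiers: Array<Record<string, string>>): string =>
  identifiers
    .map((wanted) => `identifier @> '${JSON.stringify(wanted)}'`)
    .join(" OR ");
```

(A production implementation would use parameterized queries rather than string interpolation.)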
Here volume impacts the results, but performance is acceptable for most cases. In a benchmark with 1M events the library still scores ~50 ops/s.

Here volume matters: in the second example we retrieve 55,000 events, whereas in the first we retrieve 2.

Here volume matters: in these examples we retrieve the same number of events, but going through the btree index is a bit slower since there is more data. Performance should be good for most cases.

Here volume matters: in these examples we retrieve the same number of events (9,608), but going through the btree & GIN indexes is a bit slower since there is more data. Performance should be good for most cases.
~20,000 ops/s

This is for reference, to establish the baseline of a query.
Requirement: Docker installed

```bash
yarn run bench
```

It will take around ~30s to load the half a million events into the table.
I've been fighting aggregates for a while now. Sometimes it really feels like trying to fit a square into a circle. The approach of Sara Pellegrini & Milan Savic (DCB) solves the concurrency issue I had with an event-only approach. Their conference talk is really great and explains the concept so well that this implementation was possible. I highly recommend it: https://www.youtube.com/watch?v=0iP65Durhbs
I'm really curious to get feedback on this one. Feel free to start/join a discussion, open issues, or submit pull requests.
- Add an appendEvents
- Add a mergeStreams
- Add a splitStream
- Add a way to inject a createId function into SorciEvent
- Write Explanation/postgres-stream.md
- Make the constructor parameter a single explicit payload
- Add an option to serialize data into binary
- Rename clean/clear to dropStream
- Use npm version to publish new versions
- Fix eslint
- Make a GitHub workflow to create a new release
- Version the API doc with multiple folders
- Add an eslint workflow on new-release
- Update documentation only when there is a diff
- Remove the dependency on uuid (make it possible to give a createId function to SorciEvent)
- Make the GitHub CI run the unit tests
- Make the GitHub CI run the benchmark
- Auto-generate the API reference
- Display the API with GitHub Pages
Thrilled to share a new #typescript library I've been working on. Implementing #DynamicConsistencyBoundary from @sara_p & @MilanSavic14 #killAggregate #DCB.🌐 I would love your feedback on it: https://github.com/Sraleik/sorci