From 1e7c775df8a6827e6c5fd7f9488d0fb718b0e62b Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Fri, 22 Nov 2024 02:47:43 -0300 Subject: [PATCH 01/14] chore: remove HTTP listen address from entrypoint script (#735) Remove HTTP listen address from entrypoint script This update eliminates the HTTP listen address configuration from the tsn-entrypoint.sh script. The change ensures that the application will no longer listen on port 8080, which might be unnecessary for its operation. --- deployments/tsn-entrypoint.sh | 1 - 1 file changed, 1 deletion(-) diff --git a/deployments/tsn-entrypoint.sh b/deployments/tsn-entrypoint.sh index eef2ea3cf..5e2d4a447 100644 --- a/deployments/tsn-entrypoint.sh +++ b/deployments/tsn-entrypoint.sh @@ -5,7 +5,6 @@ # remember: flags > env variables > config.toml > defaults exec /app/kwild --root-dir $CONFIG_PATH \ - --app.http-listen-addr "0.0.0.0:8080"\ --app.jsonrpc-listen-addr "0.0.0.0:8484"\ --app.db-read-timeout "60s"\ --app.snapshots.enabled\ From 85e1bda18d7e3aad3514f2f5d26a70ed74bc0be7 Mon Sep 17 00:00:00 2001 From: Michael Buntarman Date: Tue, 26 Nov 2024 22:39:28 +0700 Subject: [PATCH 02/14] chore: update old references to the new organization (#742) * chore: update old references to the new organization * bump: sdk go dependency --- .github/workflows/deploy-staging.yaml | 2 +- .github/workflows/semantic-pr.yml | 2 +- CONTRIBUTING.md | 28 ++-- README.md | 30 ++-- TERMINOLOGY.md | 16 +- Taskfile.yml | 10 +- cmd/init-system/main.go | 2 +- go.mod | 53 +++---- go.sum | 4 +- internal/benchmark/benchmark.go | 6 +- internal/benchmark/load_test.go | 2 +- internal/benchmark/setup.go | 50 ++++-- internal/benchmark/types.go | 2 +- internal/benchmark/utils.go | 20 +-- internal/contracts/README.md | 4 +- internal/contracts/system_contract.kf | 2 +- internal/contracts/tests/common_test.go | 147 ++++++++++-------- .../contracts/tests/complex_composed_test.go | 10 +- internal/contracts/tests/composed_test.go | 42 ++--- internal/contracts/tests/index_change_test.go | 54 ++++--- internal/contracts/tests/primitive_test.go | 12 +- .../contracts/tests/system_contract_test.go | 121 +++++++------- .../tests/utils/procedure/execute.go | 91 ++++++----- .../contracts/tests/utils/setup/common.go | 20 ++- .../contracts/tests/utils/setup/composed.go | 41 ++--- .../contracts/tests/utils/setup/primitive.go | 60 ++++--- .../contracts/tests/utils/table/assert.go | 2 +- internal/init-system-contract/init.go | 10 +- internal/init-system-contract/init_test.go | 2 +- scripts/ci-tests.sh | 2 +- 30 files changed, 474 insertions(+), 373 deletions(-) diff --git a/.github/workflows/deploy-staging.yaml b/.github/workflows/deploy-staging.yaml index f7b71ada3..1074f5c9a 100644 --- a/.github/workflows/deploy-staging.yaml +++ b/.github/workflows/deploy-staging.yaml @@ -43,7 +43,7 @@ jobs: run: | curl -f -H "Authorization: token ${{ secrets.READ_TOKEN }}" \ --create-dirs -o ./deployments/network/staging/genesis.json \ - https://raw.githubusercontent.com/truflation/tsn-node-operator/main/configs/network/staging/genesis.json \ + https://raw.githubusercontent.com/trufnetwork/truf-node-operator/main/configs/network/staging/genesis.json \ --silent --show-error \ || echo "Error: Failed to download genesis.json (HTTP status code: $?)" diff --git a/.github/workflows/semantic-pr.yml b/.github/workflows/semantic-pr.yml index 50b1f2723..f0b43152b 100644 --- a/.github/workflows/semantic-pr.yml +++ b/.github/workflows/semantic-pr.yml @@ -38,7 +38,7 @@ jobs: message: | Hey there! 
👋🏼

            We require pull request titles to follow [Truflation's Developer Guideline](https://github.com/trufnetwork/developers) and it looks like your proposed title needs to be adjusted.

            Details:

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index aa1890af6..8c6bc2fda 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,8 +1,8 @@
 # Contributing
 
-Thanks for taking the time to contribute to tsn-db!
+Thanks for taking the time to contribute to node!
 
-Please follow the guidelines below when contributing. If you have any questions, please feel free to reach out to us on [Discord](https://discord.com/invite/5AMCBYxfW4) or use the [discussions](https://github.com/truflation/tsn-db/discussions) feature on GitHub.
+Please follow the guidelines below when contributing. If you have any questions, please feel free to reach out to us on [Discord](https://discord.com/invite/5AMCBYxfW4) or use the [discussions](https://github.com/trufnetwork/node/discussions) feature on GitHub.
 
 ## Table of Contents
 
@@ -16,20 +16,20 @@ Please follow the guidelines below when contributing. If you have any questions,
 
 ## Discussions
 
-Discussions are for general discussions related to tsn-db. This can be to discuss tsn-db architecture, important tradeoffs / considerations, or any topics that cannot be directly closed by a pull request.
+Discussions are for general discussions related to node. This can be to discuss tn-db architecture, important tradeoffs / considerations, or any topics that cannot be directly closed by a pull request.
 
 Examples of good discussions:
 
-- Discussing potental ways to price queries in tsn-db
+- Discussing potential ways to price queries in tn-db
 - Discussing potential procedures for handling consensus failures & changes (and a future issue if the discussion determines we need a feature, bug fix, or documentation change).
 
 Discussions can lead to an issue if they determine that a feature, bug fix, or documentation change is needed.
 
 ## Issues
 
-Issues are for reporting bugs, requesting features, requesting repository documentation, or discussing any other changes that can be directly resolved with a pull request to the tsn-db repository.
+Issues are for reporting bugs, requesting features, requesting repository documentation, or discussing any other changes that can be directly resolved with a pull request to the tn-db repository.
 
-For general discussions, or discussions where it is unclear how the discussion would be closed by a pull request, please use the [discussion](https://github.com/truflation/tsn-db/discussions) section.
+For general discussions, or discussions where it is unclear how the discussion would be closed by a pull request, please use the [discussion](https://github.com/trufnetwork/node/discussions) section.
 
 For opening issues, please follow the following guidelines:
 
 - **Search** the issue tracker before opening an issue to avoid duplicates.
 - **Be clear & detailed** about what you are reporting. If reporting a bug, please include a minimal reproducible example, a detailed explanation of your KwilD configuration, and any logs or error messages that may be relevant.
 
-We strongly recommended submitting an issue before submitting a pull request, especially for pull requests that will require significant effort on your part.
This is to ensure that the issue is not already being worked on and that your pull request will be accepted. Some features or fixes are intentionally not included in tsn-db - use issues to check with maintainers and save time!
+We strongly recommend submitting an issue before submitting a pull request, especially for pull requests that will require significant effort on your part. This is to ensure that the issue is not already being worked on and that your pull request will be accepted. Some features or fixes are intentionally not included in tn-db - use issues to check with maintainers and save time!
 
 ## Pull Requests
 
 ### Commit Messages
 
-tsn-db uses recommended [Go commit messages](https://go.dev/doc/contribute#commit_messages), with breaking changes and deprecations to be noted in the commit message footer. Commits should follow the following format:
+tn-db uses recommended [Go commit messages](https://go.dev/doc/contribute#commit_messages), with breaking changes and deprecations to be noted in the commit message footer. Commits should follow the following format:
 
 ```
 [Package/File/Directory Name]: [Concise and concrete description of the PR/Issue]
 
@@ -65,7 +65,7 @@ Resolves #123
 
 BREAKING CHANGE: This PR changes the behavior of the foo command. It now does something else.
 ```
 
-There are two types of breaking changes: API breaking changes and consensus breaking changes. API breaking changes are any changes that effect the external API of packages that are consumed outside of tsn-db. Consensus breaking changes are any changes that effect the consensus protocol of tsn-db (i.e. changes to the extension, or changes to the consensus protocol in internal).
+There are two types of breaking changes: API breaking changes and consensus breaking changes. API breaking changes are any changes that affect the external API of packages that are consumed outside of tn-db. Consensus breaking changes are any changes that affect the consensus protocol of tn-db (i.e. changes to the extension, or changes to the consensus protocol in internal).
 
 Changes to internal packages (i.e. deployments, internal, and scripts) that do not affect the database or consensus are not considered breaking changes and do not need to be tagged in the commit footer.
 
@@ -79,11 +79,11 @@ Please ensure that your contributions adhere to the following coding guidelines:
 
 ### Pull Request Process
 
-1. Fork the repository by clicking the "Fork" button on the top right of the repository page. Clone the tsn-db repository and add your fork as a remote.
+1. Fork the repository by clicking the "Fork" button on the top right of the repository page. Clone the node repository and add your fork as a remote.
 
 ```bash
-git clone https://github.com/truflation/tsn-db
-cd tsn-db
+git clone https://github.com/trufnetwork/node
+cd node
 git checkout main
 git remote add <your-fork-name> <your-fork-url>
 git fetch <your-fork-name>
 ```
 
@@ -115,10 +115,10 @@ git push -u <your-fork-name> <your-branch-name>
 
 Please ensure that all the commits in your git history match the commit message [guidelines](#commit-messages) above. You can use `git rebase -i` to edit your commit history.
 
-5. Open a pull request to the `main` branch of the tsn-db repository. Please follow the PR template. If `main` updates while the PR is open, please update the branch with latest `main` (rebase or merge).
+5. Open a pull request to the `main` branch of the tn-db repository. Please follow the PR template. If `main` updates while the PR is open, please update the branch with the latest `main` (rebase or merge).
 
 6. Wait for a maintainer to review your PR.
If there are any issues, you will be notified and you can make the necessary changes. ## License -By contributing to tsn-db, you agree that your contributions will be licensed under its [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). +By contributing to tn-db, you agree that your contributions will be licensed under its [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). diff --git a/README.md b/README.md index ec3e88467..8d017250e 100644 --- a/README.md +++ b/README.md @@ -1,34 +1,34 @@ -# TSN DB +# Truf Network -The database for Truflation Stream Network (TSN). It is built on top of the Kwil framework. +The database for Truf Network (TN). It is built on top of the Kwil framework. ## Overview -Learn more about Truflation at [Truflation.com](https://truflation.com). Check our internal components status [here](https://truflation.grafana.net/public-dashboards/6fe3021962bb4fe1a4aebf5baddecab6). +Learn more about Truflation at [Truflation.com](https://trufnetwork.com). Check our internal components status [here](https://trufnetwork.grafana.net/public-dashboards/6fe3021962bb4fe1a4aebf5baddecab6). ### SDKs -To interact with TSN, we provide official SDKs in multiple languages: +To interact with TN, we provide official SDKs in multiple languages: -- **Go SDK** ([tsn-sdk](https://github.com/truflation/tsn-sdk)): A Go library for interacting with TSN, providing tools for publishing, composing, and consuming economic data streams. Supports primitive streams, composed streams, and system streams. +- **Go SDK** ([sdk-go](https://github.com/trufnetwork/sdk-go)): A Go library for interacting with TN, providing tools for publishing, composing, and consuming economic data streams. Supports primitive streams, composed streams, and system streams. -- **TypeScript/JavaScript SDK** ([tsn-sdk-js](https://github.com/truflation/tsn-sdk-js)): A TypeScript/JavaScript library that offers the same capabilities as the Go SDK, with specific implementations for both Node.js and browser environments. +- **TypeScript/JavaScript SDK** ([sdk-js](https://github.com/trufnetwork/sdk-go-js)): A TypeScript/JavaScript library that offers the same capabilities as the Go SDK, with specific implementations for both Node.js and browser environments. Both SDKs provide high-level abstractions for: - Stream deployment and initialization - Data insertion and retrieval - Stream composition and management -- Configurable integration with any deployed TSN Node +- Configurable integration with any deployed TN Node ## Terminology -See [TERMINOLOGY.md](./TERMINOLOGY.md) for a list of terms used in the TSN project. +See [TERMINOLOGY.md](./TERMINOLOGY.md) for a list of terms used in the TN project. ## Build instructions ### Prerequisites -To build and run the TSN-DB, you will need the following installed on your system: +To build and run the TN-DB, you will need the following installed on your system: 1. [Go](https://golang.org/doc/install) 2. [Taskfile](https://taskfile.dev/installation) @@ -85,7 +85,7 @@ chmod +x ./.build/kwil-cli ``` 4. Export the kwil-cli before using it: ```shell -export PATH="$PATH:$HOME/tsn/.build" +export PATH="$PATH:$HOME/tn/.build" ``` ##### Run Postgres @@ -143,9 +143,9 @@ task indexer You can view the metrics dashboard at http://localhost:1337/v0/swagger. Replace `localhost` with the IP address or domain name of the server where the indexer is running. -You can view our deployed indexer at https://staging.tsn.truflation.com/v0/swagger. 
+You can view our deployed indexer at https://staging.tn.trufnetwork.com/v0/swagger. There you can see the list of available endpoints and their descriptions. -For example, you can see the list of transactions by calling the [/chain/transactions](https://staging.tsn.truflation.com/v0/chain/transactions) endpoint. +For example, you can see the list of transactions by calling the [/chain/transactions](https://staging.tn.trufnetwork.com/v0/chain/transactions) endpoint. ### Genesis File @@ -172,7 +172,7 @@ It still needs to be updated to use the correct private key. As our system contract is currently live on our staging server, you can fetch records from the system contract using the following command: ```shell -kwil-cli database call -a=get_unsafe_record -n=tsn_system_contract -o=34f9e432b4c70e840bc2021fd161d15ab5e19165 data_provider:4710a8d8f0d845da110086812a32de6d90d7ff5c stream_id:st1b148397e2ea36889efad820e2315d date_from:2024-06-01 date_to:2024-06-17 --private-key 0000000000000000000000000000000000000000000000000000000000000001 --provider https://staging.tsn.truflation.com +kwil-cli database call -a=get_unsafe_record -n=tn_system_contract -o=34f9e432b4c70e840bc2021fd161d15ab5e19165 data_provider:4710a8d8f0d845da110086812a32de6d90d7ff5c stream_id:st1b148397e2ea36889efad820e2315d date_from:2024-06-01 date_to:2024-06-17 --private-key 0000000000000000000000000000000000000000000000000000000000000001 --provider https://staging.tn.trufnetwork.com ``` in this example, we are fetching records from the system contract for the stream `st1b148397e2ea36889efad820e2315d` from the data provider `4710a8d8f0d845da110086812a32de6d90d7ff5c` which is Electric Vehicle Index that provided by @@ -197,7 +197,7 @@ list of available actions in the system contract: Currently, users can fetch records from the contract directly. ```shell -kwil-cli database call -a=get_record -n=st1b148397e2ea36889efad820e2315d -o=4710a8d8f0d845da110086812a32de6d90d7ff5c date_from:2024-06-01 date_to:2024-06-17 --private-key 0000000000000000000000000000000000000000000000000000000000000001 --provider https://staging.tsn.truflation.com +kwil-cli database call -a=get_record -n=st1b148397e2ea36889efad820e2315d -o=4710a8d8f0d845da110086812a32de6d90d7ff5c date_from:2024-06-01 date_to:2024-06-17 --private-key 0000000000000000000000000000000000000000000000000000000000000001 --provider https://staging.tn.trufnetwork.com ``` Users are able to fetch records from the stream contract without through the system contract to keep it simple at this phase, and in the hands of the Truflation as a data provider. Normally, before fetching records from the stream @@ -242,4 +242,4 @@ For detailed configuration and usage, refer to the workflow files in the `.githu ## License -The tsn-db repository is licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for more details. +The tn-db repository is licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for more details. diff --git a/TERMINOLOGY.md b/TERMINOLOGY.md index a1784b01e..1ace707d0 100644 --- a/TERMINOLOGY.md +++ b/TERMINOLOGY.md @@ -1,6 +1,6 @@ # Terminology -This document is a reference for the terminology used in the TSN project. It is intended to be a living document that evolves as the project evolves. It is meant to be a reference for developers and users of the TSN project. +This document is a reference for the terminology used in the TN project. It is intended to be a living document that evolves as the project evolves. 
It is meant to be a reference for developers and users of the TN project.
 
 ## Definitions
 
@@ -14,7 +14,7 @@
 - EXTENSION: Short for Kwil extension.
 - PROCEDURE: Short for Kwil procedure.
 - DATA PROVIDER: An entity that creates/pushes primitive data OR creates taxonomy definitions.
-- ENVIRONMENT: A TSN deployment that serves a purpose. E.g., local, staging, production.
+- ENVIRONMENT: A TN deployment that serves a purpose. E.g., local, staging, production.
 - PRIMITIVE: A data element that is supplied directly from Data Provider.
 - STREAM RECORD: Or just RECORD. It's the value used to calculate indexes. If it's a primitive stream, it's the primitive value.
 - INDEX: A calculation over _RECORD_. E.g., `currentDateRecord / baseDateRecord`.
@@ -22,14 +22,14 @@
 - UPGRADEABLE CONTRACT: A contract that doesn't need redeployment for important structural changes.
 - CHILD OF: A relation between streams, where a child stream is a subset of a parent stream.
 - PARENT OF: A relation between streams, where a parent stream is a superset of a child stream. All streams that have children are Composed Streams.
-- TRUFLATION DATABASE: The MariaDB instance that stores truflation data. It may be an instance of some environment (test, staging, prod).
+- TRUFLATION DATABASE: The MariaDB instance that stores trufnetwork data. It may be an instance of some environment (test, staging, prod).
 - TRUFLATION DATABASE TABLE: We should NOT use _TABLE_ to refer to it without correctly specifying; otherwise, it creates confusion with kuneiform tables.
 - WHITELIST: A list of wallets that defines permission to perform a certain action. It may be "write" or "read" specific.
-- PRIVATE KEY: A secret key that refers to a wallet. It may own contracts, or refer to an entity/user that needs to interact with the TSN-DB.
-- SYSTEM CONTRACT: The unique contract within TSN that manages official streams and serves as the primary access point for stream queries.
-- OFFICIAL STREAM: A stream that has been approved by TSN governance and registered in the System Contract.
-- UNOFFICIAL STREAM: A stream that exists within TSN but has not been approved by governance or registered in the System Contract.
-- TSN GOVERNANCE: The entity or group responsible for approving streams and managing the System Contract.
+- PRIVATE KEY: A secret key that refers to a wallet. It may own contracts, or refer to an entity/user that needs to interact with the TN-DB.
+- SYSTEM CONTRACT: The unique contract within TN that manages official streams and serves as the primary access point for stream queries.
+- OFFICIAL STREAM: A stream that has been approved by TN governance and registered in the System Contract.
+- UNOFFICIAL STREAM: A stream that exists within TN but has not been approved by governance or registered in the System Contract.
+- TN GOVERNANCE: The entity or group responsible for approving streams and managing the System Contract.
 - SAFE READ: A query made through the System Contract that only returns data from official streams.
 - UNSAFE READ: A query made through the System Contract that can return data from any stream, official or unofficial.
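The INDEX definition above is the only entry in this glossary that carries a formula. A minimal sketch of that calculation in Go (the `index` helper is hypothetical and uses plain float64 for readability; the contracts compute it over fixed-precision decimals):

```go
package main

import "fmt"

// index applies the glossary definition of INDEX: the stream record at the
// queried date divided by the record at the base date.
func index(currentDateRecord, baseDateRecord float64) float64 {
	return currentDateRecord / baseDateRecord
}

func main() {
	// A record of 103.5 against a base-date record of 100 yields an index of 1.035.
	fmt.Println(index(103.5, 100))
}
```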
diff --git a/Taskfile.yml b/Taskfile.yml index 556bb472b..dc8fba374 100644 --- a/Taskfile.yml +++ b/Taskfile.yml @@ -68,9 +68,9 @@ tasks: - docker compose -f deployments/dev-gateway/dev-gateway-compose.yaml up -d - docker compose -f deployments/indexer/dev-indexer-compose.yaml up -d env: - BACKENDS: tsn-db-1:8484,tsn-db-2:8484 - CHAIN_ID: tsn-local - NODE_COMETBFT_ENDPOINT: http://tsn-db-1:26657 + BACKENDS: tn-db-1:8484,tn-db-2:8484 + CHAIN_ID: tn-local + NODE_COMETBFT_ENDPOINT: http://tn-db-1:26657 KWIL_PG_CONN: postgresql://kwild@kwil-postgres-1:5432/kwild?sslmode=disable compose-dev-down: @@ -144,11 +144,11 @@ tasks: - bash ./scripts/coverage.sh get-genesis: - desc: Get genesis file from tsn-node-operator repo + desc: Get genesis file from tn-node-operator repo dotenv: ['.env'] cmds: - | - curl -f -H "Authorization: token ${READ_TOKEN}" --create-dirs -o ./deployments/network/staging/genesis.json https://raw.githubusercontent.com/truflation/tsn-node-operator/main/configs/network/staging/genesis.json + curl -f -H "Authorization: token ${READ_TOKEN}" --create-dirs -o ./deployments/network/staging/genesis.json https://raw.githubusercontent.com/trufnetwork/truf-node-operator/main/configs/network/staging/genesis.json benchmark: desc: Run benchmark tests and regenerate the report diff --git a/cmd/init-system/main.go b/cmd/init-system/main.go index 020044b47..84d607514 100644 --- a/cmd/init-system/main.go +++ b/cmd/init-system/main.go @@ -15,7 +15,7 @@ import ( "github.com/aws/aws-sdk-go/service/ssm" "github.com/caarlos0/env/v11" "github.com/mitchellh/mapstructure" - init_system_contract "github.com/truflation/tsn-db/internal/init-system-contract" + init_system_contract "github.com/trufnetwork/node/internal/init-system-contract" ) // DeployContractResourceProperties represents the properties of the custom resource diff --git a/go.mod b/go.mod index 5b1b025d6..ce5d2db38 100644 --- a/go.mod +++ b/go.mod @@ -1,10 +1,11 @@ -module github.com/truflation/tsn-db +module github.com/trufnetwork/node go 1.22.1 require ( github.com/aws/aws-lambda-go v1.47.0 github.com/aws/aws-sdk-go v1.54.4 + github.com/caarlos0/env/v11 v11.2.2 github.com/cenkalti/backoff/v4 v4.3.0 github.com/cockroachdb/apd/v3 v3.2.1 github.com/docker/docker v27.3.1+incompatible @@ -18,35 +19,12 @@ require ( github.com/pkg/errors v0.9.1 github.com/samber/lo v1.47.0 github.com/stretchr/testify v1.9.0 - github.com/truflation/tsn-sdk v0.1.1-0.20241010060239-bca7759c3c10 + github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 + go.uber.org/zap v1.27.0 golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 golang.org/x/sync v0.8.0 ) -require ( - github.com/containerd/log v0.1.0 // indirect - github.com/distribution/reference v0.6.0 // indirect - github.com/docker/go-connections v0.5.0 // indirect - github.com/docker/go-units v0.5.0 // indirect - github.com/felixge/httpsnoop v1.0.4 // indirect - github.com/go-logr/logr v1.4.2 // indirect - github.com/go-logr/stdr v1.2.2 // indirect - github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect - github.com/jrick/logrotate v1.1.2 // indirect - github.com/moby/docker-image-spec v1.3.1 // indirect - github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect - github.com/opencontainers/go-digest v1.0.0 // indirect - github.com/opencontainers/image-spec v1.1.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0 // indirect - go.opentelemetry.io/otel v1.30.0 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.30.0 // 
indirect - go.opentelemetry.io/otel/metric v1.30.0 // indirect - go.opentelemetry.io/otel/sdk v1.30.0 // indirect - go.opentelemetry.io/otel/trace v1.30.0 // indirect - golang.org/x/time v0.6.0 // indirect - gotest.tools/v3 v3.5.1 // indirect -) - require ( dario.cat/mergo v1.0.0 // indirect github.com/DataDog/zstd v1.4.5 // indirect @@ -56,7 +34,6 @@ require ( github.com/beorn7/perks v1.0.1 // indirect github.com/bits-and-blooms/bitset v1.13.0 // indirect github.com/btcsuite/btcd/btcec/v2 v2.3.4 // indirect - github.com/caarlos0/env/v11 v11.2.2 github.com/cespare/xxhash v1.1.0 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/chzyer/readline v1.5.0 // indirect @@ -70,6 +47,7 @@ require ( github.com/cometbft/cometbft-db v0.11.0 // indirect github.com/consensys/bavard v0.1.13 // indirect github.com/consensys/gnark-crypto v0.12.1 // indirect + github.com/containerd/log v0.1.0 // indirect github.com/cosmos/gogoproto v1.7.0 // indirect github.com/crate-crypto/go-kzg-4844 v1.0.0 // indirect github.com/cstockton/go-conv v1.0.0 // indirect @@ -81,14 +59,20 @@ require ( github.com/dgraph-io/badger/v3 v3.2103.5 // indirect github.com/dgraph-io/ristretto v0.1.1 // indirect github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 // indirect + github.com/distribution/reference v0.6.0 // indirect + github.com/docker/go-connections v0.5.0 // indirect + github.com/docker/go-units v0.5.0 // indirect github.com/dustin/go-humanize v1.0.1 // indirect github.com/ethereum/c-kzg-4844 v1.0.2 // indirect github.com/ethereum/go-ethereum v1.14.8 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/getsentry/sentry-go v0.27.0 // indirect github.com/go-kit/kit v0.12.0 // indirect github.com/go-kit/log v0.2.1 // indirect github.com/go-logfmt/logfmt v0.6.0 // indirect + github.com/go-logr/logr v1.4.2 // indirect + github.com/go-logr/stdr v1.2.2 // indirect github.com/go-ole/go-ole v1.3.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/glog v1.2.1 // indirect @@ -100,6 +84,7 @@ require ( github.com/google/go-cmp v0.6.0 // indirect github.com/google/orderedcode v0.0.1 // indirect github.com/gorilla/websocket v1.5.3 // indirect + github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect github.com/hashicorp/hcl v1.0.0 // indirect github.com/holiman/uint256 v1.3.1 // indirect github.com/inconshreveable/mousetrap v1.1.0 // indirect @@ -112,6 +97,7 @@ require ( github.com/jmespath/go-jmespath v0.4.0 // indirect github.com/jmhodges/levigo v1.0.0 // indirect github.com/jpillora/backoff v1.0.0 // indirect + github.com/jrick/logrotate v1.1.2 // indirect github.com/klauspost/compress v1.17.9 // indirect github.com/kr/pretty v0.3.1 // indirect github.com/kr/text v0.2.0 // indirect @@ -123,9 +109,13 @@ require ( github.com/mattn/go-runewidth v0.0.14 // indirect github.com/minio/highwayhash v1.0.2 // indirect github.com/mmcloughlin/addchain v0.4.0 // indirect + github.com/moby/docker-image-spec v1.3.1 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/near/borsh-go v0.3.1 // indirect github.com/oasisprotocol/curve25519-voi v0.0.0-20220708102147-0a8a51822cae // indirect github.com/olekukonko/tablewriter v0.0.6-0.20230925090304-df64c4bbad77 // indirect + github.com/opencontainers/go-digest v1.0.0 // indirect + github.com/opencontainers/image-spec v1.1.0 // indirect github.com/pelletier/go-toml/v2 v2.2.2 // indirect github.com/petermattis/goid 
v0.0.0-20240503122002-4b96552b8156 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect @@ -155,16 +145,23 @@ require ( github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 // indirect go.etcd.io/bbolt v1.3.10 // indirect go.opencensus.io v0.24.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0 // indirect + go.opentelemetry.io/otel v1.30.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.30.0 // indirect + go.opentelemetry.io/otel/metric v1.30.0 // indirect + go.opentelemetry.io/otel/sdk v1.30.0 // indirect + go.opentelemetry.io/otel/trace v1.30.0 // indirect go.uber.org/multierr v1.11.0 // indirect - go.uber.org/zap v1.27.0 golang.org/x/crypto v0.27.0 // indirect golang.org/x/net v0.29.0 // indirect golang.org/x/sys v0.25.0 // indirect golang.org/x/text v0.18.0 // indirect + golang.org/x/time v0.6.0 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect google.golang.org/grpc v1.66.1 // indirect google.golang.org/protobuf v1.34.2 // indirect gopkg.in/ini.v1 v1.67.0 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect + gotest.tools/v3 v3.5.1 // indirect rsc.io/tmplfunc v0.0.3 // indirect ) diff --git a/go.sum b/go.sum index 16c88461b..09fccb113 100644 --- a/go.sum +++ b/go.sum @@ -467,8 +467,8 @@ github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYg github.com/tklauser/numcpus v0.8.0/go.mod h1:ZJZlAY+dmR4eut8epnzf0u/VwodKmryxR8txiloSqBE= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 h1:TvtdmeYsYEij78hS4oxnwikoiLdIrgav3BA+CbhaDAI= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346/go.mod h1:xKQhd7snlzKFuUi1taTGWjpRE8iFTA06DeacYi3CVFQ= -github.com/truflation/tsn-sdk v0.1.1-0.20241010060239-bca7759c3c10 h1:WxTIFzPp5qohnr7E0yPuCXgwnBhZN2mMRb8T8JExacU= -github.com/truflation/tsn-sdk v0.1.1-0.20241010060239-bca7759c3c10/go.mod h1:DghqhnXJRHyFNDXRYpiDKvO9TAv6Jepd8av+CbioURk= +github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 h1:LNZ99bITHatmYKVHH4YqBhpuAg2bUx8SlP2VHAWR4jE= +github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52/go.mod h1:xfGmTkamZxyAOG331+P2oceLFxR7u/4lIoELyYEeCR4= github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8= github.com/tyler-smith/go-bip39 v1.1.0/go.mod h1:gUYDtqQw1JS3ZJ8UWVcGTGqqr6YIN3CWg+kkNaLt55U= github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= diff --git a/internal/benchmark/benchmark.go b/internal/benchmark/benchmark.go index 3cd908502..fad6975ac 100644 --- a/internal/benchmark/benchmark.go +++ b/internal/benchmark/benchmark.go @@ -3,7 +3,7 @@ package benchmark import ( "context" "fmt" - benchutil "github.com/truflation/tsn-db/internal/benchmark/util" + benchutil "github.com/trufnetwork/node/internal/benchmark/util" "log" "os" "time" @@ -12,8 +12,8 @@ import ( "github.com/pkg/errors" kwilTesting "github.com/kwilteam/kwil-db/testing" - "github.com/truflation/tsn-db/internal/benchmark/trees" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/benchmark/trees" + "github.com/trufnetwork/sdk-go/core/util" ) func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c BenchmarkCase, tree trees.Tree) ([]Result, error) { diff --git a/internal/benchmark/load_test.go b/internal/benchmark/load_test.go index 96139c30a..5f6b995a0 100644 --- a/internal/benchmark/load_test.go +++ 
b/internal/benchmark/load_test.go @@ -13,7 +13,7 @@ import ( kwilTesting "github.com/kwilteam/kwil-db/testing" "github.com/pkg/errors" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/util" ) // should execute docker", "rm", "-f", "kwil-testing-postgres diff --git a/internal/benchmark/setup.go b/internal/benchmark/setup.go index ad3a00500..876e3b2ec 100644 --- a/internal/benchmark/setup.go +++ b/internal/benchmark/setup.go @@ -17,10 +17,10 @@ import ( "github.com/kwilteam/kwil-db/parse" kwilTesting "github.com/kwilteam/kwil-db/testing" "github.com/pkg/errors" - "github.com/truflation/tsn-db/internal/benchmark/trees" - "github.com/truflation/tsn-db/internal/contracts" - "github.com/truflation/tsn-sdk/core/types" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/benchmark/trees" + "github.com/trufnetwork/node/internal/contracts" + "github.com/trufnetwork/sdk-go/core/types" + "github.com/trufnetwork/sdk-go/core/util" "golang.org/x/sync/errgroup" ) @@ -107,24 +107,32 @@ func setupSchemas( } func createAndInitializeSchema(ctx context.Context, platform *kwilTesting.Platform, schema *kwiltypes.Schema) error { - if err := platform.Engine.CreateDataset(ctx, platform.DB, schema, &common.TransactionData{ - Signer: platform.Deployer, + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: 0, + }, TxID: platform.Txid(), - Height: 0, - }); err != nil { + Signer: platform.Deployer, + } + + if err := platform.Engine.CreateDataset(txContext, platform.DB, schema); err != nil { return errors.Wrap(err, "failed to create dataset") } - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext2 := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: 1, + }, + TxID: platform.Txid(), + Signer: platform.Deployer, + Caller: MustEthereumAddressFromBytes(platform.Deployer).Address(), + } + _, err := platform.Engine.Procedure(txContext2, platform.DB, &common.ExecutionData{ Procedure: "init", Dataset: utils.GenerateDBID(schema.Name, platform.Deployer), Args: []any{}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - Caller: MustEthereumAddressFromBytes(platform.Deployer).Address(), - TxID: platform.Txid(), - Height: 1, - }, }) if err != nil { return errors.Wrap(err, "failed to initialize schema") @@ -264,8 +272,12 @@ func batchInsertMetadata(ctx context.Context, platform *kwilTesting.Platform, db sqlStmt += strings.Join(values, ", ") + txContext := &common.TxContext{ + Ctx: ctx, + } + // Execute the bulk insert - _, err := platform.Engine.Execute(ctx, platform.DB, dbid, sqlStmt, nil) + _, err := platform.Engine.Execute(txContext, platform.DB, dbid, sqlStmt, nil) if err != nil { return errors.Wrap(err, "failed to execute bulk insert for metadata") } @@ -291,8 +303,12 @@ func insertRecordsForPrimitive(ctx context.Context, platform *kwilTesting.Platfo sqlStmt += strings.Join(values, ", ") + txContext := &common.TxContext{ + Ctx: ctx, + } + // Execute the bulk insert - _, err := platform.Engine.Execute(ctx, platform.DB, dbid, sqlStmt, nil) + _, err := platform.Engine.Execute(txContext, platform.DB, dbid, sqlStmt, nil) if err != nil { return errors.Wrap(err, "failed to execute bulk insert") } diff --git a/internal/benchmark/types.go b/internal/benchmark/types.go index 63f2d1d3d..11dbbd1ff 100644 --- a/internal/benchmark/types.go +++ b/internal/benchmark/types.go @@ -3,7 +3,7 @@ package benchmark import ( "time" - 
"github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/util" ) type ( diff --git a/internal/benchmark/utils.go b/internal/benchmark/utils.go index cadb06b73..749834f5b 100644 --- a/internal/benchmark/utils.go +++ b/internal/benchmark/utils.go @@ -14,8 +14,8 @@ import ( "github.com/kwilteam/kwil-db/common" kwilTesting "github.com/kwilteam/kwil-db/testing" "github.com/pkg/errors" - "github.com/truflation/tsn-db/internal/benchmark/benchexport" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/benchmark/benchexport" + "github.com/trufnetwork/sdk-go/core/util" "golang.org/x/exp/constraints" ) @@ -39,16 +39,18 @@ func generateRecords(fromDate, toDate time.Time) [][]any { // executeStreamProcedure executes a procedure on the given platform and database. // It handles the common setup for procedure execution, including transaction data. func executeStreamProcedure(ctx context.Context, platform *kwilTesting.Platform, dbid, procedure string, args []any, signer []byte) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + TxID: platform.Txid(), + Signer: signer, + Caller: MustEthereumAddressFromBytes(signer).Address(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: procedure, Dataset: dbid, Args: args, - TransactionData: common.TransactionData{ - Signer: signer, - Caller: MustEthereumAddressFromBytes(signer).Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return errors.Wrap(err, "failed to execute stream procedure") diff --git a/internal/contracts/README.md b/internal/contracts/README.md index 32e11ca7c..fe373bb8e 100644 --- a/internal/contracts/README.md +++ b/internal/contracts/README.md @@ -20,11 +20,11 @@ These contracts define the core functionality of the TSN, including: ## Synchronization -We aim to keep these contracts in sync with the public versions in the [tsn-sdk repository](https://github.com/truflation/tsn-sdk). This private repository serves as the primary development environment. +We aim to keep these contracts in sync with the public versions in the [sdk-go repository](https://github.com/trufnetwork/sdk-go). This private repository serves as the primary development environment. 
## Additional Resources -- [Detailed Contract Documentation](https://github.com/truflation/tsn-sdk/blob/main/docs/contracts.md) +- [Detailed Contract Documentation](https://github.com/trufnetwork/sdk-go/blob/main/docs/contracts.md) - Benchmark tool (located in this directory) - Kuneiform logic tests (located in this directory) diff --git a/internal/contracts/system_contract.kf b/internal/contracts/system_contract.kf index 36709a84a..4ab34872f 100644 --- a/internal/contracts/system_contract.kf +++ b/internal/contracts/system_contract.kf @@ -1,4 +1,4 @@ -database tsn_system_contract; +database tn_system_contract; // `system_streams` is the table that stores the streams that have been accepted by the TSN Gov table system_streams { diff --git a/internal/contracts/tests/common_test.go b/internal/contracts/tests/common_test.go index b7a2100cc..64ed788ab 100644 --- a/internal/contracts/tests/common_test.go +++ b/internal/contracts/tests/common_test.go @@ -4,7 +4,7 @@ import ( "context" "testing" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" "github.com/pkg/errors" "github.com/stretchr/testify/assert" @@ -16,8 +16,8 @@ import ( "github.com/kwilteam/kwil-db/parse" kwilTesting "github.com/kwilteam/kwil-db/testing" - "github.com/truflation/tsn-db/internal/contracts" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/contracts" + "github.com/trufnetwork/sdk-go/core/util" ) // ContractInfo holds information about a contract for testing purposes. @@ -285,16 +285,18 @@ func testInitializationLogic(t *testing.T, contractInfo ContractInfo) kwilTestin } dbid := getDBID(contractInfo) + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: contractInfo.Deployer.Bytes(), + TxID: platform.Txid(), + } + // Attempt to re-initialize the contract - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "init", Dataset: dbid, Args: []any{}, - TransactionData: common.TransactionData{ - Signer: contractInfo.Deployer.Bytes(), - TxID: platform.Txid(), - Height: 0, - }, }) assert.Error(t, err, "Contract should not be re-initializable") @@ -484,25 +486,30 @@ func setupContract(ctx context.Context, platform *kwilTesting.Platform, contract } schema.Name = contractInfo.StreamID.String() - return platform.Engine.CreateDataset(ctx, platform.DB, schema, &common.TransactionData{ - Signer: contractInfo.Deployer.Bytes(), - Caller: contractInfo.Deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }) + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: contractInfo.Deployer.Bytes(), + Caller: contractInfo.Deployer.Address(), + TxID: platform.Txid(), + } + + return platform.Engine.CreateDataset(txContext, platform.DB, schema) } func initializeContract(ctx context.Context, platform *kwilTesting.Platform, dbid string, deployer util.EthereumAddress) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "init", Dataset: dbid, Args: []any{}, - TransactionData: 
common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) return err } @@ -512,31 +519,35 @@ func getDBID(contractInfo ContractInfo) string { } func insertMetadata(ctx context.Context, platform *kwilTesting.Platform, deployer util.EthereumAddress, dbid string, key, value, valType string) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "insert_metadata", Dataset: dbid, Args: []any{key, value, valType}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) return err } func getMetadata(ctx context.Context, platform *kwilTesting.Platform, deployer util.EthereumAddress, dbid, key string) ([]any, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_metadata", Dataset: dbid, Args: []any{key, true, nil}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return nil, err @@ -548,46 +559,52 @@ func getMetadata(ctx context.Context, platform *kwilTesting.Platform, deployer u } func disableMetadata(ctx context.Context, platform *kwilTesting.Platform, deployer util.EthereumAddress, dbid string, rowID *types.UUID) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "disable_metadata", Dataset: dbid, Args: []any{rowID.String()}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) return err } func transferStreamOwnership(ctx context.Context, platform *kwilTesting.Platform, deployer util.EthereumAddress, dbid, newOwner string) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "transfer_stream_ownership", Dataset: dbid, Args: []any{newOwner}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) return err } func checkReadPermissions(ctx context.Context, platform *kwilTesting.Platform, deployer util.EthereumAddress, dbid string, wallet string) (bool, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + 
BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "is_wallet_allowed_to_read", Dataset: dbid, Args: []any{wallet}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return false, err @@ -603,16 +620,18 @@ func checkComposePermissions(ctx context.Context, platform *kwilTesting.Platform if err != nil { return false, err } - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "is_stream_allowed_to_compose", Dataset: dbid, Args: []any{foreignCaller}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return false, err diff --git a/internal/contracts/tests/complex_composed_test.go b/internal/contracts/tests/complex_composed_test.go index 2b37b5ddd..01bd4c75c 100644 --- a/internal/contracts/tests/complex_composed_test.go +++ b/internal/contracts/tests/complex_composed_test.go @@ -5,13 +5,13 @@ import ( "fmt" "testing" - testutils "github.com/truflation/tsn-db/internal/contracts/tests/utils" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/setup" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/table" + testutils "github.com/trufnetwork/node/internal/contracts/tests/utils" + "github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/setup" + "github.com/trufnetwork/node/internal/contracts/tests/utils/table" "github.com/pkg/errors" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/util" "github.com/kwilteam/kwil-db/core/utils" kwilTesting "github.com/kwilteam/kwil-db/testing" diff --git a/internal/contracts/tests/composed_test.go b/internal/contracts/tests/composed_test.go index 5a26c9b1d..268de742a 100644 --- a/internal/contracts/tests/composed_test.go +++ b/internal/contracts/tests/composed_test.go @@ -6,17 +6,17 @@ import ( "strconv" "testing" - "github.com/truflation/tsn-sdk/core/types" + "github.com/trufnetwork/sdk-go/core/types" "github.com/kwilteam/kwil-db/common" "github.com/kwilteam/kwil-db/core/utils" kwilTesting "github.com/kwilteam/kwil-db/testing" "github.com/pkg/errors" "github.com/stretchr/testify/assert" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/setup" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/table" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/setup" + "github.com/trufnetwork/node/internal/contracts/tests/utils/table" + "github.com/trufnetwork/sdk-go/core/util" ) func TestComposed(t *testing.T) { @@ -560,16 +560,18 @@ func setTaxonomy(ctx context.Context, platform *kwilTesting.Platform, dbid strin startDate = 
taxonomies.StartDate.String() } - _, err = platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + _, err = platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "set_taxonomy", Dataset: dbid, Args: []any{dataProviders, streamIDs, decimalWeights, startDate}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return errors.Wrap(err, "Failed to execute set_taxonomy procedure") @@ -584,16 +586,18 @@ func disableTaxonomy(ctx context.Context, platform *kwilTesting.Platform, dbid s return errors.Wrap(err, "Failed to create Ethereum address from bytes") } - _, err = platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + _, err = platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "disable_taxonomy", Dataset: dbid, Args: []any{version}, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return errors.Wrap(err, "Failed to execute disable_taxonomy procedure") diff --git a/internal/contracts/tests/index_change_test.go b/internal/contracts/tests/index_change_test.go index bf0d7b352..399fdcfce 100644 --- a/internal/contracts/tests/index_change_test.go +++ b/internal/contracts/tests/index_change_test.go @@ -4,11 +4,11 @@ import ( "context" "testing" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/setup" + "github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/setup" "github.com/pkg/errors" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/util" "github.com/kwilteam/kwil-db/common" "github.com/kwilteam/kwil-db/core/types/decimal" @@ -73,17 +73,19 @@ func testIndexChange(t *testing.T) func(ctx context.Context, platform *kwilTesti return errors.Wrap(err, "error creating ethereum address") } + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: signer.Bytes(), + Caller: signer.Address(), + TxID: platform.Txid(), + } + // Get index change for 7 days with 1 day interval - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_index_change", Dataset: dbid, Args: []any{"2023-01-02", "2023-01-08", nil, nil, 1}, - TransactionData: common.TransactionData{ - Signer: signer.Bytes(), - Caller: signer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return errors.Wrap(err, "error getting index change") @@ -119,7 +121,7 @@ func testIndexChange(t *testing.T) func(ctx context.Context, platform *kwilTesti } } -// testing https://system.docs.truflation.com/backend/cpi-calculations/workflow/yoy-values specification +// testing https://system.docs.trufnetwork.com/backend/cpi-calculations/workflow/yoy-values specification func testYoYIndexChange(t *testing.T) 
func(ctx context.Context, platform *kwilTesting.Platform) error { return func(ctx context.Context, platform *kwilTesting.Platform) error { @@ -160,17 +162,19 @@ func testYoYIndexChange(t *testing.T) func(ctx context.Context, platform *kwilTe return errors.Wrap(err, "error creating ethereum address") } + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: signer.Bytes(), + Caller: signer.Address(), + TxID: platform.Txid(), + } + // Test YoY calculation - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_index_change", Dataset: dbid, Args: []any{"2023-05-22", "2023-05-22", nil, nil, 365}, // 365 days interval for YoY - TransactionData: common.TransactionData{ - Signer: signer.Bytes(), - Caller: signer.Address(), - TxID: platform.Txid(), - Height: 0, - }, }) if err != nil { return errors.Wrap(err, "error getting index change") @@ -231,15 +235,17 @@ func testDivisionByZero(t *testing.T) func(ctx context.Context, platform *kwilTe return errors.Wrap(err, "error setting up primitive stream") } - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_index_change", Dataset: dbid, Args: []any{"2023-01-01", "2023-01-03", nil, nil, 1}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 0, - }, }) assert.Error(t, err, "division by zero") diff --git a/internal/contracts/tests/primitive_test.go b/internal/contracts/tests/primitive_test.go index 7e62922f0..6fd26ad86 100644 --- a/internal/contracts/tests/primitive_test.go +++ b/internal/contracts/tests/primitive_test.go @@ -5,13 +5,13 @@ import ( "testing" "github.com/stretchr/testify/assert" - testutils "github.com/truflation/tsn-db/internal/contracts/tests/utils" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/setup" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/table" + testutils "github.com/trufnetwork/node/internal/contracts/tests/utils" + "github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/setup" + "github.com/trufnetwork/node/internal/contracts/tests/utils/table" - "github.com/truflation/tsn-sdk/core/types" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/types" + "github.com/trufnetwork/sdk-go/core/util" "github.com/pkg/errors" diff --git a/internal/contracts/tests/system_contract_test.go b/internal/contracts/tests/system_contract_test.go index f9ecc4d87..177a64cb1 100644 --- a/internal/contracts/tests/system_contract_test.go +++ b/internal/contracts/tests/system_contract_test.go @@ -12,10 +12,10 @@ import ( "github.com/kwilteam/kwil-db/parse" kwilTesting "github.com/kwilteam/kwil-db/testing" - "github.com/truflation/tsn-db/internal/contracts" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/setup" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/contracts" + 
"github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/setup" + "github.com/trufnetwork/sdk-go/core/util" ) const ( @@ -196,12 +196,15 @@ func deploySystemContract(ctx context.Context, platform *kwilTesting.Platform) e return errors.Wrap(err, "Failed to create system contract deployer") } - return platform.Engine.CreateDataset(ctx, platform.DB, schema, &common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: platform.Txid(), - Height: 2, - }) + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 2}, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + TxID: platform.Txid(), + } + + return platform.Engine.CreateDataset(txContext, platform.DB, schema) } func checkContractExists(ctx context.Context, platform *kwilTesting.Platform, contractName string) (bool, error) { @@ -235,43 +238,49 @@ func deployPrimitiveStreamWithData(ctx context.Context, platform *kwilTesting.Pl } func executeAcceptStream(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID util.StreamId) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 3}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "accept_stream", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String()}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 3, - }, }) return err } func executeRevokeStream(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID util.StreamId) error { - _, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 4}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + _, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "revoke_stream", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String()}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 4, - }, }) return err } func isStreamAccepted(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID util.StreamId) (bool, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 5}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_official_stream", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String()}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 5, - }, }) if err != nil { return false, err @@ -283,43 +292,49 @@ func isStreamAccepted(ctx context.Context, platform *kwilTesting.Platform, dataP } func executeGetUnsafeRecord(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID 
util.StreamId, dateFrom, dateTo string, frozenAt int64) ([][]any, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 6}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_unsafe_record", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String(), dateFrom, dateTo, frozenAt}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 6, - }, }) return result.Rows, err } func executeGetUnsafeIndex(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID util.StreamId, dateFrom, dateTo string, frozenAt int64) ([][]any, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 7}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_unsafe_index", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String(), dateFrom, dateTo, frozenAt, nil}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 7, - }, }) return result.Rows, err } func executeGetRecord(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID util.StreamId, dateFrom, dateTo string, frozenAt int64) ([][]any, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 8}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_record", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String(), dateFrom, dateTo, frozenAt}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 8, - }, }) // can't just return result.Rows, err, otherwise we get a nil pointer dereference @@ -331,15 +346,17 @@ func executeGetRecord(ctx context.Context, platform *kwilTesting.Platform, dataP } func executeGetIndex(ctx context.Context, platform *kwilTesting.Platform, dataProvider util.EthereumAddress, streamID util.StreamId, dateFrom, dateTo string, frozenAt int64) ([][]any, error) { - result, err := platform.Engine.Procedure(ctx, platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 9}, + Signer: platform.Deployer, + TxID: platform.Txid(), + } + + result, err := platform.Engine.Procedure(txContext, platform.DB, &common.ExecutionData{ Procedure: "get_index", Dataset: utils.GenerateDBID(systemContractName, platform.Deployer), Args: []any{dataProvider.Address(), streamID.String(), dateFrom, dateTo, frozenAt, nil}, - TransactionData: common.TransactionData{ - Signer: platform.Deployer, - TxID: platform.Txid(), - Height: 9, - }, }) return result.Rows, err } diff --git a/internal/contracts/tests/utils/procedure/execute.go 
b/internal/contracts/tests/utils/procedure/execute.go index 27298a70b..ac818d94a 100644 --- a/internal/contracts/tests/utils/procedure/execute.go +++ b/internal/contracts/tests/utils/procedure/execute.go @@ -3,8 +3,7 @@ package procedure import ( "context" "fmt" - - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/util" "github.com/kwilteam/kwil-db/common" kwilTesting "github.com/kwilteam/kwil-db/testing" @@ -17,16 +16,20 @@ func GetRecord(ctx context.Context, input GetRecordInput) ([]ResultRow, error) { return nil, errors.Wrap(err, "error in getRecord") } - result, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: input.Height, + }, + TxID: input.Platform.Txid(), + Signer: input.Platform.Deployer, + Caller: deployer.Address(), + } + + result, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "get_record", Dataset: input.DBID, Args: []any{input.DateFrom, input.DateTo, input.FrozenAt}, - TransactionData: common.TransactionData{ - Signer: input.Platform.Deployer, - Caller: deployer.Address(), - TxID: input.Platform.Txid(), - Height: input.Height, - }, }) if err != nil { return nil, errors.Wrap(err, "error in getRecord") @@ -41,16 +44,20 @@ func GetIndex(ctx context.Context, input GetIndexInput) ([]ResultRow, error) { return nil, errors.Wrap(err, "error in getIndex") } - result, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: input.Height, + }, + TxID: input.Platform.Txid(), + Signer: input.Platform.Deployer, + Caller: deployer.Address(), + } + + result, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "get_index", Dataset: input.DBID, Args: []any{input.DateFrom, input.DateTo, input.FrozenAt, input.BaseDate}, - TransactionData: common.TransactionData{ - Signer: input.Platform.Deployer, - Caller: deployer.Address(), - TxID: input.Platform.Txid(), - Height: input.Height, - }, }) if err != nil { return nil, errors.Wrap(err, "error in getIndex") @@ -65,16 +72,20 @@ func GetIndexChange(ctx context.Context, input GetIndexChangeInput) ([]ResultRow return nil, errors.Wrap(err, "error in getIndexChange") } - result, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: input.Height, + }, + TxID: input.Platform.Txid(), + Signer: input.Platform.Deployer, + Caller: deployer.Address(), + } + + result, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "get_index_change", Dataset: input.DBID, Args: []any{input.DateFrom, input.DateTo, input.FrozenAt, input.BaseDate, input.Interval}, - TransactionData: common.TransactionData{ - Signer: input.Platform.Deployer, - Caller: deployer.Address(), - TxID: input.Platform.Txid(), - Height: input.Height, - }, }) if err != nil { return nil, errors.Wrap(err, "error in getIndexChange") @@ -89,16 +100,20 @@ func GetFirstRecord(ctx context.Context, input GetFirstRecordInput) ([]ResultRow return nil, errors.Wrap(err, "error in getFirstRecord") } - result, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: 
&common.BlockContext{ + Height: input.Height, + }, + TxID: input.Platform.Txid(), + Signer: input.Platform.Deployer, + Caller: deployer.Address(), + } + + result, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "get_first_record", Dataset: input.DBID, Args: []any{input.AfterDate, input.FrozenAt}, - TransactionData: common.TransactionData{ - Signer: input.Platform.Deployer, - Caller: deployer.Address(), - TxID: input.Platform.Txid(), - Height: input.Height, - }, }) if err != nil { return nil, errors.Wrap(err, "error in getFirstRecord") @@ -140,16 +155,18 @@ func DescribeTaxonomies(ctx context.Context, input DescribeTaxonomiesInput) ([]R return nil, errors.Wrap(err, "error in DescribeTaxonomies.NewEthereumAddressFromBytes") } - result, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + BlockContext: &common.BlockContext{Height: 0}, + Signer: input.Platform.Deployer, + Caller: deployer.Address(), + TxID: input.Platform.Txid(), + Ctx: ctx, + } + + result, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "describe_taxonomies", Dataset: input.DBID, Args: []any{input.LatestVersion}, - TransactionData: common.TransactionData{ - Signer: input.Platform.Deployer, - Caller: deployer.Address(), - TxID: input.Platform.Txid(), - Height: 0, - }, }) if err != nil { return nil, errors.Wrap(err, "error in DescribeTaxonomies.Procedure") diff --git a/internal/contracts/tests/utils/setup/common.go b/internal/contracts/tests/utils/setup/common.go index db682f358..338ee4684 100644 --- a/internal/contracts/tests/utils/setup/common.go +++ b/internal/contracts/tests/utils/setup/common.go @@ -5,7 +5,7 @@ import ( "github.com/kwilteam/kwil-db/common" kwilTesting "github.com/kwilteam/kwil-db/testing" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/sdk-go/core/util" ) type InitializeContractInput struct { @@ -16,16 +16,20 @@ type InitializeContractInput struct { } func initializeContract(ctx context.Context, input InitializeContractInput) error { - _, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: input.Height, + }, + TxID: input.Platform.Txid(), + Signer: input.Deployer.Bytes(), + Caller: input.Deployer.Address(), + } + + _, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "init", Dataset: input.Dbid, Args: []any{}, - TransactionData: common.TransactionData{ - Signer: input.Deployer.Bytes(), - Caller: input.Deployer.Address(), - TxID: input.Platform.Txid(), - Height: input.Height, - }, }) return err } diff --git a/internal/contracts/tests/utils/setup/composed.go b/internal/contracts/tests/utils/setup/composed.go index a8d4b1ebf..c78158a34 100644 --- a/internal/contracts/tests/utils/setup/composed.go +++ b/internal/contracts/tests/utils/setup/composed.go @@ -10,11 +10,11 @@ import ( "github.com/kwilteam/kwil-db/parse" kwilTesting "github.com/kwilteam/kwil-db/testing" "github.com/pkg/errors" - "github.com/truflation/tsn-db/internal/contracts" - testdate "github.com/truflation/tsn-db/internal/contracts/tests/utils/date" - testtable "github.com/truflation/tsn-db/internal/contracts/tests/utils/table" - "github.com/truflation/tsn-sdk/core/types" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/contracts" + 
testdate "github.com/trufnetwork/node/internal/contracts/tests/utils/date" + testtable "github.com/trufnetwork/node/internal/contracts/tests/utils/table" + "github.com/trufnetwork/sdk-go/core/types" + "github.com/trufnetwork/sdk-go/core/util" ) type ComposedStreamDefinition struct { @@ -41,12 +41,15 @@ func setupComposedAndPrimitives(ctx context.Context, input SetupComposedAndPrimi } composedSchema.Name = input.ComposedStreamDefinition.StreamLocator.StreamId.String() - if err := input.Platform.Engine.CreateDataset(ctx, input.Platform.DB, composedSchema, &common.TransactionData{ - Signer: input.ComposedStreamDefinition.StreamLocator.DataProvider.Bytes(), - Caller: input.ComposedStreamDefinition.StreamLocator.DataProvider.Address(), - TxID: input.Platform.Txid(), - Height: input.Height, - }); err != nil { + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: input.Height}, + Signer: input.ComposedStreamDefinition.StreamLocator.DataProvider.Bytes(), + Caller: input.ComposedStreamDefinition.StreamLocator.DataProvider.Address(), + TxID: input.Platform.Txid(), + } + + if err := input.Platform.Engine.CreateDataset(txContext, input.Platform.DB, composedSchema); err != nil { return errors.Wrap(err, "error creating composed dataset") } @@ -226,7 +229,15 @@ func setTaxonomy(ctx context.Context, input SetTaxonomyInput) error { dbid := utils.GenerateDBID(input.composedStream.StreamLocator.StreamId.String(), input.composedStream.StreamLocator.DataProvider.Bytes()) - _, err = input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{Height: 0}, + Signer: input.Platform.Deployer, + Caller: deployer.Address(), + TxID: input.Platform.Txid(), + } + + _, err = input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "set_taxonomy", Dataset: dbid, Args: []any{ @@ -235,12 +246,6 @@ func setTaxonomy(ctx context.Context, input SetTaxonomyInput) error { weightStrings, startDate, }, - TransactionData: common.TransactionData{ - Signer: input.Platform.Deployer, - Caller: deployer.Address(), - TxID: input.Platform.Txid(), - Height: 0, - }, }) return err } diff --git a/internal/contracts/tests/utils/setup/primitive.go b/internal/contracts/tests/utils/setup/primitive.go index 5c679756f..47efc358d 100644 --- a/internal/contracts/tests/utils/setup/primitive.go +++ b/internal/contracts/tests/utils/setup/primitive.go @@ -3,7 +3,7 @@ package setup import ( "context" - "github.com/truflation/tsn-sdk/core/types" + "github.com/trufnetwork/sdk-go/core/types" "github.com/golang-sql/civil" "github.com/kwilteam/kwil-db/common" @@ -11,10 +11,10 @@ import ( "github.com/kwilteam/kwil-db/parse" kwilTesting "github.com/kwilteam/kwil-db/testing" "github.com/pkg/errors" - "github.com/truflation/tsn-db/internal/contracts" - testdate "github.com/truflation/tsn-db/internal/contracts/tests/utils/date" - testtable "github.com/truflation/tsn-db/internal/contracts/tests/utils/table" - "github.com/truflation/tsn-sdk/core/util" + "github.com/trufnetwork/node/internal/contracts" + testdate "github.com/trufnetwork/node/internal/contracts/tests/utils/date" + testtable "github.com/trufnetwork/node/internal/contracts/tests/utils/table" + "github.com/trufnetwork/sdk-go/core/util" ) type PrimitiveStreamDefinition struct { @@ -56,12 +56,17 @@ func setupPrimitive(ctx context.Context, setupInput SetupPrimitiveInput) error { return errors.Wrap(err, "error in setupPrimitive") } - if 
err := setupInput.Platform.Engine.CreateDataset(ctx, setupInput.Platform.DB, primitiveSchema, &common.TransactionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: setupInput.Height, + }, + TxID: setupInput.Platform.Txid(), Signer: deployer.Bytes(), Caller: deployer.Address(), - TxID: setupInput.Platform.Txid(), - Height: setupInput.Height, - }); err != nil { + } + + if err := setupInput.Platform.Engine.CreateDataset(txContext, setupInput.Platform.DB, primitiveSchema); err != nil { return errors.Wrap(err, "error creating primitive dataset") } @@ -172,16 +177,21 @@ func InsertMarkdownPrimitiveData(ctx context.Context, input InsertMarkdownDataIn if value == "" { continue } - _, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: input.Height, + }, + TxID: txid, + Signer: signer.Bytes(), + Caller: signer.Address(), + } + + _, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "insert_record", Dataset: dbid, Args: []any{testdate.MustParseDate(date), value}, - TransactionData: common.TransactionData{ - Signer: signer.Bytes(), - Caller: signer.Address(), - TxID: txid, - Height: input.Height, - }, }) if err != nil { return err @@ -216,16 +226,20 @@ func insertPrimitiveData(ctx context.Context, input InsertPrimitiveDataInput) er } for _, arg := range args { - _, err := input.Platform.Engine.Procedure(ctx, input.Platform.DB, &common.ExecutionData{ + txContext := &common.TxContext{ + Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: input.height, + }, + TxID: txid, + Signer: deployer.Bytes(), + Caller: deployer.Address(), + } + + _, err := input.Platform.Engine.Procedure(txContext, input.Platform.DB, &common.ExecutionData{ Procedure: "insert_record", Dataset: dbid, Args: arg, - TransactionData: common.TransactionData{ - Signer: deployer.Bytes(), - Caller: deployer.Address(), - TxID: txid, - Height: input.height, - }, }) if err != nil { return err diff --git a/internal/contracts/tests/utils/table/assert.go b/internal/contracts/tests/utils/table/assert.go index 4086800a6..4b1e5d7c7 100644 --- a/internal/contracts/tests/utils/table/assert.go +++ b/internal/contracts/tests/utils/table/assert.go @@ -2,7 +2,7 @@ package table import ( "github.com/stretchr/testify/assert" - "github.com/truflation/tsn-db/internal/contracts/tests/utils/procedure" + "github.com/trufnetwork/node/internal/contracts/tests/utils/procedure" "testing" ) diff --git a/internal/init-system-contract/init.go b/internal/init-system-contract/init.go index 3a0744094..0249bd945 100644 --- a/internal/init-system-contract/init.go +++ b/internal/init-system-contract/init.go @@ -16,12 +16,12 @@ import ( ) type InitSystemContractOptions struct { - // PrivateKey is the private key of the account that will deploy the system contract. i.e., the TSN wallet + // PrivateKey is the private key of the account that will deploy the system contract. 
i.e., the TN wallet PrivateKey string - // ProviderUrl we're using the gateway client to interact with the TSN, so it should be the gateway URL + // ProviderUrl we're using the gateway client to interact with the TN, so it should be the gateway URL ProviderUrl string SystemContractContent string - // RetryTimeout is the maximum time to wait for the TSN to start + // RetryTimeout is the maximum time to wait for the TN to start RetryTimeout time.Duration } @@ -40,7 +40,7 @@ func InitSystemContract(ctx context.Context, options InitSystemContractOptions) var kwilClient clientType.Client - // Make sure the TSN is running. We expect to receive pong. On this step, we retry for the max timeout + // Make sure the TN is running. We expect to receive pong. On this step, we retry for the max timeout err = backoff.RetryNotify(func() error { kwilClient, err = gatewayclient.NewClient(ctx, options.ProviderUrl, &gatewayclient.GatewayOptions{ Options: clientType.Options{ @@ -66,7 +66,7 @@ func InitSystemContract(ctx context.Context, options InitSystemContractOptions) backoff.WithMaxInterval(15*time.Second), backoff.WithMaxElapsedTime(options.RetryTimeout), ), func(err error, duration time.Duration) { - zap.L().Warn("Error while waiting for TSN to start", zap.Error(err), zap.String("retry_in", duration.String())) + zap.L().Warn("Error while waiting for TN to start", zap.Error(err), zap.String("retry_in", duration.String())) }) if err != nil { diff --git a/internal/init-system-contract/init_test.go b/internal/init-system-contract/init_test.go index ec670818e..6d0188ca6 100644 --- a/internal/init-system-contract/init_test.go +++ b/internal/init-system-contract/init_test.go @@ -6,7 +6,7 @@ import ( "testing" "time" - "github.com/truflation/tsn-db/internal/contracts" + "github.com/trufnetwork/node/internal/contracts" ) func TestInitSystemContract(t *testing.T) { diff --git a/scripts/ci-tests.sh b/scripts/ci-tests.sh index c66655a2f..c7c4ea0ea 100755 --- a/scripts/ci-tests.sh +++ b/scripts/ci-tests.sh @@ -43,5 +43,5 @@ function expect_success() { return $FAILURE } -echo -e "❓ Making sure we're able to ping the TSN node\n" +echo -e "❓ Making sure we're able to ping the TN node\n" expect_success "$(../.build/kwil-cli utils ping 2>&1)" \ No newline at end of file From 82bba839c8463fb370d5bf610bb786278001b34b Mon Sep 17 00:00:00 2001 From: Michael Buntarman Date: Sat, 4 Jan 2025 00:43:48 +0700 Subject: [PATCH 03/14] fix: able to run latest node version (#769) --- deployments/Dockerfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deployments/Dockerfile b/deployments/Dockerfile index 5b5c5bb4b..adaa89365 100644 --- a/deployments/Dockerfile +++ b/deployments/Dockerfile @@ -24,7 +24,7 @@ FROM alpine:latest WORKDIR /app # add postgres client tools -RUN apk add --no-cache postgresql-client +RUN apk add --no-cache postgresql16-client # move .build content to /app COPY --from=build /app/.build/* /app/ From 089ef3ceb74f2f9065ccfe130825ac7d4f77feb3 Mon Sep 17 00:00:00 2001 From: Michael Buntarman Date: Tue, 7 Jan 2025 19:03:04 +0700 Subject: [PATCH 04/14] test: benchmark new timestamp contracts (#771) --- internal/benchmark/README.md | 6 + internal/benchmark/benchmark.go | 13 +- internal/benchmark/constants.go | 9 +- internal/benchmark/load_test.go | 29 +- internal/benchmark/load_unix_test.go | 181 +++ internal/benchmark/setup.go | 43 +- internal/benchmark/utils.go | 40 +- .../composed_stream_template_unix.kf | 1106 +++++++++++++++++ internal/contracts/contracts.go | 6 + 
internal/contracts/primitive_stream_unix.kf | 721 +++++++++++ 10 files changed, 2108 insertions(+), 46 deletions(-) create mode 100644 internal/benchmark/load_unix_test.go create mode 100644 internal/contracts/composed_stream_template_unix.kf create mode 100644 internal/contracts/primitive_stream_unix.kf diff --git a/internal/benchmark/README.md b/internal/benchmark/README.md index 6dd2b6dc3..9b6b95038 100644 --- a/internal/benchmark/README.md +++ b/internal/benchmark/README.md @@ -29,6 +29,12 @@ From the root of the project, run the following command: go test -v ./internal/benchmark ``` +To run a specific test case, use the `-run` flag: +``` +go test -run TestBenchUnix ./internal/benchmark -v +``` +The above command will run the `TestBenchUnix` test case. + ## Results After running, the benchmark will output performance metrics for each test case, including: diff --git a/internal/benchmark/benchmark.go b/internal/benchmark/benchmark.go index fad6975ac..a678f3ddc 100644 --- a/internal/benchmark/benchmark.go +++ b/internal/benchmark/benchmark.go @@ -16,12 +16,13 @@ import ( "github.com/trufnetwork/sdk-go/core/util" ) -func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c BenchmarkCase, tree trees.Tree) ([]Result, error) { +func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c BenchmarkCase, tree trees.Tree, unixOnly bool) ([]Result, error) { var results []Result err := setupSchemas(ctx, platform, SetupSchemasInput{ BenchmarkCase: c, Tree: tree, + UnixOnly: unixOnly, }) if err != nil { return nil, errors.Wrap(err, "failed to setup schemas") @@ -35,6 +36,7 @@ func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c Benchma Days: day, Procedure: procedure, Tree: tree, + UnixOnly: unixOnly, }) if err != nil { return nil, errors.Wrap(err, "failed to run single test") @@ -52,6 +54,7 @@ type RunSingleTestInput struct { Days int Procedure ProcedureEnum Tree trees.Tree + UnixOnly bool } // runSingleTest runs a single test for the given input and returns the result. 
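The hunk that follows is where runSingleTest switches from civil-date strings to unix-timestamp strings when UnixOnly is set. As a minimal standalone sketch of that formatting difference (the helper name formatRange and its signature are invented for illustration; only the "2006-01-02" layout, the AddDate arithmetic, and fixedDate come from the patch):

```go
package main

import (
	"fmt"
	"time"
)

// formatRange sketches the branch added to runSingleTest below: civil-date
// strings for the date-based contracts, unix-second strings for the *_unix
// contracts. The function itself is illustrative, not part of the patch.
func formatRange(fixedDate time.Time, days int, unixOnly bool) (from, to string) {
	start := fixedDate.AddDate(0, 0, -days)
	if unixOnly {
		return fmt.Sprintf("%d", start.Unix()), fmt.Sprintf("%d", fixedDate.Unix())
	}
	return start.Format("2006-01-02"), fixedDate.Format("2006-01-02")
}

func main() {
	fixed := time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC) // the benchmark's fixedDate
	from, to := formatRange(fixed, 7, true)
	fmt.Println(from, to) // 1608854400 1609459200
}
```

Because both modes produce strings, the procedure arguments stay `[]any{fromDate, toDate, ...}` in either case; only the encoding of the range changes.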
@@ -60,6 +63,10 @@ func runSingleTest(ctx context.Context, input RunSingleTestInput) (Result, error nthDbId := utils.GenerateDBID(getStreamId(0).String(), input.Platform.Deployer) fromDate := fixedDate.AddDate(0, 0, -input.Days).Format("2006-01-02") toDate := fixedDate.Format("2006-01-02") + if input.UnixOnly { + fromDate = fmt.Sprintf("%d", fixedDate.AddDate(0, 0, -input.Days).Unix()) + toDate = fmt.Sprintf("%d", fixedDate.Unix()) + } result := Result{ Case: input.Case, @@ -127,7 +134,7 @@ type RunBenchmarkInput struct { } // it returns a result channel to be accumulated by the caller -func getBenchmarFn(benchmarkCase BenchmarkCase, resultCh *chan []Result) func(ctx context.Context, platform *kwilTesting.Platform) error { +func getBenchmarFn(benchmarkCase BenchmarkCase, resultCh *chan []Result, unixOnly bool) func(ctx context.Context, platform *kwilTesting.Platform) error { return func(ctx context.Context, platform *kwilTesting.Platform) error { log.Println("running benchmark", benchmarkCase) platform.Deployer = deployer.Bytes() @@ -142,7 +149,7 @@ func getBenchmarFn(benchmarkCase BenchmarkCase, resultCh *chan []Result) func(ct return fmt.Errorf("tree max depth (%d) is greater than max depth (%d)", tree.MaxDepth, maxDepth) } - results, err := runBenchmark(ctx, platform, benchmarkCase, tree) + results, err := runBenchmark(ctx, platform, benchmarkCase, tree, unixOnly) if err != nil { return errors.Wrap(err, "failed to run benchmark") } diff --git a/internal/benchmark/constants.go b/internal/benchmark/constants.go index 7ad9ce3be..48a1feeb1 100644 --- a/internal/benchmark/constants.go +++ b/internal/benchmark/constants.go @@ -5,8 +5,9 @@ import ( ) var ( - readerAddress = MustNewEthereumAddressFromString("0x0000000000000000010000000000000000000001") - deployer = MustNewEthereumAddressFromString("0x0000000000000000000000000000000200000000") - fixedDate = time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC) - maxDepth = 179 // found empirically + readerAddress = MustNewEthereumAddressFromString("0x0000000000000000010000000000000000000001") + deployer = MustNewEthereumAddressFromString("0x0000000000000000000000000000000200000000") + fixedDate = time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC) + maxDepth = 179 // found empirically + insertFreqInTime = 1 * time.Minute ) diff --git a/internal/benchmark/load_test.go b/internal/benchmark/load_test.go index 5f6b995a0..09549d9bb 100644 --- a/internal/benchmark/load_test.go +++ b/internal/benchmark/load_test.go @@ -5,7 +5,6 @@ import ( "fmt" "math" "os" - "os/exec" "os/signal" "strconv" "testing" @@ -16,16 +15,6 @@ import ( "github.com/trufnetwork/sdk-go/core/util" ) -// should execute docker", "rm", "-f", "kwil-testing-postgres -func cleanupDocker() { - // Execute the cleanup command - cmd := exec.Command("docker", "rm", "-f", "kwil-testing-postgres") - err := cmd.Run() - if err != nil { - fmt.Printf("Error during cleanup: %v\n", err) - } -} - // Main benchmark test function func TestBench(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) @@ -41,6 +30,7 @@ func TestBench(t *testing.T) { cancel() } }() + defer cleanupDocker() // set default LOG_RESULTS to true if os.Getenv("LOG_RESULTS") == "" { @@ -127,7 +117,7 @@ func TestBench(t *testing.T) { }, }, // use pointer, so we can reassign the results channel - &resultsCh)) + &resultsCh, false)) } } @@ -189,18 +179,3 @@ func TestBench(t *testing.T) { t.Fatalf("failed to save results: %s", err) } } - -func chunk[T any](arr []T, chunkSize int) [][]T { - var result [][]T - - for i := 0; i < len(arr); 
i += chunkSize { - end := i + chunkSize - if end > len(arr) { - end = len(arr) - } - - result = append(result, arr[i:end]) - } - - return result -} diff --git a/internal/benchmark/load_unix_test.go b/internal/benchmark/load_unix_test.go new file mode 100644 index 000000000..9a0a5c5da --- /dev/null +++ b/internal/benchmark/load_unix_test.go @@ -0,0 +1,181 @@ +package benchmark + +import ( + "context" + "fmt" + "os" + "os/signal" + "strconv" + "testing" + "time" + + kwilTesting "github.com/kwilteam/kwil-db/testing" + "github.com/pkg/errors" + "github.com/trufnetwork/sdk-go/core/util" +) + +// Main benchmark test function +func TestBenchUnix(t *testing.T) { + ctx, cancel := context.WithCancel(context.Background()) + t.Cleanup(cancel) + + // notify on interrupt. Otherwise, tests will not stop + c := make(chan os.Signal, 1) + signal.Notify(c, os.Interrupt) + go func() { + for range c { + fmt.Println("interrupt signal received") + cleanupDocker() + cancel() + } + }() + defer cleanupDocker() + + // set default LOG_RESULTS to true + if os.Getenv("LOG_RESULTS") == "" { + os.Setenv("LOG_RESULTS", "true") + } + + // try get resultPath from env + resultPath := os.Getenv("RESULTS_PATH") + if resultPath == "" { + resultPath = "./benchmark_unix_results.csv" + } + + // Delete the file if it exists + if err := deleteFileIfExists(resultPath); err != nil { + err = errors.Wrap(err, "failed to delete file if exists") + t.Fatal(err) + } + + // -- Setup Test Parameters -- + + // shapePairs is a list of tuples, where each tuple represents a pair of qtyStreams and branchingFactor + // qtyStreams is the number of streams in the tree + // branchingFactor is the branching factor of the tree + // if branchingFactor is math.MaxInt, it means the tree is flat + + shapePairs := [][]int{ + // qtyStreams, branchingFactor + // testing 1 stream only + {1, 1}, + + //flat trees = cost of adding a new stream to our composed + //{50, math.MaxInt}, + //{100, math.MaxInt}, + //{200, math.MaxInt}, + //{400, math.MaxInt}, + // 800 streams kills t3.small instances for memory starvation. 
But probably because it stores the whole tree in memory
+	//{800, math.MaxInt},
+	//{1500, math.MaxInt}, // this gives error: Out of shared memory
+
+	// deep trees = cost of adding depth
+	//{50, 1},
+	//{100, 1},
+	//{200, 1}, // we can't go deeper than 180, for call stack size issues
+
+	// to get difference for stream qty on a real world situation
+	{50, 8},
+	{100, 8},
+	//{200, 8},
+	//{400, 8},
+	//{800, 8},
+
+	// to get difference for branching factor
+	//{200, 2},
+	//{200, 4},
+	// {200, 8}, // already tested above
+	//{200, 16},
+	//{200, 32},
+	}
+
+	samples := 3
+
+	days := []int{1, 7}
+
+	visibilities := []util.VisibilityEnum{util.PublicVisibility, util.PrivateVisibility}
+
+	var functionTests []kwilTesting.TestFunc
+	// a channel to receive results from the tests
+	var resultsCh chan []Result
+
+	// create combinations of shapePairs and visibilities
+	for _, qtyStreams := range shapePairs {
+		for _, visibility := range visibilities {
+			functionTests = append(functionTests, getBenchmarFn(BenchmarkCase{
+				Visibility:      visibility,
+				QtyStreams:      qtyStreams[0],
+				BranchingFactor: qtyStreams[1],
+				Samples:         samples,
+				Days:            days,
+				Procedures: []ProcedureEnum{
+					ProcedureGetRecord,
+					ProcedureGetIndex,
+					//ProcedureGetChangeIndex,
+					//ProcedureGetFirstRecord,
+				},
+			},
+				// use pointer, so we can reassign the results channel
+				&resultsCh, true))
+		}
+	}
+
+	// let's chunk tests into groups, because these tests are very long
+	// and postgres may fail during the test
+	groupsOfTests := chunk(functionTests, 2)
+
+	var successResults []Result
+
+	for i, groupOfTests := range groupsOfTests {
+		schemaTest := kwilTesting.SchemaTest{
+			Name:          "benchmark_unix_test_" + strconv.Itoa(i),
+			SchemaFiles:   []string{},
+			FunctionTests: groupOfTests,
+		}
+
+		t.Run(schemaTest.Name, func(t *testing.T) {
+			const maxRetries = 3
+			var err error
+		RetryFor:
+			for attempt := 1; attempt <= maxRetries; attempt++ {
+				select {
+				case <-ctx.Done():
+					t.Fatalf("context cancelled")
+				default:
+					// wrap in a function so we can defer close the results channel
+					func() {
+						resultsCh = make(chan []Result, len(groupOfTests))
+						defer close(resultsCh)
+
+						err = schemaTest.Run(ctx, &kwilTesting.Options{
+							UseTestContainer: true,
+							Logger:           t,
+						})
+					}()
+
+					if err == nil {
+						for result := range resultsCh {
+							successResults = append(successResults, result...)
+ } + // break the retries loop + break RetryFor + } + + t.Logf("Attempt %d failed: %s", attempt, err) + fmt.Println(errors.WithStack(err)) + if attempt < maxRetries { + time.Sleep(time.Second * time.Duration(attempt)) // Exponential backoff + } + } + } + if err != nil { + t.Fatalf("Test failed after %d attempts: %s", maxRetries, err) + } + }) + } + + // save results to file + if err := saveResults(successResults, resultPath); err != nil { + t.Fatalf("failed to save results: %s", err) + } +} diff --git a/internal/benchmark/setup.go b/internal/benchmark/setup.go index 876e3b2ec..2c5050bb3 100644 --- a/internal/benchmark/setup.go +++ b/internal/benchmark/setup.go @@ -27,6 +27,7 @@ import ( type SetupSchemasInput struct { BenchmarkCase BenchmarkCase Tree trees.Tree + UnixOnly bool } // Schema setup functions @@ -59,9 +60,17 @@ func setupSchemas( var schema *kwiltypes.Schema var err error if node.IsLeaf { - schema, err = parse.Parse(contracts.PrimitiveStreamContent) + if input.UnixOnly { + schema, err = parse.Parse(contracts.PrimitiveStreamUnixContent) + } else { + schema, err = parse.Parse(contracts.PrimitiveStreamContent) + } } else { - schema, err = parse.Parse(contracts.ComposedStreamContent) + if input.UnixOnly { + schema, err = parse.Parse(contracts.ComposedStreamUnixContent) + } else { + schema, err = parse.Parse(contracts.ComposedStreamContent) + } } if err != nil { return errors.Wrap(err, "failed to parse stream") @@ -91,6 +100,7 @@ func setupSchemas( treeNode: schema.Node, days: 380, owner: deployerAddress, + unixOnly: input.UnixOnly, }); err != nil { return errors.Wrap(err, "failed to setup schema") } @@ -145,6 +155,7 @@ type setupSchemaInput struct { days int owner util.EthereumAddress treeNode trees.TreeNode + unixOnly bool } func setupSchema(ctx context.Context, platform *kwilTesting.Platform, schema *kwiltypes.Schema, input setupSchemaInput) error { @@ -158,7 +169,7 @@ func setupSchema(ctx context.Context, platform *kwilTesting.Platform, schema *kw // if it's a leaf, then it's a primitive stream if input.treeNode.IsLeaf { - if err := insertRecordsForPrimitive(ctx, platform, dbid, input.days+1); err != nil { + if err := insertRecordsForPrimitive(ctx, platform, dbid, input.days+1, input.unixOnly); err != nil { return errors.Wrap(err, "failed to insert records for primitive") } } else { @@ -274,6 +285,9 @@ func batchInsertMetadata(ctx context.Context, platform *kwilTesting.Platform, db txContext := &common.TxContext{ Ctx: ctx, + BlockContext: &common.BlockContext{ + Height: 1, + }, } // Execute the bulk insert @@ -289,22 +303,31 @@ func batchInsertMetadata(ctx context.Context, platform *kwilTesting.Platform, db // - it generates a random value for each record // - it inserts the records into the stream // - we use a bulk insert to speed up the process -func insertRecordsForPrimitive(ctx context.Context, platform *kwilTesting.Platform, dbid string, days int) error { +func insertRecordsForPrimitive(ctx context.Context, platform *kwilTesting.Platform, dbid string, days int, unixOnly bool) error { fromDate := fixedDate.AddDate(0, 0, -days) - records := generateRecords(fromDate, fixedDate) + records := generateRecords(fromDate, fixedDate, unixOnly) // Prepare the SQL statement for bulk insert sqlStmt := "INSERT INTO primitive_events (date_value, value, created_at) VALUES " var values []string - for _, record := range records { - values = append(values, fmt.Sprintf("('%s', %s::decimal(36,18), 0)", record[0], record[1])) + if unixOnly { + for _, record := range records { + values = 
append(values, fmt.Sprintf("(%d, %s::decimal(36,18), 0)", record[0], record[1]))
+		}
+	} else {
+		for _, record := range records {
+			values = append(values, fmt.Sprintf("('%s', %s::decimal(36,18), 0)", record[0], record[1]))
+		}
 	}

 	sqlStmt += strings.Join(values, ", ")

 	txContext := &common.TxContext{
 		Ctx: ctx,
+		BlockContext: &common.BlockContext{
+			Height: 1,
+		},
 	}

 	// Execute the bulk insert
@@ -342,7 +365,11 @@ func setTaxonomyForComposed(ctx context.Context, platform *kwilTesting.Platform,
 		streamIdsArg = append(streamIdsArg, t.ChildStream.StreamId.String())
 		weightsArg = append(weightsArg, int(t.Weight))
 	}
-	startDateArg = randDate(fixedDate.AddDate(0, 0, -input.days), fixedDate).Format(time.DateOnly)
+	if input.unixOnly {
+		startDateArg = strconv.Itoa(int(randDate(fixedDate.AddDate(0, 0, -input.days), fixedDate).Unix()))
+	} else {
+		startDateArg = randDate(fixedDate.AddDate(0, 0, -input.days), fixedDate).Format(time.DateOnly)
+	}

 	return executeStreamProcedure(ctx, platform, dbid, "set_taxonomy",
 		[]any{dataProvidersArg, streamIdsArg, weightsArg, startDateArg}, platform.Deployer)
diff --git a/internal/benchmark/utils.go b/internal/benchmark/utils.go
index 749834f5b..9c56865d6 100644
--- a/internal/benchmark/utils.go
+++ b/internal/benchmark/utils.go
@@ -5,6 +5,7 @@ import (
 	"fmt"
 	"math/rand"
 	"os"
+	"os/exec"
 	"slices"
 	"strconv"
 	"strings"
@@ -27,11 +28,18 @@ func getStreamId(index int) *util.StreamId {

 // generateRecords creates a slice of records with random values for each day
 // between the given fromDate and toDate, inclusive.
-func generateRecords(fromDate, toDate time.Time) [][]any {
+func generateRecords(fromDate, toDate time.Time, unixOnly bool) [][]any {
 	var records [][]any
-	for d := fromDate; !d.After(toDate); d = d.AddDate(0, 0, 1) {
-		value, _ := apd.New(rand.Int63n(100000000000000), 0).Float64()
-		records = append(records, []any{d.Format("2006-01-02"), fmt.Sprintf("%.2f", value)})
+	if unixOnly {
+		for d := fromDate; !d.After(toDate); d = d.Add(insertFreqInTime) {
+			value, _ := apd.New(rand.Int63n(100000000000000), 0).Float64()
+			records = append(records, []any{d.Unix(), fmt.Sprintf("%.2f", value)})
+		}
+	} else {
+		for d := fromDate; !d.After(toDate); d = d.AddDate(0, 0, 1) {
+			value, _ := apd.New(rand.Int63n(100000000000000), 0).Float64()
+			records = append(records, []any{d.Format("2006-01-02"), fmt.Sprintf("%.2f", value)})
+		}
 	}
 	return records
 }
@@ -171,3 +179,27 @@ func MustEthereumAddressFromBytes(b []byte) *util.EthereumAddress {
 	}
 	return &addr
 }
+
+// cleanupDocker removes the testing container, i.e. runs: docker rm -f kwil-testing-postgres
+func cleanupDocker() {
+	// Execute the cleanup command
+	cmd := exec.Command("docker", "rm", "-f", "kwil-testing-postgres")
+	err := cmd.Run()
+	if err != nil {
+		fmt.Printf("Error during cleanup: %v\n", err)
+	}
+}
+
+func chunk[T any](arr []T, chunkSize int) [][]T {
+	var result [][]T
+	for i := 0; i < len(arr); i += chunkSize {
+		end := i + chunkSize
+		if end > len(arr) {
+			end = len(arr)
+		}
+
+		result = append(result, arr[i:end])
+	}
+
+	return result
+}
diff --git a/internal/contracts/composed_stream_template_unix.kf b/internal/contracts/composed_stream_template_unix.kf
new file mode 100644
index 000000000..c308d5740
--- /dev/null
+++ b/internal/contracts/composed_stream_template_unix.kf
@@ -0,0 +1,1106 @@
+// This file is the template to be used by Data Providers to deploy their own contracts.
+// A stream must conform to this same interface (read and permissions) to be eligible for officialization from our
+// accepted System Streams.
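Worth noting before the schema below: unlike the civil-date templates, this unix variant models every date as an int unix timestamp (see the int date columns and the $date_from/$date_to parameters throughout). A small illustrative Go snippet for producing such values (the specific date is an arbitrary example, not taken from the contract):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A civil date rendered as the unix-seconds integer that the *_unix
	// contracts use for date_value, start_date, $date_from, and so on.
	day := time.Date(2023, time.May, 22, 0, 0, 0, 0, time.UTC)
	fmt.Println(day.Unix()) // 1684713600
}
```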
+ +database composed_stream_db_name; + +table taxonomies { + taxonomy_id uuid primary notnull, + child_stream_id text notnull, + child_data_provider text notnull, + weight decimal(36,18) notnull, + created_at int notnull, // block height + disabled_at int, // block height + version int notnull, + start_date int // indicates the start date of when the taxonomy is valid. i.e. when the weight starts to be used +} + +table metadata { + row_id uuid primary notnull, + metadata_key text notnull, + value_i int, // integer type + value_f decimal(36,18), // float type + value_b bool, // boolean type + value_s text, // string type + value_ref text, // indexed string type -- lowercase + created_at int notnull, // block height + disabled_at int, // block height + + #key_idx index(metadata_key), + #ref_idx index(value_ref), + #created_idx index(created_at) // faster sorting +} + +foreign procedure ext_get_record($date_from int, $date_to int, $frozen_at int) returns table( + date_value int, + value decimal(36,18) +) + +foreign procedure ext_get_index($date_from int, $date_to int, $frozen_at int, $base_date int) returns table( + date_value int, + value decimal(36,18) +) + +foreign procedure ext_get_first_record($after_date int, $frozen_at int) returns table( + date_value int, + value decimal(36,18) +) + +procedure stream_exists($data_provider text, $stream_id text) public view returns (result bool) { + $dbid text := get_dbid($data_provider, $stream_id); + + for $row in SELECT * FROM get_metadata('type', true, null) { + return true; + } + + return false; +} + +procedure get_dbid($data_provider text, $stream_id text) private view returns (result text) { + $starts_with_0x bool := false; + for $row in SELECT $data_provider LIKE '0x%' as a { + $starts_with_0x := $row.a; + } + + $data_provider_without_0x text; + + if $starts_with_0x == true { + $data_provider_without_0x := substring($data_provider, 3); + } else { + $data_provider_without_0x := $data_provider; + } + + return generate_dbid($stream_id, decode($data_provider_without_0x, 'hex')); +} + + +// It returns every record value for every date as its own +// row. e.g.: +// 2020-06-01, 100 +// 2020-06-01, 140 +// 2020-06-02, 103 +// 2020-06-02, 143 +procedure get_raw_record( + $date_from int, + $date_to int, + $frozen_at int, + $child_data_providers text[], + $child_stream_ids text[] +) private view returns table( + date_value int, + value decimal(36,18), + taxonomy_index int +) { + // checks are executed at the beginning of the procedure + if is_wallet_allowed_to_read(@caller) == false { + error('wallet not allowed to read'); + } + // check if the stream is allowed to compose + is_stream_allowed_to_compose(@foreign_caller); + + // check if the child_data_providers and child_stream_id are the same length + if array_length($child_data_providers) != array_length($child_stream_ids) { + error('child_data_providers and child_stream_id must be the same length'); + } + + if array_length($child_data_providers) > 0 { + // arrays are 1-indexed, so we match that + for $taxonomy_index in 1..array_length($child_data_providers) { + $dbid text := get_dbid($child_data_providers[$taxonomy_index], $child_stream_ids[$taxonomy_index]); + for $row3 in SELECT * FROM ext_get_record[$dbid, 'get_record']($date_from, $date_to, $frozen_at) { + return next $row3.date_value, $row3.value, $taxonomy_index; + } + } + } +} + + +// It returns every index value for every date as its own +// row. 
e.g.: +// 2020-06-01, 100 +// 2020-06-01, 140 +// 2020-06-02, 103 +// 2020-06-02, 143 +procedure get_raw_index( + $date_from int, + $date_to int, + $frozen_at int, + $child_data_providers text[], + $child_stream_ids text[], + $base_date int +) private view returns table( + date_value int, + value decimal(36,18), + taxonomy_index int +) { + // checks are executed at the beginning of the procedure + if is_wallet_allowed_to_read(@caller) == false { + error('wallet not allowed to read'); + } + // check if the stream is allowed to compose + is_stream_allowed_to_compose(@foreign_caller); + + // check if the child_data_providers and child_stream_id are the same length + if array_length($child_data_providers) != array_length($child_stream_ids) { + error('child_data_providers and child_stream_id must be the same length'); + } + + if array_length($child_data_providers) > 0 { + // arrays are 1-indexed, so we match that + for $taxonomy_index in 1..array_length($child_data_providers) { + $dbid text := get_dbid($child_data_providers[$taxonomy_index], $child_stream_ids[$taxonomy_index]); + for $row in SELECT * FROM ext_get_index[$dbid, 'get_index']($date_from, $date_to, $frozen_at, $base_date) { + return next $row.date_value, $row.value, $taxonomy_index; + } + } + } +} + +// get_record_filled retrieves the filled records for a specified date range +// and fills the gaps with the last known value for each taxonomy +// if the last known value is not available, it will not emit the record, making it a sparse table +procedure get_record_filled($date_from int, $date_to int, $frozen_at int) private view returns table( + date_value int, + value_with_weight decimal(36,18), // value * dynamic weight + // we also return the weight, so we can calculate the total weight for a date + weight decimal(36,18) +) { + // this will help us reset the arrays + $base_taxonomy_list int[]; + + // Initialize taxonomy variables + $taxonomy_count int := 0; + $last_values decimal(36,18)[]; + $child_data_providers text[]; + $child_stream_id text[]; + + // Fetch taxonomy details + for $row in SELECT * FROM describe_taxonomies(true) { + $taxonomy_count := $taxonomy_count + 1; + $base_taxonomy_list := array_append($base_taxonomy_list, $taxonomy_count); + $last_values := array_append($last_values, null::decimal(36,18)); + $child_data_providers := array_append($child_data_providers, $row.child_data_provider); + $child_stream_id := array_append($child_stream_id, $row.child_stream_id); + } + + $unemitted_taxonomies_for_date int[] := $base_taxonomy_list; + $removed_elements_count int := 0; + $prev_date int := 0; + $current_date int := 0; + + // what could make this process easier: + // - use a map to store the last values, so we can easily check if a value is null + // - better manipulation of tables as objects, then we could modify a table to + // fill forward without needing to return immediately on each loop + + // Fetch raw records and emit values with dynamic weights + for $row_raw in SELECT * FROM get_raw_record($date_from, $date_to, $frozen_at, $child_data_providers, $child_stream_id) ORDER BY date_value, taxonomy_index { + $current_date := $row_raw.date_value; + + // Fetch the dynamic weight based on the current date + $dynamic_weight decimal(36,18) := get_dynamic_weight($child_stream_id[$row_raw.taxonomy_index], $current_date); + + // Emit filled values for previous date if date has changed + if $current_date != $prev_date { + if $prev_date != 0 { + for $unemitted_taxonomy in $unemitted_taxonomies_for_date { + // TODO: remove this 
when we have slices or include if we have just index assignment + // if $unemitted_taxonomy is distinct from null { + + if $last_values[$unemitted_taxonomy] is distinct from null { + // Use the stored dynamic weight from the previous date + $dynamic_weight_prev := get_dynamic_weight($child_stream_id[$unemitted_taxonomy], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy] * $dynamic_weight_prev, $dynamic_weight_prev; + } + } + // Clear unemitted taxonomies for the new date + $unemitted_taxonomies_for_date := $base_taxonomy_list; + $removed_elements_count := 0; + } + } + + // TODO: uncomment when we have index assignment + // $last_values[$row_raw.taxonomy_index] := $row_raw.value; + + // Update the last values for the current date + $last_values := array_update_element($last_values, $row_raw.taxonomy_index, $row_raw.value); + + // Emit current value with the dynamic weight + return next $current_date, $row_raw.value * $dynamic_weight, $dynamic_weight; + + // Remove emitted taxonomy from the unemitted list + // we need to subtract the removed_elements_count because the array is shrinking + // TODO: we can improve it when we either can remove elements, or with array element assignment + $unemitted_taxonomies_for_date := remove_array_element($unemitted_taxonomies_for_date, $row_raw.taxonomy_index - $removed_elements_count); + $removed_elements_count := $removed_elements_count + 1; + // remove elements is not performant without slices. Then let's make the elements null for now + // $unemitted_taxonomies_for_date[$row_raw.taxonomy_index] := null; + + $prev_date := $current_date; + } + + // Emit filled values for the last date + if $prev_date != 0 { + if $taxonomy_count > 0 { + for $unemitted_taxonomy2 in $unemitted_taxonomies_for_date { + if $last_values[$unemitted_taxonomy2] is distinct from null { + // Fetch the correct dynamic weight for the last date + $dynamic_weight_last := get_dynamic_weight($child_stream_id[$unemitted_taxonomy2], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy2] * $dynamic_weight_last, $dynamic_weight_last; + } + } + } + } +} + +// get_index_filled retrieves the filled index for a specified date range +// and fills the gaps with the last known value for each taxonomy +// if the last known value is not available, it will not emit the index, making it a sparse table +procedure get_index_filled($date_from int, $date_to int, $frozen_at int, $base_date int) private view returns table( + date_value int, + value_with_weight decimal(36,18), // value * dynamic weight + // we also return the weight, so we can calculate the total weight for a date + weight decimal(36,18) // the dynamic weight for the date +) { + // this will help us reset the arrays + $base_taxonomy_list int[]; + + // Initialize taxonomy variables + $taxonomy_count int := 0; + $last_values decimal(36,18)[]; + $child_data_providers text[]; + $child_stream_id text[]; + + // Fetch taxonomy details + for $row in SELECT * FROM describe_taxonomies(true) { + $taxonomy_count := $taxonomy_count + 1; + $base_taxonomy_list := array_append($base_taxonomy_list, $taxonomy_count); + $last_values := array_append($last_values, null::decimal(36,18)); + $child_data_providers := array_append($child_data_providers, $row.child_data_provider); + $child_stream_id := array_append($child_stream_id, $row.child_stream_id); + } + + $unemitted_taxonomies_for_date int[] := $base_taxonomy_list; + $removed_elements_count int := 0; + $prev_date int := 0; + $current_date int := 0; + + // what could make 
this process easier: + // - use a map to store the last values, so we can easily check if a value is null + // - better manipulation of tables as objects, then we could modify a table to + // fill forward without needing to return immediately on each loop + + for $row_raw in SELECT * FROM get_raw_index($date_from, $date_to, $frozen_at, $child_data_providers, $child_stream_id, $base_date) ORDER BY date_value, taxonomy_index { + $current_date := $row_raw.date_value; + + // Fetch the dynamic weight based on the current date + $dynamic_weight decimal(36,18) := get_dynamic_weight($child_stream_id[$row_raw.taxonomy_index], $current_date); + + // Emit filled values for the previous date if the date has changed + if $current_date != $prev_date { + if $prev_date != 0 { + for $unemitted_taxonomy in $unemitted_taxonomies_for_date { + if $last_values[$unemitted_taxonomy] is distinct from null { + // TODO: remove this when we have slices or include if we have just index assignment + // if $unemitted_taxonomy is distinct from null { + + // Use the stored dynamic weight from the previous date + $dynamic_weight_prev := get_dynamic_weight($child_stream_id[$unemitted_taxonomy], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy] * $dynamic_weight_prev, $dynamic_weight_prev; + } + } + } + // Clear unemitted taxonomies for the new date + $unemitted_taxonomies_for_date := $base_taxonomy_list; + $removed_elements_count := 0; + } + + // TODO: uncomment when we have index assignment + // $last_values[$row_raw.taxonomy_index] := $row_raw.value; + + // Update the last values for the current date + $last_values := array_update_element($last_values, $row_raw.taxonomy_index, $row_raw.value); + + // Emit the current value with the dynamic weight + return next $current_date, $row_raw.value * $dynamic_weight, $dynamic_weight; + + // Remove emitted taxonomy from the unemitted list + // TODO: we can improve it when we either can remove elements, or with array element assignment + $unemitted_taxonomies_for_date := remove_array_element($unemitted_taxonomies_for_date, $row_raw.taxonomy_index - $removed_elements_count); + $removed_elements_count := $removed_elements_count + 1; + + // remove elements is not performant without slices. Then let's make the elements null for now + // $unemitted_taxonomies_for_date[$row_raw.taxonomy_index] := null; + + $prev_date := $current_date; + } + + // Emit filled values for the last date + if $prev_date != 0 { + if $taxonomy_count > 0 { + for $unemitted_taxonomy2 in $unemitted_taxonomies_for_date { + if $last_values[$unemitted_taxonomy2] is distinct from null { + // Fetch the correct dynamic weight for the last date + $dynamic_weight_last := get_dynamic_weight($child_stream_id[$unemitted_taxonomy2], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy2] * $dynamic_weight_last, $dynamic_weight_last; + } + } + } + } +} + +// get_record retrieves the records for a specified date range as expected from the stream interface +// i.e. 
with filled gaps
+procedure get_record($date_from int, $date_to int, $frozen_at int) public view returns table(
+    date_value int,
+    value decimal(36,18)
+) {
+    $in_range bool := false;
+    $last_date int;
+    $last_value decimal(36,18);
+    // here, we sum all of the records that were found by aggregating on the date_value
+    for $row in SELECT date_value, total_value_with_weight / total_weight as value FROM
+    (SELECT
+        date_value,
+        SUM(value_with_weight)::decimal(36,18) AS total_value_with_weight,
+        SUM(weight)::decimal(36,18) AS total_weight
+        FROM get_record_filled($date_from, $date_to, $frozen_at) group by date_value) as r {
+        // when we arrive in the range, we start emitting. If there was a previous value, we emit it
+        if $in_range == false {
+            if $row.date_value >= $date_from {
+                $in_range := true;
+                if ($last_date is distinct from null) AND $row.date_value != $date_from {
+                    return next $last_date, $last_value;
+                }
+            }
+
+            $last_date := $row.date_value;
+            $last_value := $row.value;
+        }
+        // we check again, because it might just have entered the range
+        if $in_range == true {
+            return next $row.date_value, $row.value;
+        }
+    }
+
+    // if we finished the loop and we never entered the range, we emit the last value
+    if $in_range == false AND $last_date is distinct from null {
+        return next $last_date, $last_value;
+    }
+}
+
+
+// get_index retrieves the index for a specified date range as expected from the stream interface
+// i.e. with filled gaps
+procedure get_index($date_from int, $date_to int, $frozen_at int, $base_date int) public view returns table(
+    date_value int,
+    value decimal(36,18)
+) {
+    $in_range bool := false;
+    $last_date int;
+    $last_value decimal(36,18);
+    $effective_base_date int := $base_date;
+
+    if $effective_base_date is null OR $effective_base_date == 0 {
+        for $v_row in SELECT * FROM get_metadata('default_base_date', true, null) ORDER BY created_at DESC LIMIT 1 {
+            $effective_base_date := $v_row.value_i;
+        }
+    }
+
+    // here, we sum all of the indexes that were found by aggregating on the date_value
+    for $row in SELECT date_value, total_value_with_weight / total_weight as value FROM
+    (SELECT
+        date_value,
+        SUM(value_with_weight)::decimal(36,18) AS total_value_with_weight,
+        SUM(weight)::decimal(36,18) AS total_weight
+        FROM get_index_filled($date_from, $date_to, $frozen_at, $effective_base_date) group by date_value) as r {
+        // when we arrive in the range, we start emitting. If there was a previous value, we emit it
+        if $in_range == false {
+            if $row.date_value >= $date_from {
+                $in_range := true;
+                if ($last_date is distinct from null) AND $row.date_value != $date_from {
+                    return next $last_date, $last_value;
+                }
+            }
+
+            $last_date := $row.date_value;
+            $last_value := $row.value;
+        }
+        // we check again, because it might just have entered the range
+        if $in_range == true {
+            return next $row.date_value, $row.value;
+        }
+    }
+
+    // if we finished the loop and we never entered the range, we emit the last value
+    if $in_range == false AND $last_date is distinct from null {
+        return next $last_date, $last_value;
+    }
+
+}
+
+// get_first_record returns the first record of the composed stream (optionally after a given date)
+// Note: this operation performs 2 loop queries through the taxonomies (O(2n)).
+// Couldn't optimize to do once because there are edge cases when after_date is provided
+// and the first record is before that date.
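Before the get_first_record definition that follows, a brief aside on the aggregation performed by get_record and get_index above: per date, they reduce the filled child rows to value = SUM(value_with_weight) / SUM(weight), i.e. a weighted average over the children that reported (or carried forward) a value for that date. A standalone Go illustration of that reduction, with types and numbers invented for the example:

```go
package main

import "fmt"

// filledRow mirrors one row of get_record_filled: a date, the child's value
// already multiplied by its dynamic weight, and the weight itself.
type filledRow struct {
	date            int64 // unix timestamp
	valueWithWeight float64
	weight          float64
}

// aggregate reproduces the per-date reduction in get_record/get_index:
// sum(value*weight) / sum(weight), grouped by date.
func aggregate(rows []filledRow) map[int64]float64 {
	num := map[int64]float64{}
	den := map[int64]float64{}
	for _, r := range rows {
		num[r.date] += r.valueWithWeight
		den[r.date] += r.weight
	}
	out := map[int64]float64{}
	for d, n := range num {
		out[d] = n / den[d]
	}
	return out
}

func main() {
	rows := []filledRow{
		{1609459200, 100 * 2.0, 2.0}, // child A: value 100, weight 2
		{1609459200, 140 * 1.0, 1.0}, // child B: value 140, weight 1
	}
	fmt.Printf("%.2f\n", aggregate(rows)[1609459200]) // (200+140)/3 = 113.33
}
```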
+procedure get_first_record($after_date int, $frozen_at int) public view returns table(
+    date_value int,
+    value decimal(36,18)
+) {
+    // check read access
+    if is_wallet_allowed_to_read(@caller) == false {
+        error('wallet not allowed to read');
+    }
+
+    // check compose access
+    is_stream_allowed_to_compose(@foreign_caller);
+
+    // let's coalesce null with 0
+    // then every record date comes after it, so the true first record is found
+    if $after_date is null {
+        $after_date := 0;
+    }
+
+    // 1. figure the earliest date possible after the after_date
+    $earliest_date int := 0;
+    for $taxonomy_row in SELECT * FROM describe_taxonomies(true) {
+        $dbid := get_dbid($taxonomy_row.child_data_provider, $taxonomy_row.child_stream_id);
+        for $record_row in SELECT * FROM ext_get_first_record[$dbid, 'get_first_record']($after_date, $frozen_at) {
+            if $earliest_date == 0 OR $record_row.date_value < $earliest_date {
+                $earliest_date := $record_row.date_value;
+            }
+        }
+    }
+
+    // 2. get_record with the earliest date if it exists
+    if $earliest_date != 0 {
+        for $row in SELECT date_value, value FROM get_record($earliest_date, $earliest_date, $frozen_at) {
+            return next $row.date_value, $row.value;
+        }
+    }
+}
+
+procedure is_initiated() private view returns (result bool) {
+    // check if it was already initialized
+    // for that we check if type is already provided
+    for $row in SELECT * FROM metadata WHERE metadata_key = 'type' LIMIT 1 {
+        return true;
+    }
+
+    return false;
+}
+
+procedure is_stream_owner($wallet text) public view returns (result bool) {
+    for $row in SELECT * FROM metadata WHERE metadata_key = 'stream_owner' AND value_ref = LOWER($wallet) LIMIT 1 {
+        return true;
+    }
+    return false;
+}
+
+procedure is_wallet_allowed_to_read($wallet text) public view returns (value bool) {
+
+    // if public, anyone can always read
+    // If there's no visibility metadata, it's public.
+    $visibility int := 0;
+    for $v_row in SELECT * FROM get_metadata('read_visibility', true, null) {
+        $visibility := $v_row.value_i;
+    }
+
+    if $visibility == 0 {
+        return true;
+    }
+
+    // if it's the owner, it's permitted
+    if is_stream_owner($wallet) {
+        return true;
+    }
+
+    // if there's an allow_read_wallet metadata entry for this wallet, it's permitted
+    for $row in SELECT * FROM get_metadata('allow_read_wallet', false, $wallet) {
+        return true;
+    }
+
+    return false;
+}
+
+procedure stream_owner_only() private view {
+    if is_stream_owner(@caller) == false {
+        error('Stream owner only procedure');
+    }
+}
+
+// init method prepares the contract with default values and permanent ones
+procedure init() public owner {
+    if is_initiated() {
+        error('this contract was already initialized');
+    }
+
+    // check if caller is empty
+    // this can happen in tests, but we should also protect against it in production
+    if @caller == '' {
+        error('caller is empty');
+    }
+
+    $current_block int := @height;
+
+    // uuid's namespaces are any random generated uuid from https://www.uuidtools.com/v5
+    // but each usage should be different to maintain determinism, so we reuse the previous result
+    $current_uuid uuid := uuid_generate_v5('41fea9f0-179f-11ef-8838-325096b39f47'::uuid, @txid);
+
+    // type = composed
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_s, created_at)
+    VALUES ($current_uuid, 'type', 'composed', $current_block);
+
+    // stream_owner = @caller
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_ref, created_at)
+    VALUES ($current_uuid, 'stream_owner', LOWER(@caller), 1);
+
+    // compose_visibility = 0 (public)
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_i, created_at)
+    VALUES ($current_uuid, 'compose_visibility', 0, $current_block);
+
+    // read_visibility = 0 (public)
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_i, created_at)
+    VALUES ($current_uuid, 'read_visibility', 0, $current_block);
+
+    $readonly_keys text[] := [
+        'type',
+        'stream_owner',
+        'readonly_key',
+        'taxonomy_version'
+    ];
+
+    for $key in $readonly_keys {
+        $current_uuid := uuid_generate_v5($current_uuid, @txid);
+        INSERT INTO metadata (row_id, metadata_key, value_s, created_at)
+        VALUES ($current_uuid, 'readonly_key', $key, $current_block);
+    }
+}
+
+// Note: We're letting the user be the source of truth for which type a key should have.
+// To change that, we could introduce a `key_type:` key on the metadata table that could be used here
+// to enforce a type. However, this would force us to know every metadata key before deploying a contract
+procedure insert_metadata(
+    $key text,
+    $value text,
+    $val_type text
+    // TODO: would be better to use value_x from args. However this doesn't work well for nullable inputs
+    // i.e. if we use a bool type we'll get a conversion error from Nil -> bool. And we don't want to force the user to provide
+    // a value if nil is intended.
+ ) public { + + $value_i int; + $value_s text; + $value_f decimal(36,18); + $value_b bool; + $value_ref text; + + if $val_type == 'int' { + $value_i := $value::int; + } elseif $val_type == 'string' { + $value_s := $value; + } elseif $val_type == 'bool' { + $value_b := $value::bool; + } elseif $val_type == 'ref' { + $value_ref := $value; + } elseif $val_type == 'float' { + $value_f := $value::decimal(36,18); + } else { + error(format('unknown type used "%s". valid types = "float" | "bool" | "int" | "ref" | "string"', $val_type)); + } + + stream_owner_only(); + + if is_initiated() == false { + error('contract must be initiated'); + } + + // check if it's read-only + for $row in SELECT * FROM metadata WHERE metadata_key = 'readonly_key' AND value_s = $key LIMIT 1 { + error('Cannot insert metadata for read-only key'); + } + + // we create one deterministic uuid for each metadata record + // we can't use just @txid because a single transaction can insert multiple metadata records. + // the result will be idempotency here too. + $uuid_key := @txid || $key || $value; + + $uuid uuid := uuid_generate_v5('1361df5d-0230-47b3-b2c1-37950cf51fe9'::uuid, $uuid_key); + $current_block int := @height; + + // insert data + INSERT INTO metadata (row_id, metadata_key, value_i, value_f, value_s, value_b, value_ref, created_at) + VALUES ($uuid, $key, $value_i, $value_f, $value_s, $value_b, LOWER($value_ref), $current_block); +} + +// key: the metadata key to look for +// only_latest: if true, only return the latest version of the metadata +// ref: if provided, only return metadata with that ref +procedure get_metadata($key text, $only_latest bool, $ref text) public view returns table( + row_id uuid, + value_i int, + value_f decimal(36,18), + value_b bool, + value_s text, + value_ref text, + created_at int + ) { + + if $only_latest == true { + if $ref is distinct from null { + return SELECT + row_id, + null::int as value_i, + null::decimal(36,18) as value_f, + null::bool as value_b, + null::text as value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL AND value_ref = LOWER($ref) + ORDER BY created_at DESC + LIMIT 1; + } else { + return SELECT + row_id, + value_i, + value_f, + value_b, + value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL + ORDER BY created_at DESC + LIMIT 1; + } + } else { + // SHOULD BE THE EXACT CODE AS ABOVE, BUT WITHOUT LIMIT + if $ref is distinct from null { + return SELECT + row_id, + null::int as value_i, + null::decimal(36,18) as value_f, + null::bool as value_b, + null::text as value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL AND value_ref = LOWER($ref) + ORDER BY created_at DESC; + } else { + return SELECT + row_id, + value_i, + value_f, + value_b, + value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL + ORDER BY created_at DESC; + } + } +} + +procedure disable_metadata($row_id uuid) public { + stream_owner_only(); + + $current_block int := @height; + + $found bool := false; + + // Check if the metadata is not read-only + for $metadata_row in + SELECT metadata_key + FROM metadata + WHERE row_id = $row_id AND disabled_at IS NULL + LIMIT 1 { + $found := true; + $row_key text := $metadata_row.metadata_key; + + for $readonly_row in SELECT row_id FROM metadata WHERE metadata_key = 'readonly_key' AND value_s = $row_key LIMIT 1 { + error('Cannot disable read-only metadata'); + } + + UPDATE 
metadata SET disabled_at = $current_block
+        WHERE row_id = $row_id;
+    }
+
+    if $found == false {
+        error('metadata record not found');
+    }
+}
+
+procedure transfer_stream_ownership($new_owner text) public {
+    stream_owner_only();
+
+    // fail if not a valid address
+    check_eth_address($new_owner);
+
+    UPDATE metadata SET value_ref = LOWER($new_owner)
+    WHERE metadata_key = 'stream_owner';
+}
+
+procedure check_eth_address($address text) private {
+    // TODO better check when kwil supports regexp and {}[] inside strings
+    // regex: ^0x[0-9a-fA-F]{40}$
+    // for $row in SELECT regexp_match($address, '^0x[0-9a-fA-F]{40}$') {
+    //     return true;
+    // }
+
+    if (length($address) != 42) {
+        error('invalid address length');
+    }
+
+    // check if starts with 0x
+    for $row in SELECT $address LIKE '0x%' as a {
+        if $row.a == false {
+            error('address does not start with 0x');
+        }
+    }
+}
+
+procedure get_current_version($show_disabled bool) private view returns (result int) {
+    if $show_disabled == false {
+        for $row in SELECT version FROM taxonomies WHERE disabled_at IS NULL ORDER BY version DESC LIMIT 1 {
+            return $row.version;
+        }
+    } else {
+        for $row2 in SELECT version FROM taxonomies ORDER BY version DESC LIMIT 1 {
+            return $row2.version;
+        }
+    }
+
+    return 0;
+}
+
+
+// NOTE: we won't use this signature because jsonb is not supported
+// procedure set_taxonomy($payload text) public {
+//     stream_owner_only();
+//     $next_version int := get_current_version() + 1;
+//     $block_height int := 2;
+//     $current_uuid uuid := uuid_generate_v5('e92064da-19c5-11ef-9bc0-325096b39f47'::uuid, @txid);
+//     for $row in SELECT
+//         elem->>'stream_id' as stream_id,
+//         elem->>'data_provider' as data_provider,
+//         elem->>'weight' as weight
+//     FROM jsonb_array_elements($payload::jsonb) as elem {
+//         $current_uuid := uuid_generate_v5($current_uuid, @txid);
+//         INSERT INTO taxonomy (child_stream_id, child_data_provider, weight, created_at, version)
+//         VALUES ($row.stream_id, $row.data_provider, $row.weight::int, $block_height, $next_version);
+//     }
+// }
+
+// set_taxonomy is a batch insert that
+// - gets an array of values for each property, zipping them as needed
+// - increases the version
+// - adds taxonomy records in batch
+procedure set_taxonomy($data_providers text[], $stream_ids text[], $weights decimal(36,18)[], $start_date int) public {
+    stream_owner_only();
+
+    $next_version int := get_current_version(true) + 1;
+    $block_height int := @height;
+    $current_uuid uuid := uuid_generate_v5('e92064da-19c5-11ef-9bc0-325096b39f47'::uuid, @txid);
+
+    $length int := array_length($data_providers);
+
+    // check lengths
+    if $length != array_length($stream_ids) {
+        error('data_providers and stream_ids must have the same length');
+    }
+    if $length != array_length($weights) {
+        error('data_providers and weights must have the same length');
+    }
+
+    for $i in 1..$length {
+        $current_uuid := uuid_generate_v5($current_uuid, @txid);
+
+        INSERT INTO taxonomies (taxonomy_id, child_stream_id, child_data_provider, weight, created_at, version, start_date)
+        VALUES ($current_uuid, $stream_ids[$i], $data_providers[$i], $weights[$i], $block_height, $next_version, $start_date);
+    }
+}
+
+procedure describe_taxonomies($latest_version bool) public view returns table(
+    child_stream_id text,
+    child_data_provider text,
+    weight decimal(36,18),
+    created_at int,
+    version int,
+    start_date int
+    ) {
+
+    if $latest_version == true {
+        // just the latest enabled version should be returned
+        return SELECT
+            child_stream_id,
+            child_data_provider,
+            weight,
+            created_at,
+            version,
+            start_date
+        FROM taxonomies
+        WHERE version = get_current_version(false) AND disabled_at IS NULL
+        ORDER BY created_at DESC;
+    } else {
+        return SELECT
+            child_stream_id,
+            child_data_provider,
+            weight,
+            created_at,
+            version,
+            start_date
+        FROM taxonomies
+        WHERE disabled_at IS NULL
+        ORDER BY version DESC;
+    }
+}
+
+// TODO: we should have a way to disable a taxonomy by taxonomy_id
+procedure disable_taxonomy($version int) public {
+    stream_owner_only();
+
+    $current_block int := @height;
+
+    $found bool := false;
+
+    // Check if the taxonomies with the given version exist and disable them
+    for $row in SELECT child_stream_id FROM taxonomies WHERE version = $version AND disabled_at IS NULL {
+        $found := true;
+        UPDATE taxonomies SET disabled_at = $current_block
+        WHERE version = $version AND disabled_at IS NULL;
+    }
+
+    if $found == false {
+        error('No taxonomies found for the given version');
+    }
+}
+
+procedure is_stream_allowed_to_compose($foreign_caller text) public view returns (value bool) {
+    // if foreign_caller is empty, then it's a direct call
+    if $foreign_caller == '' {
+        return true;
+    }
+
+    // if public, anyone can always read
+    // If there's no visibility metadata, it's public.
+    $visibility int := 0;
+    for $v_row in SELECT * FROM get_metadata('compose_visibility', true, null) {
+        $visibility := $v_row.value_i;
+    }
+
+    if $visibility == 0 {
+        return true;
+    }
+
+    // if there's an allow_compose_stream metadata entry for this foreign caller, it's permitted
+    for $row in SELECT * FROM get_metadata('allow_compose_stream', true, $foreign_caller) LIMIT 1 {
+        return true;
+    }
+
+    error('Stream not allowed to compose');
+}
+
+
+procedure get_index_change($date_from int, $date_to int, $frozen_at int, $base_date int, $days_interval int) public view returns table(
+    date_value int,
+    value decimal(36,18)
+) {
+    // the full process is this:
+    // 1. we find the current values for the given date range
+    // | date    | value |
+    // | 01-2001 | ...   |
+    // | 05-2001 | ...   |
+    // | 09-2001 | ...   |
+    // | 10-2001 | ...   |
+
+    // 2. we get a list of expected previous dates
+    // | current_date | expected_prev_date |
+    // | 01-2001      | 01-2000            |
+    // | 05-2001      | 05-2000            |
+    // | 09-2001      | 09-2000            |
+    // | 10-2001      | 10-2000            |
+
+    // 3. we query the real previous values for the expected previous dates, using earliest and latest dates
+    //    they might not match the expected previous dates
+    // | real_prev_date | value |
+    // | 01-2000        | ...   |
+    // | 03-2000        | ...   |
+    // | 04-2000        | ...   |
+    // | 09-2000        | ...   |
+
+    // 4. we try to match the expected prev dates with the real prev dates.
+    //    see that each expected date should be 1:1 with a result prev date.
+    // | expected_prev_date | real_prev_date | result_prev_date |
+    // | 01-2000            | 01-2000        | 01-2000          |
+    // | -                  | 03-2000        | -                |
+    // | -                  | 04-2000        | -                |
+    // | 05-2000            | -              | 04-2000          |
+    // | 09-2000            | 09-2000        | 09-2000          |
+    // | 10-2000            | -              | 09-2000          |
+
+    // 5. we calculate the index change for the current values and the result prev values
+    // and done.
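+    // worked example with made-up numbers: if the current index at some date is 110
+    // and its matched previous index is 100, we emit (110 - 100) * 100 / 100 = 10 (percent).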
+
+    if $frozen_at == null {
+        $frozen_at := 0;
+    }
+
+    if $days_interval == null {
+        error('days_interval is required');
+    }
+
+    $current_values decimal(36,18)[];
+    // example: [01-2001, 05-2001, 09-2001, 10-2001]
+    $current_dates int[];
+    // example: [01-2000, 05-2000, 09-2000, 10-2000]
+    $expected_prev_dates int[];
+
+    for $row_current in SELECT * FROM get_index($date_from, $date_to, $frozen_at, $base_date) {
+        $prev_date := $row_current.date_value - ($days_interval * 86400);
+        $expected_prev_dates := array_append($expected_prev_dates, $prev_date);
+        $current_values := array_append($current_values, $row_current.value);
+        $current_dates := array_append($current_dates, $row_current.date_value);
+    }
+
+    // example: 01-2000
+    $earliest_prev_date := $expected_prev_dates[1];
+    // example: 09-2000
+    $latest_prev_date := $expected_prev_dates[array_length($expected_prev_dates)];
+
+    // the real previous values don't have the same length as the expected previous dates,
+    // because the queried interval can contain many more values than the expected dates
+    $real_prev_values decimal(36,18)[];
+    $real_prev_dates int[];
+
+    // now we query the prev dates
+    for $row_prev in SELECT * FROM get_index($earliest_prev_date, $latest_prev_date, $frozen_at, $base_date) {
+        $real_prev_values := array_append($real_prev_values, $row_prev.value);
+        $real_prev_dates := array_append($real_prev_dates, $row_prev.date_value);
+    }
+
+    // now we calculate the matching dates for the real prev values
+    $result_prev_dates int[];
+    $result_prev_values decimal(36,18)[];
+
+    $real_prev_date_idx int := 1;
+
+    // for each expected prev date, we find the matching real prev date
+    if array_length($expected_prev_dates) > 0 {
+        for $expected_prev_date_idx in 1..array_length($expected_prev_dates) {
+            // we start from the last index of real prev dates. we don't need to check previous values
+            for $selector in $real_prev_date_idx..array_length($real_prev_dates) {
+                // if the next real prev date is greater than the expected prev date (or null), then we need to use the current real value
+                if $real_prev_dates[$selector + 1] > $expected_prev_dates[$expected_prev_date_idx] OR $real_prev_dates[$selector + 1] IS NULL {
+                    // if the current real prev date is already greater than the expected prev date
+                    // we use NULL. 
We're probably before the first real prev date here + if $real_prev_dates[$selector] > $expected_prev_dates[$expected_prev_date_idx] { + $result_prev_dates := array_append($result_prev_dates, null::int); + $result_prev_values := array_append($result_prev_values, null::decimal(36,18)); + } else { + $result_prev_dates := array_append($result_prev_dates, $real_prev_dates[$selector]); + $result_prev_values := array_append($result_prev_values, $real_prev_values[$selector]); + } + // we already appended one for current $real_prev_date_idx, then we need to go to next + $real_prev_date_idx := $selector; + break; + } + } + } + } + + // check if we have the same number of values and dates + if array_length($current_dates) != array_length($result_prev_dates) { + error('we have different number of dates and values'); + } + if array_length($current_values) != array_length($result_prev_values) { + error('we have different number of dates and values'); + } + + // calculate the index change + if array_length($result_prev_dates) > 0 { + for $row_result in 1..array_length($result_prev_dates) { + // if the expected_prev_date is null, then we don't have a real prev date + if $result_prev_dates[$row_result] IS DISTINCT FROM NULL { + return next $current_dates[$row_result], ($current_values[$row_result] - $result_prev_values[$row_result]) * 100.00::decimal(36,18) / $result_prev_values[$row_result]; + } + } + } +} + +// # Miscellaneous helpers + +// emit_values_if is a helper function to emit values if a condition is met +procedure emit_values_if($condition bool, $date_value int, $values decimal(36,18)[], $weights decimal(36,18)[]) private view returns table( + date_value int, + value decimal(36,18), + weight decimal(36,18) +) { + if $condition == true { + if array_length($values) > 0 { + for $i in 1..array_length($values) { + return next $date_value, $values[$i], $weights[$i]; + } + } + } +} + + +procedure remove_array_element($array int[], $index int) private view returns (result int[]) { + // todo: this is too inefficient. + // we should use slices (i.e. 
arr[1:3]) when supported
+    $new_array int[];
+    for $i in 1..array_length($array) {
+        if $i != $index {
+            $new_array := array_append($new_array, $array[$i]);
+        }
+    }
+
+    return $new_array;
+}
+
+procedure array_update_element($array decimal(36,18)[], $index int, $value decimal(36,18)) private view returns (result decimal(36,18)[]) {
+    $new_array decimal(36,18)[];
+    for $i in 1..array_length($array) {
+        if $i == $index {
+            $new_array := array_append($new_array, $value);
+        } else {
+            $new_array := array_append($new_array, $array[$i]);
+        }
+    }
+    return $new_array;
+}
+
+// Returns the weight based on the closest start_date before or equal to the given date
+procedure get_dynamic_weight($stream_id text, $date_value int) private view returns (
+    weight decimal(36,18)
+) {
+    // Select the closest weight by stream_id where start_date is before or equal to the current date_value
+    for $row in SELECT weight
+        FROM taxonomies
+        WHERE child_stream_id = $stream_id
+        AND (start_date IS NULL OR start_date = 0 OR start_date <= $date_value)
+        ORDER BY start_date DESC
+        LIMIT 1 {
+        return $row.weight;
+    }
+
+    // If no weight is found, we return the earliest weight available; this ensures that we always have a weight
+    // for the given date, even if it's before the first weight's start_date
+    // Would be better if COALESCE were supported
+    for $row2 in SELECT weight
+        FROM taxonomies
+        WHERE child_stream_id = $stream_id
+        ORDER BY start_date ASC
+        LIMIT 1 {
+        return $row2.weight;
+    }
+}
diff --git a/internal/contracts/contracts.go b/internal/contracts/contracts.go
index 53878217c..e152e8f38 100644
--- a/internal/contracts/contracts.go
+++ b/internal/contracts/contracts.go
@@ -15,3 +15,9 @@ var ComposedStreamContent []byte
 
 //go:embed primitive_stream_template.kf
 var PrimitiveStreamContent []byte
+
+//go:embed composed_stream_template_unix.kf
+var ComposedStreamUnixContent []byte
+
+//go:embed primitive_stream_unix.kf
+var PrimitiveStreamUnixContent []byte
diff --git a/internal/contracts/primitive_stream_unix.kf b/internal/contracts/primitive_stream_unix.kf
new file mode 100644
index 000000000..8e066a65a
--- /dev/null
+++ b/internal/contracts/primitive_stream_unix.kf
@@ -0,0 +1,721 @@
+// This file is the template to be used by Data Providers to deploy their own contracts.
+// A stream must conform to this same interface (read and permissions) to be eligible for officialization
+// by our accepted System Streams.
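+// Dates in this template are unix timestamps in seconds (e.g. 1704067200 == 2024-01-01T00:00:00Z).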
+
+database primitive_stream_db_name;
+
+table primitive_events {
+    date_value int notnull, // unix timestamp
+    value decimal(36,18) notnull,
+    created_at int notnull, // based on blockheight
+
+    #identifier_idx primary(date_value, created_at)
+}
+
+table metadata {
+    row_id uuid primary notnull,
+    metadata_key text notnull,
+    value_i int, // integer type
+    value_f decimal(36,18),
+    value_b bool, // boolean type
+    value_s text, // string type
+    value_ref text, // indexed string type -- lowercase
+    created_at int notnull, // block height
+    disabled_at int, // block height
+
+    #key_idx index(metadata_key),
+    #ref_idx index(value_ref),
+    #created_idx index(created_at) // faster sorting
+}
+
+procedure is_initiated() private view returns (result bool) {
+    // check if it was already initialized
+    // for that we check if type is already provided
+    for $row in SELECT * FROM metadata WHERE metadata_key = 'type' LIMIT 1 {
+        return true;
+    }
+
+    return false;
+}
+
+procedure is_stream_owner($wallet text) public view returns (result bool) {
+    for $row in SELECT * FROM metadata WHERE metadata_key = 'stream_owner' AND value_ref = LOWER($wallet) LIMIT 1 {
+        return true;
+    }
+    return false;
+}
+
+procedure is_wallet_allowed_to_write($wallet text) public view returns (value bool) {
+    // if it's the owner, it's permitted
+    if is_stream_owner($wallet) {
+        return true;
+    }
+
+    // if there's an allow_write_wallet metadata entry for this wallet, it's permitted
+    for $row in SELECT * FROM get_metadata('allow_write_wallet', false, $wallet) {
+        return true;
+    }
+
+    return false;
+}
+
+procedure is_wallet_allowed_to_read($wallet text) public view returns (value bool) {
+
+    // if public, anyone can always read
+    // If there's no visibility metadata, it's public.
+    $visibility int := 0;
+    for $v_row in SELECT * FROM get_metadata('read_visibility', true, null) {
+        $visibility := $v_row.value_i;
+    }
+
+    if $visibility == 0 {
+        return true;
+    }
+
+    // if it's the owner, it's permitted
+    if is_stream_owner($wallet) {
+        return true;
+    }
+
+    // if there's an allow_read_wallet metadata entry for this wallet, it's permitted
+    for $row in SELECT * FROM get_metadata('allow_read_wallet', false, $wallet) {
+        return true;
+    }
+
+    return false;
+}
+
+procedure stream_owner_only() private view {
+    if is_stream_owner(@caller) == false {
+        error('Stream owner only procedure');
+    }
+}
+
+// init method prepares the contract with default values and permanent ones
+procedure init() public owner {
+    if is_initiated() {
+        error('this contract was already initialized');
+    }
+
+    // check if caller is empty
+    // this can happen in tests, but we should also protect against it in production
+    if @caller == '' {
+        error('caller is empty');
+    }
+
+    $current_block int := @height;
+
+    // uuid namespaces are randomly generated uuids from https://www.uuidtools.com/v5
+    // but each usage should be different to maintain determinism, so we reuse the previous result
+    $current_uuid uuid := uuid_generate_v5('111bfa42-17a2-11ef-bf03-325096b39f47'::uuid, @txid);
+
+    // type = primitive
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_s, created_at)
+    VALUES ($current_uuid, 'type', 'primitive', $current_block);
+
+    // stream_owner = @caller
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_ref, created_at)
+    VALUES ($current_uuid, 'stream_owner', LOWER(@caller), 1);
+
+    // compose_visibility = 0 (public)
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_i, created_at)
+    VALUES ($current_uuid, 'compose_visibility', 0, $current_block);
+
+    // read_visibility = 0 (public)
+    $current_uuid := uuid_generate_v5($current_uuid, @txid);
+    INSERT INTO metadata (row_id, metadata_key, value_i, created_at)
+    VALUES ($current_uuid, 'read_visibility', 0, $current_block);
+
+    $readonly_keys text[] := [
+        'type',
+        'stream_owner',
+        'readonly_key'
+    ];
+
+    for $key in $readonly_keys {
+        $current_uuid := uuid_generate_v5($current_uuid, @txid);
+        INSERT INTO metadata (row_id, metadata_key, value_s, created_at)
+        VALUES ($current_uuid, 'readonly_key', $key, $current_block);
+    }
+}
+
+// Note: We're letting the user be the source of truth for which type a key should have.
+// To change that, we could introduce a `key_type:` key in the metadata table that could be used here
+// to enforce a type. However, this would force us to know every metadata key before deploying a contract
+procedure insert_metadata(
+    $key text,
+    $value text,
+    $val_type text
+    // TODO: would be better to use value_x from args. However this doesn't work well for nullable inputs
+    // i.e. if we use a bool type we'll get a conversion error from Nil -> bool. And we don't want to force the user to provide
+    // a value if nil is intended.
+    ) public {
+
+    $value_i int;
+    $value_s text;
+    $value_f decimal(36,18);
+    $value_b bool;
+    $value_ref text;
+
+    if $val_type == 'int' {
+        $value_i := $value::int;
+    } elseif $val_type == 'string' {
+        $value_s := $value;
+    } elseif $val_type == 'bool' {
+        $value_b := $value::bool;
+    } elseif $val_type == 'ref' {
+        $value_ref := $value;
+    } elseif $val_type == 'float' {
+        $value_f := $value::decimal(36,18);
+    } else {
+        error(format('unknown type used "%s". valid types = "float" | "bool" | "int" | "ref" | "string"', $val_type));
+    }
+
+    stream_owner_only();
+
+    if is_initiated() == false {
+        error('contract must be initiated');
+    }
+
+    // check if it's read-only
+    for $row in SELECT * FROM metadata WHERE metadata_key = 'readonly_key' AND value_s = $key LIMIT 1 {
+        error('Cannot insert metadata for read-only key');
+    }
+
+    // we create one deterministic uuid for each metadata record
+    // we can't use just @txid because a single transaction can insert multiple metadata records.
+    // this also gives us idempotency.
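+    // deterministic across nodes: @txid, $key and $value are all consensus data, so
+    // every node derives the same row_id for the same insert.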
+ $uuid_key := @txid || $key || $value; + + $uuid uuid := uuid_generate_v5('1361df5d-0230-47b3-b2c1-37950cf51fe9'::uuid, $uuid_key); + $current_block int := @height; + + // insert data + INSERT INTO metadata (row_id, metadata_key, value_i, value_f, value_s, value_b, value_ref, created_at) + VALUES ($uuid, $key, $value_i, $value_f, $value_s, $value_b, LOWER($value_ref), $current_block); +} + +// key: the metadata key to look for +// only_latest: if true, only return the latest version of the metadata +// ref: if provided, only return metadata with that ref +procedure get_metadata($key text, $only_latest bool, $ref text) public view returns table( + row_id uuid, + value_i int, + value_f decimal(36,18), + value_b bool, + value_s text, + value_ref text, + created_at int + ) { + + if $only_latest == true { + if $ref is distinct from null { + return SELECT + row_id, + null::int as value_i, + null::decimal(36,18) as value_f, + null::bool as value_b, + null::text as value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL AND value_ref = LOWER($ref) + ORDER BY created_at DESC + LIMIT 1; + } else { + return SELECT + row_id, + value_i, + value_f, + value_b, + value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL + ORDER BY created_at DESC + LIMIT 1; + } + } else { + // SHOULD BE THE EXACT CODE AS ABOVE, BUT WITHOUT LIMIT + if $ref is distinct from null { + return SELECT + row_id, + null::int as value_i, + null::decimal(36,18) as value_f, + null::bool as value_b, + null::text as value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL AND value_ref = LOWER($ref) + ORDER BY created_at DESC; + } else { + return SELECT + row_id, + value_i, + value_f, + value_b, + value_s, + value_ref, + created_at + FROM metadata + WHERE metadata_key = $key AND disabled_at IS NULL + ORDER BY created_at DESC; + } + } +} + + +procedure disable_metadata($row_id uuid) public { + stream_owner_only(); + + $current_block int := @height; + + $found bool := false; + + // Check if the metadata is not read-only + for $metadata_row in + SELECT metadata_key + FROM metadata + WHERE row_id = $row_id AND disabled_at IS NULL + LIMIT 1 { + $found := true; + $row_key text := $metadata_row.metadata_key; + + for $readonly_row in SELECT row_id FROM metadata WHERE metadata_key = 'readonly_key' AND value_s = $row_key LIMIT 1 { + error('Cannot disable read-only metadata'); + } + + UPDATE metadata SET disabled_at = $current_block + WHERE row_id = $row_id; + } + + if $found == false { + error('metadata record not found'); + } +} + +procedure insert_record($date_value int, $value decimal(36,18)) public { + if is_wallet_allowed_to_write(@caller) == false { + error('wallet not allowed to write'); + } + + if is_initiated() == false { + error('contract must be initiated'); + } + + $current_block int := @height; + + // insert data + INSERT INTO primitive_events (date_value, value, created_at) + VALUES ($date_value, $value, $current_block); +} + +// get_index calculation is ((current_primitive/first_primitive)*100). +// This essentially gives us the same result, but with an extra 3 digits of precision. 
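+// e.g. with made-up numbers: base value 50.0 and current value 61.5 give an index of 61.5 * 100 / 50.0 = 123.0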
+// index := (currentPrimitive * 100) / basePrimitive +procedure get_index($date_from int, $date_to int, $frozen_at int, $base_date int) public view returns table( + date_value int, + value decimal(36,18) + ) { + + $effective_base_date int := $base_date; + if ($effective_base_date == 0 OR $effective_base_date IS NULL) { + for $v_row in SELECT * FROM get_metadata('default_base_date', true, null) ORDER BY created_at DESC LIMIT 1 { + $effective_base_date := $v_row.value_i; + } + } + + $baseValue decimal(36,18) := get_base_value($effective_base_date, $frozen_at); + if $baseValue == 0::decimal(36,18) { + error('base value is 0'); + } + + return SELECT date_value, (value * 100::decimal(36,18)) / $baseValue as value FROM get_record($date_from, $date_to, $frozen_at); +} + +// get_base_value returns the first nearest value of the primitive stream before the given date +procedure get_base_value($base_date int, $frozen_at int) private view returns (value decimal(36,18)) { + // If $base_date is null or empty, return the first-ever value from the primitive stream. + // This ensures that when no valid $base_date is provided, we still return the earliest available data point. + if $base_date is null OR $base_date = 0 { + for $row in SELECT * FROM primitive_events WHERE (created_at <= $frozen_at OR $frozen_at = 0 OR $frozen_at IS NULL) ORDER BY date_value ASC, created_at DESC LIMIT 1 { + return $row.value; + } + } + + for $row2 in SELECT * FROM primitive_events WHERE date_value <= $base_date AND (created_at <= $frozen_at OR $frozen_at = 0 OR $frozen_at IS NULL) ORDER BY date_value DESC, created_at DESC LIMIT 1 { + return $row2.value; + } + + // if no value is found, we find the first value after the given date + // This will raise a red flag in the system and the data will undergo the usual process for when a new data provider is added. 
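+    // fallback: take the earliest value strictly after $base_date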
+    for $row3 in SELECT * FROM primitive_events WHERE date_value > $base_date AND (created_at <= $frozen_at OR $frozen_at = 0 OR $frozen_at IS NULL) ORDER BY date_value ASC, created_at DESC LIMIT 1 {
+        return $row3.value;
+    }
+
+    // if no value is found, we return an error
+    error('no base value found');
+}
+
+// get_first_record returns the first record of the primitive stream (optionally after a given date - inclusive)
+procedure get_first_record($after_date int, $frozen_at int) public view returns table(
+    date_value int,
+    value decimal(36,18)
+) {
+    // check read access
+    if is_wallet_allowed_to_read(@caller) == false {
+        error('wallet not allowed to read');
+    }
+
+    // check compose access
+    is_stream_allowed_to_compose(@foreign_caller);
+
+    // coalesce a null $after_date to 0,
+    // so every stored record qualifies as coming after it
+    if $after_date is null {
+        $after_date := 0;
+    }
+
+    // coalesce frozen_at with 0
+    if $frozen_at is null {
+        $frozen_at := 0;
+    }
+
+    return SELECT date_value, value FROM primitive_events WHERE date_value >= $after_date AND (created_at <= $frozen_at OR $frozen_at = 0 OR $frozen_at IS NULL) ORDER BY date_value ASC, created_at DESC LIMIT 1;
+}
+
+// get_original_record returns the original value of the primitive stream for a given date
+// it does not fill the gaps in the primitive stream
+procedure get_original_record(
+    $date_from int,
+    $date_to int,
+    $frozen_at int
+    ) private view returns table(
+    date_value int,
+    value decimal(36,18)
+    ) {
+
+    // check read access
+    if is_wallet_allowed_to_read(@caller) == false {
+        error('wallet not allowed to read');
+    }
+    // check compose access
+    is_stream_allowed_to_compose(@foreign_caller);
+
+    $frozenValue int := 0;
+    if $frozen_at IS DISTINCT FROM NULL {
+        $frozenValue := $frozen_at::int;
+    }
+
+    // TODO: whereClause here is a placeholder only, not supported yet, but it would make things cleaner if it were available
+    //$whereClause text := 'WHERE 1=1 ';
+    //if $date_from != '' {
+    //    $whereClause := $whereClause || 'AND date_value >= $date_from ';
+    //}
+
+    //if $date_to != '' {
+    //    $whereClause := $whereClause || 'AND date_value <= $date_to ';
+    //}
+
+
+    // TODO: Normally we would use the following query to get the latest value of each date
+    // But it's not working for JOIN and MAX() function
+    //for $row in SELECT date_value, value FROM primitive_events JOIN (SELECT date_value, MAX(created_at) as created_at FROM primitive_events GROUP BY date_value) as max_created
+    //ON primitive_events.date_value = max_created.date_value AND primitive_events.created_at = max_created.created_at
+    //$whereClause
+    //ORDER BY date_value DESC {
+    //    return next $row.date_value, $row.value;
+    //}
+
+    // TODO: had to use this workaround because && operator is not working
+    $last_result_date int := 0;
+    if $date_from IS DISTINCT FROM NULL {
+        if $date_to IS DISTINCT FROM NULL {
+            // date_from and date_to are provided
+            // we will fetch all records from date_from to date_to
+            for $row in SELECT date_value, value FROM primitive_events
+                WHERE date_value >= $date_from AND date_value <= $date_to
+                AND (created_at <= $frozenValue OR $frozenValue = 0)
+                AND $last_result_date != date_value
+                ORDER BY date_value DESC, created_at DESC {
+                if $last_result_date != $row.date_value {
+                    $last_result_date := $row.date_value;
+                    return next $row.date_value, $row.value;
+                }
+            }
+        } else {
+            // only date_from is provided
+            // we will fetch all records from date_from to the latest
+            for $row2 in SELECT date_value, value FROM primitive_events
+                WHERE date_value >= $date_from
+                AND (created_at <= $frozenValue OR $frozenValue = 0)
+                AND $last_result_date != date_value
+                ORDER BY date_value DESC, created_at DESC {
+                if $last_result_date != $row2.date_value {
+                    $last_result_date := $row2.date_value;
+                    return next $row2.date_value, $row2.value;
+                }
+            }
+        }
+    } else {
+        if $date_to IS NOT DISTINCT FROM NULL {
+            // no date_from and date_to provided
+            // we fetch only the latest record
+            return SELECT date_value, value FROM primitive_events
+                WHERE (created_at <= $frozenValue OR $frozenValue = 0)
+                AND $last_result_date != date_value
+                ORDER BY date_value DESC, created_at DESC LIMIT 1;
+        } else {
+            // date_to is provided but date_from is not
+            error('date_from is required if date_to is provided');
+        }
+    }
+}
+
+// get_record returns the value of the primitive stream for a given date
+// it is able to fill the gaps in the primitive stream by using the last value before the given date
+// in our original implementation, the steps taken were much simpler:
+// 1. fetch the original range
+// 2. check if the original range's first value = start_date
+// 3. if it's not, fetch the last date before it and prepend the result
+// The cause of this added complexity is the lack of support for storing table results in a variable
+procedure get_record(
+    $date_from int,
+    $date_to int,
+    $frozen_at int
+    ) public view returns table(
+    date_value int,
+    value decimal(36,18)
+    ) {
+
+    $is_first_result bool := true;
+
+    for $row in SELECT * FROM get_original_record($date_from, $date_to, $frozen_at) {
+        // we will only fetch the last record before the first result
+        // if the first result is not the same as the start date
+        if $is_first_result == true {
+            $first_result_date int := $row.date_value;
+
+            // if the first result date is not the same as the start date, then we need to fetch the last record before it
+            if $first_result_date != $date_from {
+                for $last_row in SELECT * FROM get_last_record_before_date($first_result_date) {
+                    // Note: although the user requested a date_from, we are returning the previous date here
+                    // e.g., the user used date_from:2021-01-02, and we are returning 2021-01-01 as first value
+                    //
+                    // that happens because the accuracy is guaranteed with this behavior; otherwise the
+                    // user won't be able to know whether a data point really exists in our database or not.
+
+                    return next $last_row.date_value, $last_row.value;
+                }
+            }
+
+            $is_first_result := false;
+        }
+
+        return next $row.date_value, $row.value;
+    }
+
+    // it's still the first result? i.e. there were no results
+    // so let's try finding the last record before the start date
+    if $is_first_result == true {
+        for $last_row2 in SELECT * FROM get_last_record_before_date($date_from) {
+            return next $last_row2.date_value, $last_row2.value;
+        }
+    }
+}
+
+// get_last_record_before_date returns the last record before the given date
+procedure get_last_record_before_date(
+    $date_from int
+) public view returns table(
+    date_value int,
+    value decimal(36,18)
+    ) {
+    return SELECT date_value, value FROM primitive_events WHERE date_value < $date_from ORDER BY date_value DESC, created_at DESC LIMIT 1;
+}
+
+procedure transfer_stream_ownership($new_owner text) public {
+    stream_owner_only();
+
+    // fail if not a valid address
+    check_eth_address($new_owner);
+
+    UPDATE metadata SET value_ref = LOWER($new_owner)
+    WHERE metadata_key = 'stream_owner';
+}
+
+procedure check_eth_address($address text) private {
+    // TODO better check when kwil supports regexp and {}[] inside strings
+    // regex: ^0x[0-9a-fA-F]{40}$
+    // for $row in SELECT regexp_match($address, '^0x[0-9a-fA-F]{40}$') {
+    //     return true;
+    // }
+
+    if (length($address) != 42) {
+        error('invalid address length');
+    }
+
+    // check if starts with 0x
+    for $row in SELECT $address LIKE '0x%' as a {
+        if $row.a == false {
+            error('address does not start with 0x');
+        }
+    }
+}
+
+procedure is_stream_allowed_to_compose($foreign_caller text) public view returns (value bool) {
+    // if foreign_caller is empty, then it's a direct call
+    if $foreign_caller == '' {
+        return true;
+    }
+
+    // if public, anyone can always read
+    // If there's no visibility metadata, it's public.
+    $visibility int := 0;
+    for $v_row in SELECT * FROM get_metadata('compose_visibility', true, null) {
+        $visibility := $v_row.value_i;
+    }
+
+    if $visibility == 0 {
+        return true;
+    }
+
+    // if there's an allow_compose_stream metadata entry for this foreign caller, it's permitted
+    for $row in SELECT * FROM get_metadata('allow_compose_stream', true, $foreign_caller) LIMIT 1 {
+        return true;
+    }
+
+    error('stream not allowed to compose');
+}
+
+procedure get_index_change($date_from int, $date_to int, $frozen_at int, $base_date int, $days_interval int) public view returns table(
+    date_value int,
+    value decimal(36,18)
+) {
+    // the full process is this:
+    // 1. we find the current values for the given date range
+    // | date    | value |
+    // | 01-2001 | ...   |
+    // | 05-2001 | ...   |
+    // | 09-2001 | ...   |
+    // | 10-2001 | ...   |
+
+    // 2. we get a list of expected previous dates
+    // | current_date | expected_prev_date |
+    // | 01-2001      | 01-2000            |
+    // | 05-2001      | 05-2000            |
+    // | 09-2001      | 09-2000            |
+    // | 10-2001      | 10-2000            |
+
+    // 3. we query the real previous values for the expected previous dates, using earliest and latest dates
+    //    they might not match the expected previous dates
+    // | real_prev_date | value |
+    // | 01-2000        | ...   |
+    // | 03-2000        | ...   |
+    // | 04-2000        | ...   |
+    // | 09-2000        | ...   |
+
+    // 4. we try to match the expected prev dates with the real prev dates.
+    //    see that each expected date should be 1:1 with a result prev date.
+    // | expected_prev_date | real_prev_date | result_prev_date |
+    // | 01-2000            | 01-2000        | 01-2000          |
+    // | -                  | 03-2000        | -                |
+    // | -                  | 04-2000        | -                |
+    // | 05-2000            | -              | 04-2000          |
+    // | 09-2000            | 09-2000        | 09-2000          |
+    // | 10-2000            | -              | 09-2000          |
+
+    // 5. we calculate the index change for the current values and the result prev values
+    // and done.
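+    // note: $days_interval is multiplied by 86400 below, since date_value is a unix
+    // timestamp in seconds in this template.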
+
+    if $frozen_at == null {
+        $frozen_at := 0;
+    }
+
+    if $days_interval == null {
+        error('days_interval is required');
+    }
+
+    $current_values decimal(36,18)[];
+    // example: [01-2001, 05-2001, 09-2001, 10-2001]
+    $current_dates int[];
+    // example: [01-2000, 05-2000, 09-2000, 10-2000]
+    $expected_prev_dates int[];
+
+    for $row_current in SELECT * FROM get_index($date_from, $date_to, $frozen_at, $base_date) {
+        $prev_date := $row_current.date_value - ($days_interval * 86400);
+        $expected_prev_dates := array_append($expected_prev_dates, $prev_date);
+        $current_values := array_append($current_values, $row_current.value);
+        $current_dates := array_append($current_dates, $row_current.date_value);
+    }
+
+    // example: 01-2000
+    $earliest_prev_date := $expected_prev_dates[1];
+    // example: 09-2000
+    $latest_prev_date := $expected_prev_dates[array_length($expected_prev_dates)];
+
+    // the real previous values don't have the same length as the expected previous dates,
+    // because the queried interval can contain many more values than the expected dates
+    $real_prev_values decimal(36,18)[];
+    $real_prev_dates int[];
+
+    // now we query the prev dates
+    for $row_prev in SELECT * FROM get_index($earliest_prev_date, $latest_prev_date, $frozen_at, $base_date) {
+        $real_prev_values := array_append($real_prev_values, $row_prev.value);
+        $real_prev_dates := array_append($real_prev_dates, $row_prev.date_value);
+    }
+
+    // now we calculate the matching dates for the real prev values
+    $result_prev_dates int[];
+    $result_prev_values decimal(36,18)[];
+
+    $real_prev_date_idx int := 1;
+
+    // for each expected prev date, we find the matching real prev date
+    if array_length($expected_prev_dates) > 0 {
+        for $expected_prev_date_idx in 1..array_length($expected_prev_dates) {
+            // we start from the last index of real prev dates. we don't need to check previous values
+            for $selector in $real_prev_date_idx..array_length($real_prev_dates) {
+                // if the next real prev date is greater than the expected prev date (or null), then we need to use the current real value
+                if $real_prev_dates[$selector + 1] > $expected_prev_dates[$expected_prev_date_idx]
+                    OR $real_prev_dates[$selector + 1] IS NULL {
+                    // if the current real prev date is already greater than the expected prev date
+                    // we use NULL. 
We're probably before the first real prev date here + if $real_prev_dates[$selector] > $expected_prev_dates[$expected_prev_date_idx] { + $result_prev_dates := array_append($result_prev_dates, null::int); + $result_prev_values := array_append($result_prev_values, null::decimal(36,18)); + } else { + $result_prev_dates := array_append($result_prev_dates, $real_prev_dates[$selector]); + $result_prev_values := array_append($result_prev_values, $real_prev_values[$selector]); + } + // we already appended one for current $real_prev_date_idx, then we need to go to next + $real_prev_date_idx := $selector; + break; + } + } + } + } + + // check if we have the same number of values and dates + if array_length($current_dates) != array_length($result_prev_dates) { + error('we have different number of dates and values'); + } + if array_length($current_values) != array_length($result_prev_values) { + error('we have different number of dates and values'); + } + + // calculate the index change + if array_length($result_prev_dates) > 0 { + for $row_result in 1..array_length($result_prev_dates) { + // if the expected_prev_date is null, then we don't have a real prev date + if $result_prev_dates[$row_result] IS DISTINCT FROM NULL { + return next $current_dates[$row_result], ($current_values[$row_result] - $result_prev_values[$row_result]) * 100.00::decimal(36,18) / $result_prev_values[$row_result]; + } + } + } +} From 2d5b405e8c316789784a523208cfa48981d6f0f9 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Thu, 9 Jan 2025 05:07:00 -0300 Subject: [PATCH 05/14] chore: working report of benchmark with unix (#777) * chore: update references from truflation to trufnetwork This commit updates all import paths and module references from the old organization name `truflation` to the new organization name `trufnetwork`. This change ensures consistency across the codebase and aligns with the recent organizational restructuring. Additionally, several dependencies in `go.mod` and `go.sum` have been updated to their latest versions, enhancing compatibility and stability. * refactor: simplify record handling in composed stream template This commit removes the unused variable `$removed_elements_count` from the `get_record_filled` and `get_index_filled` procedures, streamlining the code. It also updates the way last values are assigned in these procedures, replacing the array update function with direct assignment for improved clarity. Additionally, several TODO comments have been refined to indicate future improvements related to array handling and performance optimizations. * refactor: update benchmark case structure and improve data handling This commit refactors the benchmark code to replace the `Days` field with `DataPointsSet` in the `BenchmarkCase` struct, enhancing clarity and flexibility in data handling. The `runBenchmark` function has been updated to utilize `DataPointsSet`, and related functions have been adjusted accordingly. Additionally, the `setupSchemas` function now uses a new `RangeParameters` struct to manage date ranges more effectively. The changes also include the removal of the `load_unix_test.go` file and the introduction of new benchmark result CSV files, reflecting the updated data structure. Overall, these modifications streamline the benchmarking process and improve code maintainability. 
* delete accidentaly included results --- deployments/infra/cdk_main.go | 8 +- deployments/infra/config/config.go | 2 +- deployments/infra/go.mod | 32 +- deployments/infra/go.sum | 56 ++- .../infra/lib/kwil-gateway/kgw_instance.go | 6 +- .../lib/kwil-gateway/kgw_startup_scripts.go | 2 +- .../kwil-indexer/indexer_startup_scripts.go | 6 +- .../lib/kwil-indexer/kwil_indexer_instance.go | 8 +- .../lib/kwil-network/generate_genesis.go | 4 +- .../lib/kwil-network/generate_node_keys.go | 2 +- .../lib/kwil-network/network_config_gen.go | 4 +- .../infra/lib/kwil-network/peer_config_gen.go | 4 +- deployments/infra/lib/observer/attach.go | 8 +- .../lib/observer/initialization_scripts.go | 2 +- .../infra/lib/observer/start_observer.go | 2 +- .../infra/lib/tsn/cluster/auto_tsn_cluster.go | 4 +- .../infra/lib/tsn/cluster/tsn_cluster.go | 8 +- .../tsn/cluster/tsn_cluster_from_config.go | 6 +- deployments/infra/lib/tsn/security_group.go | 2 +- deployments/infra/lib/tsn/tsn_image.go | 2 +- deployments/infra/lib/tsn/tsn_instance.go | 6 +- .../infra/lib/tsn/tsn_startup_scripts.go | 4 +- .../infra/stacks/benchmark/benchmark_stack.go | 4 +- .../benchmark/lambdas/exportresults/main.go | 2 +- deployments/infra/stacks/cert_stack.go | 6 +- deployments/infra/stacks/tsn_auto_stack.go | 8 +- .../infra/stacks/tsn_from_config_stack.go | 6 +- deployments/infra/stacks/tsn_stack.go | 16 +- internal/benchmark/benchexport/csv.go | 3 +- internal/benchmark/benchexport/csv_test.go | 12 +- internal/benchmark/benchexport/markdown.go | 467 ++++++++++++------ .../benchmark/benchexport/markdown_test.go | 66 ++- internal/benchmark/benchmark.go | 49 +- internal/benchmark/constants.go | 11 +- internal/benchmark/load_test.go | 122 ++++- internal/benchmark/load_unix_test.go | 181 ------- internal/benchmark/setup.go | 43 +- internal/benchmark/types.go | 5 +- internal/benchmark/utils.go | 42 +- .../composed_stream_template_unix.kf | 109 ++-- 40 files changed, 722 insertions(+), 608 deletions(-) delete mode 100644 internal/benchmark/load_unix_test.go diff --git a/deployments/infra/cdk_main.go b/deployments/infra/cdk_main.go index 55f187483..155c6484c 100644 --- a/deployments/infra/cdk_main.go +++ b/deployments/infra/cdk_main.go @@ -3,10 +3,10 @@ package main import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/utils" - "github.com/truflation/tsn-db/infra/stacks" - "github.com/truflation/tsn-db/infra/stacks/benchmark" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/utils" + "github.com/trufnetwork/node/infra/stacks" + "github.com/trufnetwork/node/infra/stacks/benchmark" "go.uber.org/zap" ) diff --git a/deployments/infra/config/config.go b/deployments/infra/config/config.go index b927b9887..c034f8563 100644 --- a/deployments/infra/config/config.go +++ b/deployments/infra/config/config.go @@ -4,7 +4,7 @@ import ( "fmt" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/lib/domain_utils" + "github.com/trufnetwork/node/infra/lib/domain_utils" "go.uber.org/zap" "strconv" ) diff --git a/deployments/infra/go.mod b/deployments/infra/go.mod index a5518182d..5bc05cd3b 100644 --- a/deployments/infra/go.mod +++ b/deployments/infra/go.mod @@ -1,17 +1,29 @@ -module github.com/truflation/tsn-db/infra +module github.com/trufnetwork/node/infra go 1.22.1 require ( github.com/aws/aws-cdk-go/awscdk/v2 v2.146.0 
github.com/aws/aws-cdk-go/awscdklambdagoalpha/v2 v2.146.0-alpha.0 + github.com/aws/aws-lambda-go v1.47.0 + github.com/aws/aws-sdk-go v1.55.5 github.com/aws/constructs-go/constructs/v10 v10.3.0 github.com/aws/jsii-runtime-go v1.99.0 + github.com/caarlos0/env/v11 v11.3.1 + github.com/trufnetwork/node v1.2.0 + go.uber.org/zap v1.27.0 ) +replace github.com/trufnetwork/node/infra => ./ + +replace github.com/trufnetwork/node => ../../ + require ( - go.uber.org/multierr v1.10.0 // indirect - go.uber.org/zap v1.27.0 // indirect + github.com/fbiville/markdown-table-formatter v0.3.0 // indirect + github.com/jmespath/go-jmespath v0.4.0 // indirect + go.uber.org/multierr v1.11.0 // indirect + golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect + golang.org/x/sync v0.8.0 // indirect ) require ( @@ -19,7 +31,6 @@ require ( github.com/cdklabs/awscdk-asset-awscli-go/awscliv1/v2 v2.2.202 // indirect github.com/cdklabs/awscdk-asset-kubectl-go/kubectlv20/v2 v2.1.2 // indirect github.com/cdklabs/awscdk-asset-node-proxy-agent-go/nodeproxyagentv6/v2 v2.0.3 // indirect - github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/fatih/color v1.17.0 // indirect github.com/gabriel-vasile/mimetype v1.4.3 // indirect github.com/go-playground/locales v0.14.1 // indirect @@ -28,13 +39,12 @@ require ( github.com/leodido/go-urn v1.4.0 // indirect github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-isatty v0.0.20 // indirect - github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/yuin/goldmark v1.4.13 // indirect - golang.org/x/crypto v0.23.0 // indirect + golang.org/x/crypto v0.27.0 // indirect golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect - golang.org/x/mod v0.17.0 // indirect - golang.org/x/net v0.25.0 // indirect - golang.org/x/sys v0.20.0 // indirect - golang.org/x/text v0.15.0 // indirect - golang.org/x/tools v0.21.0 // indirect + golang.org/x/mod v0.19.0 // indirect + golang.org/x/net v0.29.0 // indirect + golang.org/x/sys v0.25.0 // indirect + golang.org/x/text v0.18.0 // indirect + golang.org/x/tools v0.23.0 // indirect ) diff --git a/deployments/infra/go.sum b/deployments/infra/go.sum index 245d06d14..02cb83c2e 100644 --- a/deployments/infra/go.sum +++ b/deployments/infra/go.sum @@ -4,28 +4,43 @@ github.com/aws/aws-cdk-go/awscdk/v2 v2.146.0 h1:0d2balamwmqrcy+8RxARzKnjALFctZ5w github.com/aws/aws-cdk-go/awscdk/v2 v2.146.0/go.mod h1:WF3lt7ah4wNktbClICIBbKdITtCqyCrPBQl3nkaLug4= github.com/aws/aws-cdk-go/awscdklambdagoalpha/v2 v2.146.0-alpha.0 h1:sm5ZwLdgMF3d4Gvt6O4v+VZ8vMKLbho5o/4kaHBzjJ0= github.com/aws/aws-cdk-go/awscdklambdagoalpha/v2 v2.146.0-alpha.0/go.mod h1:I5LaBL6HdTkNPf26G3ehTiXlRfQ45hLBvw1uoDNx6dQ= +github.com/aws/aws-lambda-go v1.47.0 h1:0H8s0vumYx/YKs4sE7YM0ktwL2eWse+kfopsRI1sXVI= +github.com/aws/aws-lambda-go v1.47.0/go.mod h1:dpMpZgvWx5vuQJfBt0zqBha60q7Dd7RfgJv23DymV8A= +github.com/aws/aws-sdk-go v1.55.5 h1:KKUZBfBoyqy5d3swXyiC7Q76ic40rYcbqH7qjh59kzU= +github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= github.com/aws/constructs-go/constructs/v10 v10.3.0 h1:LsjBIMiaDX/vqrXWhzTquBJ9pPdi02/H+z1DCwg0PEM= github.com/aws/constructs-go/constructs/v10 v10.3.0/go.mod h1:GgzwIwoRJ2UYsr3SU+JhAl+gq5j39bEMYf8ev3J+s9s= github.com/aws/jsii-runtime-go v1.99.0 h1:IYnRyyimwcWwa/bR39/tCkkGqhzqjAsdtfLin4atpcc= github.com/aws/jsii-runtime-go v1.99.0/go.mod h1:zAzY1FrPQXQwG/ZrtHIyZI22vKoZm9dufF7LqsJ1poE= +github.com/caarlos0/env/v11 v11.3.1 
h1:cArPWC15hWmEt+gWk7YBi7lEXTXCvpaSdCiZE2X5mCA= +github.com/caarlos0/env/v11 v11.3.1/go.mod h1:qupehSf/Y0TUTsxKywqRt/vJjN5nz6vauiYEUUr8P4U= github.com/cdklabs/awscdk-asset-awscli-go/awscliv1/v2 v2.2.202 h1:VixXB9DnHN8oP7pXipq8GVFPjWCOdeNxIaS/ZyUwTkI= github.com/cdklabs/awscdk-asset-awscli-go/awscliv1/v2 v2.2.202/go.mod h1:iPUti/SWjA3XAS3CpnLciFjS8TN9Y+8mdZgDfSgcyus= github.com/cdklabs/awscdk-asset-kubectl-go/kubectlv20/v2 v2.1.2 h1:k+WD+6cERd59Mao84v0QtRrcdZuuSMfzlEmuIypKnVs= github.com/cdklabs/awscdk-asset-kubectl-go/kubectlv20/v2 v2.1.2/go.mod h1:CvFHBo0qcg8LUkJqIxQtP1rD/sNGv9bX3L2vHT2FUAo= github.com/cdklabs/awscdk-asset-node-proxy-agent-go/nodeproxyagentv6/v2 v2.0.3 h1:8NLWOIVaxAtpUXv5reojlAeDP7R8yswm9mDONf7F/3o= github.com/cdklabs/awscdk-asset-node-proxy-agent-go/nodeproxyagentv6/v2 v2.0.3/go.mod h1:ZjFqfhYpCLzh4z7ChcHCrkXfqCuEiRlNApDfJd6plts= +github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/fatih/color v1.17.0 h1:GlRw1BRJxkpqUCBKzKOw098ed57fEsKeNjpTe3cSjK4= github.com/fatih/color v1.17.0/go.mod h1:YZ7TlrGPkiz6ku9fK3TLD/pl3CpsiFyu8N92HLgmosI= +github.com/fbiville/markdown-table-formatter v0.3.0 h1:PIm1UNgJrFs8q1htGTw+wnnNYvwXQMMMIKNZop2SSho= +github.com/fbiville/markdown-table-formatter v0.3.0/go.mod h1:q89TDtSEVDdTaufgSbfHpNVdPU/bmfvqNkrC5HagmLY= github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0= github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk= +github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= +github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao= github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM= +github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg= +github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo= +github.com/jmespath/go-jmespath/internal/testify v1.5.1 h1:shLQSRRSCCPj3f2gpwzGwWFoC7ycTf1rcQZHOlsJ6N8= +github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U= github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA= @@ -33,44 +48,53 @@ github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovk github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM= github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY= github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y= 
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/yuin/goldmark v1.4.13 h1:fVcFKWvrslecOb/tg+Cc05dkeYx540o0FuFt3nUVDoE= github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY= -go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ= -go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= +go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y= go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8= go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= -golang.org/x/crypto v0.23.0 h1:dIJU/v2J8Mdglj/8rJ6UUOM3Zc9zLZxVZwwxMooUSAI= -golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8= +golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A= +golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70= +golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8= +golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY= golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 h1:VLliZ0d+/avPrXXH+OakdXhpJuEoBZuwh1m2j7U6Iug= golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY= golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg= -golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA= -golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= +golang.org/x/mod v0.19.0 h1:fEdghXQSo20giMthA7cd28ZC+jts4amQ3YMXiP5oMQ8= +golang.org/x/mod v0.19.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= -golang.org/x/net v0.25.0 h1:d/OCCoBEUq33pjydKrGQhw7IlUPI2Oylr+8qLx49kac= -golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM= +golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo= +golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.7.0 h1:YsImfSBoP9QPYL0xyKJPq0gcaJdG3rInoqxTWbfQu9M= -golang.org/x/sync v0.7.0/go.mod 
h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ= +golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.20.0 h1:Od9JTbYCk261bKm4M/mw7AklTlFYIa0bIp9BgSm1S8Y= -golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34= +golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= -golang.org/x/text v0.15.0 h1:h1V/4gjBv8v9cjcR6+AR5+/cIYK5N/WAgiv4xlsEtAk= -golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= +golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224= +golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY= golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28= -golang.org/x/tools v0.21.0 h1:qc0xYgIbsSDt9EyWz05J5wfa7LOVW0YTLOXrqdLAWIw= -golang.org/x/tools v0.21.0/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk= +golang.org/x/tools v0.23.0 h1:SGsXPZ+2l4JsgaCKkx+FQ9YZ5XEtA1GZYuoDjenLjvg= +golang.org/x/tools v0.23.0/go.mod h1:pnu6ufv6vQkll6szChhK3C3L/ruaIv5eBeztNG8wtsI= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= +gopkg.in/yaml.v2 v2.2.8 h1:obN1ZagJSUGI0Ek/LBmuj4SNLPfIny3KsKFopxRdj10= +gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= diff --git a/deployments/infra/lib/kwil-gateway/kgw_instance.go b/deployments/infra/lib/kwil-gateway/kgw_instance.go index fdd626366..f532a677a 100644 --- a/deployments/infra/lib/kwil-gateway/kgw_instance.go +++ b/deployments/infra/lib/kwil-gateway/kgw_instance.go @@ -8,9 +8,9 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/tsn" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/tsn" + "github.com/trufnetwork/node/infra/lib/utils" ) type KGWConfig struct { diff --git a/deployments/infra/lib/kwil-gateway/kgw_startup_scripts.go b/deployments/infra/lib/kwil-gateway/kgw_startup_scripts.go index 5d7347213..73e956053 100644 --- a/deployments/infra/lib/kwil-gateway/kgw_startup_scripts.go +++ b/deployments/infra/lib/kwil-gateway/kgw_startup_scripts.go @@ -3,7 +3,7 @@ package kwil_gateway import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/lib/utils" ) type AddKwilGatewayStartupScriptsOptions struct { diff --git 
a/deployments/infra/lib/kwil-indexer/indexer_startup_scripts.go b/deployments/infra/lib/kwil-indexer/indexer_startup_scripts.go index 2878a92ba..4e1c901b0 100644 --- a/deployments/infra/lib/kwil-indexer/indexer_startup_scripts.go +++ b/deployments/infra/lib/kwil-indexer/indexer_startup_scripts.go @@ -5,9 +5,9 @@ import ( "strconv" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" - "github.com/truflation/tsn-db/infra/lib/tsn" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/lib/tsn" + "github.com/trufnetwork/node/infra/lib/utils" ) type IndexerEnvConfig struct { diff --git a/deployments/infra/lib/kwil-indexer/kwil_indexer_instance.go b/deployments/infra/lib/kwil-indexer/kwil_indexer_instance.go index 9fe4d4b20..217920ef8 100644 --- a/deployments/infra/lib/kwil-indexer/kwil_indexer_instance.go +++ b/deployments/infra/lib/kwil-indexer/kwil_indexer_instance.go @@ -10,10 +10,10 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" - "github.com/truflation/tsn-db/infra/lib/tsn" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/lib/tsn" + "github.com/trufnetwork/node/infra/lib/utils" ) type KGWConfig struct { diff --git a/deployments/infra/lib/kwil-network/generate_genesis.go b/deployments/infra/lib/kwil-network/generate_genesis.go index 68dd0f9d8..e1ce23aaf 100644 --- a/deployments/infra/lib/kwil-network/generate_genesis.go +++ b/deployments/infra/lib/kwil-network/generate_genesis.go @@ -5,8 +5,8 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" "go.uber.org/zap" "os" "os/exec" diff --git a/deployments/infra/lib/kwil-network/generate_node_keys.go b/deployments/infra/lib/kwil-network/generate_node_keys.go index dc69cbddb..a978cf0af 100644 --- a/deployments/infra/lib/kwil-network/generate_node_keys.go +++ b/deployments/infra/lib/kwil-network/generate_node_keys.go @@ -3,7 +3,7 @@ package kwil_network import ( "encoding/json" "github.com/aws/constructs-go/constructs/v10" - "github.com/truflation/tsn-db/infra/config" + "github.com/trufnetwork/node/infra/config" "go.uber.org/zap" "os/exec" ) diff --git a/deployments/infra/lib/kwil-network/network_config_gen.go b/deployments/infra/lib/kwil-network/network_config_gen.go index 4e1d6c690..133821f46 100644 --- a/deployments/infra/lib/kwil-network/network_config_gen.go +++ b/deployments/infra/lib/kwil-network/network_config_gen.go @@ -4,8 +4,8 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" "strconv" ) diff --git a/deployments/infra/lib/kwil-network/peer_config_gen.go 
b/deployments/infra/lib/kwil-network/peer_config_gen.go index a35a84132..c848cc915 100644 --- a/deployments/infra/lib/kwil-network/peer_config_gen.go +++ b/deployments/infra/lib/kwil-network/peer_config_gen.go @@ -10,8 +10,8 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" ) type GeneratePeerConfigInput struct { diff --git a/deployments/infra/lib/observer/attach.go b/deployments/infra/lib/observer/attach.go index 8cdf1f375..50ce4d3a4 100644 --- a/deployments/infra/lib/observer/attach.go +++ b/deployments/infra/lib/observer/attach.go @@ -9,10 +9,10 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awsiam" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - kwil_gateway "github.com/truflation/tsn-db/infra/lib/kwil-gateway" - kwil_indexer_instance "github.com/truflation/tsn-db/infra/lib/kwil-indexer" - "github.com/truflation/tsn-db/infra/lib/tsn/cluster" + "github.com/trufnetwork/node/infra/config" + kwil_gateway "github.com/trufnetwork/node/infra/lib/kwil-gateway" + kwil_indexer_instance "github.com/trufnetwork/node/infra/lib/kwil-indexer" + "github.com/trufnetwork/node/infra/lib/tsn/cluster" ) type AttachObservabilityInput struct { diff --git a/deployments/infra/lib/observer/initialization_scripts.go b/deployments/infra/lib/observer/initialization_scripts.go index 76a96c2ef..0f5d32b3c 100644 --- a/deployments/infra/lib/observer/initialization_scripts.go +++ b/deployments/infra/lib/observer/initialization_scripts.go @@ -4,7 +4,7 @@ import ( "fmt" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/lib/utils" ) type ObserverScriptInput struct { diff --git a/deployments/infra/lib/observer/start_observer.go b/deployments/infra/lib/observer/start_observer.go index a2b09ee9b..48e236d66 100644 --- a/deployments/infra/lib/observer/start_observer.go +++ b/deployments/infra/lib/observer/start_observer.go @@ -6,7 +6,7 @@ import ( "strings" "github.com/aws/aws-cdk-go/awscdk/v2" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/lib/utils" ) type CreateStartObserverScriptInput struct { diff --git a/deployments/infra/lib/tsn/cluster/auto_tsn_cluster.go b/deployments/infra/lib/tsn/cluster/auto_tsn_cluster.go index b5e007d99..c48966e73 100644 --- a/deployments/infra/lib/tsn/cluster/auto_tsn_cluster.go +++ b/deployments/infra/lib/tsn/cluster/auto_tsn_cluster.go @@ -3,8 +3,8 @@ package cluster import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/jsii-runtime-go" - kwil_network "github.com/truflation/tsn-db/infra/lib/kwil-network" - "github.com/truflation/tsn-db/infra/lib/utils" + kwil_network "github.com/trufnetwork/node/infra/lib/kwil-network" + "github.com/trufnetwork/node/infra/lib/utils" ) type AutoTsnClusterProvider struct { diff --git a/deployments/infra/lib/tsn/cluster/tsn_cluster.go b/deployments/infra/lib/tsn/cluster/tsn_cluster.go index 6e3967769..4aa5e1e8a 100644 --- a/deployments/infra/lib/tsn/cluster/tsn_cluster.go +++ b/deployments/infra/lib/tsn/cluster/tsn_cluster.go @@ -11,10 +11,10 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awsroute53" "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/jsii-runtime-go" 
- "github.com/truflation/tsn-db/infra/config" - kwil_network "github.com/truflation/tsn-db/infra/lib/kwil-network" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" - "github.com/truflation/tsn-db/infra/lib/tsn" + "github.com/trufnetwork/node/infra/config" + kwil_network "github.com/trufnetwork/node/infra/lib/kwil-network" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/lib/tsn" ) type TSNCluster struct { diff --git a/deployments/infra/lib/tsn/cluster/tsn_cluster_from_config.go b/deployments/infra/lib/tsn/cluster/tsn_cluster_from_config.go index e19e1b19f..41377e6cf 100644 --- a/deployments/infra/lib/tsn/cluster/tsn_cluster_from_config.go +++ b/deployments/infra/lib/tsn/cluster/tsn_cluster_from_config.go @@ -5,9 +5,9 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/kwil-network" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/kwil-network" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" ) type TsnClusterFromConfigInput struct { diff --git a/deployments/infra/lib/tsn/security_group.go b/deployments/infra/lib/tsn/security_group.go index 88520c7aa..8a50ed71c 100644 --- a/deployments/infra/lib/tsn/security_group.go +++ b/deployments/infra/lib/tsn/security_group.go @@ -5,7 +5,7 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awsec2" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/lib/kwil-network/peer" ) type NewTSNSecurityGroupInput struct { diff --git a/deployments/infra/lib/tsn/tsn_image.go b/deployments/infra/lib/tsn/tsn_image.go index 8855d5518..971419973 100644 --- a/deployments/infra/lib/tsn/tsn_image.go +++ b/deployments/infra/lib/tsn/tsn_image.go @@ -4,7 +4,7 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/aws-cdk-go/awscdk/v2/awsecrassets" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/lib/utils" ) func NewTSNImageAsset( diff --git a/deployments/infra/lib/tsn/tsn_instance.go b/deployments/infra/lib/tsn/tsn_instance.go index c36579c01..94760a139 100644 --- a/deployments/infra/lib/tsn/tsn_instance.go +++ b/deployments/infra/lib/tsn/tsn_instance.go @@ -8,9 +8,9 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - peer2 "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/config" + peer2 "github.com/trufnetwork/node/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/lib/utils" ) type NewTSNInstanceInput struct { diff --git a/deployments/infra/lib/tsn/tsn_startup_scripts.go b/deployments/infra/lib/tsn/tsn_startup_scripts.go index 18f1eef68..be1887f3d 100644 --- a/deployments/infra/lib/tsn/tsn_startup_scripts.go +++ b/deployments/infra/lib/tsn/tsn_startup_scripts.go @@ -4,8 +4,8 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2" "github.com/aws/aws-cdk-go/awscdk/v2/awsecrassets" "github.com/aws/jsii-runtime-go" - peer2 "github.com/truflation/tsn-db/infra/lib/kwil-network/peer" - 
"github.com/truflation/tsn-db/infra/lib/utils" + peer2 "github.com/trufnetwork/node/infra/lib/kwil-network/peer" + "github.com/trufnetwork/node/infra/lib/utils" ) type AddStartupScriptsOptions struct { diff --git a/deployments/infra/stacks/benchmark/benchmark_stack.go b/deployments/infra/stacks/benchmark/benchmark_stack.go index 9d26ef34f..804cdddf6 100644 --- a/deployments/infra/stacks/benchmark/benchmark_stack.go +++ b/deployments/infra/stacks/benchmark/benchmark_stack.go @@ -12,8 +12,8 @@ import ( "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/utils/asset" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/utils/asset" ) // Main stack function diff --git a/deployments/infra/stacks/benchmark/lambdas/exportresults/main.go b/deployments/infra/stacks/benchmark/lambdas/exportresults/main.go index a5261504f..ddcc893df 100644 --- a/deployments/infra/stacks/benchmark/lambdas/exportresults/main.go +++ b/deployments/infra/stacks/benchmark/lambdas/exportresults/main.go @@ -16,7 +16,7 @@ import ( "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/service/s3" - "github.com/truflation/tsn-db/internal/benchmark/benchexport" + "github.com/trufnetwork/node/internal/benchmark/benchexport" ) // ------------------------------------------------------------------------------------------------- diff --git a/deployments/infra/stacks/cert_stack.go b/deployments/infra/stacks/cert_stack.go index 480ca9733..0ffe505d3 100644 --- a/deployments/infra/stacks/cert_stack.go +++ b/deployments/infra/stacks/cert_stack.go @@ -5,9 +5,9 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awscertificatemanager" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/domain_utils" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/domain_utils" + "github.com/trufnetwork/node/infra/lib/utils" ) type CertStackExports struct { diff --git a/deployments/infra/stacks/tsn_auto_stack.go b/deployments/infra/stacks/tsn_auto_stack.go index ed954025e..44f435794 100644 --- a/deployments/infra/stacks/tsn_auto_stack.go +++ b/deployments/infra/stacks/tsn_auto_stack.go @@ -5,10 +5,10 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awsec2" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/observer" - "github.com/truflation/tsn-db/infra/lib/tsn/cluster" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/observer" + "github.com/trufnetwork/node/infra/lib/tsn/cluster" + "github.com/trufnetwork/node/infra/lib/utils" ) type TsnAutoStackProps struct { diff --git a/deployments/infra/stacks/tsn_from_config_stack.go b/deployments/infra/stacks/tsn_from_config_stack.go index c84ecf9c8..642c681c3 100644 --- a/deployments/infra/stacks/tsn_from_config_stack.go +++ b/deployments/infra/stacks/tsn_from_config_stack.go @@ -7,9 +7,9 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awsec2" "github.com/aws/constructs-go/constructs/v10" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/observer" - 
"github.com/truflation/tsn-db/infra/lib/tsn/cluster" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/observer" + "github.com/trufnetwork/node/infra/lib/tsn/cluster" ) type TsnFromConfigStackProps struct { diff --git a/deployments/infra/stacks/tsn_stack.go b/deployments/infra/stacks/tsn_stack.go index df5aad803..cbd538974 100644 --- a/deployments/infra/stacks/tsn_stack.go +++ b/deployments/infra/stacks/tsn_stack.go @@ -9,14 +9,14 @@ import ( "github.com/aws/aws-cdk-go/awscdk/v2/awss3" "github.com/aws/aws-cdk-go/awscdk/v2/awss3assets" "github.com/aws/jsii-runtime-go" - "github.com/truflation/tsn-db/infra/config" - "github.com/truflation/tsn-db/infra/lib/domain_utils" - kwil_gateway "github.com/truflation/tsn-db/infra/lib/kwil-gateway" - kwil_indexer_instance "github.com/truflation/tsn-db/infra/lib/kwil-indexer" - system_contract "github.com/truflation/tsn-db/infra/lib/system-contract" - "github.com/truflation/tsn-db/infra/lib/tsn" - "github.com/truflation/tsn-db/infra/lib/tsn/cluster" - "github.com/truflation/tsn-db/infra/lib/utils" + "github.com/trufnetwork/node/infra/config" + "github.com/trufnetwork/node/infra/lib/domain_utils" + kwil_gateway "github.com/trufnetwork/node/infra/lib/kwil-gateway" + kwil_indexer_instance "github.com/trufnetwork/node/infra/lib/kwil-indexer" + system_contract "github.com/trufnetwork/node/infra/lib/system-contract" + "github.com/trufnetwork/node/infra/lib/tsn" + "github.com/trufnetwork/node/infra/lib/tsn/cluster" + "github.com/trufnetwork/node/infra/lib/utils" ) type TsnStackProps struct { diff --git a/internal/benchmark/benchexport/csv.go b/internal/benchmark/benchexport/csv.go index 494436683..972db7b19 100644 --- a/internal/benchmark/benchexport/csv.go +++ b/internal/benchmark/benchexport/csv.go @@ -14,10 +14,11 @@ type SavedResults struct { Procedure string `json:"procedure"` BranchingFactor int `json:"branching_factor"` QtyStreams int `json:"qty_streams"` - Days int `json:"days"` + DataPoints int `json:"data_points"` DurationMs int64 `json:"duration_ms"` Visibility string `json:"visibility"` Samples int `json:"samples"` + UnixOnly bool `json:"unix_only"` } // SaveOrAppendToCSV saves a slice of any struct type to a CSV file, using JSON tags for headers. 
diff --git a/internal/benchmark/benchexport/csv_test.go b/internal/benchmark/benchexport/csv_test.go index 01d8e4be9..0f5797792 100644 --- a/internal/benchmark/benchexport/csv_test.go +++ b/internal/benchmark/benchexport/csv_test.go @@ -10,8 +10,8 @@ import ( func TestSaveOrAppendToCSV(t *testing.T) { testData := []SavedResults{ - {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 7, Days: 100, Visibility: "Public", Samples: 10, DurationMs: 100}, - {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 14, Days: 200, Visibility: "Private", Samples: 10, DurationMs: 200}, + {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 7, DataPoints: 100, Visibility: "Public", Samples: 10, DurationMs: 100, UnixOnly: false}, + {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 14, DataPoints: 200, Visibility: "Private", Samples: 10, DurationMs: 200, UnixOnly: false}, } tempFile, err := os.CreateTemp("", "test_csv_*.csv") @@ -24,12 +24,12 @@ func TestSaveOrAppendToCSV(t *testing.T) { content, err := os.ReadFile(tempFile.Name()) assert.NoError(t, err) - expectedContent := "procedure,branching_factor,qty_streams,days,duration_ms,visibility,samples\nTest1,1,7,100,100,Public,10\nTest2,2,14,200,200,Private,10\n" + expectedContent := "procedure,branching_factor,qty_streams,data_points,duration_ms,visibility,samples,unix_only\nTest1,1,7,100,100,Public,10,false\nTest2,2,14,200,200,Private,10,false\n" assert.Equal(t, expectedContent, string(content)) } func TestLoadCSV(t *testing.T) { - csvData := "procedure,branching_factor,qty_streams,days,duration_ms,visibility,samples\nTest1,1,7,10,100,Public,10\nTest2,2,14,20,200,Private,10\n" + csvData := "procedure,branching_factor,qty_streams,data_points,duration_ms,visibility,samples,unix_only\nTest1,1,7,100,100,Public,10,false\nTest2,2,14,200,200,Private,10,false\n" reader := bytes.NewBufferString(csvData) results, err := LoadCSV[SavedResults](reader) @@ -42,8 +42,8 @@ func TestLoadCSV(t *testing.T) { } expectedResults := []SavedResults{ - {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 7, DurationMs: 100, Visibility: "Public", Days: 10, Samples: 10}, - {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 14, DurationMs: 200, Visibility: "Private", Days: 20, Samples: 10}, + {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 7, DataPoints: 100, Visibility: "Public", Samples: 10, DurationMs: 100, UnixOnly: false}, + {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 14, DataPoints: 200, Visibility: "Private", Samples: 10, DurationMs: 200, UnixOnly: false}, } assert.Equal(t, expectedResults, results) diff --git a/internal/benchmark/benchexport/markdown.go b/internal/benchmark/benchexport/markdown.go index 47a73989d..a497b3386 100644 --- a/internal/benchmark/benchexport/markdown.go +++ b/internal/benchmark/benchexport/markdown.go @@ -4,13 +4,18 @@ import ( "fmt" "log" "os" + "sort" + "strings" "time" "github.com/fbiville/markdown-table-formatter/pkg/markdown" - "golang.org/x/exp/slices" ) +// ----------------------------------------------------------------------------- +// Types +// ----------------------------------------------------------------------------- + type SaveAsMarkdownInput struct { Results []SavedResults CurrentDate time.Time @@ -18,198 +23,350 @@ type SaveAsMarkdownInput struct { FilePath string } -func SaveAsMarkdown(input SaveAsMarkdownInput) error { - days := make([]int, 0) - qtyStreams := make([]int, 0) - branchingFactor := make([]int, 0) +// groupKey is our single structure for grouping, replacing nested maps. 
+type groupKey struct { + BranchingFactor int + Procedure string + Visibility string + DataPoints int + QtyStreams int + UnixOnly bool +} - for _, result := range input.Results { - days = append(days, result.Days) - qtyStreams = append(qtyStreams, result.QtyStreams) - branchingFactor = append(branchingFactor, result.BranchingFactor) - } +// ----------------------------------------------------------------------------- +// Main Entry Point +// ----------------------------------------------------------------------------- - // remove duplicates - slices.Sort(qtyStreams) - slices.Sort(branchingFactor) - slices.Sort(days) +// SaveAsMarkdown is the main entry point. +func SaveAsMarkdown(input SaveAsMarkdownInput) error { + if err := validateSampleCounts(input.Results); err != nil { + return err + } - qtyStreams = slices.Compact(qtyStreams) - branchingFactor = slices.Compact(branchingFactor) - days = slices.Compact(days) + // Gather distinct values for data points, qty streams, and branching factors. + dataPoints := distinctDataPoints(input.Results) + qtyStreams := distinctQtyStreams(input.Results) + branchingFactors := distinctBranchingFactors(input.Results) - log.Printf("Saving to %s", input.FilePath) + // Group the results by key (branchingFactor + procedure + visibility + dataPoints + qtyStreams + unixOnly). + grouped := groupResults(input.Results) - // Open the file in append mode, or create it if it doesn't exist + // Open the target file and handle header writing if empty. file, err := os.OpenFile(input.FilePath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644) if err != nil {
We proceed only if they are the same - sampleCount := make(map[int]int) - for _, result := range input.Results { - sampleCount[result.Samples]++ - } - if len(sampleCount) > 1 { - return fmt.Errorf("results have different amount of samples") + return fmt.Errorf("failed to stat file: %w", err) } - - // Write the header row only if the file is empty if stat.Size() == 0 { - // Write the current date - date := input.CurrentDate.Format("2006-01-02 15:04:05") - _, err = file.WriteString(fmt.Sprintf("Date: %s\n\n## Dates x Qty Streams\n\n", date)) - if err != nil { - return err - } - // add how many samples - _, err = file.WriteString(fmt.Sprintf("Samples per query: %d\n", input.Results[0].Samples)) - if err != nil { - return err - } - _, err = file.WriteString("Results in milliseconds\n\n") - if err != nil { + if err := writeHeader(file, input); err != nil { return err } } - type BranchingFactorType int - type ProcedureType string - type VisibilityType string - type DaysType int - type QtyStreamsType int - - // Group results by [branching_factor][procedure][visibility][days][qtyStreams][duration] - groupedResults := make(map[BranchingFactorType]map[ProcedureType]map[VisibilityType]map[DaysType]map[QtyStreamsType]int64) - for _, result := range input.Results { - branchingFactor := BranchingFactorType(result.BranchingFactor) - procedure := ProcedureType(result.Procedure) - visibility := VisibilityType(result.Visibility) - days := DaysType(result.Days) - qtyStreams := QtyStreamsType(result.QtyStreams) - duration := result.DurationMs - - if _, ok := groupedResults[BranchingFactorType(branchingFactor)]; !ok { - groupedResults[BranchingFactorType(branchingFactor)] = make(map[ProcedureType]map[VisibilityType]map[DaysType]map[QtyStreamsType]int64) - } - if _, ok := groupedResults[BranchingFactorType(branchingFactor)][procedure]; !ok { - groupedResults[BranchingFactorType(branchingFactor)][procedure] = make(map[VisibilityType]map[DaysType]map[QtyStreamsType]int64) - } - if _, ok := groupedResults[BranchingFactorType(branchingFactor)][procedure][VisibilityType(visibility)]; !ok { - groupedResults[BranchingFactorType(branchingFactor)][procedure][VisibilityType(visibility)] = make(map[DaysType]map[QtyStreamsType]int64) + // Create the entire markdown content in-memory and then write it once. + var sb strings.Builder + sb.WriteString(fmt.Sprintf("### %s\n\n", input.InstanceType)) + + for _, bf := range branchingFactors { + sb.WriteString(fmt.Sprintf("#### Branching Factor: %d\n\n", bf)) + + // For each branching factor, gather all unique procedures and sort them. 
+ procs := distinctProceduresForBF(grouped, bf) + sort.Slice(procs, func(i, j int) bool { + return procs[i] < procs[j] + }) + + for _, proc := range procs { + visList := distinctVisibilities(grouped, bf, proc) + sort.Slice(visList, func(i, j int) bool { + return visList[i] < visList[j] + }) + + for _, vis := range visList { + sb.WriteString(fmt.Sprintf("%s - %s - %s\n\n", input.InstanceType, proc, vis)) + + // Distinguish by UnixOnly => gather which UnixOnly modes exist + unixModes := distinctUnixOnlyValues(grouped, bf, proc, vis) + sort.Slice(unixModes, func(i, j int) bool { + // false should come before true, purely by taste + return !unixModes[i] && unixModes[j] + }) + + for _, uMode := range unixModes { + if err := writeTableForUnixMode(&sb, grouped, input, bf, proc, vis, uMode, dataPoints, qtyStreams); err != nil { + return fmt.Errorf("failed to write table for UnixOnly=%v: %w", uMode, err) + } + } + sb.WriteString("\n") + } + sb.WriteString("\n") } - if _, ok := groupedResults[BranchingFactorType(branchingFactor)][procedure][VisibilityType(visibility)][DaysType(days)][QtyStreamsType(qtyStreams)]; !ok { - groupedResults[BranchingFactorType(branchingFactor)][procedure][VisibilityType(visibility)][DaysType(days)][QtyStreamsType(qtyStreams)] = duration + + // Finally, write everything we collected to the file at once. + if _, err = file.WriteString(sb.String()); err != nil { + return fmt.Errorf("failed to write markdown to file: %w", err) + } + + log.Printf("Saving to %s complete!", input.FilePath) + return nil +} + +// ----------------------------------------------------------------------------- +// Table Generation +// ----------------------------------------------------------------------------- + +// writeTableForUnixMode writes a single table for a specific UnixOnly mode. +func writeTableForUnixMode( + sb *strings.Builder, + grouped map[groupKey]int64, + input SaveAsMarkdownInput, + bf int, + proc string, + vis string, + uMode bool, + dataPoints []int, + qtyStreams []int, +) error { + sb.WriteString(fmt.Sprintf("**UnixOnly = %v**\n\n", uMode)) + + // Collect the columns (QtyStreams) that actually have data + existingQty := existingQtyStreams(grouped, bf, proc, vis, uMode, dataPoints, qtyStreams) + + // Make a table with a top row = [ "Data points / Qty streams", q1, q2, ... ] + headers := make([]string, 0, len(existingQty)+1) + headers = append(headers, "Data points / Qty streams") + for _, q := range existingQty { + headers = append(headers, fmt.Sprintf("%d", q)) + } + + // Build table + tableFormatter := markdown.NewTableFormatterBuilder(). + WithPrettyPrint(). + Build(headers...) + + var rows [][]string + for _, d := range dataPoints { + var row []string + row = append(row, fmt.Sprintf("%d", d)) + var rowHasData bool + + for _, q := range existingQty { + key := groupKey{ + BranchingFactor: bf, + Procedure: proc, + Visibility: vis, + DataPoints: d, + QtyStreams: q, + UnixOnly: uMode, + } + duration, ok := grouped[key] + if ok { + row = append(row, fmt.Sprintf("%d", duration)) + rowHasData = true + } else { + row = append(row, "-") + } + } + + // If no data for the entire row, skip it.
+ if rowHasData { + rows = append(rows, row) + } + } - // Write markdown for each instance type, procedure, and visibility combination - if _, err = file.WriteString(fmt.Sprintf("### %s\n\n", input.InstanceType)); err != nil { + formattedTable, err := tableFormatter.Format(rows) + if err != nil { + return fmt.Errorf("failed to format table: %w", err) + } + sb.WriteString(formattedTable + "\n\n") + return nil +} + +// ----------------------------------------------------------------------------- +// Validation and Header Writing +// ----------------------------------------------------------------------------- + +// validateSampleCounts ensures all results have the same sample count. +func validateSampleCounts(results []SavedResults) error { + counts := make(map[int]int) + for _, r := range results { + counts[r.Samples]++ + } + if len(counts) > 1 { + return fmt.Errorf("results have different sample counts") + } + return nil +} + +// writeHeader writes the initial lines (date, sample info, etc.) to an empty file. +func writeHeader(file *os.File, input SaveAsMarkdownInput) error { + dateStr := input.CurrentDate.Format("2006-01-02 15:04:05") + if _, err := file.WriteString(fmt.Sprintf("Date: %s\n\n## Data points / Qty streams\n\n", dateStr)); err != nil { return err } + samples := input.Results[0].Samples + if _, err := file.WriteString(fmt.Sprintf("Samples per query: %d\n", samples)); err != nil { + return err + } + if _, err := file.WriteString("Results in milliseconds\n\n"); err != nil { + return err + } + return nil +} - // branching factor - for _, branchingFactor := range branchingFactor { - if _, err = file.WriteString(fmt.Sprintf("#### Branching Factor: %d\n\n", branchingFactor)); err != nil { - return err +// ----------------------------------------------------------------------------- +// Data Grouping and Transformation +// ----------------------------------------------------------------------------- + +// groupResults returns a map of groupKey -> durationMs, so we can do simple lookups. +func groupResults(results []SavedResults) map[groupKey]int64 { + out := make(map[groupKey]int64) + for _, r := range results { + key := groupKey{ + BranchingFactor: r.BranchingFactor, + Procedure: r.Procedure, + Visibility: r.Visibility, + DataPoints: r.DataPoints, + QtyStreams: r.QtyStreams, + UnixOnly: r.UnixOnly, } + out[key] = r.DurationMs + } + return out +} - procedures := groupedResults[BranchingFactorType(branchingFactor)] +// ----------------------------------------------------------------------------- +// Distinct Value Helpers (Raw Results) +// ----------------------------------------------------------------------------- - // sort procedures - proceduresKeys := make([]ProcedureType, 0, len(procedures)) - for procedure := range procedures { - proceduresKeys = append(proceduresKeys, procedure) - } - slices.Sort(proceduresKeys) +// distinctDataPoints gathers distinct data points from results, sorted ascending.
+func distinctDataPoints(results []SavedResults) []int { + set := make(map[int]struct{}) + for _, r := range results { + set[r.DataPoints] = struct{}{} + } + out := make([]int, 0, len(set)) + for d := range set { + out = append(out, d) + } + sort.Ints(out) + return out +} - for _, procedure := range proceduresKeys { - visibilities := procedures[procedure] - visibilitiesKeys := make([]VisibilityType, 0, len(visibilities)) - for visibility := range visibilities { - visibilitiesKeys = append(visibilitiesKeys, visibility) - } - slices.Sort(visibilitiesKeys) +// distinctQtyStreams gathers distinct quantity of streams from results, sorted ascending. +func distinctQtyStreams(results []SavedResults) []int { + set := make(map[int]struct{}) + for _, r := range results { + set[r.QtyStreams] = struct{}{} + } + out := make([]int, 0, len(set)) + for q := range set { + out = append(out, q) + } + sort.Ints(out) + return out +} - for _, visibility := range visibilitiesKeys { - daysMap := visibilities[visibility] +// distinctBranchingFactors gathers distinct branching factors, sorted ascending. +func distinctBranchingFactors(results []SavedResults) []int { + set := make(map[int]struct{}) + for _, r := range results { + set[r.BranchingFactor] = struct{}{} + } + out := make([]int, 0, len(set)) + for bf := range set { + out = append(out, bf) + } + sort.Ints(out) + return out +} - // Write full information for each table - if _, err = file.WriteString(fmt.Sprintf("%s - %s - %s \n\n", input.InstanceType, procedure, visibility)); err != nil { - return err - } +// ----------------------------------------------------------------------------- +// Distinct Value Helpers (Using Grouped Map) +// ----------------------------------------------------------------------------- - // Create headers for the table - headers := []string{"queried days / qty streams"} - existingQtyStreams := make([]int, 0) - for _, qtyStream := range qtyStreams { - // check if there's a result for this qtyStream - exists := false - for _, day := range days { - if _, ok := daysMap[DaysType(day)][QtyStreamsType(qtyStream)]; ok { - exists = true - break - } - } - if exists { - existingQtyStreams = append(existingQtyStreams, qtyStream) - headers = append(headers, fmt.Sprintf("%d", qtyStream)) - } - } +// distinctProceduresForBF returns all distinct procedures associated with a particular BF. +func distinctProceduresForBF(grouped map[groupKey]int64, bf int) []string { + procs := make(map[string]struct{}) + for k := range grouped { + if k.BranchingFactor == bf { + procs[k.Procedure] = struct{}{} + } + } + out := make([]string, 0, len(procs)) + for p := range procs { + out = append(out, p) + } + return out +} - // Create a new table formatter - tableFormatter := markdown.NewTableFormatterBuilder(). - WithPrettyPrint(). - Build(headers...) - - rows := make([][]string, 0) - - // Add rows for each day - for _, day := range days { - exists := false - row := []string{fmt.Sprintf("%d", day)} - for _, qtyStream := range existingQtyStreams { - if duration, ok := daysMap[DaysType(day)][QtyStreamsType(qtyStream)]; ok { - row = append(row, fmt.Sprintf("%d", duration)) - exists = true - } else { - row = append(row, "-") - } - } - if exists { - rows = append(rows, row) - } - } +// distinctVisibilities returns all distinct visibilities for a given BF + procedure.
+func distinctVisibilities(grouped map[groupKey]int64, bf int, proc string) []string { + visSet := make(map[string]struct{}) + for k := range grouped { + if k.BranchingFactor == bf && k.Procedure == proc { + visSet[k.Visibility] = struct{}{} + } + } + out := make([]string, 0, len(visSet)) + for v := range visSet { + out = append(out, v) + } + return out +} - // Format the table - formattedTable, err := tableFormatter.Format(rows) - if err != nil { - return err - } +// distinctUnixOnlyValues returns all UnixOnly modes (true/false) used for BF+proc+vis. +func distinctUnixOnlyValues(grouped map[groupKey]int64, bf int, proc string, vis string) []bool { + modeSet := make(map[bool]struct{}) + for k := range grouped { + if k.BranchingFactor == bf && k.Procedure == proc && k.Visibility == vis { + modeSet[k.UnixOnly] = struct{}{} + } + } + out := make([]bool, 0, len(modeSet)) + for m := range modeSet { + out = append(out, m) + } + return out +} - // Write the formatted table to the file - if _, err = file.WriteString(formattedTable + "\n\n"); err != nil { - return err - } +// existingQtyStreams picks only those QtyStreams that actually have data for a given BF/proc/vis/unixOnly combination. +func existingQtyStreams( + grouped map[groupKey]int64, + bf int, + proc string, + vis string, + unixMode bool, + dataPoints []int, + qtyStreams []int, +) []int { + var result []int + for _, q := range qtyStreams { + hasData := false + for _, d := range dataPoints { + key := groupKey{ + BranchingFactor: bf, + Procedure: proc, + Visibility: vis, + DataPoints: d, + QtyStreams: q, + UnixOnly: unixMode, } - - // Add an extra newline between procedures for better readability - if _, err = file.WriteString("\n"); err != nil { - return err + if _, ok := grouped[key]; ok { + hasData = true + break } } + if hasData { + result = append(result, q) + } } - - return nil + return result } diff --git a/internal/benchmark/benchexport/markdown_test.go b/internal/benchmark/benchexport/markdown_test.go index bfd76bed3..93f075dc6 100644 --- a/internal/benchmark/benchexport/markdown_test.go +++ b/internal/benchmark/benchexport/markdown_test.go @@ -10,14 +10,14 @@ import ( func TestSaveAsMarkdown(t *testing.T) { testData := []SavedResults{ - {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 1, DurationMs: 100, Visibility: "Public", Samples: 10, Days: 7}, - {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 2, DurationMs: 100, Visibility: "Public", Samples: 10, Days: 7}, - {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 3, DurationMs: 100, Visibility: "Public", Samples: 10, Days: 7}, - {Procedure: "Test2", BranchingFactor: 1, QtyStreams: 100, DurationMs: 150, Visibility: "Private", Samples: 10, Days: 365}, - {Procedure: "Test1", BranchingFactor: 2, QtyStreams: 10, DurationMs: 200, Visibility: "Public", Samples: 10, Days: 1}, - {Procedure: "Test1", BranchingFactor: 2, QtyStreams: 10, DurationMs: 300, Visibility: "Public", Samples: 10, Days: 7}, - {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 10, DurationMs: 250, Visibility: "Private", Samples: 10, Days: 365}, - {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 100, DurationMs: 350, Visibility: "Private", Samples: 10, Days: 365}, + {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 1, DataPoints: 100, DurationMs: 100, Visibility: "Public", Samples: 10, UnixOnly: false}, + {Procedure: "Test1", BranchingFactor: 1, QtyStreams: 2, DataPoints: 100, DurationMs: 100, Visibility: "Public", Samples: 10, UnixOnly: false}, + {Procedure: "Test1", BranchingFactor: 1, 
QtyStreams: 3, DataPoints: 100, DurationMs: 100, Visibility: "Public", Samples: 10, UnixOnly: false}, + {Procedure: "Test2", BranchingFactor: 1, QtyStreams: 100, DataPoints: 150, DurationMs: 150, Visibility: "Private", Samples: 10, UnixOnly: false}, + {Procedure: "Test1", BranchingFactor: 2, QtyStreams: 10, DataPoints: 200, DurationMs: 200, Visibility: "Public", Samples: 10, UnixOnly: false}, + {Procedure: "Test1", BranchingFactor: 2, QtyStreams: 10, DataPoints: 300, DurationMs: 300, Visibility: "Public", Samples: 10, UnixOnly: false}, + {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 10, DataPoints: 250, DurationMs: 250, Visibility: "Private", Samples: 10, UnixOnly: false}, + {Procedure: "Test2", BranchingFactor: 2, QtyStreams: 100, DataPoints: 350, DurationMs: 350, Visibility: "Private", Samples: 10, UnixOnly: false}, } tempFile, err := os.CreateTemp("", "test_markdown_*.md") @@ -40,7 +40,7 @@ func TestSaveAsMarkdown(t *testing.T) { expectedContent := `Date: 2023-04-15 12:00:00 -## Dates x Qty Streams +## Data points / Qty streams Samples per query: 10 Results in milliseconds @@ -49,41 +49,55 @@ Results in milliseconds #### Branching Factor: 1 -TestInstance - Test1 - Public +TestInstance - Test1 - Public -| queried days / qty streams | 1 | 2 | 3 | -| -------------------------- | --- | --- | --- | -| 7 | 100 | 100 | 100 | +**UnixOnly = false** +| Data points / Qty streams | 1 | 2 | 3 | +| ------------------------- | --- | --- | --- | +| 100 | 100 | 100 | 100 | -TestInstance - Test2 - Private -| queried days / qty streams | 100 | -| -------------------------- | --- | -| 365 | 150 | + +TestInstance - Test2 - Private + +**UnixOnly = false** + +| Data points / Qty streams | 100 | +| ------------------------- | --- | +| 150 | 150 | + #### Branching Factor: 2 -TestInstance - Test1 - Public +TestInstance - Test1 - Public + +**UnixOnly = false** -| queried days / qty streams | 10 | -| -------------------------- | --- | -| 1 | 200 | -| 7 | 300 | +| Data points / Qty streams | 10 | +| ------------------------- | --- | +| 200 | 200 | +| 300 | 300 | -TestInstance - Test2 - Private -| queried days / qty streams | 10 | 100 | -| -------------------------- | --- | --- | -| 365 | 250 | 350 | +TestInstance - Test2 - Private + +**UnixOnly = false** + +| Data points / Qty streams | 10 | 100 | +| ------------------------- | --- | --- | +| 250 | 250 | - | +| 350 | - | 350 | + ` + assert.Equal(t, expectedContent, string(content)) } diff --git a/internal/benchmark/benchmark.go b/internal/benchmark/benchmark.go index a678f3ddc..40a27e9fb 100644 --- a/internal/benchmark/benchmark.go +++ b/internal/benchmark/benchmark.go @@ -3,11 +3,12 @@ package benchmark import ( "context" "fmt" - benchutil "github.com/trufnetwork/node/internal/benchmark/util" "log" "os" "time" + benchutil "github.com/trufnetwork/node/internal/benchmark/util" + "github.com/kwilteam/kwil-db/core/utils" "github.com/pkg/errors" @@ -16,27 +17,25 @@ import ( "github.com/trufnetwork/sdk-go/core/util" ) -func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c BenchmarkCase, tree trees.Tree, unixOnly bool) ([]Result, error) { +func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c BenchmarkCase, tree trees.Tree) ([]Result, error) { var results []Result err := setupSchemas(ctx, platform, SetupSchemasInput{ BenchmarkCase: c, Tree: tree, - UnixOnly: unixOnly, }) if err != nil { return nil, errors.Wrap(err, "failed to setup schemas") } - for _, day := range c.Days { + for _, dataPoints := range c.DataPointsSet { for _, 
procedure := range c.Procedures { result, err := runSingleTest(ctx, RunSingleTestInput{ - Platform: platform, - Case: c, - Days: day, - Procedure: procedure, - Tree: tree, - UnixOnly: unixOnly, + Platform: platform, + Case: c, + DataPoints: dataPoints, + Procedure: procedure, + Tree: tree, }) if err != nil { return nil, errors.Wrap(err, "failed to run single test") @@ -49,29 +48,29 @@ func runBenchmark(ctx context.Context, platform *kwilTesting.Platform, c Benchma } type RunSingleTestInput struct { - Platform *kwilTesting.Platform - Case BenchmarkCase - Days int - Procedure ProcedureEnum - Tree trees.Tree - UnixOnly bool + Platform *kwilTesting.Platform + Case BenchmarkCase + DataPoints int + Procedure ProcedureEnum + Tree trees.Tree } // runSingleTest runs a single test for the given input and returns the result. func runSingleTest(ctx context.Context, input RunSingleTestInput) (Result, error) { // we're querying the index-0 stream because this is the root stream nthDbId := utils.GenerateDBID(getStreamId(0).String(), input.Platform.Deployer) - fromDate := fixedDate.AddDate(0, 0, -input.Days).Format("2006-01-02") - toDate := fixedDate.Format("2006-01-02") - if input.UnixOnly { - fromDate = fmt.Sprintf("%d", fixedDate.AddDate(0, 0, -input.Days).Unix()) - toDate = fmt.Sprintf("%d", fixedDate.Unix()) + rangeParams := getRangeParameters(input.DataPoints, input.Case.UnixOnly) + fromDate := rangeParams.FromDate.Format("2006-01-02") + toDate := rangeParams.ToDate.Format("2006-01-02") + if input.Case.UnixOnly { + fromDate = fmt.Sprintf("%d", rangeParams.FromDate.Unix()) + toDate = fmt.Sprintf("%d", rangeParams.ToDate.Unix()) } result := Result{ Case: input.Case, Procedure: input.Procedure, - DaysQueried: input.Days, + DataPoints: input.DataPoints, MaxDepth: input.Tree.MaxDepth, CaseDurations: make([]time.Duration, input.Case.Samples), } @@ -129,12 +128,12 @@ type RunBenchmarkInput struct { ResultPath string Visibility util.VisibilityEnum QtyStreams int - Days []int + DataPoints []int Samples int } // it returns a result channel to be accumulated by the caller -func getBenchmarFn(benchmarkCase BenchmarkCase, resultCh *chan []Result, unixOnly bool) func(ctx context.Context, platform *kwilTesting.Platform) error { +func getBenchmarkFn(benchmarkCase BenchmarkCase, resultCh *chan []Result) func(ctx context.Context, platform *kwilTesting.Platform) error { return func(ctx context.Context, platform *kwilTesting.Platform) error { log.Println("running benchmark", benchmarkCase) platform.Deployer = deployer.Bytes() @@ -149,7 +148,7 @@ func getBenchmarFn(benchmarkCase BenchmarkCase, resultCh *chan []Result, unixOnl return fmt.Errorf("tree max depth (%d) is greater than max depth (%d)", tree.MaxDepth, maxDepth) } - results, err := runBenchmark(ctx, platform, benchmarkCase, tree, unixOnly) + results, err := runBenchmark(ctx, platform, benchmarkCase, tree) if err != nil { return errors.Wrap(err, "failed to run benchmark") } diff --git a/internal/benchmark/constants.go b/internal/benchmark/constants.go index 48a1feeb1..5f8d7d8c8 100644 --- a/internal/benchmark/constants.go +++ b/internal/benchmark/constants.go @@ -5,9 +5,10 @@ import ( ) var ( - readerAddress = MustNewEthereumAddressFromString("0x0000000000000000010000000000000000000001") - deployer = MustNewEthereumAddressFromString("0x0000000000000000000000000000000200000000") - fixedDate = time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC) - maxDepth = 179 // found empirically - insertFreqInTime = 1 * time.Minute + readerAddress = 
MustNewEthereumAddressFromString("0x0000000000000000010000000000000000000001") + deployer = MustNewEthereumAddressFromString("0x0000000000000000000000000000000200000000") + fixedDate = time.Date(2021, 1, 1, 0, 0, 0, 0, time.UTC) + maxDepth = 179 // found empirically + dailyInterval = time.Hour * 24 + secondInterval = time.Second ) diff --git a/internal/benchmark/load_test.go b/internal/benchmark/load_test.go index 09549d9bb..553e30c3e 100644 --- a/internal/benchmark/load_test.go +++ b/internal/benchmark/load_test.go @@ -15,6 +15,10 @@ import ( "github.com/trufnetwork/sdk-go/core/util" ) +// ----------------------------------------------------------------------------- +// Main benchmark test function +// ----------------------------------------------------------------------------- + // Main benchmark test function func TestBench(t *testing.T) { ctx, cancel := context.WithCancel(context.Background()) @@ -51,12 +55,39 @@ func TestBench(t *testing.T) { // -- Setup Test Parameters -- + // Common parameters + // number of samples to run for each test + samples := 1 + // visibilities to test + visibilities := []util.VisibilityEnum{ + util.PublicVisibility, + // util.PrivateVisibility, + } + + // Specific Parameters + + type SpecificParams struct { + ShapePairs [][]int + DataPoints []int + UnixOnly bool + } + + // Date Parameters + + // number of days to query + dateDataPoints := []int{ + 1, + 7, + 30, + 365, + } + // shapePairs is a list of tuples, where each tuple represents a pair of qtyStreams and branchingFactor // qtyStreams is the number of streams in the tree // branchingFactor is the branching factor of the tree // if branchingFactor is math.MaxInt, it means the tree is flat - shapePairs := [][]int{ + dateShapePairs := [][]int{ // qtyStreams, branchingFactor // testing 1 stream only {1, 1}, @@ -90,34 +121,85 @@ func TestBench(t *testing.T) { {200, 32}, } - samples := 3 + dateSpecificParams := SpecificParams{ + ShapePairs: dateShapePairs, + DataPoints: dateDataPoints, + UnixOnly: false, + } - days := []int{1, 7, 30, 365} + // Unix Only Parameters - visibilities := []util.VisibilityEnum{util.PublicVisibility, util.PrivateVisibility} + unixOnlyShapePairs := [][]int{ + // single stream + {1, 1}, + // the effect of adding 1 composed stream + {2, 1}, + // flat trees + {10, math.MaxInt}, + {20, math.MaxInt}, + {30, math.MaxInt}, + // {100, math.MaxInt}, // too much to test + // {200, math.MaxInt}, + // {400, math.MaxInt}, + } + + getRecordsInAMonthWithInterval := func(interval time.Duration) int { + return int(time.Hour * 24 * 30 / interval) + } + + unixOnlyDataPoints := []int{ + // sanity check + // 1, + // data points in one month: + // 1 record per 5 seconds + // getRecordsInAMonthWithInterval(time.Second * 5), + // 1 record per 1 minute + // getRecordsInAMonthWithInterval(time.Minute), + // 1 record per 5 minutes + getRecordsInAMonthWithInterval(time.Minute * 5), + // 1 record per 1 hour + getRecordsInAMonthWithInterval(time.Hour), + } + + unixOnlySpecificParams := SpecificParams{ + ShapePairs: unixOnlyShapePairs, + DataPoints: unixOnlyDataPoints, + UnixOnly: true, + } + + // ----- + + _ = dateSpecificParams + allParams := []SpecificParams{ + // dateSpecificParams, + unixOnlySpecificParams, + } var functionTests []kwilTesting.TestFunc // a channel to receive results from the tests var resultsCh chan []Result // create combinations of shapePairs and visibilities - for _, qtyStreams := range shapePairs { - for _, visibility := range visibilities { - functionTests = append(functionTests,
getBenchmarFn(BenchmarkCase{ - Visibility: visibility, - QtyStreams: qtyStreams[0], - BranchingFactor: qtyStreams[1], - Samples: samples, - Days: days, - Procedures: []ProcedureEnum{ - ProcedureGetRecord, - ProcedureGetIndex, - ProcedureGetChangeIndex, - ProcedureGetFirstRecord, + for _, specificParams := range allParams { + for _, shapePair := range specificParams.ShapePairs { + for _, visibility := range visibilities { + functionTests = append(functionTests, getBenchmarkFn(BenchmarkCase{ + Visibility: visibility, + QtyStreams: shapePair[0], + BranchingFactor: shapePair[1], + Samples: samples, + DataPointsSet: specificParams.DataPoints, + UnixOnly: specificParams.UnixOnly, + Procedures: []ProcedureEnum{ + // ProcedureGetRecord, + ProcedureGetIndex, + // ProcedureGetChangeIndex, + // ProcedureGetFirstRecord, + }, }, - }, - // use pointer, so we can reassign the results channel - &resultsCh, false)) + // use pointer, so we can reassign the results channel + &resultsCh)) + } } } diff --git a/internal/benchmark/load_unix_test.go b/internal/benchmark/load_unix_test.go deleted file mode 100644 index 9a0a5c5da..000000000 --- a/internal/benchmark/load_unix_test.go +++ /dev/null @@ -1,181 +0,0 @@ -package benchmark - -import ( - "context" - "fmt" - "os" - "os/signal" - "strconv" - "testing" - "time" - - kwilTesting "github.com/kwilteam/kwil-db/testing" - "github.com/pkg/errors" - "github.com/trufnetwork/sdk-go/core/util" -) - -// Main benchmark test function -func TestBenchUnix(t *testing.T) { - ctx, cancel := context.WithCancel(context.Background()) - t.Cleanup(cancel) - - // notify on interrupt. Otherwise, tests will not stop - c := make(chan os.Signal, 1) - signal.Notify(c, os.Interrupt) - go func() { - for range c { - fmt.Println("interrupt signal received") - cleanupDocker() - cancel() - } - }() - defer cleanupDocker() - - // set default LOG_RESULTS to true - if os.Getenv("LOG_RESULTS") == "" { - os.Setenv("LOG_RESULTS", "true") - } - - // try get resultPath from env - resultPath := os.Getenv("RESULTS_PATH") - if resultPath == "" { - resultPath = "./benchmark_unix_results.csv" - } - - // Delete the file if it exists - if err := deleteFileIfExists(resultPath); err != nil { - err = errors.Wrap(err, "failed to delete file if exists") - t.Fatal(err) - } - - // -- Setup Test Parameters -- - - // shapePairs is a list of tuples, where each tuple represents a pair of qtyStreams and branchingFactor - // qtyStreams is the number of streams in the tree - // branchingFactor is the branching factor of the tree - // if branchingFactor is math.MaxInt, it means the tree is flat - - shapePairs := [][]int{ - // qtyStreams, branchingFactor - // testing 1 stream only - {1, 1}, - - //flat trees = cost of adding a new stream to our composed - //{50, math.MaxInt}, - //{100, math.MaxInt}, - //{200, math.MaxInt}, - //{400, math.MaxInt}, - // 800 streams kills t3.small instances for memory starvation. 
But probably because it stores the whole tree in memory - //{800, math.MaxInt}, - //{1500, math.MaxInt}, // this gives error: Out of shared memory - - // deep trees = cost of adding depth - //{50, 1}, - //{100, 1}, - //{200, 1}, // we can't go deeper than 180, for call stack size issues - - // to get difference for stream qty on a real world situation - {50, 8}, - {100, 8}, - //{200, 8}, - //{400, 8}, - //{800, 8}, - - // to get difference for branching factor - //{200, 2}, - //{200, 4}, - // {200, 8}, // already tested above - //{200, 16}, - //{200, 32}, - } - - samples := 3 - - days := []int{1, 7} - - visibilities := []util.VisibilityEnum{util.PublicVisibility, util.PrivateVisibility} - - var functionTests []kwilTesting.TestFunc - // a channel to receive results from the tests - var resultsCh chan []Result - - // create combinations of shapePairs and visibilities - for _, qtyStreams := range shapePairs { - for _, visibility := range visibilities { - functionTests = append(functionTests, getBenchmarFn(BenchmarkCase{ - Visibility: visibility, - QtyStreams: qtyStreams[0], - BranchingFactor: qtyStreams[1], - Samples: samples, - Days: days, - Procedures: []ProcedureEnum{ - ProcedureGetRecord, - ProcedureGetIndex, - //ProcedureGetChangeIndex, - //ProcedureGetFirstRecord, - }, - }, - // use pointer, so we can reassign the results channel - &resultsCh, true)) - } - } - - // let's chunk tests into groups, becuase these tests are very long - // and postgres may fail during the test - groupsOfTests := chunk(functionTests, 2) - - var successResults []Result - - for i, groupOfTests := range groupsOfTests { - schemaTest := kwilTesting.SchemaTest{ - Name: "benchmark_unix_test_" + strconv.Itoa(i), - SchemaFiles: []string{}, - FunctionTests: groupOfTests, - } - - t.Run(schemaTest.Name, func(t *testing.T) { - const maxRetries = 3 - var err error - RetryFor: - for attempt := 1; attempt <= maxRetries; attempt++ { - select { - case <-ctx.Done(): - t.Fatalf("context cancelled") - default: - // wrap in a function so we can defer close the results channel - func() { - resultsCh = make(chan []Result, len(groupOfTests)) - defer close(resultsCh) - - err = schemaTest.Run(ctx, &kwilTesting.Options{ - UseTestContainer: true, - Logger: t, - }) - }() - - if err == nil { - for result := range resultsCh { - successResults = append(successResults, result...) 
- } - // break the retries loop - break RetryFor - } - - t.Logf("Attempt %d failed: %s", attempt, err) - fmt.Println(errors.WithStack(err)) - if attempt < maxRetries { - time.Sleep(time.Second * time.Duration(attempt)) // Exponential backoff - } - } - } - if err != nil { - t.Fatalf("Test failed after %d attempts: %s", maxRetries, err) - } - }) - } - - // save results to file - if err := saveResults(successResults, resultPath); err != nil { - t.Fatalf("failed to save results: %s", err) - } -} diff --git a/internal/benchmark/setup.go b/internal/benchmark/setup.go index 2c5050bb3..9778027bf 100644 --- a/internal/benchmark/setup.go +++ b/internal/benchmark/setup.go @@ -27,7 +27,6 @@ import ( type SetupSchemasInput struct { BenchmarkCase BenchmarkCase Tree trees.Tree - UnixOnly bool } // Schema setup functions @@ -60,13 +59,13 @@ func setupSchemas( var schema *kwiltypes.Schema var err error if node.IsLeaf { - if input.UnixOnly { + if input.BenchmarkCase.UnixOnly { schema, err = parse.Parse(contracts.PrimitiveStreamUnixContent) } else { schema, err = parse.Parse(contracts.PrimitiveStreamContent) } } else { - if input.UnixOnly { + if input.BenchmarkCase.UnixOnly { schema, err = parse.Parse(contracts.ComposedStreamUnixContent) } else { schema, err = parse.Parse(contracts.ComposedStreamContent) @@ -96,11 +95,11 @@ func setupSchemas( } if err := setupSchema(grpCtx, platform, schema.Schema, setupSchemaInput{ - visibility: input.BenchmarkCase.Visibility, - treeNode: schema.Node, - days: 380, - owner: deployerAddress, - unixOnly: input.UnixOnly, + visibility: input.BenchmarkCase.Visibility, + treeNode: schema.Node, + rangeParams: getMaxRangeParams(input.BenchmarkCase.DataPointsSet, input.BenchmarkCase.UnixOnly), + owner: deployerAddress, + unixOnly: input.BenchmarkCase.UnixOnly, }); err != nil { return errors.Wrap(err, "failed to setup schema") } @@ -151,11 +150,11 @@ func createAndInitializeSchema(ctx context.Context, platform *kwilTesting.Platfo } type setupSchemaInput struct { - visibility util.VisibilityEnum - days int - owner util.EthereumAddress - treeNode trees.TreeNode - unixOnly bool + visibility util.VisibilityEnum + owner util.EthereumAddress + treeNode trees.TreeNode + rangeParams RangeParameters + unixOnly bool } func setupSchema(ctx context.Context, platform *kwilTesting.Platform, schema *kwiltypes.Schema, input setupSchemaInput) error { @@ -169,7 +168,7 @@ func setupSchema(ctx context.Context, platform *kwilTesting.Platform, schema *kw // if it's a leaf, then it's a primitive stream if input.treeNode.IsLeaf { - if err := insertRecordsForPrimitive(ctx, platform, dbid, input.days+1, input.unixOnly); err != nil { + if err := insertRecordsForPrimitive(ctx, platform, dbid, input.rangeParams, input.unixOnly); err != nil { return errors.Wrap(err, "failed to insert records for primitive") } } else { @@ -303,9 +302,8 @@ func batchInsertMetadata(ctx context.Context, platform *kwilTesting.Platform, db // - it generates a random value for each record // - it inserts the records into the stream // - we use a bulk insert to speed up the process -func insertRecordsForPrimitive(ctx context.Context, platform *kwilTesting.Platform, dbid string, days int, unixOnly bool) error { - fromDate := fixedDate.AddDate(0, 0, -days) - records := generateRecords(fromDate, fixedDate, unixOnly) +func insertRecordsForPrimitive(ctx context.Context, platform *kwilTesting.Platform, dbid string, rangeParams RangeParameters, unixOnly bool) error { + records := generateRecords(rangeParams, unixOnly) // Prepare the SQL 
statement for bulk insert sqlStmt := "INSERT INTO primitive_events (date_value, value, created_at) VALUES " @@ -338,6 +336,12 @@ func insertRecordsForPrimitive(ctx context.Context, platform *kwilTesting.Platfo return nil } +type RangeParameters struct { + DataPoints int + FromDate time.Time + ToDate time.Time +} + // setTaxonomyForComposed sets the taxonomy for a composed stream. // - it creates a new taxonomy item for each child stream func setTaxonomyForComposed(ctx context.Context, platform *kwilTesting.Platform, dbid string, input setupSchemaInput) error { @@ -365,10 +369,11 @@ func setTaxonomyForComposed(ctx context.Context, platform *kwilTesting.Platform, streamIdsArg = append(streamIdsArg, t.ChildStream.StreamId.String()) weightsArg = append(weightsArg, int(t.Weight)) } + if input.unixOnly { - startDateArg = strconv.Itoa(int(randDate(fixedDate.AddDate(0, 0, -input.days), fixedDate).Unix())) + startDateArg = strconv.Itoa(int(input.rangeParams.FromDate.Unix())) } else { - startDateArg = randDate(fixedDate.AddDate(0, 0, -input.days), fixedDate).Format(time.DateOnly) + startDateArg = input.rangeParams.FromDate.Format(time.DateOnly) } return executeStreamProcedure(ctx, platform, dbid, "set_taxonomy", diff --git a/internal/benchmark/types.go b/internal/benchmark/types.go index 11dbbd1ff..d38fb25a8 100644 --- a/internal/benchmark/types.go +++ b/internal/benchmark/types.go @@ -11,7 +11,8 @@ type ( BenchmarkCase struct { QtyStreams int BranchingFactor int - Days []int + DataPointsSet []int + UnixOnly bool Visibility util.VisibilityEnum Samples int Procedures []ProcedureEnum @@ -21,7 +22,7 @@ type ( MaxDepth int MemoryUsage uint64 Procedure ProcedureEnum - DaysQueried int + DataPoints int CaseDurations []time.Duration } ) diff --git a/internal/benchmark/utils.go b/internal/benchmark/utils.go index 9c56865d6..997cc07c0 100644 --- a/internal/benchmark/utils.go +++ b/internal/benchmark/utils.go @@ -28,15 +28,15 @@ func getStreamId(index int) *util.StreamId { // generateRecords creates a slice of records with random values for each day // between the given fromDate and toDate, inclusive. 
-func generateRecords(fromDate, toDate time.Time, unixOnly bool) [][]any { +func generateRecords(rangeParams RangeParameters, unixOnly bool) [][]any { var records [][]any if unixOnly { - for d := fromDate; !d.After(toDate); d = d.Add(insertFreqInTime) { + for d := rangeParams.FromDate; !d.After(rangeParams.ToDate); d = d.Add(secondInterval) { value, _ := apd.New(rand.Int63n(100000000000000), 0).Float64() records = append(records, []any{d.Unix(), fmt.Sprintf("%.2f", value)}) } } else { - for d := fromDate; !d.After(toDate); d = d.AddDate(0, 0, 1) { + for d := rangeParams.FromDate; d.Before(rangeParams.ToDate) || d.Equal(rangeParams.ToDate); d = d.Add(dailyInterval) { value, _ := apd.New(rand.Int63n(100000000000000), 0).Float64() records = append(records, []any{d.Format("2006-01-02"), fmt.Sprintf("%.2f", value)}) } @@ -71,14 +71,15 @@ func printResults(results []Result) { fmt.Println("Benchmark Results:") for _, r := range results { fmt.Printf( - "Qty Streams: %d, Branching Factor: %d, Days Queried: %d, Visibility: %s, Procedure: %s, Samples: %d, Memory Usage: %s\n", + "Qty Streams: %d, Branching Factor: %d, Data Points: %d, Visibility: %s, Procedure: %s, Samples: %d, Memory Usage: %s, Unix Only: %t\n", r.Case.QtyStreams, r.Case.BranchingFactor, - r.DaysQueried, + r.DataPoints, visibilityToString(r.Case.Visibility), string(r.Procedure), r.Case.Samples, formatMemoryUsage(r.MemoryUsage), + r.Case.UnixOnly, ) fmt.Printf(" Mean Duration: %v\n", Average(r.CaseDurations)) fmt.Printf(" Min Duration: %v\n", slices.Min(r.CaseDurations)) @@ -103,9 +104,10 @@ func saveResults(results []Result, filePath string) error { Samples: r.Case.Samples, BranchingFactor: r.Case.BranchingFactor, // depth QtyStreams: r.Case.QtyStreams, // n_of_streams - Days: r.DaysQueried, // n_of_dates + DataPoints: r.DataPoints, // n_of_dates DurationMs: Average(r.CaseDurations).Milliseconds(), // duration_ms Visibility: visibilityToString(r.Case.Visibility), // visibility + UnixOnly: r.Case.UnixOnly, // unix_only } } // Save as CSV @@ -203,3 +205,31 @@ func chunk[T any](arr []T, chunkSize int) [][]T { return result } + +// getRangeParameters generates the range parameters for the given data points and unixOnly flag. +// - it generates the fromDate and toDate based on the data points and unixOnly flag +// - it returns the range parameters +func getRangeParameters(dataPoints int, unixOnly bool) RangeParameters { + toDate := fixedDate + var delta int + switch unixOnly { + case true: + delta = int(secondInterval) + case false: + delta = int(dailyInterval) + } + // Subtract (dataPoints - 1) because we want to include the interval at toDate + fromDate := toDate.Add(-time.Duration(delta * (dataPoints - 1))) + return RangeParameters{ + DataPoints: dataPoints, + FromDate: fromDate, + ToDate: toDate, + } +} + +// getMaxRangeParams returns the maximum range parameters for the given data points and unixOnly flag. 
+// - it returns the maximum data points and the range parameters +func getMaxRangeParams(dataPoints []int, unixOnly bool) RangeParameters { + maxDataPoints := slices.Max(dataPoints) + return getRangeParameters(maxDataPoints, unixOnly) +} diff --git a/internal/contracts/composed_stream_template_unix.kf b/internal/contracts/composed_stream_template_unix.kf index c308d5740..b17055ad5 100644 --- a/internal/contracts/composed_stream_template_unix.kf +++ b/internal/contracts/composed_stream_template_unix.kf @@ -184,7 +184,6 @@ procedure get_record_filled($date_from int, $date_to int, $frozen_at int) privat } $unemitted_taxonomies_for_date int[] := $base_taxonomy_list; - $removed_elements_count int := 0; $prev_date int := 0; $current_date int := 0; @@ -204,37 +203,33 @@ procedure get_record_filled($date_from int, $date_to int, $frozen_at int) privat if $current_date != $prev_date { if $prev_date != 0 { for $unemitted_taxonomy in $unemitted_taxonomies_for_date { - // TODO: remove this when we have slices or include if we have just index assignment - // if $unemitted_taxonomy is distinct from null { - - if $last_values[$unemitted_taxonomy] is distinct from null { - // Use the stored dynamic weight from the previous date - $dynamic_weight_prev := get_dynamic_weight($child_stream_id[$unemitted_taxonomy], $prev_date); - return next $prev_date, $last_values[$unemitted_taxonomy] * $dynamic_weight_prev, $dynamic_weight_prev; + // TODO: remove this when we have slices + if $unemitted_taxonomy is distinct from null { + if $last_values[$unemitted_taxonomy] is distinct from null { + // Use the stored dynamic weight from the previous date + $dynamic_weight_prev := get_dynamic_weight($child_stream_id[$unemitted_taxonomy], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy] * $dynamic_weight_prev, $dynamic_weight_prev; + } } } // Clear unemitted taxonomies for the new date $unemitted_taxonomies_for_date := $base_taxonomy_list; - $removed_elements_count := 0; } } - // TODO: uncomment when we have index assignment - // $last_values[$row_raw.taxonomy_index] := $row_raw.value; - // Update the last values for the current date - $last_values := array_update_element($last_values, $row_raw.taxonomy_index, $row_raw.value); + $last_values[$row_raw.taxonomy_index] := $row_raw.value; // Emit current value with the dynamic weight return next $current_date, $row_raw.value * $dynamic_weight, $dynamic_weight; // Remove emitted taxonomy from the unemitted list // we need to subtract the removed_elements_count because the array is shrinking - // TODO: we can improve it when we either can remove elements, or with array element assignment - $unemitted_taxonomies_for_date := remove_array_element($unemitted_taxonomies_for_date, $row_raw.taxonomy_index - $removed_elements_count); - $removed_elements_count := $removed_elements_count + 1; + // TODO: we can improve it when we can remove elements + // $unemitted_taxonomies_for_date := remove_array_element($unemitted_taxonomies_for_date, $row_raw.taxonomy_index - $removed_elements_count); + // TODO: we're making the elements null for now // remove elements is not performant without slices. 
Then let's make the elements null for now - // $unemitted_taxonomies_for_date[$row_raw.taxonomy_index] := null; + $unemitted_taxonomies_for_date[$row_raw.taxonomy_index] := null; $prev_date := $current_date; } @@ -243,10 +238,13 @@ procedure get_record_filled($date_from int, $date_to int, $frozen_at int) privat if $prev_date != 0 { if $taxonomy_count > 0 { for $unemitted_taxonomy2 in $unemitted_taxonomies_for_date { - if $last_values[$unemitted_taxonomy2] is distinct from null { - // Fetch the correct dynamic weight for the last date - $dynamic_weight_last := get_dynamic_weight($child_stream_id[$unemitted_taxonomy2], $prev_date); - return next $prev_date, $last_values[$unemitted_taxonomy2] * $dynamic_weight_last, $dynamic_weight_last; + // TODO: remove this when we have slices + if $unemitted_taxonomy2 is distinct from null { + if $last_values[$unemitted_taxonomy2] is distinct from null { + // Fetch the correct dynamic weight for the last date + $dynamic_weight_last := get_dynamic_weight($child_stream_id[$unemitted_taxonomy2], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy2] * $dynamic_weight_last, $dynamic_weight_last; + } } } } @@ -281,7 +279,6 @@ procedure get_index_filled($date_from int, $date_to int, $frozen_at int, $base_d } $unemitted_taxonomies_for_date int[] := $base_taxonomy_list; - $removed_elements_count int := 0; $prev_date int := 0; $current_date int := 0; @@ -300,37 +297,33 @@ procedure get_index_filled($date_from int, $date_to int, $frozen_at int, $base_d if $current_date != $prev_date { if $prev_date != 0 { for $unemitted_taxonomy in $unemitted_taxonomies_for_date { - if $last_values[$unemitted_taxonomy] is distinct from null { - // TODO: remove this when we have slices or include if we have just index assignment - // if $unemitted_taxonomy is distinct from null { - - // Use the stored dynamic weight from the previous date - $dynamic_weight_prev := get_dynamic_weight($child_stream_id[$unemitted_taxonomy], $prev_date); - return next $prev_date, $last_values[$unemitted_taxonomy] * $dynamic_weight_prev, $dynamic_weight_prev; + // TODO: remove this when we have slices + if $unemitted_taxonomy is distinct from null { + if $last_values[$unemitted_taxonomy] is distinct from null { + // Use the stored dynamic weight from the previous date + $dynamic_weight_prev := get_dynamic_weight($child_stream_id[$unemitted_taxonomy], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy] * $dynamic_weight_prev, $dynamic_weight_prev; + } } } } // Clear unemitted taxonomies for the new date $unemitted_taxonomies_for_date := $base_taxonomy_list; - $removed_elements_count := 0; } - // TODO: uncomment when we have index assignment - // $last_values[$row_raw.taxonomy_index] := $row_raw.value; - // Update the last values for the current date - $last_values := array_update_element($last_values, $row_raw.taxonomy_index, $row_raw.value); + $last_values[$row_raw.taxonomy_index] := $row_raw.value; - // Emit the current value with the dynamic weight + // Emit current value with the dynamic weight return next $current_date, $row_raw.value * $dynamic_weight, $dynamic_weight; // Remove emitted taxonomy from the unemitted list - // TODO: we can improve it when we either can remove elements, or with array element assignment - $unemitted_taxonomies_for_date := remove_array_element($unemitted_taxonomies_for_date, $row_raw.taxonomy_index - $removed_elements_count); - $removed_elements_count := $removed_elements_count + 1; - + // we need to subtract the 
removed_elements_count because the array is shrinking + // TODO: we can improve it when we can remove elements + // $unemitted_taxonomies_for_date := remove_array_element($unemitted_taxonomies_for_date, $row_raw.taxonomy_index - $removed_elements_count); + // TODO: we're making the elements null for now // remove elements is not performant without slices. Then let's make the elements null for now - // $unemitted_taxonomies_for_date[$row_raw.taxonomy_index] := null; + $unemitted_taxonomies_for_date[$row_raw.taxonomy_index] := null; $prev_date := $current_date; } @@ -339,10 +332,13 @@ procedure get_index_filled($date_from int, $date_to int, $frozen_at int, $base_d if $prev_date != 0 { if $taxonomy_count > 0 { for $unemitted_taxonomy2 in $unemitted_taxonomies_for_date { - if $last_values[$unemitted_taxonomy2] is distinct from null { - // Fetch the correct dynamic weight for the last date - $dynamic_weight_last := get_dynamic_weight($child_stream_id[$unemitted_taxonomy2], $prev_date); - return next $prev_date, $last_values[$unemitted_taxonomy2] * $dynamic_weight_last, $dynamic_weight_last; + // TODO: remove this when we have slices + if $unemitted_taxonomy2 is distinct from null { + if $last_values[$unemitted_taxonomy2] is distinct from null { + // Fetch the correct dynamic weight for the last date + $dynamic_weight_last := get_dynamic_weight($child_stream_id[$unemitted_taxonomy2], $prev_date); + return next $prev_date, $last_values[$unemitted_taxonomy2] * $dynamic_weight_last, $dynamic_weight_last; + } } } } @@ -1054,31 +1050,6 @@ procedure emit_values_if($condition bool, $date_value int, $values decimal(36,18 } -procedure remove_array_element($array int[], $index int) private view returns (result int[]) { - // todo: this is too inefficient. - // we should use slices (i.e. arr[1:3]) when supported - $new_array int[]; - for $i in 1..array_length($array) { - if $i != $index { - $new_array := array_append($new_array, $array[$i]); - } - } - - return $new_array; -} - -procedure array_update_element($array decimal(36,18)[], $index int, $value decimal(36,18)) private view returns (result decimal(36,18)[]) { - $new_array decimal(36,18)[]; - for $i in 1..array_length($array) { - if $i == $index { - $new_array := array_append($new_array, $value); - } else { - $new_array := array_append($new_array, $array[$i]); - } - } - return $new_array; -} - // Returns the weight based on the closest start_date before or equal to the given date procedure get_dynamic_weight($stream_id text, $date_value int) private view returns ( weight decimal(36,18) From 53599df8219093aaf46d9e399e699285b6c9b0f2 Mon Sep 17 00:00:00 2001 From: Angelica Willianto <78342026+angelicawill@users.noreply.github.com> Date: Fri, 10 Jan 2025 15:49:24 +0700 Subject: [PATCH 06/14] feat(docs): view terminology around stream metadata (#778) * feat(docs): view terminology around stream metadata * docs: owner and deployer term * docs: read updated terminology --------- Co-authored-by: Mark Curchin <46079535+markholdex@users.noreply.github.com> Co-authored-by: Vadim --- TERMINOLOGY.md | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/TERMINOLOGY.md b/TERMINOLOGY.md index 1ace707d0..75a759ca2 100644 --- a/TERMINOLOGY.md +++ b/TERMINOLOGY.md @@ -32,6 +32,20 @@ This document is a reference for the terminology used in the TN project. It is i - TN GOVERNANCE: The entity or group responsible for approving streams and managing the System Contract. 
- SAFE READ: A query made through the System Contract that only returns data from official streams. - UNSAFE READ: A query made through the System Contract that can return data from any stream, official or unofficial. +- STREAM DEPLOYER: A wallet address that deployed a specific stream contract. +- STREAM OWNER: A wallet address that owns a stream contract. The Stream Owner is not necessarily the Stream Deployer but can create a Composed Stream using contracts deployed by other wallets. +- STREAM READER: A wallet address with permission to read data from a stream. For public streams, any address can be a reader. For private streams, the address must be whitelisted. +- STREAM WRITER: A wallet address with permission to write data to a stream. Must be whitelisted by a stream owner to have write access. +- STREAM VISIBILITY: Access control setting that determines data read permissions: + - PUBLIC: Data can be read by any wallet address + - PRIVATE: Data can only be read by whitelisted wallet addresses +- COMPOSITION VISIBILITY: Access control setting that determines which streams can use this stream as an input: + - PUBLIC: Any stream can compose using this stream + - PRIVATE: Only whitelisted streams can compose using this stream +- WALLET PROFILE: Public information associated with a wallet address in the Explorer, including: + - Identity information + - Associated streams (deployed, owned, writable, readable) + - Public activity and contributions ## Avoid From 70455007f7706bc632691fbebca4705c68e05c01 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 3 Feb 2025 09:33:00 -0300 Subject: [PATCH 07/14] test: increase stream storage test scale to 100 streams --- tests/integration/storage/config.go | 15 + tests/integration/storage/stream_manager.go | 172 +++++++ .../storage/stream_storage_test.go | 485 ++++++++++++++++++ 3 files changed, 672 insertions(+) create mode 100644 tests/integration/storage/config.go create mode 100644 tests/integration/storage/stream_manager.go create mode 100644 tests/integration/storage/stream_storage_test.go diff --git a/tests/integration/storage/config.go b/tests/integration/storage/config.go new file mode 100644 index 000000000..b7240c2e6 --- /dev/null +++ b/tests/integration/storage/config.go @@ -0,0 +1,15 @@ +package stream_storage_test + +// Configuration constants +const ( + // Test configuration + numStreams = 100 + workers = 30 + + // Docker configuration + networkName = "test-network" + + // SDK configuration + TestPrivateKey = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" + TestKwilProvider = "http://localhost:8484" +) diff --git a/tests/integration/storage/stream_manager.go b/tests/integration/storage/stream_manager.go new file mode 100644 index 000000000..8db3949d0 --- /dev/null +++ b/tests/integration/storage/stream_manager.go @@ -0,0 +1,172 @@ +package stream_storage_test + +import ( + "context" + "fmt" + "strings" + "testing" + "time" + + kwilcrypto "github.com/kwilteam/kwil-db/core/crypto" + "github.com/kwilteam/kwil-db/core/crypto/auth" + "github.com/kwilteam/kwil-db/core/types/transactions" + "github.com/trufnetwork/sdk-go/core/tnclient" + "github.com/trufnetwork/sdk-go/core/types" + "github.com/trufnetwork/sdk-go/core/util" + "golang.org/x/sync/errgroup" +) + +// streamInfo holds information about a deployed stream +// It assumes the constants TestPrivateKey, TestKwilProvider and workers are defined in the package + +type streamInfo struct { + name string + streamId util.StreamId + streamLocator types.StreamLocator +} + 
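+// A minimal usage sketch for streamInfo (illustrative only; "client" is
+// assumed to be an already-connected *tnclient.Client, as built in
+// newStreamManager below):
+//
+//	sid := util.GenerateStreamId("stream-0")
+//	info := streamInfo{
+//		name:          "stream-0",
+//		streamId:      sid,
+//		streamLocator: client.OwnStreamLocator(sid),
+//	}
+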
+// txInfo holds information about a transaction + +type txInfo struct { + hash transactions.TxHash + streamId string +} + +// streamManager handles stream operations + +type streamManager struct { + client *tnclient.Client + t *testing.T +} + +// newStreamManager creates a new stream manager using the global TestPrivateKey and TestKwilProvider constants +func newStreamManager(ctx context.Context, t *testing.T) (*streamManager, error) { + pk, err := kwilcrypto.Secp256k1PrivateKeyFromHex(TestPrivateKey) + if err != nil { + return nil, fmt.Errorf("failed to parse private key: %w", err) + } + + client, err := tnclient.NewClient( + ctx, + TestKwilProvider, + tnclient.WithSigner(&auth.EthPersonalSigner{Key: *pk}), + ) + if err != nil { + return nil, fmt.Errorf("failed to create TN client: %w", err) + } + + return &streamManager{client: client, t: t}, nil +} + +// retryIfNonceError tries the operation up to 5 times if a nonce-related error is detected. +func (sm *streamManager) retryIfNonceError(ctx context.Context, operation func() (transactions.TxHash, error)) (transactions.TxHash, error) { + const maxAttempts = 5 + var lastErr error + + for attempts := 1; attempts <= maxAttempts; attempts++ { + txHash, err := operation() + if err == nil { + return txHash, nil + } + + if strings.Contains(strings.ToLower(err.Error()), "nonce") { + lastErr = err + sm.t.Logf("Nonce error detected (attempt %d/%d): %v", attempts, maxAttempts, err) + time.Sleep(1 * time.Second) + continue + } + return txHash, err + } + return transactions.TxHash(""), fmt.Errorf("operation failed after %d attempts. Last error: %w", maxAttempts, lastErr) +} + +// submitAndWaitForTxs is a generic helper to reduce duplication in deploy, initialize, and destroy steps. +func (sm *streamManager) submitAndWaitForTxs(ctx context.Context, items []string, submitFunc func(string) (transactions.TxHash, error)) error { + txInfos := make([]txInfo, 0, len(items)) + + for _, item := range items { + txHash, err := sm.retryIfNonceError(ctx, func() (transactions.TxHash, error) { + return submitFunc(item) + }) + if err != nil { + return fmt.Errorf("failed to submit tx for item %s: %w", item, err) + } + sm.t.Logf("Submitted TX for %s, hash: %s", item, txHash) + txInfos = append(txInfos, txInfo{hash: txHash, streamId: item}) + } + + eg, ctx := errgroup.WithContext(ctx) + sem := make(chan struct{}, workers) + for _, tx := range txInfos { + tx := tx + sem <- struct{}{} + eg.Go(func() error { + defer func() { <-sem }() + _, err := sm.client.WaitForTx(ctx, tx.hash, time.Second) + if err != nil { + return fmt.Errorf("tx %s for %s not mined: %w", tx.hash, tx.streamId, err) + } + sm.t.Logf("TX mined for %s, hash: %s", tx.streamId, tx.hash) + return nil + }) + } + return eg.Wait() +} + +// deployStreams deploys streams using the generic submitAndWaitForTxs helper. 
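+// Stream names follow the "stream-%d" pattern, and util.GenerateStreamId maps
+// each name deterministically to its stream ID, which is what allows
+// initializeStreams and destroyStreams to re-derive the same IDs later.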
+func (sm *streamManager) deployStreams(ctx context.Context, count int) ([]streamInfo, error) { + sm.t.Logf("Deploying %d streams", count) + streamNames := make([]string, count) + for i := 0; i < count; i++ { + streamNames[i] = fmt.Sprintf("stream-%d", i) + } + + err := sm.submitAndWaitForTxs(ctx, streamNames, func(s string) (transactions.TxHash, error) { + streamId := util.GenerateStreamId(s) + return sm.client.DeployStream(ctx, streamId, types.StreamTypePrimitiveUnix) + }) + if err != nil { + return nil, err + } + + streams := make([]streamInfo, count) + for i, name := range streamNames { + sid := util.GenerateStreamId(name) + streams[i] = streamInfo{ + name: name, + streamId: sid, + streamLocator: sm.client.OwnStreamLocator(sid), + } + } + return streams, nil +} + +// initializeStreams initializes streams using the submitAndWaitForTxs helper. +func (sm *streamManager) initializeStreams(ctx context.Context, streams []streamInfo) error { + sm.t.Logf("Initializing %d streams", len(streams)) + names := make([]string, len(streams)) + for i, s := range streams { + names[i] = s.name + } + return sm.submitAndWaitForTxs(ctx, names, func(s string) (transactions.TxHash, error) { + sid := util.GenerateStreamId(s) + stream, err := sm.client.LoadPrimitiveStream(sm.client.OwnStreamLocator(sid)) + if err != nil { + return transactions.TxHash(""), err + } + return stream.InitializeStream(ctx) + }) +} + +// destroyStreams destroys streams using the submitAndWaitForTxs helper. +func (sm *streamManager) destroyStreams(ctx context.Context, count int) error { + sm.t.Logf("Destroying %d streams", count) + names := make([]string, count) + for i := 0; i < count; i++ { + names[i] = fmt.Sprintf("stream-%d", i) + } + return sm.submitAndWaitForTxs(ctx, names, func(s string) (transactions.TxHash, error) { + sid := util.GenerateStreamId(s) + return sm.client.DestroyStream(ctx, sid) + }) +} diff --git a/tests/integration/storage/stream_storage_test.go b/tests/integration/storage/stream_storage_test.go new file mode 100644 index 000000000..8ad6edf6e --- /dev/null +++ b/tests/integration/storage/stream_storage_test.go @@ -0,0 +1,485 @@ +package stream_storage_test + +import ( + "bytes" + "context" + "errors" + "fmt" + "os/exec" + "strconv" + "strings" + "testing" + "time" + + "github.com/ethereum/go-ethereum/crypto" + kwilcrypto "github.com/kwilteam/kwil-db/core/crypto" + "github.com/kwilteam/kwil-db/core/crypto/auth" + "github.com/trufnetwork/sdk-go/core/tnclient" + "github.com/trufnetwork/sdk-go/core/util" +) + +/* + * Test the stream storage + + 1. Start containers for kwil and postgres, isolated from the rest of the system + 2. measure the current directory size: + - postgres: /var/lib/postgresql/data + - kwil: /root/.kwildb/ + 3. run the stream storage test: + - create 1,000 streams + - initialize all streams with 1000 messages each + 4. measure the directory size of kwil and postgres + 5. drop all streams + 6. measure the directory size of kwil and postgres + 7. log all sizes + 8. 
teardown containers +*/ + +// containerSpec defines the configuration for a container +type containerSpec struct { + name string + image string + tmpfsPath string + envVars []string + healthCheck func(d *docker) error +} + +// testContainers defines the containers needed for the test +var containers = struct { + postgres containerSpec + tsndb containerSpec +}{ + postgres: containerSpec{ + name: "test-kwil-postgres", + image: "kwildb/postgres:latest", + tmpfsPath: "/var/lib/postgresql/data", + envVars: []string{"POSTGRES_HOST_AUTH_METHOD=trust"}, + healthCheck: func(d *docker) error { + _, err := d.exec("test-kwil-postgres", "pg_isready", "-U", "postgres") + return err + }, + }, + tsndb: containerSpec{ + name: "test-tsn-db", + image: "tsn-db:local", + tmpfsPath: "/root/.kwild", + envVars: []string{ + "CONFIG_PATH=/root/.kwild", + "KWILD_APP_HOSTNAME=test-tsn-db", + "KWILD_APP_PG_DB_HOST=test-kwil-postgres", + "KWILD_CHAIN_P2P_EXTERNAL_ADDRESS=http://test-tsn-db:26656", + }, + healthCheck: func(d *docker) error { + // Wait for the service to be ready + time.Sleep(5 * time.Second) + _, err := d.exec("test-tsn-db", "ps", "aux") + return err + }, + }, +} + +// docker provides a simplified interface for docker operations +type docker struct { + t *testing.T +} + +// newDocker creates a new docker helper +func newDocker(t *testing.T) *docker { + return &docker{t: t} +} + +// exec executes a command in a container +func (d *docker) exec(container string, args ...string) (string, error) { + cmdArgs := append([]string{"exec", container}, args...) + return d.run(cmdArgs...) +} + +// run executes a docker command +func (d *docker) run(args ...string) (string, error) { + cmd := exec.Command("docker", args...) + var out bytes.Buffer + cmd.Stdout = &out + cmd.Stderr = &out + err := cmd.Run() + return out.String(), err +} + +// failWithLogsOnError logs container logs and fails the test if err is non-nil. +func (d *docker) failWithLogsOnError(err error, containerName string) { + if err != nil { + if logs, logsErr := d.run("logs", containerName); logsErr == nil { + d.t.Logf("Logs for %s:\n%s", containerName, logs) + } + d.t.Fatal(err) + } +} + +// pollUntilTrue polls a condition until it returns true or a timeout is reached. +func pollUntilTrue(ctx context.Context, timeout time.Duration, check func() bool) error { + deadline := time.Now().Add(timeout) + for time.Now().Before(deadline) { + if check() { + return nil + } + time.Sleep(time.Second) + } + return errors.New("condition not met within timeout") +} + +// startContainer starts a container with the given spec and waits for it to be healthy. +func (d *docker) startContainer(spec containerSpec) error { + args := []string{"run", "--rm", "--name", spec.name, "--network", networkName, "-d"} + + if spec.tmpfsPath != "" { + args = append(args, "--tmpfs", spec.tmpfsPath) + } + + for _, env := range spec.envVars { + args = append(args, "-e", env) + } + + if spec.name == "test-tsn-db" { + args = append(args, + "-p", "50051:50051", + "-p", "50151:50151", + "-p", "8080:8080", + "-p", "8484:8484", + "-p", "26656:26656", + "-p", "26657:26657", + "--entrypoint", "/app/kwild", + spec.image, + "--autogen", + "--app.pg-db-host", "test-kwil-postgres", + "--app.hostname", "test-tsn-db", + "--chain.p2p.external-address", "http://test-tsn-db:26656", + ) + } else { + args = append(args, spec.image) + } + + out, err := d.run(args...) 
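+	// Note: d.run collects stdout and stderr into a single buffer, so the
+	// container's combined output can be surfaced in the error below.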
+ if err != nil { + return fmt.Errorf("failed to start container %s: %w\nOutput: %s", spec.name, err, out) + } + + if spec.healthCheck != nil { + err := pollUntilTrue(context.Background(), 10*time.Second, func() bool { + return spec.healthCheck(d) == nil + }) + if err != nil { + if logs, logsErr := d.run("logs", spec.name); logsErr == nil { + d.t.Logf("Container logs for %s:\n%s", spec.name, logs) + } + return fmt.Errorf("container %s failed to become healthy: %w", spec.name, err) + } + } + + if spec.name == "test-tsn-db" { + err := pollUntilTrue(context.Background(), 30*time.Second, func() bool { + out, err := exec.Command("curl", "-s", "-o", "/dev/null", "-w", "%{http_code}", "http://localhost:8484/api/v1/health").Output() + if err != nil { + return false + } + return strings.TrimSpace(string(out)) == "200" + }) + if err != nil { + if logs, logsErr := d.run("logs", spec.name); logsErr == nil { + d.t.Logf("Container logs for %s:\n%s", spec.name, logs) + } + return fmt.Errorf("RPC server in container %s failed to become ready: %w", spec.name, err) + } + } + + return nil +} + +// stopContainer stops a container +func (d *docker) stopContainer(name string) error { + _, err := d.run("stop", name) + if err != nil { + return fmt.Errorf("failed to stop container %s: %w", name, err) + } + d.t.Logf("Stopped container %s", name) + return nil +} + +// setupNetwork creates a docker network +func (d *docker) setupNetwork() error { + d.run("network", "rm", networkName) + _, err := d.run("network", "create", networkName) + return err +} + +// teardownNetwork removes the docker network +func (d *docker) teardownNetwork() error { + _, err := d.run("network", "rm", networkName) + return err +} + +// measureSize measures the size of a directory in a container +func (d *docker) measureSize(container, path string) (int64, error) { + out, err := d.exec(container, "du", "-sb", path) + if err != nil { + return 0, err + } + return parseDuOutput(out) +} + +// runCommand executes a command and returns its combined output or error. +func runCommand(name string, args ...string) (string, error) { + cmd := exec.Command(name, args...) + var out bytes.Buffer + cmd.Stdout = &out + cmd.Stderr = &out + err := cmd.Run() + return out.String(), err +} + +// parseDuOutput parses the output of du command and returns the size in bytes +func parseDuOutput(output string) (int64, error) { + fields := strings.Fields(output) + if len(fields) < 1 { + return 0, fmt.Errorf("unexpected du output: %s", output) + } + return strconv.ParseInt(fields[0], 10, 64) +} + +// bytesToMB converts bytes to megabytes with 2 decimal places +func bytesToMB(bytes int64) float64 { + return float64(bytes) / (1024 * 1024) +} + +// cleanup removes all docker resources +func (d *docker) cleanup() { + // Get all container IDs + out, err := d.run("ps", "-aq") + if err == nil && out != "" { + containers := strings.Fields(out) + if len(containers) > 0 { + killArgs := append([]string{"kill"}, containers...) + d.run(killArgs...) + rmArgs := append([]string{"rm"}, containers...) + d.run(rmArgs...) 
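+			// Errors from kill/rm are ignored here: cleanup is best-effort.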
+ } + } + // Remove networks + d.run("network", "prune", "-f") + // Remove volume + d.run("volume", "rm", "tsn-config") +} + +func TestStreamStorage(t *testing.T) { + ctx := context.Background() + + // Setup docker helper + d := newDocker(t) + + // Clean up any existing resources + d.cleanup() + + // Create network + if err := d.setupNetwork(); err != nil { + t.Fatal(err) + } + defer d.teardownNetwork() + + // Start postgres first + if err := d.startContainer(containers.postgres); err != nil { + t.Fatal(err) + } + defer d.stopContainer(containers.postgres.name) + + // Wait for postgres to be healthy + for i := 0; i < 10; i++ { + if err := containers.postgres.healthCheck(d); err == nil { + break + } + if i == 9 { + t.Fatal("postgres failed to become healthy") + } + time.Sleep(time.Second) + } + + // Start tsn-db with autogen + t.Log("Starting tsn-db container...") + if err := d.startContainer(containers.tsndb); err != nil { + // Get logs before failing + if out, err := d.run("logs", containers.tsndb.name); err == nil { + t.Logf("TSN-DB container logs:\n%s", out) + } else { + t.Logf("Failed to get TSN-DB logs: %v", err) + } + // Get container status + if status, err := d.run("inspect", "--format", "{{.State.Status}}", containers.tsndb.name); err == nil { + t.Logf("TSN-DB container status: %s", status) + } + t.Fatalf("Failed to start tsn-db container: %v", err) + } + t.Log("TSN-DB container started successfully") + + // Wait for node to be fully initialized + t.Log("Waiting for node to be fully initialized...") + for i := 0; i < 30; i++ { // 30 seconds max wait + healthCmd := exec.Command("curl", "-s", TestKwilProvider+"/api/v1/health") + healthOut, healthErr := healthCmd.CombinedOutput() + if healthErr == nil { + t.Logf("Health check response: %s", string(healthOut)) + if strings.Contains(string(healthOut), `"healthy":true`) && + strings.Contains(string(healthOut), `"block_height":1`) { + t.Log("Node is healthy and has produced the first block") + break + } + } + if i == 29 { + t.Fatal("Node failed to become healthy or produce the first block") + } + time.Sleep(time.Second) + } + + // Get initial container logs + if out, err := d.run("logs", containers.tsndb.name); err == nil { + t.Logf("Initial TSN-DB container logs:\n%s", out) + } + + defer d.stopContainer(containers.tsndb.name) + + // Measure initial sizes + pgSizeBefore, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) + if err != nil { + t.Fatal(err) + } + tsnSizeBefore, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) + if err != nil { + t.Fatal(err) + } + t.Logf("Initial sizes - Postgres: %.2f MB, TSN-DB: %.2f MB", bytesToMB(pgSizeBefore), bytesToMB(tsnSizeBefore)) + + // Initialize stream manager + t.Log("Creating private key...") + pk, err := kwilcrypto.Secp256k1PrivateKeyFromHex(TestPrivateKey) + if err != nil { + t.Fatalf("Failed to parse private key: %v", err) + } + t.Log("Successfully created private key") + + t.Log("Creating TN client...") + t.Logf("Using provider: %s", TestKwilProvider) + + // Get the Ethereum address from the public key + pubKeyBytes := pk.PubKey().Bytes() + // Remove the first byte which is the compression flag + pubKeyBytes = pubKeyBytes[1:] + addr, err := util.NewEthereumAddressFromBytes(crypto.Keccak256(pubKeyBytes)[12:]) + if err != nil { + t.Fatalf("Failed to get address from public key: %v", err) + } + t.Logf("Using signer with address: %s", addr.Address()) + + t.Log("Attempting to create client...") + var client *tnclient.Client + var lastErr error + 
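+	// Each attempt below first curls the health endpoint and only dials the
+	// client once a 200 status comes back, so creation is never attempted
+	// against a server that is not yet accepting connections; each failed
+	// attempt backs off for one second before retrying.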
for i := 0; i < 60; i++ { // 60 seconds max wait + t.Logf("Attempt %d/60: Creating client with provider URL %s", i+1, TestKwilProvider) + + // First check if the server is accepting connections + cmd := exec.Command("curl", "-s", "-w", "\n%{http_code}", "http://localhost:8484/api/v1/health") + out, err := cmd.CombinedOutput() + if err != nil { + lastErr = fmt.Errorf("health check command failed: %w", err) + t.Logf("Health check command failed: %v", err) + time.Sleep(time.Second) + continue + } + + // Split output into response body and status code + parts := strings.Split(string(out), "\n") + if len(parts) != 2 { + lastErr = fmt.Errorf("unexpected health check output format: %s", string(out)) + t.Logf("Health check output format error: %s", string(out)) + time.Sleep(time.Second) + continue + } + + statusCode := strings.TrimSpace(parts[1]) + t.Logf("Health check response - Status: %s", statusCode) + + if statusCode != "200" { + lastErr = fmt.Errorf("health check returned non-200 status: %s", statusCode) + t.Logf("Health check failed with status %s", statusCode) + time.Sleep(time.Second) + continue + } + + t.Log("Health check passed, attempting to create client...") + + // Try to create the client now that we know the server is accepting connections + client, err = tnclient.NewClient( + ctx, + TestKwilProvider, + tnclient.WithSigner(&auth.EthPersonalSigner{Key: *pk}), + ) + if err != nil { + lastErr = fmt.Errorf("failed to create TN client: %w", err) + t.Logf("Client creation failed: %v", err) + time.Sleep(time.Second) + continue + } + + // Successfully created client + t.Log("Client created successfully") + break + } + + if client == nil { + t.Fatalf("Failed to create client after 60 attempts. Last error: %v", lastErr) + } + + sm, err := newStreamManager(ctx, t) + if err != nil { + t.Fatalf("Failed to create stream manager: %v", err) + } + + // Deploy streams + streams, err := sm.deployStreams(ctx, numStreams) + if err != nil { + t.Fatal(err) + } + + // Initialize streams + if err := sm.initializeStreams(ctx, streams); err != nil { + t.Fatal(err) + } + + // Measure sizes after creation + pgSizeAfter, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) + if err != nil { + t.Fatal(err) + } + tsnSizeAfter, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) + if err != nil { + t.Fatal(err) + } + t.Logf("After creation - Postgres: %.2f MB, TSN-DB: %.2f MB", bytesToMB(pgSizeAfter), bytesToMB(tsnSizeAfter)) + + // Destroy streams + if err := sm.destroyStreams(ctx, numStreams); err != nil { + t.Fatal(err) + } + + // Measure final sizes + pgSizeFinal, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) + if err != nil { + t.Fatal(err) + } + tsnSizeFinal, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) + if err != nil { + t.Fatal(err) + } + + // Log all measurements + t.Log("Final measurements:") + t.Logf("Postgres (MB): before=%.2f, after=%.2f, final=%.2f", + bytesToMB(pgSizeBefore), bytesToMB(pgSizeAfter), bytesToMB(pgSizeFinal)) + t.Logf("TSN-DB (MB): before=%.2f, after=%.2f, final=%.2f", + bytesToMB(tsnSizeBefore), bytesToMB(tsnSizeAfter), bytesToMB(tsnSizeFinal)) +} From 978b23ab0cd803e66f8b10845cff246919cf86e4 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 3 Feb 2025 09:33:49 -0300 Subject: [PATCH 08/14] chore: update dependencies and add validation packages --- go.mod | 7 ++++++- go.sum | 18 ++++++++++++++++-- 2 files changed, 22 insertions(+), 3 deletions(-) diff --git a/go.mod b/go.mod 
index ce5d2db38..fc82902c8 100644 --- a/go.mod +++ b/go.mod @@ -19,7 +19,7 @@ require ( github.com/pkg/errors v0.9.1 github.com/samber/lo v1.47.0 github.com/stretchr/testify v1.9.0 - github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 + github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 go.uber.org/zap v1.27.0 golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 golang.org/x/sync v0.8.0 @@ -67,6 +67,7 @@ require ( github.com/ethereum/go-ethereum v1.14.8 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect + github.com/gabriel-vasile/mimetype v1.4.3 // indirect github.com/getsentry/sentry-go v0.27.0 // indirect github.com/go-kit/kit v0.12.0 // indirect github.com/go-kit/log v0.2.1 // indirect @@ -74,6 +75,9 @@ require ( github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-ole/go-ole v1.3.0 // indirect + github.com/go-playground/locales v0.14.1 // indirect + github.com/go-playground/universal-translator v0.18.1 // indirect + github.com/go-playground/validator/v10 v10.22.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/glog v1.2.1 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect @@ -102,6 +106,7 @@ require ( github.com/kr/pretty v0.3.1 // indirect github.com/kr/text v0.2.0 // indirect github.com/kwilteam/kwil-extensions v0.0.0-20230727040522-1cfd930226b7 // indirect + github.com/leodido/go-urn v1.4.0 // indirect github.com/lib/pq v1.10.7 // indirect github.com/linxGnu/grocksdb v1.8.14 // indirect github.com/magiconair/properties v1.8.7 // indirect diff --git a/go.sum b/go.sum index 09fccb113..c39278497 100644 --- a/go.sum +++ b/go.sum @@ -160,6 +160,8 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= +github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0= +github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqGNY4FhTFhk+o9oFHGINQ/+vhlm8HFzi6znCI= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww= github.com/getsentry/sentry-go v0.27.0 h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps= @@ -180,6 +182,14 @@ github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE= github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78= +github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= +github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= +github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= +github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= +github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= +github.com/go-playground/universal-translator v0.18.1/go.mod 
h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= +github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao= +github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM= github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw= github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= @@ -305,6 +315,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c= github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8= +github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= +github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= github.com/lib/pq v1.10.7 h1:p7ZhMD+KsSRozJr34udlUrhboJwWAgCg34+/ZZNvZZw= github.com/lib/pq v1.10.7/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/linxGnu/grocksdb v1.8.14 h1:HTgyYalNwBSG/1qCQUIott44wU5b2Y9Kr3z7SK5OfGQ= @@ -467,8 +479,10 @@ github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYg github.com/tklauser/numcpus v0.8.0/go.mod h1:ZJZlAY+dmR4eut8epnzf0u/VwodKmryxR8txiloSqBE= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 h1:TvtdmeYsYEij78hS4oxnwikoiLdIrgav3BA+CbhaDAI= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346/go.mod h1:xKQhd7snlzKFuUi1taTGWjpRE8iFTA06DeacYi3CVFQ= -github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 h1:LNZ99bITHatmYKVHH4YqBhpuAg2bUx8SlP2VHAWR4jE= -github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52/go.mod h1:xfGmTkamZxyAOG331+P2oceLFxR7u/4lIoELyYEeCR4= +github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282 h1:LTzgVlKoYE+laTizgju24seT2fnIIcNqC+fN2SYYsRM= +github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE= +github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 h1:NlaXTkXnUiKCektYmF0WodsvYN/fCHvB4fuu0DDkpmM= +github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE= github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8= github.com/tyler-smith/go-bip39 v1.1.0/go.mod h1:gUYDtqQw1JS3ZJ8UWVcGTGqqr6YIN3CWg+kkNaLt55U= github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= From b250805af3a8293c51a01f6b01f3608b2bd36a72 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 3 Feb 2025 09:51:34 -0300 Subject: [PATCH 09/14] fix: correct snapshot flag in entrypoint script Update the snapshots flag from `--app.snapshots.enabled` to `--app.snapshots.enable` in the tsn-entrypoint.sh script to ensure proper snapshot configuration. 
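Depending on how kwild parses its flags, the misspelled flag may have been
rejected at startup or silently ignored; either way, snapshots were not
being enabled as intended. The accepted spelling can be double-checked
against the binary's own help output, for example:

    /app/kwild --help | grep snapshots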
--- deployments/tsn-entrypoint.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deployments/tsn-entrypoint.sh b/deployments/tsn-entrypoint.sh index 5e2d4a447..d94ce6d19 100644 --- a/deployments/tsn-entrypoint.sh +++ b/deployments/tsn-entrypoint.sh @@ -7,6 +7,6 @@ exec /app/kwild --root-dir $CONFIG_PATH \ --app.jsonrpc-listen-addr "0.0.0.0:8484"\ --app.db-read-timeout "60s"\ - --app.snapshots.enabled\ + --app.snapshots.enable\ --chain.p2p.listen-addr "tcp://0.0.0.0:26656"\ --chain.rpc.listen-addr "tcp://0.0.0.0:26657" From bab9cf1d4881250b38e46b845c7e0489ca3b1ae1 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 3 Feb 2025 10:03:46 -0300 Subject: [PATCH 10/14] Revert "chore: update dependencies and add validation packages" This reverts commit 978b23ab0cd803e66f8b10845cff246919cf86e4. --- go.mod | 7 +------ go.sum | 18 ++---------------- 2 files changed, 3 insertions(+), 22 deletions(-) diff --git a/go.mod b/go.mod index fc82902c8..ce5d2db38 100644 --- a/go.mod +++ b/go.mod @@ -19,7 +19,7 @@ require ( github.com/pkg/errors v0.9.1 github.com/samber/lo v1.47.0 github.com/stretchr/testify v1.9.0 - github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 + github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 go.uber.org/zap v1.27.0 golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 golang.org/x/sync v0.8.0 @@ -67,7 +67,6 @@ require ( github.com/ethereum/go-ethereum v1.14.8 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect - github.com/gabriel-vasile/mimetype v1.4.3 // indirect github.com/getsentry/sentry-go v0.27.0 // indirect github.com/go-kit/kit v0.12.0 // indirect github.com/go-kit/log v0.2.1 // indirect @@ -75,9 +74,6 @@ require ( github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-ole/go-ole v1.3.0 // indirect - github.com/go-playground/locales v0.14.1 // indirect - github.com/go-playground/universal-translator v0.18.1 // indirect - github.com/go-playground/validator/v10 v10.22.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/glog v1.2.1 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect @@ -106,7 +102,6 @@ require ( github.com/kr/pretty v0.3.1 // indirect github.com/kr/text v0.2.0 // indirect github.com/kwilteam/kwil-extensions v0.0.0-20230727040522-1cfd930226b7 // indirect - github.com/leodido/go-urn v1.4.0 // indirect github.com/lib/pq v1.10.7 // indirect github.com/linxGnu/grocksdb v1.8.14 // indirect github.com/magiconair/properties v1.8.7 // indirect diff --git a/go.sum b/go.sum index c39278497..09fccb113 100644 --- a/go.sum +++ b/go.sum @@ -160,8 +160,6 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= -github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0= -github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqGNY4FhTFhk+o9oFHGINQ/+vhlm8HFzi6znCI= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww= github.com/getsentry/sentry-go v0.27.0 
h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps= @@ -182,14 +180,6 @@ github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE= github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78= -github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= -github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= -github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= -github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= -github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= -github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= -github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao= -github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM= github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw= github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= @@ -315,8 +305,6 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c= github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8= -github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= -github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= github.com/lib/pq v1.10.7 h1:p7ZhMD+KsSRozJr34udlUrhboJwWAgCg34+/ZZNvZZw= github.com/lib/pq v1.10.7/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/linxGnu/grocksdb v1.8.14 h1:HTgyYalNwBSG/1qCQUIott44wU5b2Y9Kr3z7SK5OfGQ= @@ -479,10 +467,8 @@ github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYg github.com/tklauser/numcpus v0.8.0/go.mod h1:ZJZlAY+dmR4eut8epnzf0u/VwodKmryxR8txiloSqBE= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 h1:TvtdmeYsYEij78hS4oxnwikoiLdIrgav3BA+CbhaDAI= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346/go.mod h1:xKQhd7snlzKFuUi1taTGWjpRE8iFTA06DeacYi3CVFQ= -github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282 h1:LTzgVlKoYE+laTizgju24seT2fnIIcNqC+fN2SYYsRM= -github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE= -github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 h1:NlaXTkXnUiKCektYmF0WodsvYN/fCHvB4fuu0DDkpmM= -github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE= +github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 h1:LNZ99bITHatmYKVHH4YqBhpuAg2bUx8SlP2VHAWR4jE= +github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52/go.mod h1:xfGmTkamZxyAOG331+P2oceLFxR7u/4lIoELyYEeCR4= github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8= github.com/tyler-smith/go-bip39 v1.1.0/go.mod h1:gUYDtqQw1JS3ZJ8UWVcGTGqqr6YIN3CWg+kkNaLt55U= 
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= From c32a0b5b7fcaf3cd59e6ffbf8445b826ba2731e0 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 3 Feb 2025 10:04:11 -0300 Subject: [PATCH 11/14] Revert "test: increase stream storage test scale to 100 streams" This reverts commit 70455007f7706bc632691fbebca4705c68e05c01. --- tests/integration/storage/config.go | 15 - tests/integration/storage/stream_manager.go | 172 ------- .../storage/stream_storage_test.go | 485 ------------------ 3 files changed, 672 deletions(-) delete mode 100644 tests/integration/storage/config.go delete mode 100644 tests/integration/storage/stream_manager.go delete mode 100644 tests/integration/storage/stream_storage_test.go diff --git a/tests/integration/storage/config.go b/tests/integration/storage/config.go deleted file mode 100644 index b7240c2e6..000000000 --- a/tests/integration/storage/config.go +++ /dev/null @@ -1,15 +0,0 @@ -package stream_storage_test - -// Configuration constants -const ( - // Test configuration - numStreams = 100 - workers = 30 - - // Docker configuration - networkName = "test-network" - - // SDK configuration - TestPrivateKey = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" - TestKwilProvider = "http://localhost:8484" -) diff --git a/tests/integration/storage/stream_manager.go b/tests/integration/storage/stream_manager.go deleted file mode 100644 index 8db3949d0..000000000 --- a/tests/integration/storage/stream_manager.go +++ /dev/null @@ -1,172 +0,0 @@ -package stream_storage_test - -import ( - "context" - "fmt" - "strings" - "testing" - "time" - - kwilcrypto "github.com/kwilteam/kwil-db/core/crypto" - "github.com/kwilteam/kwil-db/core/crypto/auth" - "github.com/kwilteam/kwil-db/core/types/transactions" - "github.com/trufnetwork/sdk-go/core/tnclient" - "github.com/trufnetwork/sdk-go/core/types" - "github.com/trufnetwork/sdk-go/core/util" - "golang.org/x/sync/errgroup" -) - -// streamInfo holds information about a deployed stream -// It assumes the constants TestPrivateKey, TestKwilProvider and workers are defined in the package - -type streamInfo struct { - name string - streamId util.StreamId - streamLocator types.StreamLocator -} - -// txInfo holds information about a transaction - -type txInfo struct { - hash transactions.TxHash - streamId string -} - -// streamManager handles stream operations - -type streamManager struct { - client *tnclient.Client - t *testing.T -} - -// newStreamManager creates a new stream manager using the global TestPrivateKey and TestKwilProvider constants -func newStreamManager(ctx context.Context, t *testing.T) (*streamManager, error) { - pk, err := kwilcrypto.Secp256k1PrivateKeyFromHex(TestPrivateKey) - if err != nil { - return nil, fmt.Errorf("failed to parse private key: %w", err) - } - - client, err := tnclient.NewClient( - ctx, - TestKwilProvider, - tnclient.WithSigner(&auth.EthPersonalSigner{Key: *pk}), - ) - if err != nil { - return nil, fmt.Errorf("failed to create TN client: %w", err) - } - - return &streamManager{client: client, t: t}, nil -} - -// retryIfNonceError tries the operation up to 5 times if a nonce-related error is detected. 
-func (sm *streamManager) retryIfNonceError(ctx context.Context, operation func() (transactions.TxHash, error)) (transactions.TxHash, error) { - const maxAttempts = 5 - var lastErr error - - for attempts := 1; attempts <= maxAttempts; attempts++ { - txHash, err := operation() - if err == nil { - return txHash, nil - } - - if strings.Contains(strings.ToLower(err.Error()), "nonce") { - lastErr = err - sm.t.Logf("Nonce error detected (attempt %d/%d): %v", attempts, maxAttempts, err) - time.Sleep(1 * time.Second) - continue - } - return txHash, err - } - return transactions.TxHash(""), fmt.Errorf("operation failed after %d attempts. Last error: %w", maxAttempts, lastErr) -} - -// submitAndWaitForTxs is a generic helper to reduce duplication in deploy, initialize, and destroy steps. -func (sm *streamManager) submitAndWaitForTxs(ctx context.Context, items []string, submitFunc func(string) (transactions.TxHash, error)) error { - txInfos := make([]txInfo, 0, len(items)) - - for _, item := range items { - txHash, err := sm.retryIfNonceError(ctx, func() (transactions.TxHash, error) { - return submitFunc(item) - }) - if err != nil { - return fmt.Errorf("failed to submit tx for item %s: %w", item, err) - } - sm.t.Logf("Submitted TX for %s, hash: %s", item, txHash) - txInfos = append(txInfos, txInfo{hash: txHash, streamId: item}) - } - - eg, ctx := errgroup.WithContext(ctx) - sem := make(chan struct{}, workers) - for _, tx := range txInfos { - tx := tx - sem <- struct{}{} - eg.Go(func() error { - defer func() { <-sem }() - _, err := sm.client.WaitForTx(ctx, tx.hash, time.Second) - if err != nil { - return fmt.Errorf("tx %s for %s not mined: %w", tx.hash, tx.streamId, err) - } - sm.t.Logf("TX mined for %s, hash: %s", tx.streamId, tx.hash) - return nil - }) - } - return eg.Wait() -} - -// deployStreams deploys streams using the generic submitAndWaitForTxs helper. -func (sm *streamManager) deployStreams(ctx context.Context, count int) ([]streamInfo, error) { - sm.t.Logf("Deploying %d streams", count) - streamNames := make([]string, count) - for i := 0; i < count; i++ { - streamNames[i] = fmt.Sprintf("stream-%d", i) - } - - err := sm.submitAndWaitForTxs(ctx, streamNames, func(s string) (transactions.TxHash, error) { - streamId := util.GenerateStreamId(s) - return sm.client.DeployStream(ctx, streamId, types.StreamTypePrimitiveUnix) - }) - if err != nil { - return nil, err - } - - streams := make([]streamInfo, count) - for i, name := range streamNames { - sid := util.GenerateStreamId(name) - streams[i] = streamInfo{ - name: name, - streamId: sid, - streamLocator: sm.client.OwnStreamLocator(sid), - } - } - return streams, nil -} - -// initializeStreams initializes streams using the submitAndWaitForTxs helper. -func (sm *streamManager) initializeStreams(ctx context.Context, streams []streamInfo) error { - sm.t.Logf("Initializing %d streams", len(streams)) - names := make([]string, len(streams)) - for i, s := range streams { - names[i] = s.name - } - return sm.submitAndWaitForTxs(ctx, names, func(s string) (transactions.TxHash, error) { - sid := util.GenerateStreamId(s) - stream, err := sm.client.LoadPrimitiveStream(sm.client.OwnStreamLocator(sid)) - if err != nil { - return transactions.TxHash(""), err - } - return stream.InitializeStream(ctx) - }) -} - -// destroyStreams destroys streams using the submitAndWaitForTxs helper. 
-func (sm *streamManager) destroyStreams(ctx context.Context, count int) error { - sm.t.Logf("Destroying %d streams", count) - names := make([]string, count) - for i := 0; i < count; i++ { - names[i] = fmt.Sprintf("stream-%d", i) - } - return sm.submitAndWaitForTxs(ctx, names, func(s string) (transactions.TxHash, error) { - sid := util.GenerateStreamId(s) - return sm.client.DestroyStream(ctx, sid) - }) -} diff --git a/tests/integration/storage/stream_storage_test.go b/tests/integration/storage/stream_storage_test.go deleted file mode 100644 index 8ad6edf6e..000000000 --- a/tests/integration/storage/stream_storage_test.go +++ /dev/null @@ -1,485 +0,0 @@ -package stream_storage_test - -import ( - "bytes" - "context" - "errors" - "fmt" - "os/exec" - "strconv" - "strings" - "testing" - "time" - - "github.com/ethereum/go-ethereum/crypto" - kwilcrypto "github.com/kwilteam/kwil-db/core/crypto" - "github.com/kwilteam/kwil-db/core/crypto/auth" - "github.com/trufnetwork/sdk-go/core/tnclient" - "github.com/trufnetwork/sdk-go/core/util" -) - -/* - * Test the stream storage - - 1. Start containers for kwil and postgres, isolated from the rest of the system - 2. measure the current directory size: - - postgres: /var/lib/postgresql/data - - kwil: /root/.kwildb/ - 3. run the stream storage test: - - create 1,000 streams - - initialize all streams with 1000 messages each - 4. measure the directory size of kwil and postgres - 5. drop all streams - 6. measure the directory size of kwil and postgres - 7. log all sizes - 8. teardown containers -*/ - -// containerSpec defines the configuration for a container -type containerSpec struct { - name string - image string - tmpfsPath string - envVars []string - healthCheck func(d *docker) error -} - -// testContainers defines the containers needed for the test -var containers = struct { - postgres containerSpec - tsndb containerSpec -}{ - postgres: containerSpec{ - name: "test-kwil-postgres", - image: "kwildb/postgres:latest", - tmpfsPath: "/var/lib/postgresql/data", - envVars: []string{"POSTGRES_HOST_AUTH_METHOD=trust"}, - healthCheck: func(d *docker) error { - _, err := d.exec("test-kwil-postgres", "pg_isready", "-U", "postgres") - return err - }, - }, - tsndb: containerSpec{ - name: "test-tsn-db", - image: "tsn-db:local", - tmpfsPath: "/root/.kwild", - envVars: []string{ - "CONFIG_PATH=/root/.kwild", - "KWILD_APP_HOSTNAME=test-tsn-db", - "KWILD_APP_PG_DB_HOST=test-kwil-postgres", - "KWILD_CHAIN_P2P_EXTERNAL_ADDRESS=http://test-tsn-db:26656", - }, - healthCheck: func(d *docker) error { - // Wait for the service to be ready - time.Sleep(5 * time.Second) - _, err := d.exec("test-tsn-db", "ps", "aux") - return err - }, - }, -} - -// docker provides a simplified interface for docker operations -type docker struct { - t *testing.T -} - -// newDocker creates a new docker helper -func newDocker(t *testing.T) *docker { - return &docker{t: t} -} - -// exec executes a command in a container -func (d *docker) exec(container string, args ...string) (string, error) { - cmdArgs := append([]string{"exec", container}, args...) - return d.run(cmdArgs...) -} - -// run executes a docker command -func (d *docker) run(args ...string) (string, error) { - cmd := exec.Command("docker", args...) - var out bytes.Buffer - cmd.Stdout = &out - cmd.Stderr = &out - err := cmd.Run() - return out.String(), err -} - -// failWithLogsOnError logs container logs and fails the test if err is non-nil. 
-func (d *docker) failWithLogsOnError(err error, containerName string) { - if err != nil { - if logs, logsErr := d.run("logs", containerName); logsErr == nil { - d.t.Logf("Logs for %s:\n%s", containerName, logs) - } - d.t.Fatal(err) - } -} - -// pollUntilTrue polls a condition until it returns true or a timeout is reached. -func pollUntilTrue(ctx context.Context, timeout time.Duration, check func() bool) error { - deadline := time.Now().Add(timeout) - for time.Now().Before(deadline) { - if check() { - return nil - } - time.Sleep(time.Second) - } - return errors.New("condition not met within timeout") -} - -// startContainer starts a container with the given spec and waits for it to be healthy. -func (d *docker) startContainer(spec containerSpec) error { - args := []string{"run", "--rm", "--name", spec.name, "--network", networkName, "-d"} - - if spec.tmpfsPath != "" { - args = append(args, "--tmpfs", spec.tmpfsPath) - } - - for _, env := range spec.envVars { - args = append(args, "-e", env) - } - - if spec.name == "test-tsn-db" { - args = append(args, - "-p", "50051:50051", - "-p", "50151:50151", - "-p", "8080:8080", - "-p", "8484:8484", - "-p", "26656:26656", - "-p", "26657:26657", - "--entrypoint", "/app/kwild", - spec.image, - "--autogen", - "--app.pg-db-host", "test-kwil-postgres", - "--app.hostname", "test-tsn-db", - "--chain.p2p.external-address", "http://test-tsn-db:26656", - ) - } else { - args = append(args, spec.image) - } - - out, err := d.run(args...) - if err != nil { - return fmt.Errorf("failed to start container %s: %w\nOutput: %s", spec.name, err, out) - } - - if spec.healthCheck != nil { - err := pollUntilTrue(context.Background(), 10*time.Second, func() bool { - return spec.healthCheck(d) == nil - }) - if err != nil { - if logs, logsErr := d.run("logs", spec.name); logsErr == nil { - d.t.Logf("Container logs for %s:\n%s", spec.name, logs) - } - return fmt.Errorf("container %s failed to become healthy: %w", spec.name, err) - } - } - - if spec.name == "test-tsn-db" { - err := pollUntilTrue(context.Background(), 30*time.Second, func() bool { - out, err := exec.Command("curl", "-s", "-o", "/dev/null", "-w", "%{http_code}", "http://localhost:8484/api/v1/health").Output() - if err != nil { - return false - } - return strings.TrimSpace(string(out)) == "200" - }) - if err != nil { - if logs, logsErr := d.run("logs", spec.name); logsErr == nil { - d.t.Logf("Container logs for %s:\n%s", spec.name, logs) - } - return fmt.Errorf("RPC server in container %s failed to become ready: %w", spec.name, err) - } - } - - return nil -} - -// stopContainer stops a container -func (d *docker) stopContainer(name string) error { - _, err := d.run("stop", name) - if err != nil { - return fmt.Errorf("failed to stop container %s: %w", name, err) - } - d.t.Logf("Stopped container %s", name) - return nil -} - -// setupNetwork creates a docker network -func (d *docker) setupNetwork() error { - d.run("network", "rm", networkName) - _, err := d.run("network", "create", networkName) - return err -} - -// teardownNetwork removes the docker network -func (d *docker) teardownNetwork() error { - _, err := d.run("network", "rm", networkName) - return err -} - -// measureSize measures the size of a directory in a container -func (d *docker) measureSize(container, path string) (int64, error) { - out, err := d.exec(container, "du", "-sb", path) - if err != nil { - return 0, err - } - return parseDuOutput(out) -} - -// runCommand executes a command and returns its combined output or error. 
-func runCommand(name string, args ...string) (string, error) { - cmd := exec.Command(name, args...) - var out bytes.Buffer - cmd.Stdout = &out - cmd.Stderr = &out - err := cmd.Run() - return out.String(), err -} - -// parseDuOutput parses the output of du command and returns the size in bytes -func parseDuOutput(output string) (int64, error) { - fields := strings.Fields(output) - if len(fields) < 1 { - return 0, fmt.Errorf("unexpected du output: %s", output) - } - return strconv.ParseInt(fields[0], 10, 64) -} - -// bytesToMB converts bytes to megabytes with 2 decimal places -func bytesToMB(bytes int64) float64 { - return float64(bytes) / (1024 * 1024) -} - -// cleanup removes all docker resources -func (d *docker) cleanup() { - // Get all container IDs - out, err := d.run("ps", "-aq") - if err == nil && out != "" { - containers := strings.Fields(out) - if len(containers) > 0 { - killArgs := append([]string{"kill"}, containers...) - d.run(killArgs...) - rmArgs := append([]string{"rm"}, containers...) - d.run(rmArgs...) - } - } - // Remove networks - d.run("network", "prune", "-f") - // Remove volume - d.run("volume", "rm", "tsn-config") -} - -func TestStreamStorage(t *testing.T) { - ctx := context.Background() - - // Setup docker helper - d := newDocker(t) - - // Clean up any existing resources - d.cleanup() - - // Create network - if err := d.setupNetwork(); err != nil { - t.Fatal(err) - } - defer d.teardownNetwork() - - // Start postgres first - if err := d.startContainer(containers.postgres); err != nil { - t.Fatal(err) - } - defer d.stopContainer(containers.postgres.name) - - // Wait for postgres to be healthy - for i := 0; i < 10; i++ { - if err := containers.postgres.healthCheck(d); err == nil { - break - } - if i == 9 { - t.Fatal("postgres failed to become healthy") - } - time.Sleep(time.Second) - } - - // Start tsn-db with autogen - t.Log("Starting tsn-db container...") - if err := d.startContainer(containers.tsndb); err != nil { - // Get logs before failing - if out, err := d.run("logs", containers.tsndb.name); err == nil { - t.Logf("TSN-DB container logs:\n%s", out) - } else { - t.Logf("Failed to get TSN-DB logs: %v", err) - } - // Get container status - if status, err := d.run("inspect", "--format", "{{.State.Status}}", containers.tsndb.name); err == nil { - t.Logf("TSN-DB container status: %s", status) - } - t.Fatalf("Failed to start tsn-db container: %v", err) - } - t.Log("TSN-DB container started successfully") - - // Wait for node to be fully initialized - t.Log("Waiting for node to be fully initialized...") - for i := 0; i < 30; i++ { // 30 seconds max wait - healthCmd := exec.Command("curl", "-s", TestKwilProvider+"/api/v1/health") - healthOut, healthErr := healthCmd.CombinedOutput() - if healthErr == nil { - t.Logf("Health check response: %s", string(healthOut)) - if strings.Contains(string(healthOut), `"healthy":true`) && - strings.Contains(string(healthOut), `"block_height":1`) { - t.Log("Node is healthy and has produced the first block") - break - } - } - if i == 29 { - t.Fatal("Node failed to become healthy or produce the first block") - } - time.Sleep(time.Second) - } - - // Get initial container logs - if out, err := d.run("logs", containers.tsndb.name); err == nil { - t.Logf("Initial TSN-DB container logs:\n%s", out) - } - - defer d.stopContainer(containers.tsndb.name) - - // Measure initial sizes - pgSizeBefore, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) - if err != nil { - t.Fatal(err) - } - tsnSizeBefore, err := 
d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) - if err != nil { - t.Fatal(err) - } - t.Logf("Initial sizes - Postgres: %.2f MB, TSN-DB: %.2f MB", bytesToMB(pgSizeBefore), bytesToMB(tsnSizeBefore)) - - // Initialize stream manager - t.Log("Creating private key...") - pk, err := kwilcrypto.Secp256k1PrivateKeyFromHex(TestPrivateKey) - if err != nil { - t.Fatalf("Failed to parse private key: %v", err) - } - t.Log("Successfully created private key") - - t.Log("Creating TN client...") - t.Logf("Using provider: %s", TestKwilProvider) - - // Get the Ethereum address from the public key - pubKeyBytes := pk.PubKey().Bytes() - // Remove the first byte which is the compression flag - pubKeyBytes = pubKeyBytes[1:] - addr, err := util.NewEthereumAddressFromBytes(crypto.Keccak256(pubKeyBytes)[12:]) - if err != nil { - t.Fatalf("Failed to get address from public key: %v", err) - } - t.Logf("Using signer with address: %s", addr.Address()) - - t.Log("Attempting to create client...") - var client *tnclient.Client - var lastErr error - for i := 0; i < 60; i++ { // 60 seconds max wait - t.Logf("Attempt %d/60: Creating client with provider URL %s", i+1, TestKwilProvider) - - // First check if the server is accepting connections - cmd := exec.Command("curl", "-s", "-w", "\n%{http_code}", "http://localhost:8484/api/v1/health") - out, err := cmd.CombinedOutput() - if err != nil { - lastErr = fmt.Errorf("health check command failed: %w", err) - t.Logf("Health check command failed: %v", err) - time.Sleep(time.Second) - continue - } - - // Split output into response body and status code - parts := strings.Split(string(out), "\n") - if len(parts) != 2 { - lastErr = fmt.Errorf("unexpected health check output format: %s", string(out)) - t.Logf("Health check output format error: %s", string(out)) - time.Sleep(time.Second) - continue - } - - statusCode := strings.TrimSpace(parts[1]) - t.Logf("Health check response - Status: %s", statusCode) - - if statusCode != "200" { - lastErr = fmt.Errorf("health check returned non-200 status: %s", statusCode) - t.Logf("Health check failed with status %s", statusCode) - time.Sleep(time.Second) - continue - } - - t.Log("Health check passed, attempting to create client...") - - // Try to create the client now that we know the server is accepting connections - client, err = tnclient.NewClient( - ctx, - TestKwilProvider, - tnclient.WithSigner(&auth.EthPersonalSigner{Key: *pk}), - ) - if err != nil { - lastErr = fmt.Errorf("failed to create TN client: %w", err) - t.Logf("Client creation failed: %v", err) - time.Sleep(time.Second) - continue - } - - // Successfully created client - t.Log("Client created successfully") - break - } - - if client == nil { - t.Fatalf("Failed to create client after 60 attempts. 
Last error: %v", lastErr) - } - - sm, err := newStreamManager(ctx, t) - if err != nil { - t.Fatalf("Failed to create stream manager: %v", err) - } - - // Deploy streams - streams, err := sm.deployStreams(ctx, numStreams) - if err != nil { - t.Fatal(err) - } - - // Initialize streams - if err := sm.initializeStreams(ctx, streams); err != nil { - t.Fatal(err) - } - - // Measure sizes after creation - pgSizeAfter, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) - if err != nil { - t.Fatal(err) - } - tsnSizeAfter, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) - if err != nil { - t.Fatal(err) - } - t.Logf("After creation - Postgres: %.2f MB, TSN-DB: %.2f MB", bytesToMB(pgSizeAfter), bytesToMB(tsnSizeAfter)) - - // Destroy streams - if err := sm.destroyStreams(ctx, numStreams); err != nil { - t.Fatal(err) - } - - // Measure final sizes - pgSizeFinal, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) - if err != nil { - t.Fatal(err) - } - tsnSizeFinal, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) - if err != nil { - t.Fatal(err) - } - - // Log all measurements - t.Log("Final measurements:") - t.Logf("Postgres (MB): before=%.2f, after=%.2f, final=%.2f", - bytesToMB(pgSizeBefore), bytesToMB(pgSizeAfter), bytesToMB(pgSizeFinal)) - t.Logf("TSN-DB (MB): before=%.2f, after=%.2f, final=%.2f", - bytesToMB(tsnSizeBefore), bytesToMB(tsnSizeAfter), bytesToMB(tsnSizeFinal)) -} From 09b4b676ca94e47e6499db4922ba33e0b03ba4e6 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 3 Feb 2025 18:19:36 -0300 Subject: [PATCH 12/14] chore: stream storage requirement (#787) * chore: update dependencies and add validation packages * test: increase stream storage test scale to 100 streams --- go.mod | 7 +- go.sum | 18 +- tests/integration/storage/config.go | 15 + tests/integration/storage/stream_manager.go | 172 +++++++ .../storage/stream_storage_test.go | 485 ++++++++++++++++++ 5 files changed, 694 insertions(+), 3 deletions(-) create mode 100644 tests/integration/storage/config.go create mode 100644 tests/integration/storage/stream_manager.go create mode 100644 tests/integration/storage/stream_storage_test.go diff --git a/go.mod b/go.mod index ce5d2db38..fc82902c8 100644 --- a/go.mod +++ b/go.mod @@ -19,7 +19,7 @@ require ( github.com/pkg/errors v0.9.1 github.com/samber/lo v1.47.0 github.com/stretchr/testify v1.9.0 - github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 + github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 go.uber.org/zap v1.27.0 golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 golang.org/x/sync v0.8.0 @@ -67,6 +67,7 @@ require ( github.com/ethereum/go-ethereum v1.14.8 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect + github.com/gabriel-vasile/mimetype v1.4.3 // indirect github.com/getsentry/sentry-go v0.27.0 // indirect github.com/go-kit/kit v0.12.0 // indirect github.com/go-kit/log v0.2.1 // indirect @@ -74,6 +75,9 @@ require ( github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-ole/go-ole v1.3.0 // indirect + github.com/go-playground/locales v0.14.1 // indirect + github.com/go-playground/universal-translator v0.18.1 // indirect + github.com/go-playground/validator/v10 v10.22.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/glog v1.2.1 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect @@ 
-102,6 +106,7 @@ require ( github.com/kr/pretty v0.3.1 // indirect github.com/kr/text v0.2.0 // indirect github.com/kwilteam/kwil-extensions v0.0.0-20230727040522-1cfd930226b7 // indirect + github.com/leodido/go-urn v1.4.0 // indirect github.com/lib/pq v1.10.7 // indirect github.com/linxGnu/grocksdb v1.8.14 // indirect github.com/magiconair/properties v1.8.7 // indirect diff --git a/go.sum b/go.sum index 09fccb113..c39278497 100644 --- a/go.sum +++ b/go.sum @@ -160,6 +160,8 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= +github.com/gabriel-vasile/mimetype v1.4.3 h1:in2uUcidCuFcDKtdcBxlR0rJ1+fsokWf+uqxgUFjbI0= +github.com/gabriel-vasile/mimetype v1.4.3/go.mod h1:d8uq/6HKRL6CGdk+aubisF/M5GcPfT7nKyLpA0lbSSk= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqGNY4FhTFhk+o9oFHGINQ/+vhlm8HFzi6znCI= github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww= github.com/getsentry/sentry-go v0.27.0 h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps= @@ -180,6 +182,14 @@ github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre github.com/go-ole/go-ole v1.2.5/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE= github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78= +github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= +github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= +github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= +github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= +github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= +github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= +github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao= +github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM= github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw= github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= @@ -305,6 +315,8 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c= github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8= +github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= +github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= github.com/lib/pq v1.10.7 h1:p7ZhMD+KsSRozJr34udlUrhboJwWAgCg34+/ZZNvZZw= github.com/lib/pq v1.10.7/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o= github.com/linxGnu/grocksdb v1.8.14 h1:HTgyYalNwBSG/1qCQUIott44wU5b2Y9Kr3z7SK5OfGQ= @@ -467,8 +479,10 @@ 
github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYg github.com/tklauser/numcpus v0.8.0/go.mod h1:ZJZlAY+dmR4eut8epnzf0u/VwodKmryxR8txiloSqBE= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 h1:TvtdmeYsYEij78hS4oxnwikoiLdIrgav3BA+CbhaDAI= github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346/go.mod h1:xKQhd7snlzKFuUi1taTGWjpRE8iFTA06DeacYi3CVFQ= -github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52 h1:LNZ99bITHatmYKVHH4YqBhpuAg2bUx8SlP2VHAWR4jE= -github.com/trufnetwork/sdk-go v0.1.1-0.20241126115735-addca8e1da52/go.mod h1:xfGmTkamZxyAOG331+P2oceLFxR7u/4lIoELyYEeCR4= +github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282 h1:LTzgVlKoYE+laTizgju24seT2fnIIcNqC+fN2SYYsRM= +github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE= +github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 h1:NlaXTkXnUiKCektYmF0WodsvYN/fCHvB4fuu0DDkpmM= +github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE= github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8= github.com/tyler-smith/go-bip39 v1.1.0/go.mod h1:gUYDtqQw1JS3ZJ8UWVcGTGqqr6YIN3CWg+kkNaLt55U= github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= diff --git a/tests/integration/storage/config.go b/tests/integration/storage/config.go new file mode 100644 index 000000000..b7240c2e6 --- /dev/null +++ b/tests/integration/storage/config.go @@ -0,0 +1,15 @@ +package stream_storage_test + +// Configuration constants +const ( + // Test configuration + numStreams = 100 + workers = 30 + + // Docker configuration + networkName = "test-network" + + // SDK configuration + TestPrivateKey = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" + TestKwilProvider = "http://localhost:8484" +) diff --git a/tests/integration/storage/stream_manager.go b/tests/integration/storage/stream_manager.go new file mode 100644 index 000000000..8db3949d0 --- /dev/null +++ b/tests/integration/storage/stream_manager.go @@ -0,0 +1,172 @@ +package stream_storage_test + +import ( + "context" + "fmt" + "strings" + "testing" + "time" + + kwilcrypto "github.com/kwilteam/kwil-db/core/crypto" + "github.com/kwilteam/kwil-db/core/crypto/auth" + "github.com/kwilteam/kwil-db/core/types/transactions" + "github.com/trufnetwork/sdk-go/core/tnclient" + "github.com/trufnetwork/sdk-go/core/types" + "github.com/trufnetwork/sdk-go/core/util" + "golang.org/x/sync/errgroup" +) + +// streamInfo holds information about a deployed stream +// It assumes the constants TestPrivateKey, TestKwilProvider and workers are defined in the package + +type streamInfo struct { + name string + streamId util.StreamId + streamLocator types.StreamLocator +} + +// txInfo holds information about a transaction + +type txInfo struct { + hash transactions.TxHash + streamId string +} + +// streamManager handles stream operations + +type streamManager struct { + client *tnclient.Client + t *testing.T +} + +// newStreamManager creates a new stream manager using the global TestPrivateKey and TestKwilProvider constants +func newStreamManager(ctx context.Context, t *testing.T) (*streamManager, error) { + pk, err := kwilcrypto.Secp256k1PrivateKeyFromHex(TestPrivateKey) + if err != nil { + return nil, fmt.Errorf("failed to parse private key: %w", err) + } + + client, err := tnclient.NewClient( + ctx, + 
TestKwilProvider, + tnclient.WithSigner(&auth.EthPersonalSigner{Key: *pk}), + ) + if err != nil { + return nil, fmt.Errorf("failed to create TN client: %w", err) + } + + return &streamManager{client: client, t: t}, nil +} + +// retryIfNonceError tries the operation up to 5 times if a nonce-related error is detected. +func (sm *streamManager) retryIfNonceError(ctx context.Context, operation func() (transactions.TxHash, error)) (transactions.TxHash, error) { + const maxAttempts = 5 + var lastErr error + + for attempts := 1; attempts <= maxAttempts; attempts++ { + txHash, err := operation() + if err == nil { + return txHash, nil + } + + if strings.Contains(strings.ToLower(err.Error()), "nonce") { + lastErr = err + sm.t.Logf("Nonce error detected (attempt %d/%d): %v", attempts, maxAttempts, err) + time.Sleep(1 * time.Second) + continue + } + return txHash, err + } + return transactions.TxHash(""), fmt.Errorf("operation failed after %d attempts. Last error: %w", maxAttempts, lastErr) +} + +// submitAndWaitForTxs is a generic helper to reduce duplication in deploy, initialize, and destroy steps. +func (sm *streamManager) submitAndWaitForTxs(ctx context.Context, items []string, submitFunc func(string) (transactions.TxHash, error)) error { + txInfos := make([]txInfo, 0, len(items)) + + for _, item := range items { + txHash, err := sm.retryIfNonceError(ctx, func() (transactions.TxHash, error) { + return submitFunc(item) + }) + if err != nil { + return fmt.Errorf("failed to submit tx for item %s: %w", item, err) + } + sm.t.Logf("Submitted TX for %s, hash: %s", item, txHash) + txInfos = append(txInfos, txInfo{hash: txHash, streamId: item}) + } + + eg, ctx := errgroup.WithContext(ctx) + sem := make(chan struct{}, workers) + for _, tx := range txInfos { + tx := tx + sem <- struct{}{} + eg.Go(func() error { + defer func() { <-sem }() + _, err := sm.client.WaitForTx(ctx, tx.hash, time.Second) + if err != nil { + return fmt.Errorf("tx %s for %s not mined: %w", tx.hash, tx.streamId, err) + } + sm.t.Logf("TX mined for %s, hash: %s", tx.streamId, tx.hash) + return nil + }) + } + return eg.Wait() +} + +// deployStreams deploys streams using the generic submitAndWaitForTxs helper. +func (sm *streamManager) deployStreams(ctx context.Context, count int) ([]streamInfo, error) { + sm.t.Logf("Deploying %d streams", count) + streamNames := make([]string, count) + for i := 0; i < count; i++ { + streamNames[i] = fmt.Sprintf("stream-%d", i) + } + + err := sm.submitAndWaitForTxs(ctx, streamNames, func(s string) (transactions.TxHash, error) { + streamId := util.GenerateStreamId(s) + return sm.client.DeployStream(ctx, streamId, types.StreamTypePrimitiveUnix) + }) + if err != nil { + return nil, err + } + + streams := make([]streamInfo, count) + for i, name := range streamNames { + sid := util.GenerateStreamId(name) + streams[i] = streamInfo{ + name: name, + streamId: sid, + streamLocator: sm.client.OwnStreamLocator(sid), + } + } + return streams, nil +} + +// initializeStreams initializes streams using the submitAndWaitForTxs helper. 
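+// Each stream is looked up via OwnStreamLocator and loaded as a primitive
+// stream before its InitializeStream transaction is submitted; nonce retries
+// and the bounded mining waits come from submitAndWaitForTxs.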
+func (sm *streamManager) initializeStreams(ctx context.Context, streams []streamInfo) error {
+	sm.t.Logf("Initializing %d streams", len(streams))
+	names := make([]string, len(streams))
+	for i, s := range streams {
+		names[i] = s.name
+	}
+	return sm.submitAndWaitForTxs(ctx, names, func(s string) (transactions.TxHash, error) {
+		sid := util.GenerateStreamId(s)
+		stream, err := sm.client.LoadPrimitiveStream(sm.client.OwnStreamLocator(sid))
+		if err != nil {
+			return transactions.TxHash(""), err
+		}
+		return stream.InitializeStream(ctx)
+	})
+}
+
+// destroyStreams destroys streams using the submitAndWaitForTxs helper.
+func (sm *streamManager) destroyStreams(ctx context.Context, count int) error {
+	sm.t.Logf("Destroying %d streams", count)
+	names := make([]string, count)
+	for i := 0; i < count; i++ {
+		names[i] = fmt.Sprintf("stream-%d", i)
+	}
+	return sm.submitAndWaitForTxs(ctx, names, func(s string) (transactions.TxHash, error) {
+		sid := util.GenerateStreamId(s)
+		return sm.client.DestroyStream(ctx, sid)
+	})
+}
diff --git a/tests/integration/storage/stream_storage_test.go b/tests/integration/storage/stream_storage_test.go
new file mode 100644
index 000000000..8ad6edf6e
--- /dev/null
+++ b/tests/integration/storage/stream_storage_test.go
@@ -0,0 +1,485 @@
+package stream_storage_test
+
+import (
+	"bytes"
+	"context"
+	"errors"
+	"fmt"
+	"os/exec"
+	"strconv"
+	"strings"
+	"testing"
+	"time"
+
+	"github.com/ethereum/go-ethereum/crypto"
+	kwilcrypto "github.com/kwilteam/kwil-db/core/crypto"
+	"github.com/kwilteam/kwil-db/core/crypto/auth"
+	"github.com/trufnetwork/sdk-go/core/tnclient"
+	"github.com/trufnetwork/sdk-go/core/util"
+)
+
+/*
+ * Test the stream storage
+
+ 1. Start containers for kwil and postgres, isolated from the rest of the system
+ 2. measure the current directory size:
+    - postgres: /var/lib/postgresql/data
+    - kwil: /root/.kwild/
+ 3. run the stream storage test:
+    - create numStreams streams (100 in config.go)
+    - initialize all streams (no records are inserted)
+ 4. measure the directory size of kwil and postgres
+ 5. drop all streams
+ 6. measure the directory size of kwil and postgres
+ 7. log all sizes
+ 8. 
teardown containers +*/ + +// containerSpec defines the configuration for a container +type containerSpec struct { + name string + image string + tmpfsPath string + envVars []string + healthCheck func(d *docker) error +} + +// testContainers defines the containers needed for the test +var containers = struct { + postgres containerSpec + tsndb containerSpec +}{ + postgres: containerSpec{ + name: "test-kwil-postgres", + image: "kwildb/postgres:latest", + tmpfsPath: "/var/lib/postgresql/data", + envVars: []string{"POSTGRES_HOST_AUTH_METHOD=trust"}, + healthCheck: func(d *docker) error { + _, err := d.exec("test-kwil-postgres", "pg_isready", "-U", "postgres") + return err + }, + }, + tsndb: containerSpec{ + name: "test-tsn-db", + image: "tsn-db:local", + tmpfsPath: "/root/.kwild", + envVars: []string{ + "CONFIG_PATH=/root/.kwild", + "KWILD_APP_HOSTNAME=test-tsn-db", + "KWILD_APP_PG_DB_HOST=test-kwil-postgres", + "KWILD_CHAIN_P2P_EXTERNAL_ADDRESS=http://test-tsn-db:26656", + }, + healthCheck: func(d *docker) error { + // Wait for the service to be ready + time.Sleep(5 * time.Second) + _, err := d.exec("test-tsn-db", "ps", "aux") + return err + }, + }, +} + +// docker provides a simplified interface for docker operations +type docker struct { + t *testing.T +} + +// newDocker creates a new docker helper +func newDocker(t *testing.T) *docker { + return &docker{t: t} +} + +// exec executes a command in a container +func (d *docker) exec(container string, args ...string) (string, error) { + cmdArgs := append([]string{"exec", container}, args...) + return d.run(cmdArgs...) +} + +// run executes a docker command +func (d *docker) run(args ...string) (string, error) { + cmd := exec.Command("docker", args...) + var out bytes.Buffer + cmd.Stdout = &out + cmd.Stderr = &out + err := cmd.Run() + return out.String(), err +} + +// failWithLogsOnError logs container logs and fails the test if err is non-nil. +func (d *docker) failWithLogsOnError(err error, containerName string) { + if err != nil { + if logs, logsErr := d.run("logs", containerName); logsErr == nil { + d.t.Logf("Logs for %s:\n%s", containerName, logs) + } + d.t.Fatal(err) + } +} + +// pollUntilTrue polls a condition until it returns true or a timeout is reached. +func pollUntilTrue(ctx context.Context, timeout time.Duration, check func() bool) error { + deadline := time.Now().Add(timeout) + for time.Now().Before(deadline) { + if check() { + return nil + } + time.Sleep(time.Second) + } + return errors.New("condition not met within timeout") +} + +// startContainer starts a container with the given spec and waits for it to be healthy. +func (d *docker) startContainer(spec containerSpec) error { + args := []string{"run", "--rm", "--name", spec.name, "--network", networkName, "-d"} + + if spec.tmpfsPath != "" { + args = append(args, "--tmpfs", spec.tmpfsPath) + } + + for _, env := range spec.envVars { + args = append(args, "-e", env) + } + + if spec.name == "test-tsn-db" { + args = append(args, + "-p", "50051:50051", + "-p", "50151:50151", + "-p", "8080:8080", + "-p", "8484:8484", + "-p", "26656:26656", + "-p", "26657:26657", + "--entrypoint", "/app/kwild", + spec.image, + "--autogen", + "--app.pg-db-host", "test-kwil-postgres", + "--app.hostname", "test-tsn-db", + "--chain.p2p.external-address", "http://test-tsn-db:26656", + ) + } else { + args = append(args, spec.image) + } + + out, err := d.run(args...) 
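+	// d.run captures stdout and stderr in a single buffer, so any startup
+	// output from docker is attached to the error returned below.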
+ if err != nil { + return fmt.Errorf("failed to start container %s: %w\nOutput: %s", spec.name, err, out) + } + + if spec.healthCheck != nil { + err := pollUntilTrue(context.Background(), 10*time.Second, func() bool { + return spec.healthCheck(d) == nil + }) + if err != nil { + if logs, logsErr := d.run("logs", spec.name); logsErr == nil { + d.t.Logf("Container logs for %s:\n%s", spec.name, logs) + } + return fmt.Errorf("container %s failed to become healthy: %w", spec.name, err) + } + } + + if spec.name == "test-tsn-db" { + err := pollUntilTrue(context.Background(), 30*time.Second, func() bool { + out, err := exec.Command("curl", "-s", "-o", "/dev/null", "-w", "%{http_code}", "http://localhost:8484/api/v1/health").Output() + if err != nil { + return false + } + return strings.TrimSpace(string(out)) == "200" + }) + if err != nil { + if logs, logsErr := d.run("logs", spec.name); logsErr == nil { + d.t.Logf("Container logs for %s:\n%s", spec.name, logs) + } + return fmt.Errorf("RPC server in container %s failed to become ready: %w", spec.name, err) + } + } + + return nil +} + +// stopContainer stops a container +func (d *docker) stopContainer(name string) error { + _, err := d.run("stop", name) + if err != nil { + return fmt.Errorf("failed to stop container %s: %w", name, err) + } + d.t.Logf("Stopped container %s", name) + return nil +} + +// setupNetwork creates a docker network +func (d *docker) setupNetwork() error { + d.run("network", "rm", networkName) + _, err := d.run("network", "create", networkName) + return err +} + +// teardownNetwork removes the docker network +func (d *docker) teardownNetwork() error { + _, err := d.run("network", "rm", networkName) + return err +} + +// measureSize measures the size of a directory in a container +func (d *docker) measureSize(container, path string) (int64, error) { + out, err := d.exec(container, "du", "-sb", path) + if err != nil { + return 0, err + } + return parseDuOutput(out) +} + +// runCommand executes a command and returns its combined output or error. +func runCommand(name string, args ...string) (string, error) { + cmd := exec.Command(name, args...) + var out bytes.Buffer + cmd.Stdout = &out + cmd.Stderr = &out + err := cmd.Run() + return out.String(), err +} + +// parseDuOutput parses the output of du command and returns the size in bytes +func parseDuOutput(output string) (int64, error) { + fields := strings.Fields(output) + if len(fields) < 1 { + return 0, fmt.Errorf("unexpected du output: %s", output) + } + return strconv.ParseInt(fields[0], 10, 64) +} + +// bytesToMB converts bytes to megabytes with 2 decimal places +func bytesToMB(bytes int64) float64 { + return float64(bytes) / (1024 * 1024) +} + +// cleanup removes all docker resources +func (d *docker) cleanup() { + // Get all container IDs + out, err := d.run("ps", "-aq") + if err == nil && out != "" { + containers := strings.Fields(out) + if len(containers) > 0 { + killArgs := append([]string{"kill"}, containers...) + d.run(killArgs...) + rmArgs := append([]string{"rm"}, containers...) + d.run(rmArgs...) 
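+			// Cleanup is best-effort: errors from kill/rm are deliberately ignored.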
+ } + } + // Remove networks + d.run("network", "prune", "-f") + // Remove volume + d.run("volume", "rm", "tsn-config") +} + +func TestStreamStorage(t *testing.T) { + ctx := context.Background() + + // Setup docker helper + d := newDocker(t) + + // Clean up any existing resources + d.cleanup() + + // Create network + if err := d.setupNetwork(); err != nil { + t.Fatal(err) + } + defer d.teardownNetwork() + + // Start postgres first + if err := d.startContainer(containers.postgres); err != nil { + t.Fatal(err) + } + defer d.stopContainer(containers.postgres.name) + + // Wait for postgres to be healthy + for i := 0; i < 10; i++ { + if err := containers.postgres.healthCheck(d); err == nil { + break + } + if i == 9 { + t.Fatal("postgres failed to become healthy") + } + time.Sleep(time.Second) + } + + // Start tsn-db with autogen + t.Log("Starting tsn-db container...") + if err := d.startContainer(containers.tsndb); err != nil { + // Get logs before failing + if out, err := d.run("logs", containers.tsndb.name); err == nil { + t.Logf("TSN-DB container logs:\n%s", out) + } else { + t.Logf("Failed to get TSN-DB logs: %v", err) + } + // Get container status + if status, err := d.run("inspect", "--format", "{{.State.Status}}", containers.tsndb.name); err == nil { + t.Logf("TSN-DB container status: %s", status) + } + t.Fatalf("Failed to start tsn-db container: %v", err) + } + t.Log("TSN-DB container started successfully") + + // Wait for node to be fully initialized + t.Log("Waiting for node to be fully initialized...") + for i := 0; i < 30; i++ { // 30 seconds max wait + healthCmd := exec.Command("curl", "-s", TestKwilProvider+"/api/v1/health") + healthOut, healthErr := healthCmd.CombinedOutput() + if healthErr == nil { + t.Logf("Health check response: %s", string(healthOut)) + if strings.Contains(string(healthOut), `"healthy":true`) && + strings.Contains(string(healthOut), `"block_height":1`) { + t.Log("Node is healthy and has produced the first block") + break + } + } + if i == 29 { + t.Fatal("Node failed to become healthy or produce the first block") + } + time.Sleep(time.Second) + } + + // Get initial container logs + if out, err := d.run("logs", containers.tsndb.name); err == nil { + t.Logf("Initial TSN-DB container logs:\n%s", out) + } + + defer d.stopContainer(containers.tsndb.name) + + // Measure initial sizes + pgSizeBefore, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) + if err != nil { + t.Fatal(err) + } + tsnSizeBefore, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) + if err != nil { + t.Fatal(err) + } + t.Logf("Initial sizes - Postgres: %.2f MB, TSN-DB: %.2f MB", bytesToMB(pgSizeBefore), bytesToMB(tsnSizeBefore)) + + // Initialize stream manager + t.Log("Creating private key...") + pk, err := kwilcrypto.Secp256k1PrivateKeyFromHex(TestPrivateKey) + if err != nil { + t.Fatalf("Failed to parse private key: %v", err) + } + t.Log("Successfully created private key") + + t.Log("Creating TN client...") + t.Logf("Using provider: %s", TestKwilProvider) + + // Get the Ethereum address from the public key + pubKeyBytes := pk.PubKey().Bytes() + // Remove the first byte which is the compression flag + pubKeyBytes = pubKeyBytes[1:] + addr, err := util.NewEthereumAddressFromBytes(crypto.Keccak256(pubKeyBytes)[12:]) + if err != nil { + t.Fatalf("Failed to get address from public key: %v", err) + } + t.Logf("Using signer with address: %s", addr.Address()) + + t.Log("Attempting to create client...") + var client *tnclient.Client + var lastErr error + 
for i := 0; i < 60; i++ { // 60 seconds max wait + t.Logf("Attempt %d/60: Creating client with provider URL %s", i+1, TestKwilProvider) + + // First check if the server is accepting connections + cmd := exec.Command("curl", "-s", "-w", "\n%{http_code}", "http://localhost:8484/api/v1/health") + out, err := cmd.CombinedOutput() + if err != nil { + lastErr = fmt.Errorf("health check command failed: %w", err) + t.Logf("Health check command failed: %v", err) + time.Sleep(time.Second) + continue + } + + // Split output into response body and status code + parts := strings.Split(string(out), "\n") + if len(parts) != 2 { + lastErr = fmt.Errorf("unexpected health check output format: %s", string(out)) + t.Logf("Health check output format error: %s", string(out)) + time.Sleep(time.Second) + continue + } + + statusCode := strings.TrimSpace(parts[1]) + t.Logf("Health check response - Status: %s", statusCode) + + if statusCode != "200" { + lastErr = fmt.Errorf("health check returned non-200 status: %s", statusCode) + t.Logf("Health check failed with status %s", statusCode) + time.Sleep(time.Second) + continue + } + + t.Log("Health check passed, attempting to create client...") + + // Try to create the client now that we know the server is accepting connections + client, err = tnclient.NewClient( + ctx, + TestKwilProvider, + tnclient.WithSigner(&auth.EthPersonalSigner{Key: *pk}), + ) + if err != nil { + lastErr = fmt.Errorf("failed to create TN client: %w", err) + t.Logf("Client creation failed: %v", err) + time.Sleep(time.Second) + continue + } + + // Successfully created client + t.Log("Client created successfully") + break + } + + if client == nil { + t.Fatalf("Failed to create client after 60 attempts. Last error: %v", lastErr) + } + + sm, err := newStreamManager(ctx, t) + if err != nil { + t.Fatalf("Failed to create stream manager: %v", err) + } + + // Deploy streams + streams, err := sm.deployStreams(ctx, numStreams) + if err != nil { + t.Fatal(err) + } + + // Initialize streams + if err := sm.initializeStreams(ctx, streams); err != nil { + t.Fatal(err) + } + + // Measure sizes after creation + pgSizeAfter, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) + if err != nil { + t.Fatal(err) + } + tsnSizeAfter, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) + if err != nil { + t.Fatal(err) + } + t.Logf("After creation - Postgres: %.2f MB, TSN-DB: %.2f MB", bytesToMB(pgSizeAfter), bytesToMB(tsnSizeAfter)) + + // Destroy streams + if err := sm.destroyStreams(ctx, numStreams); err != nil { + t.Fatal(err) + } + + // Measure final sizes + pgSizeFinal, err := d.measureSize(containers.postgres.name, containers.postgres.tmpfsPath) + if err != nil { + t.Fatal(err) + } + tsnSizeFinal, err := d.measureSize(containers.tsndb.name, containers.tsndb.tmpfsPath) + if err != nil { + t.Fatal(err) + } + + // Log all measurements + t.Log("Final measurements:") + t.Logf("Postgres (MB): before=%.2f, after=%.2f, final=%.2f", + bytesToMB(pgSizeBefore), bytesToMB(pgSizeAfter), bytesToMB(pgSizeFinal)) + t.Logf("TSN-DB (MB): before=%.2f, after=%.2f, final=%.2f", + bytesToMB(tsnSizeBefore), bytesToMB(tsnSizeAfter), bytesToMB(tsnSizeFinal)) +} From 2f4881e73081fb487ff84f682729f400b9de14d5 Mon Sep 17 00:00:00 2001 From: Raffael Campos Date: Mon, 24 Feb 2025 23:30:14 -0300 Subject: [PATCH 13/14] chore: update dependencies and Go toolchain (#791) Update Go version to 1.22.7 with Go 1.23.6 toolchain, and upgrade various dependencies including: - Kwil-DB to v0.9.4 - CometBFT to 
v0.38.15 - Ethereum to v1.14.8 - Prometheus and other minor library updates Also update the PostgreSQL Docker image to version 16.8-1. --- compose.yaml | 2 +- go.mod | 49 ++++++++++++++------------- go.sum | 94 +++++++++++++++++++++++++--------------------------- 3 files changed, 72 insertions(+), 73 deletions(-) diff --git a/compose.yaml b/compose.yaml index e4d4461d7..bb22b7dd8 100644 --- a/compose.yaml +++ b/compose.yaml @@ -1,6 +1,6 @@ services: kwil-postgres: - image: "kwildb/postgres:latest" + image: "kwildb/postgres:16.8-1" hostname: kwil-postgres shm_size: 2G restart: unless-stopped diff --git a/go.mod b/go.mod index fc82902c8..c83d81fac 100644 --- a/go.mod +++ b/go.mod @@ -1,6 +1,8 @@ module github.com/trufnetwork/node -go 1.22.1 +go 1.22.7 + +toolchain go1.23.6 require ( github.com/aws/aws-lambda-go v1.47.0 @@ -9,10 +11,11 @@ require ( github.com/cenkalti/backoff/v4 v4.3.0 github.com/cockroachdb/apd/v3 v3.2.1 github.com/docker/docker v27.3.1+incompatible + github.com/ethereum/go-ethereum v1.14.8 github.com/fbiville/markdown-table-formatter v0.3.0 github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 github.com/google/uuid v1.6.0 - github.com/kwilteam/kwil-db v0.9.2 + github.com/kwilteam/kwil-db v0.9.4 github.com/kwilteam/kwil-db/core v0.3.1-0.20241118220427-bd35ded7db55 github.com/kwilteam/kwil-db/parse v0.3.1-0.20241118220427-bd35ded7db55 github.com/mitchellh/mapstructure v1.5.0 @@ -43,8 +46,8 @@ require ( github.com/cockroachdb/pebble v1.1.1 // indirect github.com/cockroachdb/redact v1.1.5 // indirect github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 // indirect - github.com/cometbft/cometbft v0.38.12 // indirect - github.com/cometbft/cometbft-db v0.11.0 // indirect + github.com/cometbft/cometbft v0.38.15 // indirect + github.com/cometbft/cometbft-db v0.14.1 // indirect github.com/consensys/bavard v0.1.13 // indirect github.com/consensys/gnark-crypto v0.12.1 // indirect github.com/containerd/log v0.1.0 // indirect @@ -55,21 +58,20 @@ require ( github.com/deckarep/golang-set/v2 v2.6.0 // indirect github.com/decred/dcrd/certgen v1.1.2 // indirect github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 // indirect - github.com/dgraph-io/badger/v2 v2.2007.4 // indirect github.com/dgraph-io/badger/v3 v3.2103.5 // indirect + github.com/dgraph-io/badger/v4 v4.2.0 // indirect github.com/dgraph-io/ristretto v0.1.1 // indirect - github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 // indirect github.com/distribution/reference v0.6.0 // indirect github.com/docker/go-connections v0.5.0 // indirect github.com/docker/go-units v0.5.0 // indirect github.com/dustin/go-humanize v1.0.1 // indirect github.com/ethereum/c-kzg-4844 v1.0.2 // indirect - github.com/ethereum/go-ethereum v1.14.8 // indirect github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/gabriel-vasile/mimetype v1.4.3 // indirect github.com/getsentry/sentry-go v0.27.0 // indirect - github.com/go-kit/kit v0.12.0 // indirect + github.com/go-chi/chi/v5 v5.1.0 // indirect + github.com/go-kit/kit v0.13.0 // indirect github.com/go-kit/log v0.2.1 // indirect github.com/go-logfmt/logfmt v0.6.0 // indirect github.com/go-logr/logr v1.4.2 // indirect @@ -79,11 +81,11 @@ require ( github.com/go-playground/universal-translator v0.18.1 // indirect github.com/go-playground/validator/v10 v10.22.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang/glog v1.2.1 // indirect + github.com/golang/glog v1.2.2 // indirect 
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/protobuf v1.5.4 // indirect github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb // indirect - github.com/google/btree v1.1.2 // indirect + github.com/google/btree v1.1.3 // indirect github.com/google/flatbuffers v23.5.26+incompatible // indirect github.com/google/go-cmp v0.6.0 // indirect github.com/google/orderedcode v0.0.1 // indirect @@ -107,14 +109,15 @@ require ( github.com/kr/text v0.2.0 // indirect github.com/kwilteam/kwil-extensions v0.0.0-20230727040522-1cfd930226b7 // indirect github.com/leodido/go-urn v1.4.0 // indirect - github.com/lib/pq v1.10.7 // indirect + github.com/lib/pq v1.10.9 // indirect github.com/linxGnu/grocksdb v1.8.14 // indirect github.com/magiconair/properties v1.8.7 // indirect github.com/manifoldco/promptui v0.9.0 // indirect github.com/mattn/go-runewidth v0.0.14 // indirect - github.com/minio/highwayhash v1.0.2 // indirect + github.com/minio/highwayhash v1.0.3 // indirect github.com/mmcloughlin/addchain v0.4.0 // indirect github.com/moby/docker-image-spec v1.3.1 // indirect + github.com/morikuni/aec v1.0.0 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/near/borsh-go v0.3.1 // indirect github.com/oasisprotocol/curve25519-voi v0.0.0-20220708102147-0a8a51822cae // indirect @@ -122,11 +125,11 @@ require ( github.com/opencontainers/go-digest v1.0.0 // indirect github.com/opencontainers/image-spec v1.1.0 // indirect github.com/pelletier/go-toml/v2 v2.2.2 // indirect - github.com/petermattis/goid v0.0.0-20240503122002-4b96552b8156 // indirect + github.com/petermattis/goid v0.0.0-20240813172612-4fcff4a6cae7 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - github.com/prometheus/client_golang v1.20.1 // indirect + github.com/prometheus/client_golang v1.20.5 // indirect github.com/prometheus/client_model v0.6.1 // indirect - github.com/prometheus/common v0.55.0 // indirect + github.com/prometheus/common v0.60.1 // indirect github.com/prometheus/procfs v0.15.1 // indirect github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 // indirect github.com/rivo/uniseg v0.4.7 // indirect @@ -134,7 +137,7 @@ require ( github.com/rs/cors v1.11.1 // indirect github.com/sagikazarmark/locafero v0.4.0 // indirect github.com/sagikazarmark/slog-shim v0.1.0 // indirect - github.com/sasha-s/go-deadlock v0.3.1 // indirect + github.com/sasha-s/go-deadlock v0.3.5 // indirect github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible // indirect github.com/sourcegraph/conc v0.3.0 // indirect github.com/spf13/afero v1.11.0 // indirect @@ -148,7 +151,7 @@ require ( github.com/tklauser/go-sysconf v0.3.14 // indirect github.com/tklauser/numcpus v0.8.0 // indirect github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 // indirect - go.etcd.io/bbolt v1.3.10 // indirect + go.etcd.io/bbolt v1.4.0-alpha.0.0.20240404170359-43604f3112c5 // indirect go.opencensus.io v0.24.0 // indirect go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.55.0 // indirect go.opentelemetry.io/otel v1.30.0 // indirect @@ -157,14 +160,14 @@ require ( go.opentelemetry.io/otel/sdk v1.30.0 // indirect go.opentelemetry.io/otel/trace v1.30.0 // indirect go.uber.org/multierr v1.11.0 // indirect - golang.org/x/crypto v0.27.0 // indirect - golang.org/x/net v0.29.0 // indirect - golang.org/x/sys v0.25.0 // indirect - golang.org/x/text v0.18.0 // indirect + golang.org/x/crypto v0.28.0 
// indirect
+	golang.org/x/net v0.30.0 // indirect
+	golang.org/x/sys v0.26.0 // indirect
+	golang.org/x/text v0.19.0 // indirect
 	golang.org/x/time v0.6.0 // indirect
 	google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect
-	google.golang.org/grpc v1.66.1 // indirect
-	google.golang.org/protobuf v1.34.2 // indirect
+	google.golang.org/grpc v1.67.1 // indirect
+	google.golang.org/protobuf v1.35.1 // indirect
 	gopkg.in/ini.v1 v1.67.0 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 	gotest.tools/v3 v3.5.1 // indirect
diff --git a/go.sum b/go.sum
index c39278497..d467450eb 100644
--- a/go.sum
+++ b/go.sum
@@ -19,8 +19,8 @@ github.com/VictoriaMetrics/fastcache v1.12.2 h1:N0y9ASrJ0F6h0QaC3o6uJb3NIZ9VKLjC
 github.com/VictoriaMetrics/fastcache v1.12.2/go.mod h1:AmC+Nzz1+3G2eCPapF6UcsnkThDcMsQicp4xDukwJYI=
 github.com/VividCortex/gohistogram v1.0.0 h1:6+hBz+qvs0JOrrNhhmR7lFxo5sINxBCGXrdtl/UvroE=
 github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
-github.com/adlio/schema v1.3.3 h1:oBJn8I02PyTB466pZO1UZEn1TV5XLlifBSyMrmHl/1I=
-github.com/adlio/schema v1.3.3/go.mod h1:1EsRssiv9/Ce2CMzq5DoL7RiMshhuigQxrR4DMV9fHg=
+github.com/adlio/schema v1.3.6 h1:k1/zc2jNfeiZBA5aFTRy37jlBIuCkXCm0XmvpzCKI9I=
+github.com/adlio/schema v1.3.6/go.mod h1:qkxwLgPBd1FgLRHYVCmQT/rrBr3JH38J9LjmVzWNudg=
 github.com/antlr4-go/antlr/v4 v4.13.1 h1:SqQKkuVZ+zWkMMNkjy5FZe5mr5WURWnlpmOuzYWrPrQ=
 github.com/antlr4-go/antlr/v4 v4.13.1/go.mod h1:GKmUxMtwp6ZgGwZSva4eWPC5mS6vUAmOABFgjdkM7Nw=
 github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
@@ -77,10 +77,10 @@ github.com/cockroachdb/redact v1.1.5 h1:u1PMllDkdFfPWaNGMyLD1+so+aq3uUItthCFqzwP
 github.com/cockroachdb/redact v1.1.5/go.mod h1:BVNblN9mBWFyMyqK1k3AAiSxhvhfK2oOZZ2lK+dpvRg=
 github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06 h1:zuQyyAKVxetITBuuhv3BI9cMrmStnpT18zmgmTxunpo=
 github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:7nc4anLGjupUW/PeY5qiNYsdNXj7zopG+eqsS7To5IQ=
-github.com/cometbft/cometbft v0.38.12 h1:OWsLZN2KcSSFe8bet9xCn07VwhBnavPea3VyPnNq1bg=
-github.com/cometbft/cometbft v0.38.12/go.mod h1:GPHp3/pehPqgX1930HmK1BpBLZPxB75v/dZg8Viwy+o=
-github.com/cometbft/cometbft-db v0.11.0 h1:M3Lscmpogx5NTbb1EGyGDaFRdsoLWrUWimFEyf7jej8=
-github.com/cometbft/cometbft-db v0.11.0/go.mod h1:GDPJAC/iFHNjmZZPN8V8C1yr/eyityhi2W1hz2MGKSc=
+github.com/cometbft/cometbft v0.38.15 h1:5veFd8k1uXM27PBg9sMO3hAfRJ3vbh4OmmLf6cVrqXg=
+github.com/cometbft/cometbft v0.38.15/go.mod h1:+wh6ap6xctVG+JOHwbl8pPKZ0GeqdPYqISu7F4b43cQ=
+github.com/cometbft/cometbft-db v0.14.1 h1:SxoamPghqICBAIcGpleHbmoPqy+crij/++eZz3DlerQ=
+github.com/cometbft/cometbft-db v0.14.1/go.mod h1:KHP1YghilyGV/xjD5DP3+2hyigWx0WTp9X+0Gnx0RxQ=
 github.com/consensys/bavard v0.1.13 h1:oLhMLOFGTLdlda/kma4VOJazblc7IM5y5QPd2A/YjhQ=
 github.com/consensys/bavard v0.1.13/go.mod h1:9ItSMtA/dXMAiL7BG6bqW2m3NdSEObYWoH223nGHukI=
 github.com/consensys/gnark-crypto v0.12.1 h1:lHH39WuuFgVHONRl3J0LRBtuYdQTumFSDtJF7HpyG8M=
@@ -117,11 +117,10 @@ github.com/decred/dcrd/crypto/blake256 v1.0.1 h1:7PltbUIQB7u/FfZ39+DGa/ShuMyJ5il
 github.com/decred/dcrd/crypto/blake256 v1.0.1/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
 github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0 h1:rpfIENRNNilwHwZeG5+P150SMrnNEcHYvcCuK6dPZSg=
 github.com/decred/dcrd/dcrec/secp256k1/v4 v4.3.0/go.mod h1:v57UDF4pDQJcEfFUCRop3lJL149eHGSe9Jvczhzjo/0=
-github.com/dgraph-io/badger/v2 v2.2007.4 h1:TRWBQg8UrlUhaFdco01nO2uXwzKS7zd+HVdwV/GHc4o=
-github.com/dgraph-io/badger/v2 v2.2007.4/go.mod h1:vSw/ax2qojzbN6eXHIx6KPKtCSHJN/Uz0X0VPruTIhk=
 github.com/dgraph-io/badger/v3 v3.2103.5 h1:ylPa6qzbjYRQMU6jokoj4wzcaweHylt//CH0AKt0akg=
 github.com/dgraph-io/badger/v3 v3.2103.5/go.mod h1:4MPiseMeDQ3FNCYwRbbcBOGJLf5jsE0PPFzRiKjtcdw=
-github.com/dgraph-io/ristretto v0.0.3-0.20200630154024-f66de99634de/go.mod h1:KPxhHT9ZxKefz+PCeOGsrHpl1qZ7i70dGTu2u+Ahh6E=
+github.com/dgraph-io/badger/v4 v4.2.0 h1:kJrlajbXXL9DFTNuhhu9yCx7JJa4qpYWxtE8BzuWsEs=
+github.com/dgraph-io/badger/v4 v4.2.0/go.mod h1:qfCqhPoWDFJRx1gp5QwwyGo8xk1lbHUxvK9nK0OGAak=
 github.com/dgraph-io/ristretto v0.1.1 h1:6CWw5tJNgpegArSHpNHJKldNeq03FQCwYvfMVWajOK8=
 github.com/dgraph-io/ristretto v0.1.1/go.mod h1:S1GPSBCYCIhmVNfcth17y2zZtQT6wzkzgwUve0VDWWA=
 github.com/dgryski/go-farm v0.0.0-20190423205320-6a90982ecee2/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
@@ -166,10 +165,12 @@ github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqG
 github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww=
 github.com/getsentry/sentry-go v0.27.0 h1:Pv98CIbtB3LkMWmXi4Joa5OOcwbmnX88sF5qbK3r3Ps=
 github.com/getsentry/sentry-go v0.27.0/go.mod h1:lc76E2QywIyW8WuBnwl8Lc4bkmQH4+w1gwTf25trprY=
+github.com/go-chi/chi/v5 v5.1.0 h1:acVI1TYaD+hhedDJ3r54HyA6sExp3HfXq7QWEEY/xMw=
+github.com/go-chi/chi/v5 v5.1.0/go.mod h1:DslCQbL2OYiznFReuXYUmQ2hGd1aDpCnlMNITLSKoi8=
 github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA=
 github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
-github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4=
-github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs=
+github.com/go-kit/kit v0.13.0 h1:OoneCcHKHQ03LfBpoQCUfCluwd2Vt3ohz+kvbJneZAU=
+github.com/go-kit/kit v0.13.0/go.mod h1:phqEHMMUbyrCFCTgH48JueqrM3md2HcAZ8N3XE4FKDg=
 github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU=
 github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
 github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
@@ -199,8 +200,8 @@ github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w
 github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 h1:au07oEsX2xN0ktxqI+Sida1w446QrXBRJ0nee3SNZlA=
 github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
-github.com/golang/glog v1.2.1 h1:OptwRhECazUx5ix5TTWC3EZhsZEHWcYWY4FQHTIubm4=
-github.com/golang/glog v1.2.1/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
+github.com/golang/glog v1.2.2 h1:1+mZ9upx1Dh6FmUTFR1naJ77miKiXgALjWOZ3NVFPmY=
+github.com/golang/glog v1.2.2/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w=
 github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
@@ -223,8 +224,8 @@ github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEW
 github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
 github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb h1:PBC98N2aIaM3XXiurYmW7fx4GZkL8feAMVq7nEjURHk=
 github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
-github.com/google/btree v1.1.2 h1:xf4v41cLI2Z6FxbKm+8Bu+m8ifhj15JuZ9sa0jZCMUU=
-github.com/google/btree v1.1.2/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
+github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
+github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
 github.com/google/flatbuffers v1.12.1/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
 github.com/google/flatbuffers v23.5.26+incompatible h1:M9dgRyhJemaM4Sw8+66GHBu8ioaQmyPLg1b8VwK5WJg=
 github.com/google/flatbuffers v23.5.26+incompatible/go.mod h1:1AeVuKshWv4vARoZatz6mlQ0JxURH0Kv5+zNeJKJCa8=
@@ -303,8 +304,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
 github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
-github.com/kwilteam/kwil-db v0.9.2 h1:2uH0oJ52DSTeph7RVpjZ0r36aFBy3xLkJfCbRgbwMd4=
-github.com/kwilteam/kwil-db v0.9.2/go.mod h1:r2hTxKXVK+P9rEYP6sDis62cONQC+UCNbpy5YTB0wmM=
+github.com/kwilteam/kwil-db v0.9.4 h1:Wqeu1k7qrqWiVDnXMx5r5D9ifA5GUj74+HFS9bFkOFQ=
+github.com/kwilteam/kwil-db v0.9.4/go.mod h1:mebW4PpIH3n0rEIyzxoubwE+DDD5TJNK0ZZHOg89JeU=
 github.com/kwilteam/kwil-db/core v0.3.1-0.20241118220427-bd35ded7db55 h1:OY95vFq6Bft6sAeBAbgNqQ/UAQ1F2Lu4Z3ByYllyQKw=
 github.com/kwilteam/kwil-db/core v0.3.1-0.20241118220427-bd35ded7db55/go.mod h1:rTXHWgWannGuOaR0vK2o7/kBXu5opLWZOqlAhLSRP1Y=
 github.com/kwilteam/kwil-db/parse v0.3.1-0.20241118220427-bd35ded7db55 h1:qCF3pS4+rrQJYhOBpI/dPjXYiDR45GyR74gBDjGOCU4=
@@ -317,8 +318,8 @@ github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7
 github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
 github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
 github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
-github.com/lib/pq v1.10.7 h1:p7ZhMD+KsSRozJr34udlUrhboJwWAgCg34+/ZZNvZZw=
-github.com/lib/pq v1.10.7/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
+github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
+github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
 github.com/linxGnu/grocksdb v1.8.14 h1:HTgyYalNwBSG/1qCQUIott44wU5b2Y9Kr3z7SK5OfGQ=
 github.com/linxGnu/grocksdb v1.8.14/go.mod h1:QYiYypR2d4v63Wj1adOOfzglnoII0gLj3PNh4fZkcFA=
 github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
@@ -333,8 +334,8 @@ github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D
 github.com/mattn/go-runewidth v0.0.10/go.mod h1:RAqKPSqVFrSLVXbA8x7dzmKdmGzieGRCM46jaSJTDAk=
 github.com/mattn/go-runewidth v0.0.14 h1:+xnbZSEeDbOIg5/mE6JF0w6n9duR1l3/WmbinWVwUuU=
 github.com/mattn/go-runewidth v0.0.14/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
-github.com/minio/highwayhash v1.0.2 h1:Aak5U0nElisjDCfPSG79Tgzkn2gl66NxOMspRrKnA/g=
-github.com/minio/highwayhash v1.0.2/go.mod h1:BQskDq+xkJ12lmlUUi7U0M5Swg3EWR+dLTk+kldvVxY=
+github.com/minio/highwayhash v1.0.3 h1:kbnuUMoHYyVl7szWjSxJnxw11k2U709jqFPPmIUyD6Q=
+github.com/minio/highwayhash v1.0.3/go.mod h1:GGYsuwP/fPD6Y9hMiXuapVvlIUEhFhMTh0rxU3ik1LQ=
 github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
 github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
 github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
@@ -379,27 +380,25 @@ github.com/ory/dockertest v3.3.5+incompatible/go.mod h1:1vX4m9wsvi00u5bseYwXaSnh
 github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
 github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
 github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
-github.com/petermattis/goid v0.0.0-20180202154549-b0b1615b78e5/go.mod h1:jvVRKCrJTQWu0XVbaOlby/2lO20uSCHEMzzplHXte1o=
-github.com/petermattis/goid v0.0.0-20240503122002-4b96552b8156 h1:UOk0WKXxKXmHSlIkwQNhT5AWlMtkijU5pfj8bCOI9vQ=
-github.com/petermattis/goid v0.0.0-20240503122002-4b96552b8156/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
+github.com/petermattis/goid v0.0.0-20240813172612-4fcff4a6cae7 h1:Dx7Ovyv/SFnMFw3fD4oEoeorXc6saIiQ23LrGLth0Gw=
+github.com/petermattis/goid v0.0.0-20240813172612-4fcff4a6cae7/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
 github.com/pganalyze/pg_query_go/v5 v5.1.0 h1:MlxQqHZnvA3cbRQYyIrjxEjzo560P6MyTgtlaf3pmXg=
 github.com/pganalyze/pg_query_go/v5 v5.1.0/go.mod h1:FsglvxidZsVN+Ltw3Ai6nTgPVcK2BPukH3jCDEqc1Ug=
 github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
 github.com/pingcap/errors v0.11.4/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
 github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
-github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prometheus/client_golang v1.20.1 h1:IMJXHOD6eARkQpxo8KkhgEVFlBNm+nkrFUyGlIu7Na8=
-github.com/prometheus/client_golang v1.20.1/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
+github.com/prometheus/client_golang v1.20.5 h1:cxppBPuYhUnsO6yo/aoRol4L7q7UFfdm+bR9r+8l63Y=
+github.com/prometheus/client_golang v1.20.5/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
 github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
 github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
 github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
-github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc=
-github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8=
+github.com/prometheus/common v0.60.1 h1:FUas6GcOw66yB/73KC+BOZoFJmbo/1pojoILArPAaSc=
+github.com/prometheus/common v0.60.1/go.mod h1:h0LYf1R1deLSKtD4Vdg8gy4RuOvENW2J/h19V5NADQw=
 github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
 github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
 github.com/rcrowley/go-metrics v0.0.0-20201227073835-cf1acfcdf475 h1:N/ElC8H3+5XpJzTSTfLsJV/mx9Q9g7kxmchpfZyxgzM=
@@ -423,8 +422,8 @@ github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6g
 github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
 github.com/samber/lo v1.47.0 h1:z7RynLwP5nbyRscyvcD043DWYoOcYRv3mV8lBeqOCLc=
 github.com/samber/lo v1.47.0/go.mod h1:RmDH9Ct32Qy3gduHQuKJ3gW1fMHAnE/fAzQuf6He5cU=
-github.com/sasha-s/go-deadlock v0.3.1 h1:sqv7fDNShgjcaxkO0JNcOAlr8B9+cV5Ey/OB71efZx0=
-github.com/sasha-s/go-deadlock v0.3.1/go.mod h1:F73l+cr82YSh10GxyRI6qZiCgK64VaZjwesgfQ1/iLM=
+github.com/sasha-s/go-deadlock v0.3.5 h1:tNCOEEDG6tBqrNDOX35j/7hL5FcFViG6awUGROb2NsU=
+github.com/sasha-s/go-deadlock v0.3.5/go.mod h1:bugP6EGbdGYObIlx7pUZtWqlvo8k9H6vCBBsiChJQ5U=
 github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible h1:Bn1aCHHRnjv4Bl16T8rcaFjYSrGrIZvpiGO6P3Q4GpU=
 github.com/shirou/gopsutil v3.21.4-0.20210419000835-c7a38de76ee5+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
 github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
@@ -479,8 +478,6 @@ github.com/tklauser/numcpus v0.8.0 h1:Mx4Wwe/FjZLeQsK/6kt2EOepwwSl7SmJrK5bV/dXYg
 github.com/tklauser/numcpus v0.8.0/go.mod h1:ZJZlAY+dmR4eut8epnzf0u/VwodKmryxR8txiloSqBE=
 github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346 h1:TvtdmeYsYEij78hS4oxnwikoiLdIrgav3BA+CbhaDAI=
 github.com/tonistiigi/go-rosetta v0.0.0-20220804170347-3f4430f2d346/go.mod h1:xKQhd7snlzKFuUi1taTGWjpRE8iFTA06DeacYi3CVFQ=
-github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282 h1:LTzgVlKoYE+laTizgju24seT2fnIIcNqC+fN2SYYsRM=
-github.com/trufnetwork/sdk-go v0.1.1-0.20250201205045-bfc7285a8282/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE=
 github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337 h1:NlaXTkXnUiKCektYmF0WodsvYN/fCHvB4fuu0DDkpmM=
 github.com/trufnetwork/sdk-go v0.1.1-0.20250202003909-f8b63d22a337/go.mod h1:XMszfniaEGqyyj+EuFy73S3YUcxLzk0WhQpC3AUwFnE=
 github.com/tyler-smith/go-bip39 v1.1.0 h1:5eUemwrMargf3BSLRRCalXT93Ns6pQJIjYQN2nyfOP8=
@@ -493,8 +490,8 @@ github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 h1:bAn7/zixMGCfxrRT
 github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
 github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
-go.etcd.io/bbolt v1.3.10 h1:+BqfJTcCzTItrop8mq/lbzL8wSGtj94UO/3U31shqG0=
-go.etcd.io/bbolt v1.3.10/go.mod h1:bK3UQLPJZly7IlNmV7uVHJDxfe5aK9Ll93e/74Y9oEQ=
+go.etcd.io/bbolt v1.4.0-alpha.0.0.20240404170359-43604f3112c5 h1:qxen9oVGzDdIRP6ejyAJc760RwW4SnVDiTYTzwnXuxo=
+go.etcd.io/bbolt v1.4.0-alpha.0.0.20240404170359-43604f3112c5/go.mod h1:eW0HG9/oHQhvRCvb1/pIXW4cOvtDqeQK+XSi3TnwaXY=
 go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
 go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
 go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
@@ -524,8 +521,8 @@ golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnf
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A=
-golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70=
+golang.org/x/crypto v0.28.0 h1:GBDwsMXVQi34v5CCYUm2jkJvu4cbtru2U4TN2PSyQnw=
+golang.org/x/crypto v0.28.0/go.mod h1:rmgy+3RHxRZMyY0jjAJShp2zgEdOqj2AO7U0pYmeQ7U=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
 golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
@@ -546,8 +543,8 @@ golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/
 golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
 golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
 golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
-golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo=
-golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0=
+golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
+golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -561,11 +558,9 @@ golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5h
 golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190130150945-aca44879d564/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -577,13 +572,14 @@ golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20221010170243-090e33056c14/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34=
-golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo=
+golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.18.0 h1:XvMDiNzPAl0jr17s6W9lcaIhGUfUORdGCNsuLmPG224=
-golang.org/x/text v0.18.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
+golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM=
+golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
 golang.org/x/time v0.6.0 h1:eTDhh4ZXt5Qf0augr54TN6suAUudPcawVZeIAPU7D4U=
 golang.org/x/time v0.6.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -615,8 +611,8 @@ google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyac
 google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
 google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
 google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
-google.golang.org/grpc v1.66.1 h1:hO5qAXR19+/Z44hmvIM4dQFMSYX9XcWsByfoxutBpAM=
-google.golang.org/grpc v1.66.1/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
+google.golang.org/grpc v1.67.1 h1:zWnc1Vrcno+lHZCOofnIMvycFcc0QRGIzm9dhnDX68E=
+google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA=
 google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
 google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
 google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -626,8 +622,8 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
 google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
 google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
 google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
-google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
-google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
+google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
+google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=

From c6218fcb75e3ce5482bfa813216e6198a221a2fc Mon Sep 17 00:00:00 2001
From: Raffael Campos
Date: Mon, 24 Feb 2025 23:48:43 -0300
Subject: [PATCH 14/14] chore: update GitHub Actions release workflow permissions (#793)

Add explicit permissions for the GITHUB_TOKEN to enable proper release asset
uploads and improve workflow security
---
 .github/workflows/release.yaml | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/.github/workflows/release.yaml b/.github/workflows/release.yaml
index 5c36bf4e6..0a8bc2b98 100644
--- a/.github/workflows/release.yaml
+++ b/.github/workflows/release.yaml
@@ -5,6 +5,13 @@ on:
   release:
     types: [published, edited]
 
+# Add permissions for the GITHUB_TOKEN
+permissions:
+  contents: write
+  packages: read
+  # This is required for creating and modifying releases
+  id-token: write
+
 jobs:
   build-release:
     name: Build & release
@@ -60,4 +67,5 @@ jobs:
             ./.build/tsn_${{ env.VERSION }}_darwin_amd64.tar.gz
             ./.build/tsn_${{ env.VERSION }}_darwin_arm64.tar.gz
             ./.build/tsn_${{ env.VERSION }}_linux_amd64.tar.gz
-            ./.build/tsn_${{ env.VERSION }}_linux_arm64.tar.gz
\ No newline at end of file
+            ./.build/tsn_${{ env.VERSION }}_linux_arm64.tar.gz
+          token: ${{ secrets.GITHUB_TOKEN }}
\ No newline at end of file
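
The `permissions:` block in PATCH 14 grants the workflow's GITHUB_TOKEN the
`contents: write` scope it needs to attach assets to a release. For context, a
minimal workflow using these permissions might look roughly like the sketch
below. The upload action and its inputs here are assumptions for illustration
(the workflow's actual `uses:` line falls outside the hunk shown); only the
`permissions:` block and `token:` input mirror the patch itself.

    name: Release
    on:
      release:
        types: [published, edited]

    permissions:
      contents: write   # lets GITHUB_TOKEN upload release assets
      packages: read
      id-token: write

    jobs:
      build-release:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Hypothetical upload step: softprops/action-gh-release is one widely
          # used action whose `files:` and `token:` inputs match the shape of
          # this diff, but the repository may use a different action.
          - uses: softprops/action-gh-release@v2
            with:
              files: ./.build/*.tar.gz
              token: ${{ secrets.GITHUB_TOKEN }}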