diff --git a/docs/qa/README.md b/docs/qa/README.md
index 745c1d4a57c..2260562bed9 100644
--- a/docs/qa/README.md
+++ b/docs/qa/README.md
@@ -19,6 +19,6 @@ used to decide if a release is passing the Quality Assurance process.
The results obtained in each release are stored in their own directory.
The following releases have undergone the Quality Assurance process:
-* [TM v0.34.x](./tm_v034/), which was tested just before releasing Tendermint Core v0.34.22
-* [v0.34.x](./v034/), which was tested just before releasing v0.34.27
+* [TM v0.34.x](./v034/TMCore.md), which was tested just before releasing Tendermint Core v0.34.22
+* [v0.34.x](./v034/CometBFT.md), which was tested just before releasing v0.34.27
* [v0.37.x](./v037/), with TM v.34.x acting as a baseline
diff --git a/docs/qa/v034/CometBFT.md b/docs/qa/v034/CometBFT.md
new file mode 100644
index 00000000000..d1dbf09480c
--- /dev/null
+++ b/docs/qa/v034/CometBFT.md
@@ -0,0 +1,325 @@
+---
+order: 1
+parent:
+ title: CometBFT Quality Assurance Results for v0.34.x
+ description: This is a report on the results obtained when running v0.34.x on testnets
+ order: 2
+---
+
+# v0.34.x - From Tendermint Core to CometBFT
+
+This section reports on the QA process we followed before releasing the first `v0.34.x` version
+from our CometBFT repository.
+
+The changes with respect to the last version of `v0.34.x`
+(namely `v0.34.26`, released from the Informal Systems' Tendermint Core fork)
+are minimal, and focus on rebranding our fork of Tendermint Core to CometBFT at places
+where there is no substantial risk of breaking compatibility
+with earlier Tendermint Core versions of `v0.34.x`.
+
+Indeed, CometBFT versions of `v0.34.x` (`v0.34.27` and subsequent) should fulfill
+the following compatibility-related requirements.
+
+* Operators can easily upgrade a `v0.34.x` version of Tendermint Core to CometBFT.
+* Upgrades from Tendermint Core to CometBFT can be uncoordinated for versions of the `v0.34.x` branch.
+* Nodes running CometBFT must be interoperable with those running Tendermint Core in the same chain,
+ as long as all are running a `v0.34.x` version.
+
+These QA tests focus on the third bullet, whereas the first two bullets are tested using our _e2e tests_.
+
+It would be prohibitively time-consuming to test mixed networks of all combinations of existing `v0.34.x`
+versions together with the CometBFT release candidate under test.
+Therefore, our testing focuses on the last Tendermint Core version (`v0.34.26`) and the CometBFT release
+candidate under test.
+
+We only run the _200 node test_, and not the _rotating node test_: the effort of running the latter
+is not justified given the amount and nature of the changes under test, compared to the
+full QA cycle previously run on `v0.34.x`.
+Since the changes to the system's logic are minimal, we are interested in these performance requirements:
+
+* The CometBFT release candidate under test performs similarly to Tendermint Core (i.e., the baseline)
+ * when used at scale (i.e., in a large network of CometBFT nodes)
+ * when used at scale in a mixed network (i.e., some nodes are running CometBFT
+ and others are running an older Tendermint Core version)
+
+Therefore we carry out a complete run of the _200-node test_ on the following networks:
+
+* A homogeneous 200-node testnet, where all nodes are running the CometBFT release candidate under test.
+* A mixed network where 1/3 of the nodes are running the CometBFT release candidate under test,
+ and the rest are running Tendermint Core `v0.34.26`.
+* A mixed network where 2/3 of the nodes are running the CometBFT release candidate under test,
+ and the rest are running Tendermint Core `v0.34.26`.
+
+## Saturation Point
+
+As the CometBFT release candidate under test has only minimal changes
+with respect to Tendermint Core `v0.34.26`, other than the rebranding,
+we can confidently reuse the results from the `v0.34.x` baseline test regarding
+the [saturation point](./TMCore.md#finding-the-saturation-point).
+
+Therefore, we will simply use a load of `r=200,c=2`
+(see the explanation [here](./TMCore.md#finding-the-saturation-point)).
+
+## Examining latencies
+
+In this section and the following ones, we provide the results of the _200 node test_.
+Each section is divided into three parts,
+reporting on the homogeneous network (all CometBFT nodes),
+mixed network with 1/3 of Tendermint Core nodes,
+and mixed network with 2/3 of Tendermint Core nodes.
+
+On each of the three networks, the test consists of 4 experiments, with the goal of
+ensuring that the data obtained is consistent across experiments.
+For each network, we pick only one representative run to present and discuss the results.
+
+### CometBFT Homogeneous network
+
+The figure below plots the four experiments carried out with this network.
+We can see that the latencies follow a comparable pattern across experiments.
+
+
+
+### 1/3 Tendermint Core - 2/3 CometBFT
+
+
+
+We have picked the experiment whose identifier starts with `fc5edd13`.
+
+### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+We have picked the experiment whose identifier starts with `47595c66`.
+
+## Prometheus Metrics
+
+This section reports on the key Prometheus metrics extracted from the experiments.
+
+* For the baseline, we use the `v0.34.x` results obtained in October 2022
+  and reported [here](./TMCore.md).
+* For the CometBFT homogeneous network, we choose to present the
+  experiment whose UUID starts with `be8c` (see the latencies section above),
+  as its latency data is representative
+  and it contains the maximum latency of all runs (worst-case scenario).
+* For the mixed network with 1/3 of nodes running Tendermint Core `v0.34.26`
+ and 2/3 running CometBFT.
+ TODO
+* For the mixed network with 2/3 of nodes running Tendermint Core `v0.34.26`
+ and 1/3 running CometBFT.
+ TODO
+
+### Mempool Size
+
+For reference, the plots below correspond to the baseline results.
+The first shows the evolution over time of the cumulative number of transactions
+in all full nodes' mempools.
+
+
+
+The second one shows the evolution of the average over all full nodes, which oscillates between 1500 and 2000
+outstanding transactions.
+
+
+
+#### CometBFT Homogeneous network
+
+The mempool size was as stable across all full nodes as in the baseline.
+These are the corresponding plots for the homogeneous network test.
+
+
+
+
+
+#### 1/3 Tendermint Core - 2/3 CometBFT
+
+
+
+
+
+#### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+
+
+### Consensus Rounds per Height
+
+For reference, this is the baseline plot. We can see that round 1 is reached with a certain frequency.
+
+
+
+#### CometBFT Homogeneous network
+
+Most heights finished in round 0, some nodes needed to advance to round 1 at various moments,
+and a few nodes even needed to advance to round 2 at one point.
+This coincides with the time at which we observed the biggest peak in mempool size
+on the corresponding plot, shown above.
+
+
+
+#### 1/3 Tendermint Core - 2/3 CometBFT
+
+
+
+#### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+### Peers
+
+The plot below corresponds to the baseline results, for reference.
+It shows the stability of peers throughout the experiment.
+Seed nodes typically have a higher number of peers.
+The fact that non-seed nodes reach more than 50 peers is due to
+[#9548](https://github.com/tendermint/tendermint/issues/9548).
+
+The thick red dashed line represents the moving average over a sliding window of 20 seconds.
+
+
+
+#### CometBFT Homogeneous network
+
+The plot below shows the result for the homogeneous network.
+It is very similar to the baseline, the only difference being that
+the seed nodes seem to lose peers in the middle of the experiment.
+However, this cannot be attributed to the differences in the code,
+which are mainly rebranding changes.
+
+
+
+#### 1/3 Tendermint Core - 2/3 CometBFT
+
+
+
+#### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+### Blocks Produced per Minute, Transactions Processed per Minute
+
+The following plot shows the rate of block production throughout the experiment,
+computed over a sliding window of 20 seconds.
+
+
+
+The next plot shows the rate of transactions delivered throughout the experiment,
+computed over a sliding window of 20 seconds.
+
+
+
+Both plots correspond to the baseline results.
+The thick red dashed line represents the moving average over a sliding window of 20 seconds.
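+
+As an illustration of how these rate plots are obtained, below is a minimal pandas sketch (not the
+actual tooling) that derives a blocks-per-minute rate, and its moving average, from a hypothetical
+per-second series of cumulative block heights; the real plots are produced by `prometheus_plotter.py`
+from the PromQL expression `rate(cometbft_consensus_height[20s])*60`.
+
+```python
+# Minimal sketch: derive blocks/min from a cumulative height series sampled
+# once per second, then smooth it with a 20-sample moving average.
+import pandas as pd
+
+# Hypothetical per-second samples of a node's block height (~20 blocks/min).
+heights = pd.Series([h // 3 for h in range(180)])
+
+# Height increase over the last 20 samples (~20 s), scaled to a per-minute rate.
+blocks_per_min = heights.diff(20) / 20 * 60
+
+# Moving average over a sliding window of 20 seconds (akin to the dashed line in the plots).
+smoothed = blocks_per_min.rolling(window=20).mean()
+print(smoothed.dropna().tail())
+```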
+
+#### CometBFT Homogeneous network
+
+The following plot shows that the block production rate oscillates around 20 blocks/minute.
+
+
+
+The next plot shows that the transaction rate stays at around 20000 transactions per minute.
+
+
+
+The thick red dashed line represents the moving average over a sliding window of 20 seconds.
+
+#### 1/3 Tendermint Core - 2/3 CometBFT
+
+Height rate
+
+
+
+Transaction rate
+
+
+
+#### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+In two minutes, the height goes from 32 to 90, which gives an average of 29 blocks per minute.
+
+
+
+In 1 minute and 30 seconds, the system processes 35600 transactions, which amounts to roughly 23000 transactions per minute.
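+
+For clarity, here is a quick sketch of the arithmetic behind these two figures
+(the height and transaction counts are read off the plots above):
+
+```python
+# Arithmetic behind the reported rates; inputs are read off the plots above.
+height_increase = 90 - 32       # blocks produced in the observation window
+print(height_increase / 2.0)    # 2-minute window -> 29.0 blocks per minute
+
+txs_processed = 35600           # transactions processed in the window
+print(txs_processed / 1.5)      # 1.5-minute window -> ~23733, i.e. roughly 23000+ txs per minute
+```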
+
+### Memory Resident Set Size
+
+Baseline plot for Resident Set Size (RSS) of all monitored processes, for reference.
+
+
+
+And this is the baseline average plot.
+
+
+
+#### CometBFT Homogeneous network
+
+This is the plot for the homogeneous network, which is slightly more stable than the baseline over
+the time of the experiment.
+
+
+
+And this is the average plot. It oscillates around 560 MiB, which is noticeably lower than the baseline.
+
+
+
+#### 1/3 Tendermint Core - 2/3 CometBFT
+
+
+
+Here is the average plot.
+
+
+
+
+#### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+
+
+### CPU utilization
+
+For reference, this is the baseline `load1` plot (as typically shown in the first line of the Unix `top` command).
+
+
+
+This is the baseline average plot.
+
+
+
+#### CometBFT Homogeneous network
+
+
+
+Similarly to the baseline, the load stays below 5 in most cases.
+
+This is the average plot.
+
+
+
+#### 1/3 Tendermint Core - 2/3 CometBFT
+
+Total
+
+
+
+Average
+
+
+
+#### 2/3 Tendermint Core - 1/3 CometBFT
+
+
+
+Average
+
+
+
+## Test Results
+
+| Scenario | Date | Version | Result |
+|--|--|--|--|
+|CometBFT Homogeneous network | 2023-02-08 | 3b783434f26b0e87994e6a77c5411927aad9ce3f | Pass |
+|1/3 Tendermint Core <br> 2/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f <br> Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass |
+|2/3 Tendermint Core <br> 1/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f <br> Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass |
diff --git a/docs/qa/v034/README.md b/docs/qa/v034/TMCore.md
similarity index 51%
rename from docs/qa/v034/README.md
rename to docs/qa/v034/TMCore.md
index 77d680d3a86..6b898638984 100644
--- a/docs/qa/v034/README.md
+++ b/docs/qa/v034/TMCore.md
@@ -274,307 +274,4 @@ transactions, via RPC, from the load runner process.
Date: 2022-10-10
-Version: a28c987f5a604ff66b515dd415270063e6fb069d
-
-
-
-# THIS GOES TO A DIFFERENT FILE
-____
-
-# v0.34.x - From Tendermint Core to CometBFT
-
-This section reports on the QA process we followed before releasing the first `v0.34.x` version
-from our CometBFT repository.
-
-The changes with respect to the last version of `v0.34.x`
-(namely `v0.34.26`, released from the Informal Systems' Tendermint Core fork)
-are minimal, and focus on rebranding our fork of Tendermint Core to CometBFT at places
-where there is no substantial risk of breaking compatibility
-with earlier Tendermint Core versions of `v0.34.x`.
-
-Indeed, CometBFT versions of `v0.34.x` (`v0.34.27` and subsequent) should fulfill
-the following compatibility-related requirements.
-
-* Operators can easily upgrade a `v0.34.x` version of Tendermint Core to CometBFT.
-* Upgrades from Tendermint Core to CometBFT can be uncoordinated for versions of the `v0.34.x` branch.
-* Nodes running CometBFT must be interoperable with those running Tendermint Core in the same chain,
- as long as all are running a `v0.34.x` version.
-
-These QA tests focus on the third bullet, whereas the first two bullets are tested using our _e2e tests_.
-
-It would be prohibitively time consuming to test mixed networks of all combinations of existing `v0.34.x`
-versions, combined with the CometBFT release candidate under test.
-Therefore our testing focuses on the last Tendermint Core version (`v0.34.26`) and the CometBFT release
-candidate under test.
-
-We only run the _200 node test_, and not the _rotating node test_.
-Since the changes to the system's logic are minimal, we are interested in these performance requirements:
-
-* The CometBFT release candidate under test performs similarly to Tendermint Core
- * when used at scale (i.e., in a large network of CometBFT nodes)
- * when used at scale in a mixed network (i.e., some nodes are running CometBFT
- and others are running an older Tendermint Core version)
-
-Therefore we carry out a complete run of the _200-node test_ on the following networks:
-
-* A homogeneous 200-node testnet, where all nodes are running the CometBFT release candidate under test.
-* A mixed network where 1/3 of the nodes are running the CometBFT release candidate under test,
- and the rest are running Tendermint Core `v0.34.26`.
-* A mixed network where 2/3 of the nodes are running the CometBFT release candidate under test,
- and the rest are running Tendermint Core `v0.34.26`.
-
-## Saturation Point
-
-As the CometBFT release candidate under test has minimal changes
-with respect to Tendermint Core `v0.34.26`, other than the rebranding changes,
-we can confidently reuse the results from the `v0.34.x` baseline test regarding
-the [saturation point](#finding-the-saturation-point).
-
-Therefore, we will simply use a load of `r=200,c=2`
-(see the explanation [here](#finding-the-saturation-point)).
-
-## Examining latencies
-
-In this section and the remaining, we provide the results of the _200 node test_.
-Each section is divided into three parts,
-reporting on the homogeneous network (all CometBFT nodes),
-mixed network with 1/3 of Tendermint Core nodes,
-and mixed network with 2/3 of Tendermint Core nodes.
-
-On each of the three networks, the test consists of 4 or 5 runs/experiments, with the goal
-ensuring the data obtained is consistent.
-On each of the networks, we pick only one representative run to present and discuss the results.
-
-### CometBFT Homogeneous network
-
-
-
-TODO: Explain
-
-
-### 1/3 Tendermint Core - 2/3 CometBFT
-
-
-
-### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-{width=250}
-
-## Prometheus Metrics
-
-This section reports on the key prometheus metrics extracted from the experiments.
-
-* For the CometBFT homogeneous network, we choose to present the third run
- (see the latencies section above), as its latency date is representative, and
- it contains the maximum latency of all runs (worst case scenario).
-* For the mixed network with 1/3 of nodes running Tendermint Core `v0.34.26`
- and 2/3 running CometBFT.
- TODO
-* For the mixed network with 2/3 of nodes running Tendermint Core `v0.34.26`
- and 1/3 running CometBFT.
- TODO
-
-### Mempool Size
-
-For reference, the plots below correspond to the baseline results.
-The first shows the evolution over time of the cumulative number of transactions
-inside all full nodes' mempools at a given time.
-
-
-
-The second one shows evolution of the average over all full nodes, which oscillates between 1500 and 2000
-outstanding transactions.
-
-
-
-#### CometBFT Homogeneous network
-
-The mempool size was as stable at all full nodes as in the baseline.
-These are the corresponding plots for the homogeneous network test.
-
-
-
-
-
-#### 1/3 Tendermint Core - 2/3 CometBFT
-
-
-
-
-
-#### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-
-
-### Peers
-
-The plot below corresponds to the baseline results, for reference.
-It shows the stability of peers throughout the experiment.
-Seed nodes typically have a higher number of peers.
-The fact that non-seed nodes reach more than 50 peers is due to
-[#9548](https://github.com/tendermint/tendermint/issues/9548).
-
-
-
-#### CometBFT Homogeneous network
-
-The plot below shows the result for the homogeneous network.
-It is very similar to the baseline. The only difference being that
-the seed nodes seem to loose peers in the middle of the experiment.
-However this cannot be attributed to the differences in the code,
-which are mainly rebranding.
-
-
-
-#### 1/3 Tendermint Core - 2/3 CometBFT
-
-
-
-#### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-### Consensus Rounds per Height
-
-TODO Move this under mempool as we refer to it
-
-
-For reference, this is the baseline plot.
-
-
-
-
-#### CometBFT Homogeneous network
-
-Most heights took just one round, some nodes needed to advance to round 1 at various moments,
-and a few nodes even needed to advance to the third round at one point.
-This coincides with the time at which we observed the biggest peak in mempool size
-on the corresponding plot, shown above.
-
-
-
-#### 1/3 Tendermint Core - 2/3 CometBFT
-
-
-
-#### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-### Blocks Produced per Minute, Transactions Processed per Minute
-
-The blocks produced per minute are the slope of this plot, which corresponds to the baseline results.
-
-
-
-The transactions processed per minute are the slope of this plot,
-which, again, corresponds to the baseline results.
-
-
-
-#### CometBFT Homogeneous network
-
-
-
-Over a period of 2 minutes and 4 seconds, the height goes from 251 to 295.
-This results in an average of 21.3 blocks produced per minute.
-
-
-
-Over a period of 1 minute and 45 seconds (adjusted time window),
-the total goes from 70201 to 104537 transactions,
-resulting in 19620 transactions per minute.
-This is similar to the baseline.
-
-#### 1/3 Tendermint Core - 2/3 CometBFT
-
-Height rate
-
-
-
-Transaction rate
-
-
-
-#### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-In two minutes the height goes from 32 to 90 which gives an average of 29 blocks per minutes.
-
-
-
-In 1 minutes and 30 seconds the system processes 35600 transactions which amounts to 23000 transactions per minute.
-
-### Memory Resident Set Size
-
-Reference plot for Resident Set Size (RSS) of all monitored processes.
-
-
-
-And this is the baseline average plot.
-
-
-
-#### CometBFT Homogeneous network
-
-This is the plot for the homogeneous network, which slightly more stable than the baseline over
-the time of the experiment.
-
-
-
-And this is the average plot. It oscillates around 560 MiB, which is noticeably lower than the baseline.
-
-
-
-#### 1/3 Tendermint Core - 2/3 CometBFT
-
-
-Here
-
-
-
-#### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-
-
-### CPU utilization
-
-This is the baseline `load1` plot, for reference.
-
-
-
-#### CometBFT Homogeneous network
-
-
-
-Similarly to the baseline, it is contained in most cases below 5.
-
-#### 1/3 Tendermint Core - 2/3 CometBFT
-
-Total
-
-
-Average
-
-
-#### 2/3 Tendermint Core - 1/3 CometBFT
-
-
-
-Average
-
-
-## Test Results
-
-| Scenario | Date | Version | Result |
-|--|--|--|--|
-|CometBFT Homogeneous network | 2023-02-08 | 3b783434f26b0e87994e6a77c5411927aad9ce3f | Pass
-|1/3 Tendermint Core
2/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass|
-|2/3 Tendermint Core
1/3 CometBFT | 2023-02-08 | CometBFT: 3b783434f26b0e87994e6a77c5411927aad9ce3f
Tendermint Core: 66c2cb63416e66bff08e11f9088e21a0ed142790 | Pass |
\ No newline at end of file
+Version: a28c987f5a604ff66b515dd415270063e6fb069d
\ No newline at end of file
diff --git a/docs/qa/v034/img/baseline/avg_cpu.png b/docs/qa/v034/img/baseline/avg_cpu.png
new file mode 100644
index 00000000000..622456df644
Binary files /dev/null and b/docs/qa/v034/img/baseline/avg_cpu.png differ
diff --git a/docs/qa/v034/img/baseline/avg_memory.png b/docs/qa/v034/img/baseline/avg_memory.png
new file mode 100644
index 00000000000..55f213f5e15
Binary files /dev/null and b/docs/qa/v034/img/baseline/avg_memory.png differ
diff --git a/docs/qa/v034/img/baseline/avg_mempool_size.png b/docs/qa/v034/img/baseline/avg_mempool_size.png
new file mode 100644
index 00000000000..ec740729507
Binary files /dev/null and b/docs/qa/v034/img/baseline/avg_mempool_size.png differ
diff --git a/docs/qa/v034/img/baseline/block_rate_regular.png b/docs/qa/v034/img/baseline/block_rate_regular.png
new file mode 100644
index 00000000000..bdc7aa28d7b
Binary files /dev/null and b/docs/qa/v034/img/baseline/block_rate_regular.png differ
diff --git a/docs/qa/v034/img/baseline/cpu.png b/docs/qa/v034/img/baseline/cpu.png
new file mode 100644
index 00000000000..ac4fc2695f5
Binary files /dev/null and b/docs/qa/v034/img/baseline/cpu.png differ
diff --git a/docs/qa/v034/img/baseline/memory.png b/docs/qa/v034/img/baseline/memory.png
new file mode 100644
index 00000000000..17336bd1b96
Binary files /dev/null and b/docs/qa/v034/img/baseline/memory.png differ
diff --git a/docs/qa/v034/img/baseline/mempool_size.png b/docs/qa/v034/img/baseline/mempool_size.png
new file mode 100644
index 00000000000..fafba68c1a8
Binary files /dev/null and b/docs/qa/v034/img/baseline/mempool_size.png differ
diff --git a/docs/qa/v034/img/baseline/peers.png b/docs/qa/v034/img/baseline/peers.png
new file mode 100644
index 00000000000..05a288a3562
Binary files /dev/null and b/docs/qa/v034/img/baseline/peers.png differ
diff --git a/docs/qa/v034/img/baseline/rounds.png b/docs/qa/v034/img/baseline/rounds.png
new file mode 100644
index 00000000000..79f3348a256
Binary files /dev/null and b/docs/qa/v034/img/baseline/rounds.png differ
diff --git a/docs/qa/v034/img/baseline/total_txs_rate_regular.png b/docs/qa/v034/img/baseline/total_txs_rate_regular.png
new file mode 100644
index 00000000000..d80bef12c0b
Binary files /dev/null and b/docs/qa/v034/img/baseline/total_txs_rate_regular.png differ
diff --git a/docs/qa/v034/img/cmt2tm1/latencies.png b/docs/qa/v034/img/cmt2tm1/latencies.png
index 494ee38eaa6..4e6f73d3552 100644
Binary files a/docs/qa/v034/img/cmt2tm1/latencies.png and b/docs/qa/v034/img/cmt2tm1/latencies.png differ
diff --git a/docs/qa/v034/img/homogeneous/avg_cpu.png b/docs/qa/v034/img/homogeneous/avg_cpu.png
new file mode 100644
index 00000000000..7df188951f6
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/avg_cpu.png differ
diff --git a/docs/qa/v034/img/homogeneous/avg_memory.png b/docs/qa/v034/img/homogeneous/avg_memory.png
new file mode 100644
index 00000000000..e800cbce229
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/avg_memory.png differ
diff --git a/docs/qa/v034/img/homogeneous/avg_mempool_size.png b/docs/qa/v034/img/homogeneous/avg_mempool_size.png
new file mode 100644
index 00000000000..beb323e646c
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/avg_mempool_size.png differ
diff --git a/docs/qa/v034/img/homogeneous/block_rate_regular.png b/docs/qa/v034/img/homogeneous/block_rate_regular.png
new file mode 100644
index 00000000000..2a71ab70df9
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/block_rate_regular.png differ
diff --git a/docs/qa/v034/img/homogeneous/cpu.png b/docs/qa/v034/img/homogeneous/cpu.png
new file mode 100644
index 00000000000..8e8c9227af5
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/cpu.png differ
diff --git a/docs/qa/v034/img/homogeneous/latencies.png b/docs/qa/v034/img/homogeneous/latencies.png
new file mode 100644
index 00000000000..d8768f6a5de
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/latencies.png differ
diff --git a/docs/qa/v034/img/homogeneous/memory.png b/docs/qa/v034/img/homogeneous/memory.png
new file mode 100644
index 00000000000..190c622a346
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/memory.png differ
diff --git a/docs/qa/v034/img/homogeneous/mempool_size.png b/docs/qa/v034/img/homogeneous/mempool_size.png
new file mode 100644
index 00000000000..ec1c79a242f
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/mempool_size.png differ
diff --git a/docs/qa/v034/img/homogeneous/peers.png b/docs/qa/v034/img/homogeneous/peers.png
new file mode 100644
index 00000000000..3c8b0a2e0df
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/peers.png differ
diff --git a/docs/qa/v034/img/homogeneous/rounds.png b/docs/qa/v034/img/homogeneous/rounds.png
new file mode 100644
index 00000000000..660f31d9394
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/rounds.png differ
diff --git a/docs/qa/v034/img/homogeneous/total_txs_rate_regular.png b/docs/qa/v034/img/homogeneous/total_txs_rate_regular.png
new file mode 100644
index 00000000000..a9025b6665d
Binary files /dev/null and b/docs/qa/v034/img/homogeneous/total_txs_rate_regular.png differ
diff --git a/docs/qa/v034/img/v034_200node_homog_latencies.png b/docs/qa/v034/img/v034_200node_homog_latencies.png
deleted file mode 100644
index b4c69ed8224..00000000000
Binary files a/docs/qa/v034/img/v034_200node_homog_latencies.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_heights.png b/docs/qa/v034/img/v034_homog_heights.png
deleted file mode 100644
index 716c5e97670..00000000000
Binary files a/docs/qa/v034/img/v034_homog_heights.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_load1.png b/docs/qa/v034/img/v034_homog_load1.png
deleted file mode 100644
index 5b9a0fa0def..00000000000
Binary files a/docs/qa/v034/img/v034_homog_load1.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_mempool_size.png b/docs/qa/v034/img/v034_homog_mempool_size.png
deleted file mode 100644
index 73b123f87bf..00000000000
Binary files a/docs/qa/v034/img/v034_homog_mempool_size.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_mempool_size_avg.png b/docs/qa/v034/img/v034_homog_mempool_size_avg.png
deleted file mode 100644
index 9efcf4652bb..00000000000
Binary files a/docs/qa/v034/img/v034_homog_mempool_size_avg.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_peers.png b/docs/qa/v034/img/v034_homog_peers.png
deleted file mode 100644
index 475ba91d15a..00000000000
Binary files a/docs/qa/v034/img/v034_homog_peers.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_rounds.png b/docs/qa/v034/img/v034_homog_rounds.png
deleted file mode 100644
index d73e18695c8..00000000000
Binary files a/docs/qa/v034/img/v034_homog_rounds.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_rss.png b/docs/qa/v034/img/v034_homog_rss.png
deleted file mode 100644
index ec1247773ca..00000000000
Binary files a/docs/qa/v034/img/v034_homog_rss.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_rss_avg.png b/docs/qa/v034/img/v034_homog_rss_avg.png
deleted file mode 100644
index 14a18524066..00000000000
Binary files a/docs/qa/v034/img/v034_homog_rss_avg.png and /dev/null differ
diff --git a/docs/qa/v034/img/v034_homog_total-txs.png b/docs/qa/v034/img/v034_homog_total-txs.png
deleted file mode 100644
index 798d2346730..00000000000
Binary files a/docs/qa/v034/img/v034_homog_total-txs.png and /dev/null differ
diff --git a/scripts/qa/reporting/README.md b/scripts/qa/reporting/README.md
index ff7f379c7a6..65710469c84 100644
--- a/scripts/qa/reporting/README.md
+++ b/scripts/qa/reporting/README.md
@@ -1,16 +1,21 @@
# Reporting Scripts
-This directory contains just one utility script at present that is used in
-reporting/QA.
+This directory contains some utility scripts used in reporting/QA.
-## Latency vs Throughput Plotting
+* [`latency_throughput.py`](./latency_throughput.py) is a Python script that uses
+ [matplotlib] to plot a graph of transaction latency vs throughput rate based on
+ the CSV output generated by the [loadtime reporting
+ tool](../../../test/loadtime/cmd/report/).
+
+* [`latency_plotter.py`](./latency_plotter.py) is a Python script that uses
+  [matplotlib] and [pandas] to plot graphs of transaction latency vs throughput rate based on
+  the CSV output generated by the [loadtime reporting
+  tool](../../../test/loadtime/cmd/report/), for multiple experiments and configurations.
-[`latency_throughput.py`](./latency_throughput.py) is a Python script that uses
-[matplotlib] to plot a graph of transaction latency vs throughput rate based on
-the CSV output generated by the [loadtime reporting
-tool](../../../test/loadtime/cmd/report/).
+* [`prometheus_plotter.py`](./prometheus_plotter.py) is a Python script that uses
+ [matplotlib] and [pandas] to plot graphs of several metrics from Prometheus.
-### Setup
+## Setup
Execute the following within this directory (the same directory as the
`latency_throughput.py` file).
@@ -24,12 +29,16 @@ source .venv/bin/activate
# Install dependencies listed in requirements.txt
pip install -r requirements.txt
+```
-# Show usage instructions and parameters
+## Latency vs Throughput Plotting
+To show the usage instructions and parameter options, execute:
+
+```bash
./latency_throughput.py --help
```
-### Running
+Example:
```bash
# Do the following while ensuring that the virtual environment is activated (see
@@ -45,4 +54,40 @@ pip install -r requirements.txt
/path/to/csv/files/raw.csv
```
+## Latency vs Throughput Plotting (version 2)
+Example:
+
+```bash
+# Do the following while ensuring that the virtual environment is activated (see
+# the Setup steps).
+#
+# This will generate a series of plots in the same folder as the `raw.csv` file.
+# Plots include combined experiment plots and experiments as subplots.
+
+python3 latency_plotter.py /path/to/csv/files/raw.csv
+```
+
+## Prometheus Metrics
+Ensure that Prometheus is running locally and listening on port 9090.
+
+Then run the script to plot the default metrics.
+You can tweak the metrics directly in the script; a sketch of the entry format is shown after the example below.
+
+Example:
+
+```bash
+# Do the following while ensuring that the virtual environment is activated (see
+# the Setup steps).
+#
+# This will generate a series of plots in the folder `imgs` of the current folder.
+
+mkdir imgs
+python3 prometheus_plotter.py
+```
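+
+For reference, each entry of the `queries` list inside [`prometheus_plotter.py`](./prometheus_plotter.py)
+is a tuple of the form sketched below (the time range shown is just an example);
+appending a similar entry adds one more plot to the output.
+
+```python
+# Sketch of one entry in prometheus_plotter.py's `queries` list:
+# ((promql, start, end, step), output file name, pandas plot kwargs, plot moving average?)
+time_window = ('2022-10-13T19:41:23Z', '2022-10-13T19:43:30Z')  # example range
+
+example_entry = (
+    ('cometbft_mempool_size', time_window[0], time_window[1], '1s'),
+    'mempool_size',
+    dict(ylabel='TXs', xlabel='time (s)', title='Mempool Size',
+         legend=False, figsize=(10, 6), grid=True, kind='area', stacked=True),
+    False,
+)
+```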
+
+
[matplotlib]: https://matplotlib.org/
+[pandas]: https://pandas.pydata.org
diff --git a/scripts/qa/reporting/latency_plotter.py b/scripts/qa/reporting/latency_plotter.py
index b725fe47390..f50e138cfcd 100644
--- a/scripts/qa/reporting/latency_plotter.py
+++ b/scripts/qa/reporting/latency_plotter.py
@@ -1,5 +1,6 @@
import sys
import os
+from datetime import datetime
import matplotlib as mpl
import matplotlib.pyplot as plt
@@ -52,6 +53,7 @@
for (subKey) in paramGroups.groups.keys():
subGroup = paramGroups.get_group(subKey)
startTime = subGroup['block_time'].min()
+ print('exp ' + key + ' starts at ' + str(datetime.fromtimestamp(startTime)))
subGroupMod = subGroup['block_time'].apply(lambda x: x - startTime)
(con,rate) = subKey
diff --git a/scripts/qa/reporting/prometheus_plotter.py b/scripts/qa/reporting/prometheus_plotter.py
index ea4e1f660e5..9283944360a 100644
--- a/scripts/qa/reporting/prometheus_plotter.py
+++ b/scripts/qa/reporting/prometheus_plotter.py
@@ -14,43 +14,50 @@
from prometheus_pandas import query
-release = 'v0.34.27'
+release = 'v0.34.x-baseline'
path = os.path.join('imgs')
-prometheus = query.Prometheus('http://localhost:9090')
+prometheus = query.Prometheus('http://localhost:9091')
# Time window
-window_size = dict(seconds=150)
+#window_size = dict(seconds=150)
+#window_size = dict(seconds=130) #homogeneous
+window_size = dict(seconds=127) #baseline
ext_window_size = dict(seconds=180)
-right_end = '2023-02-08T13:14:50Z' #cmt2 tm1
+#right_end = '2023-02-08T13:14:50Z' #cmt2 tm1
#right_end = '2023-02-08T10:33:50Z' #cmt1 tm2
+#right_end = '2023-02-07T18:09:10Z' #homogeneous
+right_end = '2022-10-13T19:43:30Z' #baseline
left_end = pd.to_datetime(right_end) - pd.Timedelta(**window_size)
time_window = (left_end.strftime('%Y-%m-%dT%H:%M:%SZ'), right_end)
ext_left_end = pd.to_datetime(right_end) - pd.Timedelta(**ext_window_size)
ext_time_window = (ext_left_end.strftime('%Y-%m-%dT%H:%M:%SZ'), right_end)
+#fork='cometbft'
+fork='tendermint'
+
# Do prometheus queries
queries = [
- (( 'cometbft_p2p_peers', time_window[0], time_window[1], '1s'), 'peers', dict(ylabel='# Peers', xlabel='time (s)', title='Peers', legend=False, figsize=(10,6), grid=True), True),
- (( 'cometbft_mempool_size', time_window[0], time_window[1], '1s'), 'mempool_size', dict(ylabel='TXs', xlabel='time (s)', title='Mempool Size', legend=False, figsize=(10,6), grid=True, kind='area',stacked=True), False),
- (( 'avg(cometbft_mempool_size)', time_window[0], time_window[1], '1s'), 'avg_mempool_size', dict(ylabel='TXs', xlabel='time (s)', title='Average Mempool Size', legend=False, figsize=(10,6), grid=True), False),
+ (( fork + '_mempool_size', time_window[0], time_window[1], '1s'), 'mempool_size', dict(ylabel='TXs', xlabel='time (s)', title='Mempool Size', legend=False, figsize=(10,6), grid=True, kind='area',stacked=True), False),
+ (( fork + '_p2p_peers', time_window[0], time_window[1], '1s'), 'peers', dict(ylabel='# Peers', xlabel='time (s)', title='Peers', legend=False, figsize=(10,6), grid=True), True),
+ (( 'avg(' + fork + '_mempool_size)', time_window[0], time_window[1], '1s'), 'avg_mempool_size', dict(ylabel='TXs', xlabel='time (s)', title='Average Mempool Size', legend=False, figsize=(10,6), grid=True), False),
#(( 'cometbft_consensus_height', time_window[0], time_window[1], '1s'), 'blocks_regular', dict(ylabel='# Blocks', xlabel='time (s)', title='Blocks in time', legend=False, figsize=(10,6), grid=True), False),
- (( 'cometbft_consensus_rounds', time_window[0], time_window[1], '1s'), 'rounds', dict(ylabel='# Rounds', xlabel='time (s)', title='Rounds per block', legend=False, figsize=(10,6), grid=True), False),
- (( 'rate(cometbft_consensus_height[20s])*60', time_window[0], time_window[1], '1s'), 'block_rate_regular', dict(ylabel='Blocks/min', xlabel='time (s)', title='Rate of block creation', legend=False, figsize=(10,6), grid=True), True),
+ (( fork + '_consensus_rounds', time_window[0], time_window[1], '1s'), 'rounds', dict(ylabel='# Rounds', xlabel='time (s)', title='Rounds per block', legend=False, figsize=(10,6), grid=True), False),
+ (( 'rate(' + fork + '_consensus_height[20s])*60', time_window[0], time_window[1], '1s'), 'block_rate_regular', dict(ylabel='Blocks/min', xlabel='time (s)', title='Rate of block creation', legend=False, figsize=(10,6), grid=True), True),
#(( 'avg(rate(cometbft_consensus_height[20s])*60)', time_window[0], time_window[1], '1s'), 'block_rate_avg_reg', dict(ylabel='Blocks/min', xlabel='time (s)', title='Rate of block creation', legend=False, figsize=(10,6), grid=True), False),
#(( 'cometbft_consensus_total_txs', time_window[0], time_window[1], '1s'), 'total_txs_regular', dict(ylabel='# TXs', xlabel='time (s)', title='Transactions in time', legend=False, figsize=(10,6), grid=True), False),
- (( 'rate(cometbft_consensus_total_txs[20s])*60', time_window[0], time_window[1], '1s'), 'total_txs_rate_regular', dict(ylabel='TXs/min', xlabel='time (s)', title='Rate of transaction processing', legend=False, figsize=(10,6), grid=True), True),
+ (( 'rate(' + fork + '_consensus_total_txs[20s])*60', time_window[0], time_window[1], '1s'), 'total_txs_rate_regular', dict(ylabel='TXs/min', xlabel='time (s)', title='Rate of transaction processing', legend=False, figsize=(10,6), grid=True), True),
#(( 'avg(rate(cometbft_consensus_total_txs[20s])*60)', time_window[0], time_window[1], '1s'), 'total_txs_rate_avg_reg', dict(ylabel='TXs/min', xlabel='time (s)', title='Rate of transaction processing', legend=False, figsize=(10,6), grid=True), False),
(( 'process_resident_memory_bytes', time_window[0], time_window[1], '1s'), 'memory', dict(ylabel='Memory (bytes)', xlabel='time (s)', title='Memory usage', legend=False, figsize=(10,6), grid=True), False),
(( 'avg(process_resident_memory_bytes)', time_window[0], time_window[1], '1s'), 'avg_memory', dict(ylabel='Memory (bytes)', xlabel='time (s)', title='Average Memory usage', legend=False, figsize=(10,6), grid=True), False),
(( 'node_load1', time_window[0], time_window[1], '1s'), 'cpu', dict(ylabel='Load', xlabel='time (s)', title='Node load', legend=False, figsize=(10,6), grid=True), False),
(( 'avg(node_load1)', time_window[0], time_window[1], '1s'), 'avg_cpu', dict(ylabel='Load', xlabel='time (s)', title='Average Node load', legend=False, figsize=(10,6), grid=True), False),
#extended window metrics
- (( 'cometbft_consensus_height', ext_time_window[0], ext_time_window[1], '1s'), 'blocks', dict(ylabel='# Blocks', xlabel='time (s)', title='Blocks in time', legend=False, figsize=(10,6), grid=True), False),
- (( 'rate(cometbft_consensus_height[20s])*60', ext_time_window[0], ext_time_window[1], '1s'), 'block_rate', dict(ylabel='Blocks/min', xlabel='time (s)', title='Rate of block creation', legend=False, figsize=(10,6), grid=True), True),
- (( 'cometbft_consensus_total_txs', ext_time_window[0], ext_time_window[1], '1s'), 'total_txs', dict(ylabel='# TXs', xlabel='time (s)', title='Transactions in time', legend=False, figsize=(10,6), grid=True), False),
- (( 'rate(cometbft_consensus_total_txs[20s])*60', ext_time_window[0], ext_time_window[1], '1s'), 'total_txs_rate', dict(ylabel='TXs/min', xlabel='time (s)', title='Rate of transaction processing', legend=False, figsize=(10,6), grid=True), True),
+ (( fork + '_consensus_height', ext_time_window[0], ext_time_window[1], '1s'), 'blocks', dict(ylabel='# Blocks', xlabel='time (s)', title='Blocks in time', legend=False, figsize=(10,6), grid=True), False),
+ (( 'rate(' + fork + '_consensus_height[20s])*60', ext_time_window[0], ext_time_window[1], '1s'), 'block_rate', dict(ylabel='Blocks/min', xlabel='time (s)', title='Rate of block creation', legend=False, figsize=(10,6), grid=True), True),
+ (( fork + '_consensus_total_txs', ext_time_window[0], ext_time_window[1], '1s'), 'total_txs', dict(ylabel='# TXs', xlabel='time (s)', title='Transactions in time', legend=False, figsize=(10,6), grid=True), False),
+ (( 'rate(' + fork + '_consensus_total_txs[20s])*60', ext_time_window[0], ext_time_window[1], '1s'), 'total_txs_rate', dict(ylabel='TXs/min', xlabel='time (s)', title='Rate of transaction processing', legend=False, figsize=(10,6), grid=True), True),
]
for (query, file_name, pandas_params, plot_average) in queries:
diff --git a/scripts/qa/reporting/requirements.txt b/scripts/qa/reporting/requirements.txt
index 6c6fb00971c..8102e040475 100644
--- a/scripts/qa/reporting/requirements.txt
+++ b/scripts/qa/reporting/requirements.txt
@@ -2,10 +2,13 @@ contourpy==1.0.5
cycler==0.11.0
fonttools==4.37.4
kiwisolver==1.4.4
-matplotlib==3.6.1
-numpy==1.23.4
+matplotlib==3.6.3
+numpy==1.24.2
packaging==21.3
Pillow==9.3.0
pyparsing==3.0.9
python-dateutil==2.8.2
six==1.16.0
+pandas==1.5.3
+prometheus-pandas==0.3.2
+requests==2.28.2