addnode json-rpc command compliant with bitcoin-core: by qlrd · Pull Request #465 · vinteumorg/Floresta · GitHub

addnode json-rpc command compliant with bitcoin-core: #465


Merged
merged 3 commits into vinteumorg:master from fix-addnode-rpc-saga
May 29, 2025

Conversation

qlrd
Contributor
@qlrd qlrd commented Apr 28, 2025

Partial fix of #453

What is the purpose of this pull request?

  • Bug fix
  • Documentation update
  • New feature
  • Test
  • Other:

Which crates are being modified?

  • floresta-chain
  • floresta-cli
  • floresta-common
  • floresta-compact-filters
  • floresta-electrum
  • floresta-watch-only
  • floresta-wire
  • floresta
  • florestad
  • Other: .

Description

  • added [add|remove|onetry] "command" to addnode command;
  • added v2transport option to addnode command.
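
For illustration, the shape of the bitcoin-core-compliant call, `addnode "node" "command" ( v2transport )`, can be sketched as a tiny parser. This is a hypothetical sketch, not Floresta's actual code; the enum and function names are assumptions:

```rust
/// Hypothetical sketch of the bitcoin-core-compliant `addnode` arguments:
/// `addnode "node" "command" ( v2transport )`.
#[derive(Debug, PartialEq)]
enum AddNodeCommand {
    Add,
    Remove,
    Onetry,
}

fn parse_addnode_command(cmd: &str) -> Result<AddNodeCommand, String> {
    match cmd {
        "add" => Ok(AddNodeCommand::Add),
        "remove" => Ok(AddNodeCommand::Remove),
        "onetry" => Ok(AddNodeCommand::Onetry),
        other => Err(format!("invalid command: {other}")),
    }
}

fn main() {
    // `v2transport` is optional and defaults to false, as in bitcoin-core.
    let v2transport = false;
    let cmd = parse_addnode_command("onetry").unwrap();
    println!("{cmd:?} v2transport={v2transport}");
}
```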

Notes to the reviewers

According to #453, the addnode command is "Implemented, not tested (missing one parameter)". In truth, it is missing one required parameter (command, which can be add, remove, or onetry) and one optional parameter (v2transport, indicating whether the v2 transport will be used or not).

So far, no functional tests have been added, and v2transport is not fully implemented in this addnode command. So this PR will remain in draft mode until those are resolved.

Checklist

  • I've signed all my commits
  • I ran just lint
  • I ran cargo test
  • I've checked the integration tests
  • I've followed the contribution guidelines
  • I'm linking the issue being fixed by this PR (if any)

@Davidson-Souza Davidson-Souza added chore Cleaning, refactoring, reducing complexity RPC Changes something with our JSON-RPC interface labels Apr 28, 2025
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from e19b472 to 9227c64 Compare April 29, 2025 13:23
@Davidson-Souza Davidson-Souza added this to the v0.8.0 milestone Apr 29, 2025
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 2 times, most recently from adedc0f to 44351c6 Compare May 1, 2025 11:38
@qlrd qlrd marked this pull request as ready for review May 1, 2025 11:46
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from 44351c6 to 2478841 Compare May 1, 2025 12:19
Collaborator
@Davidson-Souza Davidson-Souza left a comment


A first pass over 9e58232.

I think you also need to add an added_peers list to UtreexoNode. Then we need to check whether each such peer is connected or not; if it is disconnected, retry. This is the logic for non-oneshot peers.
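
The logic described above might look roughly like the following. This is a sketch under assumed names (`AddedPeers`, `needs_retry`); the real `UtreexoNode` internals differ:

```rust
use std::collections::HashSet;
use std::net::SocketAddr;

/// Sketch: track manually added (non-onetry) peers and decide
/// which ones need a reconnection attempt.
struct AddedPeers {
    added: HashSet<SocketAddr>,
}

impl AddedPeers {
    fn new() -> Self {
        Self { added: HashSet::new() }
    }

    /// Returns false if the peer was already added
    /// (the PeerAlreadyExists case).
    fn add(&mut self, addr: SocketAddr) -> bool {
        self.added.insert(addr)
    }

    fn remove(&mut self, addr: &SocketAddr) -> bool {
        self.added.remove(addr)
    }

    /// Called periodically: any added peer that is not currently
    /// connected should be retried (the non-oneshot logic).
    fn needs_retry(&self, connected: &HashSet<SocketAddr>) -> Vec<SocketAddr> {
        self.added
            .iter()
            .filter(|a| !connected.contains(*a))
            .copied()
            .collect()
    }
}

fn main() {
    let mut peers = AddedPeers::new();
    let addr: SocketAddr = "127.0.0.1:18444".parse().unwrap();
    peers.add(addr);
    let connected = HashSet::new(); // nobody connected yet
    println!("retry: {:?}", peers.needs_retry(&connected));
}
```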

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 5 times, most recently from 77a9aab to dd7f1e7 Compare May 7, 2025 00:25
@qlrd qlrd requested a review from Davidson-Souza May 7, 2025 01:29
Collaborator
@Davidson-Souza Davidson-Souza left a comment


I think most of the logic is already there; just some comments to improve things.

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 2 times, most recently from 8bd7c66 to bcfbdad Compare May 8, 2025 15:05
@qlrd qlrd requested a review from Davidson-Souza May 8, 2025 15:18
@Davidson-Souza
Collaborator

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

@JoseSK999
Contributor

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

I have no context about this, but couldn't it be that we use both the IPv4 and IPv6 addresses from the same peer, treating it as two peers? The DNS lookup returns both kinds of addresses.

@Davidson-Souza
Collaborator
Davidson-Souza commented May 9, 2025

@JoseSK999

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

I have no context about this, but couldn't it be that we use both the IPv4 and IPv6 addresses from the same peer, treating it as two peers? The DNS lookup returns both kinds of addresses.

I don't think so; we only handle individual addresses, and we don't know whether two addresses belong to the same node.

I'm seeing something like this, for every address:

  {
    "address": "162.120.19.21:8333",
    "services": "ServiceFlags(NETWORK|WITNESS|NETWORK_LIMITED|P2P_V2)",
    "user_agent": "/Satoshi:29.0.0/",
    "initial_height": 895999,
    "kind": "regular",
    "state": "Ready",
    "transport_protocol": "V2"
  },
  {
    "address": "162.120.19.21:8333",
    "services": "ServiceFlags(NETWORK|WITNESS|NETWORK_LIMITED|P2P_V2)",
    "user_agent": "/Satoshi:29.0.0/",
    "initial_height": 895999,
    "kind": "regular",
    "state": "Ready",
    "transport_protocol": "V2"
  },

@qlrd
Contributor Author
qlrd commented May 10, 2025

@JoseSK999

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

I have no context about this, but couldn't it be that we use both the IPv4 and IPv6 addresses from the same peer, treating it as two peers? The DNS lookup returns both kinds of addresses.

I don't think so; we only handle individual addresses, and we don't know whether two addresses belong to the same node.

I'm seeing something like this, for every address:

  {
    "address": "162.120.19.21:8333",
    "services": "ServiceFlags(NETWORK|WITNESS|NETWORK_LIMITED|P2P_V2)",
    "user_agent": "/Satoshi:29.0.0/",
    "initial_height": 895999,
    "kind": "regular",
    "state": "Ready",
    "transport_protocol": "V2"
  },
  {
    "address": "162.120.19.21:8333",
    "services": "ServiceFlags(NETWORK|WITNESS|NETWORK_LIMITED|P2P_V2)",
    "user_agent": "/Satoshi:29.0.0/",
    "initial_height": 895999,
    "kind": "regular",
    "state": "Ready",
    "transport_protocol": "V2"
  },

@Davidson-Souza, @JoseSK999, I reproduced it in tests (I believe). I made this small change:

diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index a579fe9..1f4393a 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -398,10 +398,11 @@ where
     ) -> Result<(), WireError> {
         debug!("Trying to add peer {addr}:{port} with _v2transport={v2transport}");
 
-        let kind = ConnectionKind::Regular(ServiceFlags::NONE);
-
+        let protocol_version = 70016;
+        let blocks = self.blocks.iter().len() as u32;
+        let services = ServiceFlags::NONE;
+        let kind = ConnectionKind::Regular(services);
         let peer_id = self.peer_id_count;
-
         let addr_v2 = match addr {
             IpAddr::V4(addr) => AddrV2::Ipv4(addr),
             IpAddr::V6(addr) => AddrV2::Ipv6(addr),
@@ -411,10 +412,11 @@ where
             addr_v2,
             0,
             AddressState::NeverTried,
-            0.into(),
+            ServiceFlags::NONE,
             port,
             self.peer_id_count as usize,
         );
+        let address_id = address.id;
 
         let transport_protocol = if v2transport {
             TransportProtocol::V2
@@ -455,6 +457,28 @@ where
             .send(response)
             .map_err(|_| WireError::ResponseSendError)?;
 
+        // If the connection is successful and nothing went wrong,
+        // send back a "PeerMessages::Ready" notification
+        // so sync_node.rs::handle_message can handle it with
+        // the handle_peer_ready method.
+        let version = Version {
+            user_agent: self.config.user_agent.clone(),
+            protocol_version,
+            id: peer_id,
+            blocks,
+            address_id,
+            services,
+            kind,
+            transport_protocol,
+        };
+
+        self.node_tx
+            .send(NodeNotification::FromPeer(
+                peer_id,
+                PeerMessages::Ready(version),
+            ))
+            .map_err(|_| WireError::ResponseSendError)?;
+
         Ok(())
     }

And received that:

[FLORESTARPC 2025-05-10 14:34:03.733097+00:00] GET http://127.0.0.1:18442/addnode?params=['127.0.0.1:18443', 'add', False]
[FLORESTARPC 2025-05-10 14:34:03.733122+00:00] None
[FLORESTARPC 2025-05-10 14:34:08.740269+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-10 14:34:08.740283+00:00] [{'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}, {'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}]

As you said, the same peer appeared twice. Maybe the issue is in handle_peer_ready?

@qlrd
Contributor Author
qlrd commented May 10, 2025

Removed that line at the end of handle_peer_ready (apply this patch after the previous one):

diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index 1f4393a..73c45f3 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -1118,8 +1118,6 @@ where
             self.address_man
                 .update_set_state(version.address_id, AddressState::Connected)
                 .update_set_service_flag(version.address_id, version.services);
-
-            self.peer_ids.push(peer);
         }

And that appears to be it; now just one peer is returned:

[FLORESTARPC 2025-05-10 15:51:57.175350+00:00] GET http://127.0.0.1:18442/addnode?params=['127.0.0.1:18443', 'add', False]
[FLORESTARPC 2025-05-10 15:51:57.175372+00:00] None
[FLORESTARPC 2025-05-10 15:52:02.187119+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-10 15:52:02.187174+00:00] [{'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}]

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 5 times, most recently from 660f5d3 to 355d26c Compare May 15, 2025 12:40
@qlrd qlrd requested a review from Davidson-Souza May 21, 2025 19:22
@Davidson-Souza
Collaborator

Please apply this diff. It seems to fix all the problems with the first connection:

diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index 58938f5..99bbcfe 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -1006,12 +1006,6 @@ where
         );
 
         if let Some(peer_data) = self.common.peers.get_mut(&peer) {
-            // Avoid borrowing the mutably borrowed peer_data
-            // (that will be used in an checker at the end of this block)
-            // copying mutable references
-            let net_addr = peer_data.address;
-            let net_port = peer_data.port;
-
             peer_data.state = PeerStatus::Ready;
             peer_data.services = version.services;
             peer_data.user_agent.clone_from(&version.user_agent);
@@ -1070,14 +1064,7 @@ where
                 .update_set_state(version.address_id, AddressState::Connected)
                 .update_set_service_flag(version.address_id, version.services);
 
-            if !self
-                .common
-                .peers
-                .iter()
-                .any(|(_, p)| p.address == net_addr && p.port == net_port)
-            {
-                self.peer_ids.push(peer);
-            }
+            self.peer_ids.push(peer);         
         }
 
         #[cfg(feature = "metrics")]
@@ -1702,7 +1689,6 @@ where
         // Increment peer_id count and the list of peer ids
         // so we can get information about connected or
         // added peers when requesting with getpeerinfo command
-        self.peer_ids.push(peer_count);
         self.peer_id_count += 1;
 
         Ok(())
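
The net effect of this diff is that a peer id is registered in exactly one place, when the peer reaches Ready, rather than both at connection-creation time and in handle_peer_ready. A minimal illustration of that invariant (names simplified, not the actual UtreexoNode code):

```rust
/// Sketch of the invariant the diff above restores: a peer id is
/// pushed to `peer_ids` only when the handshake completes, never at
/// connection-creation time. Names are illustrative.
struct Node {
    peer_id_count: u32,
    peer_ids: Vec<u32>,
}

impl Node {
    fn open_connection(&mut self) -> u32 {
        let id = self.peer_id_count;
        // Only bump the counter here; do NOT push to peer_ids yet,
        // otherwise the same peer shows up twice once it becomes Ready.
        self.peer_id_count += 1;
        id
    }

    fn handle_peer_ready(&mut self, id: u32) {
        // Single registration point.
        self.peer_ids.push(id);
    }
}

fn main() {
    let mut node = Node { peer_id_count: 0, peer_ids: Vec::new() };
    let id = node.open_connection();
    node.handle_peer_ready(id);
    assert_eq!(node.peer_ids, vec![0]);
    println!("{:?}", node.peer_ids);
}
```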

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from 159475c to 5905016 Compare May 21, 2025 22:42
Collaborator
@Davidson-Souza Davidson-Souza left a comment


My last round of review, mostly nits, and we should be ready to go.

@qlrd
Contributor Author
qlrd commented May 23, 2025

Did a manual test:

  • started both nodes;
  • added the utreexod node to floresta with add and received id=0;
  • shut down utreexod;
  • restarted utreexod, and it received id=2.

Shouldn't it be 0 or, at least, 1?

[2025-05-22 22:09:09 INFO floresta_wire::p2p_wire::node] Added peer 127.0.0.1:18444
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::transport] Failed to establish V2 protocol connection to 127.0.0.1:18444: Io(Os { code: 54, kind: ConnectionReset, message: "Connection reset by peer" }, RetryV1)
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::transport] Using V1 protocol for connection to 127.0.0.1:18444
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::peer] Writing version to peer 0
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::peer] Received version from peer 0
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::peer] Writing verack to peer 0
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::peer] Received verack from peer 0
[2025-05-22 22:09:15 INFO floresta_wire::p2p_wire::node] New peer id=0 version=/btcwire:0.5.0/utreexod:0.4.1/ blocks=0 services=ServiceFlags(BLOOM|WITNESS|0x1000000)
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::peer] Received sendheaders from peer 0
[2025-05-22 22:09:15 DEBUG floresta_wire::p2p_wire::peer] Writing sendheaders to peer 0
[2025-05-22 22:09:16 DEBUG floresta_wire::p2p_wire::peer] Handling node request: GetHeaders([0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206])
[2025-05-22 22:09:16 DEBUG floresta_wire::p2p_wire::peer] Writing getheaders to peer 0
[2025-05-22 22:09:25 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:09:25 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Feeler
[2025-05-22 22:09:35 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:09:41 DEBUG floresta_wire::p2p_wire::peer] Peer 0 connection loop closed: Err(Transport(Io(Custom { kind: UnexpectedEof, error: "early eof" })))
[2025-05-22 22:09:41 DEBUG floresta_wire::p2p_wire::peer] Failed to propagate cancellation signal for Peer 0: ()
[2025-05-22 22:09:41 DEBUG floresta_wire::p2p_wire::peer] Peer 0 connection loop closed: Transport(Io(Custom { kind: UnexpectedEof, error: "early eof" }))
[2025-05-22 22:09:41 INFO floresta_wire::p2p_wire::node] Peer disconnected: 0
[2025-05-22 22:09:41 ERROR floresta_wire::p2p_wire::chain_selector] 654: crates/floresta-wire/src/p2p_wire/chain_selector.rs - NoPeersAvailable
[2025-05-22 22:09:45 INFO floresta_wire::p2p_wire::node] No peers found, using hardcoded addresses
[2025-05-22 22:09:45 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:09:55 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:09:55 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Feeler
[2025-05-22 22:10:05 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:10:15 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:10:25 INFO floresta_wire::p2p_wire::node] No peers found, using hardcoded addresses
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Regular(ServiceFlags(0))
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::node] attempting connection with address=None kind=Feeler
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::transport] Failed to establish V2 protocol connection to 127.0.0.1:18444: Io(Os { code: 54, kind: ConnectionReset, message: "Connection reset by peer" }, RetryV1)
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::transport] Using V1 protocol for connection to 127.0.0.1:18444
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::peer] Writing version to peer 2
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::peer] Received version from peer 2
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::peer] Writing verack to peer 2
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::peer] Received verack from peer 2
[2025-05-22 22:10:25 INFO floresta_wire::p2p_wire::node] New peer id=2 version=/btcwire:0.5.0/utreexod:0.4.1/ blocks=0 services=ServiceFlags(BLOOM|WITNESS|0x1000000)
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::peer] Received sendheaders from peer 2
[2025-05-22 22:10:25 DEBUG floresta_wire::p2p_wire::peer] Writing sendheaders to peer 2
[2025-05-22 22:10:26 DEBUG floresta_wire::p2p_wire::peer] Handling node request: GetHeaders([0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206])
[2025-05-22 22:10:26 DEBUG floresta_wire::p2p_wire::peer] Writing getheaders to peer 2
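
A plausible reading of the log above (an assumption, not confirmed in this thread): peer ids come from a monotonically increasing counter that every connection attempt consumes, including the feeler at 22:09:25, so the reconnected peer ends up with id=2 rather than reusing 0 or 1. A minimal illustration:

```rust
/// Illustration of a monotonically increasing connection-id counter:
/// every attempt (regular or feeler) consumes an id, so a reconnect
/// after a feeler does not reuse id 0 or 1. Semantics assumed from the log.
struct IdCounter(u32);

impl IdCounter {
    fn next(&mut self) -> u32 {
        let id = self.0;
        self.0 += 1;
        id
    }
}

fn main() {
    let mut ids = IdCounter(0);
    let first = ids.next(); // initial utreexod connection -> id=0
    let _feeler = ids.next(); // feeler attempt also consumes an id -> id=1
    let reconnect = ids.next(); // reconnected utreexod -> id=2
    println!("first={first} reconnect={reconnect}");
}
```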

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 4 times, most recently from e90fc1c to a6d781c Compare May 26, 2025 19:00
@qlrd qlrd requested a review from Davidson-Souza May 26, 2025 20:57
Collaborator
@Davidson-Souza Davidson-Souza left a comment


Very minor nits

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 2 times, most recently from b5c4720 to bb886e3 Compare May 27, 2025 23:24
Contributor
@lucad70 lucad70 left a comment


Added a nit on naming and raised a few questions for clarification.

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from bb886e3 to 3269342 Compare May 28, 2025 15:54
qlrd and others added 2 commits May 28, 2025 14:12
Partial fix of vinteumorg#453

* added [add|remove|onetry] required argument to the addnode command in the
floresta-cli crate;
* added v2transport optional argument to the addnode command in the
floresta-cli crate;
* changed the return type of the addnode command to JSON null (compliant
with bitcoin-core);
* removed the connect method in florestad/server and replaced it with an
addnode method that handles the [add|remove|onetry] commands;
* added PeerAlreadyExists and FailedToAddPeer errors to the floresta-wire
crate;
* added handlers for connect and disconnect in the floresta-wire crate (to
comply with the changes in floresta-cli).

Co-authored-by: Davidson Souza <me@dlsouza.lol>
* removed --daemon from test_framework/__init__.py for better logging of
the tests;
* added an assertFalse assertion method to test_framework/__init__.py;
* added a static method to test_framework/rpc/base.py for human-readable
logging of RPC calls;
* fixed the addnode method in test_framework/rpc/floresta.py to accept
bitcoin-core-compliant JSON-RPC calls;
* added bitcoin-core-compliant addnode and getpeerinfo to
test_framework/rpc/utreexo.py.
Collaborator
@Davidson-Souza Davidson-Souza left a comment


ACK 3269342

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 2 times, most recently from c2f8d8a to c5ae16c Compare May 28, 2025 18:50
@qlrd qlrd requested a review from lucad70 May 28, 2025 18:50
* split functional/integration tests for addnode into two tests, in the
context of v1 and v2 transports;
* addnode tests between floresta and utreexod nodes in a regtest network
with only the genesis block.
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from c5ae16c to 47864c8 Compare May 28, 2025 21:27
@qlrd qlrd requested a review from Davidson-Souza May 28, 2025 22:08
Collaborator
@Davidson-Souza Davidson-Souza left a comment


re-ACK 47864c8

@Davidson-Souza Davidson-Souza merged commit f0cd884 into vinteumorg:master May 29, 2025
10 checks passed
@qlrd qlrd deleted the fix-addnode-rpc-saga branch June 2, 2025 23:25
Labels
chore Cleaning, refactoring, reducing complexity RPC Changes something with our JSON-RPC interface
4 participants