addnode json-rpc command compliant with bitcoin-core: by qlrd · Pull Request #465 · vinteumorg/Floresta · GitHub

addnode json-rpc command compliant with bitcoin-core: #465


Open
qlrd wants to merge 3 commits into master from fix-addnode-rpc-saga

Conversation

Contributor
@qlrd commented Apr 28, 2025

Partial fix of #453

What is the purpose of this pull request?

  • Bug fix
  • Documentation update
  • New feature
  • Test
  • Other:

Which crates are being modified?

  • floresta-chain
  • floresta-cli
  • floresta-common
  • floresta-compact-filters
  • floresta-electrum
  • floresta-watch-only
  • floresta-wire
  • floresta
  • florestad
  • Other: .

Description

  • added [add|remove|onetry] "command" to addnode command;
  • added v2transport option to addnode command.

Notes to the reviewers

According to #453, the addnode command is "Implemented, not tested (missing one parameter)". In truth, it is missing one required parameter (command, which can be add, remove, or onetry) and one optional parameter (v2transport, indicating whether the v2 transport should be used).

So far, no functional tests have been added and the v2transport option is not fully implemented for this addnode command, so this PR will stay in draft mode until that is resolved.

Checklist

  • I've signed all my commits
  • I ran just lint
  • I ran cargo test
  • I've checked the integration tests
  • I've followed the contribution guidelines
  • I'm linking the issue being fixed by this PR (if any)

@Davidson-Souza Davidson-Souza added chore Cleaning, refactoring, reducing complexity RPC Changes something with our JSON-RPC interface labels Apr 28, 2025
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from e19b472 to 9227c64 Compare April 29, 2025 13:23
@Davidson-Souza Davidson-Souza added this to the v0.8.0 milestone Apr 29, 2025
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 2 times, most recently from adedc0f to 44351c6 Compare May 1, 2025 11:38
@qlrd qlrd marked this pull request as ready for review May 1, 2025 11:46
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from 44351c6 to 2478841 Compare May 1, 2025 12:19
Collaborator
@Davidson-Souza left a comment

A first pass over 9e58232.

I think you also need to add an added_peers list to UtreexoNode. Then we need to check whether each of those peers is still connected; if one has disconnected, retry. This is the logic for non-oneshot peers.
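A minimal, self-contained sketch of that idea (the Node struct and method names below are simplified stand-ins for illustration, not Floresta's actual types):

use std::collections::HashSet;
use std::net::SocketAddr;

/// Simplified stand-in for the node: it only tracks which peers were added
/// manually and which ones are currently connected.
struct Node {
    added_peers: Vec<SocketAddr>,   // peers added via `addnode <addr> add`
    connected: HashSet<SocketAddr>, // peers we currently hold a connection to
}

impl Node {
    /// Periodic maintenance: for every manually added (non-oneshot) peer that
    /// is no longer connected, try to open a new connection.
    fn maybe_reconnect_added_peers(&mut self) {
        for addr in self.added_peers.clone() {
            if !self.connected.contains(&addr) {
                self.open_connection(addr);
            }
        }
    }

    fn open_connection(&mut self, addr: SocketAddr) {
        // The real node would spawn a peer task here; we just record the peer.
        self.connected.insert(addr);
    }
}

fn main() {
    let peer: SocketAddr = "127.0.0.1:18443".parse().unwrap();
    let mut node = Node {
        added_peers: vec![peer],
        connected: HashSet::new(),
    };
    node.maybe_reconnect_added_peers();
    assert!(node.connected.contains(&peer));
}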

AddNode { node: String },
AddNode {
    node: String,
    command: String,
Collaborator

9e58232

I believe command can be an enum here, no?

Contributor Author

fixed

/// it's false, it will only try to connect to the peer (used by `addnode <node> onetry` command).
pub async fn handle_connect_peer(
    &mut self,
    responder: tokio::sync::oneshot::Sender<NodeResponse>,
Collaborator

9e58232

Is the full path needed here?

Contributor Author

I put the full path to follow the pattern in lines 72, 346, 439 and 494. Would it be worth adding a line

use tokio::sync::oneshot::Sender;

at the top of the file?

Contributor Author

Changed some paths to oneshot::<thing>

)
.await;

let success = result.is_ok();
Collaborator

9e58232

if let Err(e) = result {
    warn!("{e:?} {addr}:{port}");
}

Contributor Author

fixed

let peer_id = *peer_id;
let idx = peer_data.address_id;

let success = self
Collaborator

9e58232

You shouldn't call handle_disconnection; this is a callback called when the other side disconnects. Use peer.channel.send(NodeRequest::Shutdown) instead.
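For illustration, a self-contained sketch of asking a peer task to shut down through its request channel rather than invoking a disconnect callback directly (the NodeRequest enum here is a stand-in, and the channel is assumed to be a tokio mpsc channel like the one the peers hold):

use tokio::sync::mpsc;

#[derive(Debug)]
enum NodeRequest {
    Shutdown,
}

#[tokio::main]
async fn main() {
    // Stand-in for the per-peer request channel (`peer.channel` in the review).
    let (tx, mut rx) = mpsc::unbounded_channel::<NodeRequest>();

    // Ask the peer task to shut itself down instead of tearing it down from
    // the outside.
    tx.send(NodeRequest::Shutdown).expect("peer task is gone");

    if let Some(NodeRequest::Shutdown) = rx.recv().await {
        println!("peer task shutting down");
    }
}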

Contributor Author

I used self.send_to_peer since it appeared similar to what was requested in the review and it is async.

Collaborator
@Davidson-Souza May 5, 2025

I used self.send_to_peer since it appeared similar to what was requested in the review and it is async.

Yeap, that's actually the preferred way of doing it.

Contributor Author

fixed

},
);

// It should successfully insert the peer. It returns None if
Collaborator

9e58232

I don't think this handling is needed

Contributor Author

I didn't understand: the else part or the entire if ... else?
(GitHub showed only the comments)

Collaborator

I meant to mark 1517...1524

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 5 times, most recently from 77a9aab to dd7f1e7 Compare May 7, 2025 00:25
@qlrd qlrd requested a review from Davidson-Souza May 7, 2025 01:29
Collaborator
@Davidson-Souza left a comment

I think most of the logic is already there, just some comments to improve things.

// If the removal succeeds, we need to drop the peer channel and respond with true.
let response = match self.send_to_peer(peer_id, NodeRequest::Shutdown).await {
Ok(_) => {
if let Some((_, address, _)) = self.added_peers.remove(&peer_id) {
Collaborator

Is all this error handling required here? If you use the ? operator (e.g. self.do_something()?;) you propagate the error to the caller, and you only need to handle it once there.
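A small, standalone illustration of that pattern (the function and types here are placeholders, not the PR's code):

use std::num::ParseIntError;

fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    // `?` returns early with the error, so no match/unwrap is needed here;
    // the caller decides what to do with the failure.
    let port: u16 = s.parse()?;
    Ok(port)
}

fn main() {
    match parse_port("18443") {
        Ok(port) => println!("port = {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}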

Contributor Author

fixed

);
self.open_connection(ConnectionKind::Regular(ServiceFlags::NONE), 0, local_addr)
UserRequest::Add((addr, port, v2transport)) => {
let _ = self
Collaborator

Here, instead of ignoring the error, you would handle it and return true/false depending on the response.


if insert_peer {
// Check if the peer already exists
if self.added_peers.values().any(|(_, peer_addr, _)| {
Collaborator

I would avoid long statements like this; they are usually harder to read. Something like this is usually better (although we do have cases like this in the code :/ ):

let already_exists = self.added_peers.values()....
if already_exists { ... }
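A standalone illustration of that refactor with simplified types (the tuple layout loosely mirrors the added_peers values shown above, but is otherwise made up):

fn main() {
    // (peer id, address, oneshot?) - simplified stand-in for the real entries.
    let added_peers = vec![(0u32, String::from("127.0.0.1:18443"), false)];
    let candidate = "127.0.0.1:18443";

    // Name the predicate first, then branch on it; this reads better than
    // nesting the whole iterator chain inside the `if`.
    let already_exists = added_peers
        .iter()
        .any(|(_, peer_addr, _)| peer_addr.as_str() == candidate);

    if already_exists {
        println!("peer {candidate} was already added");
    }
}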


/// A simple map of added peers, used to track the ones we added manually
/// by `addnode <ip:port> add` command.
pub(crate) added_peers: HashMap<PeerId, (ConnectionKind, LocalAddress, TransportProtocol)>,
Collaborator

Question: Why use a HashMap here? The connection kind is always regular, and the PeerId is sort of irrelevant, since we can compare to another peer by the ip:port tuple. I think a simple Vec<(LocalAddress, TransportProtocol)> would be enough? For clarity, the entries could be a new AddedPeerInfo struct.

Collaborator

Also, I think this could live inside the NodeCommon struct.

Contributor Author

I think so, yes; I used a HashMap because I wasn't sure about the connection kind always being Regular. I will try the Vec here.
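A self-contained sketch of what that could look like (LocalAddress and TransportProtocol are stand-ins here; the real Floresta types differ):

use std::net::SocketAddr;

/// Stand-in for Floresta's LocalAddress type.
type LocalAddress = SocketAddr;

/// Stand-in for the transport protocol negotiated with the peer.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TransportProtocol {
    V1,
    V2,
}

/// One entry per `addnode <ip:port> add`; the connection kind is always
/// regular, so only the address and transport need to be remembered.
#[derive(Debug, Clone)]
struct AddedPeerInfo {
    address: LocalAddress,
    transport: TransportProtocol,
}

fn main() {
    let added_peers: Vec<AddedPeerInfo> = vec![AddedPeerInfo {
        address: "127.0.0.1:18443".parse().unwrap(),
        transport: TransportProtocol::V1,
    }];
    println!("{added_peers:?}");
}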

# Test assertions
result_0 = node_0.rpc.addnode(node="0.0.0.0:18443")
self.assertTrue(result_0)
def test_should_add_each_other_to_peers_list(self, nodes, v2transport=False):
Collaborator

You can use getpeerinfo to check if it actually succeeded.

Also, does this work? Floresta doesn't listen for incoming connections, so you shouldn't be able to connect the nodes to each other like this.

Contributor Author
@qlrd May 7, 2025

Yup, it was working, and I was checking with boolean results.

But while fixing it, I remembered that Core's RPC returns null JSON and we're returning a boolean.

Happy to know that we can use getpeerinfo to check, so I think we can change the NodeResponse to return () maybe?

Collaborator

Happy to know that we can use getpeerinfo to check, so I think we can change the NodeResponse to return () maybe?

Yes, I think you can just use a unit variant (a variant with no fields).
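A tiny illustration of the difference (the variant names are made up for the example):

#[allow(dead_code)]
enum NodeResponse {
    // Carries a payload the caller still has to unwrap, even though the
    // Option holds no information beyond "it worked".
    AddWithPayload(Option<()>),
    // Unit variant: the variant itself is the whole answer.
    Added,
}

fn main() {
    let response = NodeResponse::Added;
    if matches!(response, NodeResponse::Added) {
        println!("peer added");
    }
}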

Contributor Author

Yes, I think you can just use a unit variant (a variant with no fields).

Changed addnode to return Option<()> (so in Floresta we can track some errors). In Python it will still receive None.

But now the problem is with getpeerinfo. While testing the new approach, I received this (the time needs a GMT+3 fix, so consider real_time = time - 3):

[FLORESTARPC 2025-05-08 12:12:49.441144+00:00] GET http://127.0.0.1:18442/addnode?params=['0.0.0.0:18443', 'add', False] 
[FLORESTARPC 2025-05-08 12:12:49.441165+00:00] None 
[FLORESTARPC 2025-05-08 12:12:49.443173+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-08 12:12:49.443188+00:00] []

From server perspective :

[2025-05-08 09:12:48 INFO florestad::florestad] Server running on: 0.0.0.0:50002
[2025-05-08 09:12:48 WARN floresta_wire::p2p_wire::running_node] Failed to init Utreexo peers: anchors.json does not exist yet
[2025-05-08 09:12:48 INFO floresta_wire::p2p_wire::chain_selector] Starting IBD, selecting the best chain
[2025-05-08 09:12:49 INFO floresta_electrum::electrum_protocol] New client connection
[2025-05-08 09:12:49 INFO floresta_electrum::electrum_protocol] Client closed connection: 0
[2025-05-08 09:12:49 INFO floresta_wire::p2p_wire::node] Peer 0 (0.0.0.0:18443) connected

Contributor Author
@qlrd May 8, 2025

I think it is fixed. I forgot to increment peer_id_count and also to push to the peer_id list near the end of open_connection:

@@ -1675,28 +1682,35 @@ where
             (peer_count, Instant::now()),
         );
+        self.peers.insert(
+            peer_count,
+            LocalPeerView {
+                address: address.get_net_address(),
+                port: address.get_port(),
+                user_agent: "".to_string(),
+                state: PeerStatus::Awaiting,
+                channel: requests_tx,
+                services: ServiceFlags::NONE,
+                _last_message: Instant::now(),
+                kind,
+                address_id: peer_id as u32,
+                height: 0,
+                banscore: 0,
+                transport_protocol,
+            },
+        );

-        self.peers.insert(peer_count, peer_view.clone());
+        // Increment peer_id count and the list of peer ids
+        // so we can get information about connected or
+        // added peers when requesting with getpeerinfo command
+        self.peer_ids.push(peer_count);
+        self.peer_id_count += 1;

         if insert_peer {

and received that:

[FLORESTARPC 2025-05-08 12:41:13.977749+00:00] GET http://127.0.0.1:18442/addnode?params=['0.0.0.0:18443', 'add', False]
[FLORESTARPC 2025-05-08 12:41:13.977774+00:00] None
[FLORESTARPC 2025-05-08 12:41:13.979773+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-08 12:41:13.979784+00:00] [{'address': '0.0.0.0:18443', 'initial_height': 0, 'kind': 'regular', 'services': 'ServiceFlags(NONE)', 'state': 'Awaiting', 'transport_protocol': 'V1', 'user_agent': ''}]

One thing worth mentioning is that the state is Awaiting. Shouldn't it be Ready?

Collaborator

@qlrd

Awaiting means we haven't done the handshake yet; I think 0.0.0.0 isn't a valid address in this context. It will be Ready after you see the log about the handshake succeeding (something like New peer id=X version=/blah/ ...)

Contributor Author
@qlrd May 10, 2025

Shouldn't we send the Ready message after a successful add/onetry? I mean, sync_node::handle_message receives the FromPeers::Ready, but in node::handle_connect_peer we never send back any info.

Collaborator

The Ready message comes from the peer itself, regardless of how we decided to connect to it. They send it after finishing the handshake.

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 2 times, most recently from 8bd7c66 to bcfbdad Compare May 8, 2025 15:05
@qlrd qlrd requested a review from Davidson-Souza May 8, 2025 15:18
@Davidson-Souza
Collaborator

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

@JoseSK999
Contributor

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

I have no context about this, but couldn't it be that we use both the IPv4 and IPv6 addresses from the same peer, treating it as two peers? The DNS lookup returns both kinds of addresses.

@Davidson-Souza
Collaborator
Davidson-Souza commented May 9, 2025

@JoseSK999

For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

I have no context about this, but couldn't it be that we use both the IPv4 and IPv6 addresses from the same peer, treating it as two peers? The DNS lookup returns both kinds of addresses.

I don't think so; we only treat individual addresses, and we don't know if an address belongs to the same node.

I'm seeing something like this, for every address:

  {
    "address": "162.120.19.21:8333",
    "services": "ServiceFlags(NETWORK|WITNESS|NETWORK_LIMITED|P2P_V2)",
    "user_agent": "/Satoshi:29.0.0/",
    "initial_height": 895999,
    "kind": "regular",
    "state": "Ready",
    "transport_protocol": "V2"
  },
  {
    "address": "162.120.19.21:8333",
    "services": "ServiceFlags(NETWORK|WITNESS|NETWORK_LIMITED|P2P_V2)",
    "user_agent": "/Satoshi:29.0.0/",
    "initial_height": 895999,
    "kind": "regular",
    "state": "Ready",
    "transport_protocol": "V2"
  },

@qlrd
Contributor Author
qlrd commented May 10, 2025

@Davidson-Souza, @JoseSK999, I reproduced it in the tests (I believe). Made this little change:

diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index a579fe9..1f4393a 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -398,10 +398,11 @@ where
     ) -> Result<(), WireError> {
         debug!("Trying to add peer {addr}:{port} with _v2transport={v2transport}");
 
-        let kind = ConnectionKind::Regular(ServiceFlags::NONE);
-
+        let protocol_version = 70016;
+        let blocks = self.blocks.iter().len() as u32;
+        let services = ServiceFlags::NONE;
+        let kind = ConnectionKind::Regular(services);
         let peer_id = self.peer_id_count;
-
         let addr_v2 = match addr {
             IpAddr::V4(addr) => AddrV2::Ipv4(addr),
             IpAddr::V6(addr) => AddrV2::Ipv6(addr),
@@ -411,10 +412,11 @@ where
             addr_v2,
             0,
             AddressState::NeverTried,
-            0.into(),
+            ServiceFlags::NONE,
             port,
             self.peer_id_count as usize,
         );
+        let address_id = address.id;
 
         let transport_protocol = if v2transport {
             TransportProtocol::V2
@@ -455,6 +457,28 @@ where
             .send(response)
             .map_err(|_| WireError::ResponseSendError)?;
 
+        // If the connection is successful and nothing went wrong,
+        // send back a "PeerMessages::Ready" notification,
+        // so sync_node.rs::handle_message can handle it with
+        // handle_peer_ready method.
+        let version = Version {
+            user_agent: self.config.user_agent.clone(),
+            protocol_version,
+            id: peer_id,
+            blocks,
+            address_id,
+            services,
+            kind,
+            transport_protocol,
+        };
+
+        self.node_tx
+            .send(NodeNotification::FromPeer(
+                peer_id,
+                PeerMessages::Ready(version),
+            ))
+            .map_err(|_| WireError::ResponseSendError)?;
+
         Ok(())
     }

And received that:

[FLORESTARPC 2025-05-10 14:34:03.733097+00:00] GET http://127.0.0.1:18442/addnode?params=['127.0.0.1:18443', 'add', False]
[FLORESTARPC 2025-05-10 14:34:03.733122+00:00] None
[FLORESTARPC 2025-05-10 14:34:08.740269+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-10 14:34:08.740283+00:00] [{'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}, {'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}]

Like you said, the same peer appeared twice. Maybe the issue is in handle_peer_ready?

@qlrd
Contributor Author
qlrd commented May 10, 2025

Removed this line at the end of handle_peer_ready (apply this patch after the previous one):

diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index 1f4393a..73c45f3 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -1118,8 +1118,6 @@ where
             self.address_man
                 .update_set_state(version.address_id, AddressState::Connected)
                 .update_set_service_flag(version.address_id, version.services);
-
-            self.peer_ids.push(peer);
         }

And that appears to be it; now just one peer is received:

[FLORESTARPC 2025-05-10 15:51:57.175350+00:00] GET http://127.0.0.1:18442/addnode?params=['127.0.0.1:18443', 'add', False]
[FLORESTARPC 2025-05-10 15:51:57.175372+00:00] None
[FLORESTARPC 2025-05-10 15:52:02.187119+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-10 15:52:02.187174+00:00] [{'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}]

@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 5 times, most recently from 660f5d3 to 355d26c Compare May 15, 2025 12:40
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 3 times, most recently from bb09450 to 7815c41 Compare May 15, 2025 14:29
Collaborator
@Davidson-Souza left a comment

I think everything is working. I've tried it locally and it worked well.

Leaving some minor comments about code quality, and we should be good

Comment on lines 16 to 22
#[derive(Debug, Clone, ValueEnum, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum AddnodeCommand {
    Add,
    Remove,
    Onetry,
}
Collaborator

ff97a62

This could be moved to rpc_types. And also get a small docstring?

/// 3. v2transport (boolean, optional, default=false):
/// - Attempt to connect using BIP324 v2 transport protocol (ignored for 'remove' command)
///
/// Result: json null
Collaborator

ff97a62

There's trailing whitespace after line 235

Comment on lines 240 to 220
// TODO:
// from https://bitcoincore.org/en/doc/28.0.0/rpc/network/addnode/
// Nodes added using addnode (or -connect) are protected from DoS disconnection and are not required to be
// full nodes/support SegWit as other outbound peers are (though such peers will not be synced from).
// Addnode connections are limited to 8 at a time and are counted separately from the -maxconnections limit.
Collaborator

ff97a62

I think that "not checking for service bits" and excepting them from the connection limit is already done. I think we need to avoid banning them and limit to 8 max.

Comment on lines +212 to +217
/// A simple struct of added peers,
/// used to track the ones we added manually
/// by `addnode <ip:port> add` command.
Collaborator

ff97a62

nit: as per the contributions guide, you should swap the #[derive] attribute with the docstrings

pub struct UtreexoNode<Chain: BlockchainInterface + UpdatableChainstate, Context> {
    pub(crate) common: NodeCommon<Chain>,
    pub(crate) context: Context,
    pub(crate) added_peers: Vec<AddedPeerInfo>,
Collaborator

ff97a62

Move this added_peers into NodeCommon. This struct should be as simple as it gets.

Comment on lines 78 to 80
Add(Option<()>),
Remove(Option<()>),
Onetry(Option<()>),
Collaborator

ff97a62

It seems like they return Some(()) if they succeed and None on failure. Why not a boolean?

Returns
success: Whether we successfully added this node to our list of peers
"""
if rpcquirk:
Collaborator

d545ba3

Does the utreexod class use this option?

@@ -7,7 +7,7 @@
from test_framework.rpc.base import BaseRPC

REGTEST_RPC_SERVER = {
"host": "127.0.0.1",
"host": "0.0.0.0",
Collaborator

d545ba3

Question: Why did you change this?

Comment on lines 183 to 208
def build_log_message(
    url: str,
    method: str,
    params: List[Any],
    user: Optional[str] = None,
    password: Optional[str] = None,
) -> str:
    """
    Construct a log string for an RPC call like:
    GET <user:password>@http://host:port/method?args[0]=val0&args[1]=val1...
    """
    logmsg = "GET "

    if user or password:
        logmsg += f"<{user or ''}:{password or ''}>@"

    logmsg += f"{url}{method}"

    if params:
        query_string = "&".join(
            f"args[{i}]={quote(str(val))}" for i, val in enumerate(params)
        )
        logmsg += f"?{query_string}"

    return logmsg

Collaborator

d545ba3

This is formatted as if it were a GET, but JSON-RPC messages are actually sent as POST with a JSON body.

# the handshake establishment
time.sleep(TIMEOUT)

# now we expecte the node to be in Ready state
Collaborator

7815c41

nit: expect

@brunoerg brunoerg self-requested a review May 16, 2025 15:12
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch 5 times, most recently from f527985 to 733367e Compare May 17, 2025 13:27
qlrd added 3 commits May 18, 2025 11:26
Partial fix of vinteumorg#453

* added [add|remove|onetry] required argument to addnode command on
floresta-cli crate;
* added v2transport optional argument to addnode command on floresta-cli
crate;
* changed the return type of addnode command to null json (compliant
with bitcoin-core);
* removed the connect method in florestad/server and replaced it with an
addnode method that deals with the [add|remove|onetry] commands;
* added PeerAlreadyExists and FailedToAddPeer Errors on floresta-wire
crate;
* added handlers for connect and disconnect in floresta-wire crate (to
comply with the changes in floresta-cli).
* removed --daemon from test_framework/__init__.py to better log the
tests;
* added an assertFalse assertion method to test_framework/__init__.py;
* added a static method to test_framework/rpc/base.py for human-readable
logging of RPC calls;
* fixed the addnode method in test_framework/rpc/floresta.py to accept the
bitcoin-core-compliant JSON-RPC call;
* added bitcoin-core-compliant addnode and getpeerinfo to
test_framework/rpc/utreexo.py.
* split functional/integration tests for addnode into two tests in the
context of v1 and v2 transports;
* addnode tests between floresta and utreexod nodes in a regtest network
with only the genesis block.
@qlrd qlrd force-pushed the fix-addnode-rpc-saga branch from 733367e to 17eef61 Compare May 18, 2025 14:26
@qlrd
Contributor Author
qlrd commented May 18, 2025

Changed some code a bit, according (I think) to the requested changes, @Davidson-Souza, @brunoerg

@qlrd qlrd requested a review from Davidson-Souza May 18, 2025 15:02
Labels
chore Cleaning, refactoring, reducing complexity RPC Changes something with our JSON-RPC interface