addnode json-rpc command compliant with bitcoin-core: #465
Conversation
Force-pushed from e19b472 to 9227c64
Force-pushed from adedc0f to 44351c6
A first pass over 9e58232.
I think you also need to add an `added_peers` list to `UtreexoNode`. Then we need to check whether each of those peers is connected or not; if they are disconnected, retry. This is the logic for non-oneshot peers.
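A minimal sketch of that retry loop, assuming `added_peers` holds the `LocalAddress` of each manually added peer and reusing the `open_connection` call seen later in this thread (every identifier here is an assumption, not the final code):

```rust
// Hypothetical maintenance tick: reconnect manually added (non-oneshot)
// peers that are no longer in the connected-peers map. Field and method
// names are assumed from this review thread, not from the merged code.
fn maintain_added_peers(&mut self) {
    for address in self.added_peers.clone() {
        let is_connected = self.peers.values().any(|peer| {
            peer.address == address.get_net_address() && peer.port == address.get_port()
        });
        if !is_connected {
            // Added peers are regular (not oneshot) connections, so retry.
            self.open_connection(ConnectionKind::Regular(ServiceFlags::NONE), 0, address);
        }
    }
}
```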
crates/floresta-cli/src/main.rs
Outdated
```diff
-AddNode { node: String },
+AddNode {
+    node: String,
+    command: String,
```
I believe `command` can be an enum here, no?
fixed
```rust
/// it's false, it will only try to connect to the peer (used by `addnode <node> onetry` command).
pub async fn handle_connect_peer(
    &mut self,
    responder: tokio::sync::oneshot::Sender<NodeResponse>,
```
Is the full path needed here?
I put the full path to follow the pattern in lines 72, 346, 439 and 494. Is it worth adding a line `use tokio::sync::oneshot::Sender;` at the top of the file?
Changed some paths to `oneshot::<thing>`
```rust
)
.await;

let success = result.is_ok();
```
```rust
if let Err(e) = result {
    warn!("{e:?} {addr}:{port}");
}
```
fixed
```rust
let peer_id = *peer_id;
let idx = peer_data.address_id;

let success = self
```
You shouldn't call `handle_disconnection`. This is a callback called when the other side disconnects. Use a `peer.channel.send(NodeRequest::Shutdown)` instead.
Used `self.send_to_peer`, since it appeared to be similar in the requested review and has an `await` feature.
> Used `self.send_to_peer`, since it appeared to be similar in the requested review and has an `await` feature.

Yeap, that's actually the preferred way of doing it.
fixed
```rust
},
);

// It should successfully insert the peer. It returns None if
```
I don't think this handling is needed
I didn't understand: the `else` part, or the entire `if ... else`? (GitHub showed only the comments.)
I meant to mark 1517...1524
Force-pushed from 77a9aab to dd7f1e7
I think most of the logic is already there, just some comments to improve things.
```rust
// If the removal happens well, we need to drop the peer channel and respond with true.
let response = match self.send_to_peer(peer_id, NodeRequest::Shutdown).await {
    Ok(_) => {
        if let Some((_, address, _)) = self.added_peers.remove(&peer_id) {
```
Is all this error handling required here? If you use the `?` operator (e.g. `self.do_something()?;`) you propagate the error to the caller, and you only need to handle it once in the caller.
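For illustration, a minimal sketch of the `?` pattern the reviewer describes (the function name is an assumption for illustration only):

```rust
// Hypothetical helper: `?` forwards the WireError to whoever calls
// remove_peer, so the match/if-let on failure lives in one place only.
async fn remove_peer(&mut self, peer_id: PeerId) -> Result<(), WireError> {
    self.send_to_peer(peer_id, NodeRequest::Shutdown).await?;
    Ok(())
}
```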
fixed
```rust
);
self.open_connection(ConnectionKind::Regular(ServiceFlags::NONE), 0, local_addr)
UserRequest::Add((addr, port, v2transport)) => {
    let _ = self
```
Here, instead of ignoring the error, you would handle it and return true/false given the response.
```rust
if insert_peer {
    // Check if the peer already exists
    if self.added_peers.values().any(|(_, peer_addr, _)| {
```
I would avoid long statements like this; they are usually harder to read. Something like this is usually better (although we do have cases like this in the code :/ ):
```rust
let already_exists = self.added_peers.values()....
if already_exists { ... }
```
```rust
/// A simple map of added peers, used to track the ones we added manually
/// by `addnode <ip:port> add` command.
pub(crate) added_peers: HashMap<PeerId, (ConnectionKind, LocalAddress, TransportProtocol)>,
```
Question: Why use this `HashMap` here? The connection kind is always regular, and the `PeerId` is sorta irrelevant, since we can compare to another peer by the ip:port tuple. I think a simple `Vec<(LocalAddress, TransportProtocol)>` would be enough? For clarity, they could be added to a new `AddedPeerInfo` struct.
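A sketch of what that suggestion could look like; the field set here is an assumption based on this thread:

```rust
/// Hypothetical struct for a manually added peer. The connection kind is
/// omitted because, as noted above, it is always Regular for added peers.
pub(crate) struct AddedPeerInfo {
    pub(crate) address: LocalAddress,
    pub(crate) transport_protocol: TransportProtocol,
}
```

The `HashMap` then collapses to a plain `Vec<AddedPeerInfo>`, with membership checked by comparing the ip:port tuple.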
Also, I think this could live inside the `NodeCommon` struct.
I think yes; I used a `HashMap` because I wasn't sure about the connection kind always being `Regular`. I will try the `Vec` here.
tests/floresta-cli/addnode-test.py
Outdated
```python
# Test assertions
result_0 = node_0.rpc.addnode(node="0.0.0.0:18443")
self.assertTrue(result_0)
def test_should_add_each_other_to_peers_list(self, nodes, v2transport=False):
```
You can use `getpeerinfo` to check if it actually succeeded.
Also, does this work? Floresta doesn't listen for incoming connections; you shouldn't be able to connect to each other like this.
Yup, it was working and checking with boolean results.
But while fixing it, I remembered that Core's RPC returns null JSON and we're returning a boolean.
Happy to know that we can use `getpeerinfo` to check, so I think we can change the `NodeResponse` to return `()` maybe?
> Happy to know that we can use getpeerinfo to check, so i think we can change the NodeResponse to return () maybe?

Yes, I think you can just use a unit variant (a variant with no fields).
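For example, a unit-variant sketch (the variant name is hypothetical, not the one in the PR):

```rust
// Receiving the variant itself signals success; no payload is needed.
pub enum NodeResponse {
    AddPeer, // unit variant: no fields
    // ... other responses elided
}
```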
> Yes, I think you can just use a unit variant (a variant with no fields).

Changed the `addnode` to return `Option<()>` (so in floresta we can track some error). In Python it will still receive `None`.
But now the problem is with `getpeerinfo`. While testing the new approach, I received this (the time needs a fix for GMT+3, so consider `real_time = time - 3`):
```
[FLORESTARPC 2025-05-08 12:12:49.441144+00:00] GET http://127.0.0.1:18442/addnode?params=['0.0.0.0:18443', 'add', False]
[FLORESTARPC 2025-05-08 12:12:49.441165+00:00] None
[FLORESTARPC 2025-05-08 12:12:49.443173+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-08 12:12:49.443188+00:00] []
```
From the server's perspective:
```
[2025-05-08 09:12:48 INFO florestad::florestad] Server running on: 0.0.0.0:50002
[2025-05-08 09:12:48 WARN floresta_wire::p2p_wire::running_node] Failed to init Utreexo peers: anchors.json does not exist yet
[2025-05-08 09:12:48 INFO floresta_wire::p2p_wire::chain_selector] Starting IBD, selecting the best chain
[2025-05-08 09:12:49 INFO floresta_electrum::electrum_protocol] New client connection
[2025-05-08 09:12:49 INFO floresta_electrum::electrum_protocol] Client closed connection: 0
[2025-05-08 09:12:49 INFO floresta_wire::p2p_wire::node] Peer 0 (0.0.0.0:18443) connected
```
I think it's fixed. I forgot to increment `peer_id_count` and also to push to the `peer_id` list near the end of `open_connection`:
```diff
@@ -1675,28 +1682,35 @@ where
             (peer_count, Instant::now()),
         );
+        self.peers.insert(
+            peer_count,
+            LocalPeerView {
+                address: address.get_net_address(),
+                port: address.get_port(),
+                user_agent: "".to_string(),
+                state: PeerStatus::Awaiting,
+                channel: requests_tx,
+                services: ServiceFlags::NONE,
+                _last_message: Instant::now(),
+                kind,
+                address_id: peer_id as u32,
+                height: 0,
+                banscore: 0,
+                transport_protocol,
+            },
+        );
-        self.peers.insert(peer_count, peer_view.clone());
+        // Increment peer_id count and the list of peer ids
+        // so we can get information about connected or
+        // added peers when requesting with getpeerinfo command
+        self.peer_ids.push(peer_count);
+        self.peer_id_count += 1;
         if insert_peer {
```
and received that:
```
[FLORESTARPC 2025-05-08 12:41:13.977749+00:00] GET http://127.0.0.1:18442/addnode?params=['0.0.0.0:18443', 'add', False]
[FLORESTARPC 2025-05-08 12:41:13.977774+00:00] None
[FLORESTARPC 2025-05-08 12:41:13.979773+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-08 12:41:13.979784+00:00] [{'address': '0.0.0.0:18443', 'initial_height': 0, 'kind': 'regular', 'services': 'ServiceFlags(NONE)', 'state': 'Awaiting', 'transport_protocol': 'V1', 'user_agent': ''}]
```
A thing worth mentioning is that the state is `Awaiting`. Shouldn't it be `Ready`?
`Awaiting` means we didn't handshake yet; I think `0.0.0.0` isn't a valid address in this context. It will be `Ready` after you see the log about the handshake succeeding (something like `New peer id=X version=/blah/ ...`).
Shouldn't we send the `Ready` message after a successful add/onetry? I mean, `sync_node::handle_message` receives the `FromPeers::Ready`, but in `node::handle_connect_peer` we never send back any info.
The `Ready` message comes from the peer itself, regardless of how we decided to connect with it. They send it after finishing the handshake.
Force-pushed from 8bd7c66 to bcfbdad
For some reason (I'm still trying to figure out why), the automatic connection code creates two connections to the same peer.

I have no context about this, but couldn't it be that we use both the IPv4 and IPv6 addresses from the same peer, treating it as two peers? The DNS lookup returns both kinds of addresses.

I don't think so; we only treat individual addresses, and we don't know if the address belongs to the same node. I'm seeing something like this, for every address:
@Davidson-Souza, @JoseSK999, reproduced it on tests (I believe). Made this little change:
```diff
diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index a579fe9..1f4393a 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -398,10 +398,11 @@ where
     ) -> Result<(), WireError> {
         debug!("Trying to add peer {addr}:{port} with _v2transport={v2transport}");
-        let kind = ConnectionKind::Regular(ServiceFlags::NONE);
-
+        let protocol_version = 70016;
+        let blocks = self.blocks.iter().len() as u32;
+        let services = ServiceFlags::NONE;
+        let kind = ConnectionKind::Regular(services);
         let peer_id = self.peer_id_count;
-
         let addr_v2 = match addr {
             IpAddr::V4(addr) => AddrV2::Ipv4(addr),
             IpAddr::V6(addr) => AddrV2::Ipv6(addr),
@@ -411,10 +412,11 @@ where
             addr_v2,
             0,
             AddressState::NeverTried,
-            0.into(),
+            ServiceFlags::NONE,
             port,
             self.peer_id_count as usize,
         );
+        let address_id = address.id;
         let transport_protocol = if v2transport {
             TransportProtocol::V2
@@ -455,6 +457,28 @@ where
             .send(response)
             .map_err(|_| WireError::ResponseSendError)?;
+        // If the connection is successful, and anything went wrong
+        // send back a "PeerMessages::Ready" notification,
+        // so sync_node.rs::handle_message can handle it with
+        // handle_peer_ready method.
+        let version = Version {
+            user_agent: self.config.user_agent.clone(),
+            protocol_version,
+            id: peer_id,
+            blocks,
+            address_id,
+            services,
+            kind,
+            transport_protocol,
+        };
+
+        self.node_tx
+            .send(NodeNotification::FromPeer(
+                peer_id,
+                PeerMessages::Ready(version),
+            ))
+            .map_err(|_| WireError::ResponseSendError)?;
+
         Ok(())
     }
```
And received this:
```
[FLORESTARPC 2025-05-10 14:34:03.733097+00:00] GET http://127.0.0.1:18442/addnode?params=['127.0.0.1:18443', 'add', False]
[FLORESTARPC 2025-05-10 14:34:03.733122+00:00] None
[FLORESTARPC 2025-05-10 14:34:08.740269+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-10 14:34:08.740283+00:00] [{'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}, {'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}]
```
Like you said, the same peer appeared twice. Maybe the issue can be in…
Removed that line at the end of:
```diff
diff --git a/crates/floresta-wire/src/p2p_wire/node.rs b/crates/floresta-wire/src/p2p_wire/node.rs
index 1f4393a..73c45f3 100644
--- a/crates/floresta-wire/src/p2p_wire/node.rs
+++ b/crates/floresta-wire/src/p2p_wire/node.rs
@@ -1118,8 +1118,6 @@ where
         self.address_man
             .update_set_state(version.address_id, AddressState::Connected)
             .update_set_service_flag(version.address_id, version.services);
-
-        self.peer_ids.push(peer);
 }
```
And that appears to be it; now I received just one peer:
```
[FLORESTARPC 2025-05-10 15:51:57.175350+00:00] GET http://127.0.0.1:18442/addnode?params=['127.0.0.1:18443', 'add', False]
[FLORESTARPC 2025-05-10 15:51:57.175372+00:00] None
[FLORESTARPC 2025-05-10 15:52:02.187119+00:00] GET http://127.0.0.1:18442/getpeerinfo?params=[]
[FLORESTARPC 2025-05-10 15:52:02.187174+00:00] [{'address': '127.0.0.1:18443', 'initial_height': 0, 'kind': 'regular', 'oneshot': False, 'services': 'ServiceFlags(NONE)', 'state': 'Ready', 'transport_protocol': 'V1', 'user_agent': '/Floresta:0.7.0-109-gbcfbdad-dirty/'}]
```
Force-pushed from 660f5d3 to 355d26c
Force-pushed from bb09450 to 7815c41
I think everything is working. I've tried it locally and it worked well.
Leaving some minor comments about code quality, and we should be good.
crates/floresta-cli/src/main.rs
Outdated
```rust
#[derive(Debug, Clone, ValueEnum, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum AddnodeCommand {
    Add,
    Remove,
    Onetry,
}
```
This could be moved to `rpc_types`. And also get a small docstring?
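A sketch of what the enum could look like once moved to `rpc_types`, with docstrings added (the wording here is only a suggestion, not the final text):

```rust
/// The `command` argument of `addnode`, mirroring Bitcoin Core's RPC:
/// manage the list of manually added peers or try a one-time connection.
#[derive(Debug, Clone, ValueEnum, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum AddnodeCommand {
    /// Add the node to the added-peers list and keep it connected.
    Add,
    /// Remove the node from the added-peers list.
    Remove,
    /// Try a single connection to the node, without tracking it.
    Onetry,
}
```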
crates/floresta-cli/src/main.rs
Outdated
```rust
/// 3. v2transport (boolean, optional, default=false):
///    - Attempt to connect using BIP324 v2 transport protocol (ignored for 'remove' command)
///
/// Result: json null
```
There's a trailing whitespace after 235
crates/floresta-cli/src/main.rs
Outdated
```rust
// TODO:
// from https://bitcoincore.org/en/doc/28.0.0/rpc/network/addnode/
// Nodes added using addnode (or -connect) are protected from DoS disconnection and are not required to be
// full nodes/support SegWit as other outbound peers are (though such peers will not be synced from).
// Addnode connections are limited to 8 at a time and are counted separately from the -maxconnections limit.
```
I think that "not checking for service bits" and exempting them from the connection limit is already done. I think we need to avoid banning them and limit them to 8 max.
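A hedged sketch of the missing cap (the constant and the error variant are assumptions; the thread only says a `FailedToAddPeer` error was added to floresta-wire):

```rust
// Bitcoin Core limits addnode connections to 8 at a time; a similar
// guard could run before opening a manual connection. Names assumed.
const MAX_ADDNODE_CONNECTIONS: usize = 8;

if self.added_peers.len() >= MAX_ADDNODE_CONNECTIONS {
    return Err(WireError::FailedToAddPeer);
}
```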
```rust
/// A simple struct of added peers,
/// used to track the ones we added manually
/// by `addnode <ip:port> add` command.
```
nit: as per the contributions guide, you should swap the `#[derive]` attribute with the docstrings.
```rust
pub struct UtreexoNode<Chain: BlockchainInterface + UpdatableChainstate, Context> {
    pub(crate) common: NodeCommon<Chain>,
    pub(crate) context: Context,
    pub(crate) added_peers: Vec<AddedPeerInfo>,
```
Move this `added_peers` into `NodeCommon`. This struct should be as simple as it gets.
```rust
Add(Option<()>),
Remove(Option<()>),
Onetry(Option<()>),
```
It seems like they either return a `Some(())` if they succeed or a `None` on failure. Why not a boolean?
```python
Returns
    success: Whether we successfully added this node to our list of peers
"""
if rpcquirk:
```
Does the utreexod class use this option?
tests/test_framework/rpc/utreexo.py
Outdated
```diff
@@ -7,7 +7,7 @@
 from test_framework.rpc.base import BaseRPC

 REGTEST_RPC_SERVER = {
-    "host": "127.0.0.1",
+    "host": "0.0.0.0",
```
Question: Why did you change this?
tests/test_framework/rpc/base.py
Outdated
```python
def build_log_message(
    url: str,
    method: str,
    params: List[Any],
    user: Optional[str] = None,
    password: Optional[str] = None,
) -> str:
    """
    Construct a log string for an RPC call like:
    GET <user:password>@http://host:port/method?args[0]=val0&args[1]=val1...
    """
    logmsg = "GET "

    if user or password:
        logmsg += f"<{user or ''}:{password or ''}>@"

    logmsg += f"{url}{method}"

    if params:
        query_string = "&".join(
            f"args[{i}]={quote(str(val))}" for i, val in enumerate(params)
        )
        logmsg += f"?{query_string}"

    return logmsg
```
This is formatted as if it was a `GET`, but JSON-RPC messages are actually `POST` with a JSON body as data.
tests/floresta-cli/addnode-test.py
Outdated
```python
# the handshake establishment
time.sleep(TIMEOUT)

# now we expecte the node to be in Ready state
```
nit: expect
Force-pushed from f527985 to 733367e
Partial fix of vinteumorg#453

- added [add|remove|onetry] required argument to addnode command on floresta-cli crate;
- added v2transport optional argument to addnode command on floresta-cli crate;
- changed the return type of addnode command to null json (compliant with bitcoin-core);
- removed the connect method in florestad/server and replaced it with an addnode method that deals with the [add|remove|onetry] things;
- added PeerAlreadyExists and FailedToAddPeer errors on floresta-wire crate;
- added handlers for connect and disconnect in floresta-wire crate (to comply with the changes in floresta-cli).

- removed --daemon from test_framework/__init__.py to better log the tests;
- added an assertFalse assertion method to test_framework/__init__.py;
- added a static method to test_framework/rpc/base.py for human-readable logging of rpc calls;
- fixed addnode method in test_framework/rpc/floresta.py to accept bitcoin-core's compliant JSONRPC call;
- added bitcoin-core's compliant addnode and getpeerinfo into test_framework/rpc/utreexo.py.

- split functional/integration tests for addnode into two tests in the context of v1 and v2 transports;
- addnode tests between floresta and utreexod nodes in a regtest network with only the genesis block.
Force-pushed from 733367e to 17eef61
Changed some code a little bit, according (I think) to the requested changes @Davidson-Souza, @brunoerg.
Partial fix of #453
What is the purpose of this pull request?
Which crates are being modified?
Description
Notes to the reviewers
According to #453, the `addnode` command is "Implemented, not tested (missing one parameter)". In truth, it is missing one required parameter (`command`, which can be `add`, `remove` or `onetry`) and one optional parameter (`v2transport`, whether or not the v2 transport will be used). Until now, no functional tests have been added, nor is `v2transport` well implemented in this `addnode` command. So, this PR will be in draft mode until that is resolved.

Checklist

- `just lint`
- `cargo test`