Reconstruct ChannelManager forwarded HTLCs maps from Channels
#4227
Conversation
👋 Thanks for assigning @joostjager as a reviewer!
Codecov Report
❌ Patch coverage is […]

@@            Coverage Diff             @@
##             main    #4227      +/-   ##
==========================================
+ Coverage   89.33%   89.37%   +0.03%
==========================================
  Files         180      180
  Lines      139042   139599     +557
  Branches   139042   139599     +557
==========================================
+ Hits       124219   124770     +551
- Misses      12196    12240      +44
+ Partials     2627     2589      -38
Nvm, our test coverage for reload of these maps is just pretty incomplete.
Force-pushed 0b4eb68 to ce2ccac
Updated with new testing and a few tweaks: diff. Will rebase next.
ce2ccac to
1e86521
Compare
joostjager
left a comment
Did an initial review pass. Even though the change is pretty contained, I still found it difficult to fully follow what's happening.
Overall I think comments are very much on the light side in LDK, and the code areas touched in this PR are no exception in my opinion. Maybe, now that you've invested the time to build understanding, you can liberally sprinkle comments on your changes and nearby code?
Similar to the other /target directories we ignore where a bunch of files are generated during testing.
Force-pushed 1e86521 to 26c6dc7
I still need to finish adding some comments and responding to feedback, but I believe the tests are done and this should be mostly there, so I pushed up the latest state.
We had discussed whether to split out […]
Force-pushed 26c6dc7 to ad79155
Probably not in this PR since it's already decently sized and has a clear scope, but I will look into it for follow-up!
Force-pushed 6b919ce to 0b83e80
SGTM!
Btw, feel free to ping me whenever you think this is ready for a secondary reviewer!
I like the new commit structure. Left a few remarks and questions, some probably due to a few pieces of understanding missing in my mental model.
// Reconstruct `ChannelManager::decode_update_add_htlcs` from the serialized
// `Channel`, as part of removing the requirement to regularly persist the
// `ChannelManager`.
decode_update_add_htlcs.insert(
Can't this be done in the next loop over the monitors where the update_add_htlcs are deduped? This loop seems to be about payments.
Hmm, IMO the first loop is about repopulating HTLC maps from monitors, and the second loop is about pruning maps from monitors (hence having two loops so we don't prune before the maps are completely filled out), so this approach seems to fit.
I thought the first loop is about repopulating (outbound) payments. We probably mean the same thing.
But just for populating decode_update_add_htlcs, I don't think a two-step process is needed. I've been playing around a bit along this line. Not completely the same, but you get the idea.
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 299ff60ba..5fedf6fb9 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -17670,21 +17670,6 @@ where
let mut peer_state_lock = peer_state_mtx.lock().unwrap();
let peer_state = &mut *peer_state_lock;
is_channel_closed = !peer_state.channel_by_id.contains_key(channel_id);
- if let Some(chan) = peer_state.channel_by_id.get(channel_id) {
- if let Some(funded_chan) = chan.as_funded() {
- let inbound_committed_update_adds =
- funded_chan.get_inbound_committed_update_adds();
- if !inbound_committed_update_adds.is_empty() {
- // Reconstruct `ChannelManager::decode_update_add_htlcs` from the serialized
- // `Channel`, as part of removing the requirement to regularly persist the
- // `ChannelManager`.
- decode_update_add_htlcs.insert(
- funded_chan.context.outbound_scid_alias(),
- inbound_committed_update_adds,
- );
- }
- }
- }
}
if is_channel_closed {
@@ -17719,9 +17704,17 @@ where
for (channel_id, monitor) in args.channel_monitors.iter() {
let mut is_channel_closed = true;
let counterparty_node_id = monitor.get_counterparty_node_id();
+ let mut inbound_committed_update_adds = Vec::new();
if let Some(peer_state_mtx) = per_peer_state.get(&counterparty_node_id) {
let mut peer_state_lock = peer_state_mtx.lock().unwrap();
let peer_state = &mut *peer_state_lock;
+ if let Some(chan) = peer_state.channel_by_id.get(channel_id) {
+ if let Some(funded_chan) = chan.as_funded() {
+ inbound_committed_update_adds =
+ funded_chan.get_inbound_committed_update_adds();
+ }
+ }
+
is_channel_closed = !peer_state.channel_by_id.contains_key(channel_id);
}
@@ -17746,12 +17739,9 @@ where
// still have an entry for this HTLC in `forward_htlcs`,
// `pending_intercepted_htlcs`, or `decode_update_add_htlcs`, we were apparently not
// persisted after the monitor was when forwarding the payment.
- dedup_decode_update_add_htlcs(
- &mut decode_update_add_htlcs,
- &prev_hop_data,
- "HTLC was forwarded to the closed channel",
- &args.logger,
- );
+ inbound_committed_update_adds.retain(|update_add| {
+ update_add.htlc_id != prev_hop_data.htlc_id
+ });
dedup_decode_update_add_htlcs(
&mut decode_update_add_htlcs_legacy,
&prev_hop_data,
@@ -17877,6 +17867,8 @@ where
}
}
+ decode_update_add_htlcs.insert(*channel_id, inbound_committed_update_adds);
+
// Whether the downstream channel was closed or not, try to re-apply any payment
// preimages from it which may be needed in upstream channels for forwarded
// payments.
Are tests passing for you with that diff? I tried moving the repopulating down to the lower loop and got test failures
I didn't try, but they also fail for me. Did a few more restructurings below that make the tests pass. But the restructuring did lead to the realization that when the channel is closed, there are no inbound_committed_update_adds. And that there is no point in deduplicating them and adding to decode_update_add_htlcs? Am I overlooking something here?
diff --git a/lightning/src/ln/channelmanager.rs b/lightning/src/ln/channelmanager.rs
index 299ff60ba..8a101e791 100644
--- a/lightning/src/ln/channelmanager.rs
+++ b/lightning/src/ln/channelmanager.rs
@@ -17670,21 +17670,6 @@ where
let mut peer_state_lock = peer_state_mtx.lock().unwrap();
let peer_state = &mut *peer_state_lock;
is_channel_closed = !peer_state.channel_by_id.contains_key(channel_id);
- if let Some(chan) = peer_state.channel_by_id.get(channel_id) {
- if let Some(funded_chan) = chan.as_funded() {
- let inbound_committed_update_adds =
- funded_chan.get_inbound_committed_update_adds();
- if !inbound_committed_update_adds.is_empty() {
- // Reconstruct `ChannelManager::decode_update_add_htlcs` from the serialized
- // `Channel`, as part of removing the requirement to regularly persist the
- // `ChannelManager`.
- decode_update_add_htlcs.insert(
- funded_chan.context.outbound_scid_alias(),
- inbound_committed_update_adds,
- );
- }
- }
- }
}
if is_channel_closed {
@@ -17717,12 +17702,21 @@ where
}
}
for (channel_id, monitor) in args.channel_monitors.iter() {
+ let mut chan_state = None;
let mut is_channel_closed = true;
let counterparty_node_id = monitor.get_counterparty_node_id();
if let Some(peer_state_mtx) = per_peer_state.get(&counterparty_node_id) {
let mut peer_state_lock = peer_state_mtx.lock().unwrap();
let peer_state = &mut *peer_state_lock;
- is_channel_closed = !peer_state.channel_by_id.contains_key(channel_id);
+ if let Some(chan) = peer_state.channel_by_id.get(channel_id) {
+ if let Some(funded_chan) = chan.as_funded() {
+ chan_state = Some((
+ funded_chan.context.outbound_scid_alias(),
+ funded_chan.get_inbound_committed_update_adds(),
+ ));
+ }
+ is_channel_closed = false;
+ }
}
if is_channel_closed {
@@ -17746,12 +17740,12 @@ where
// still have an entry for this HTLC in `forward_htlcs`,
// `pending_intercepted_htlcs`, or `decode_update_add_htlcs`, we were apparently not
// persisted after the monitor was when forwarding the payment.
- dedup_decode_update_add_htlcs(
- &mut decode_update_add_htlcs,
- &prev_hop_data,
- "HTLC was forwarded to the closed channel",
- &args.logger,
- );
+
+ // No need to do this because chan_state is None?
+ //
+ // chan_state.1.retain(|update_add| {
+ // update_add.htlc_id != prev_hop_data.htlc_id
+ // });
dedup_decode_update_add_htlcs(
&mut decode_update_add_htlcs_legacy,
&prev_hop_data,
@@ -17875,6 +17869,12 @@ where
completion_action,
));
}
+
+ // Nothing to add because the channel is closed??
+ //
+ // if !chan_state.1.is_empty() {
+ // decode_update_add_htlcs.insert(chan_state.0, chan_state.1);
+ // }
}
// Whether the downstream channel was closed or not, try to re-apply any payment
After some staring and print statements, what's missing from your thoughts above/our offline discussion is the case where we add an HTLC forward to decode_update_add_htlcs when processing the inbound edge channel, then remove it again while processing the (closed) outbound edge channel. So I believe it is important to aggregate them all at once in the first loop and then prune them in the second loop, to avoid failing to prune if we process the outbound edge first.
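To make the ordering concern concrete, here is a minimal standalone sketch of the aggregate-then-prune shape; the `Htlc`, `Channel`, and `Monitor` types below are hypothetical simplifications (the real code keys the map by outbound SCID alias and stores `msgs::UpdateAddHTLC`s), not LDK's actual structures:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real LDK types.
#[derive(Clone)]
struct Htlc { htlc_id: u64 }
struct Channel { scid: u64, committed_update_adds: Vec<Htlc> }
struct Monitor { forwarded: Vec<(u64, u64)> } // (prev_scid, prev_htlc_id) pairs already forwarded

fn rebuild(channels: &[Channel], monitors: &[Monitor]) -> HashMap<u64, Vec<Htlc>> {
	let mut decode_update_add_htlcs: HashMap<u64, Vec<Htlc>> = HashMap::new();

	// Pass 1: repopulate the map from every channel's committed inbound HTLCs.
	for chan in channels {
		if !chan.committed_update_adds.is_empty() {
			decode_update_add_htlcs.insert(chan.scid, chan.committed_update_adds.clone());
		}
	}

	// Pass 2: prune HTLCs that a monitor shows were already forwarded downstream.
	// Pruning only after the map is fully populated avoids missing a prune when
	// the (closed) outbound edge happens to be processed before the inbound edge.
	for monitor in monitors {
		for (prev_scid, prev_htlc_id) in &monitor.forwarded {
			if let Some(htlcs) = decode_update_add_htlcs.get_mut(prev_scid) {
				htlcs.retain(|h| h.htlc_id != *prev_htlc_id);
			}
		}
	}
	decode_update_add_htlcs
}

fn main() {
	let channels = vec![Channel { scid: 42, committed_update_adds: vec![Htlc { htlc_id: 0 }, Htlc { htlc_id: 1 }] }];
	// The monitor for the outbound edge records that HTLC 0 was already forwarded.
	let monitors = vec![Monitor { forwarded: vec![(42, 0)] }];
	let map = rebuild(&channels, &monitors);
	assert_eq!(map[&42].len(), 1); // only HTLC 1 remains to be re-processed
}
```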
Oh interesting, didn't think of that. Do you want to add that explanation as a code comment, so that the knowledge is recorded?
Addressed in #4289
👋 The first review has been submitted! Do you think this PR is ready for a second reviewer? If so, click here to assign a second reviewer.
Force-pushed 0b83e80 to dfef41a
I think the semver CI failure is spurious.
Fixed by #4260.
I've tried rebasing this PR on top of #4151 to see if that fixes some of the test failures: https://github.com/joostjager/rust-lightning/tree/reconstruct-mgr-fwd-htlcs-rebased. Unfortunately, that is not the case: the same 44 failures as before, plus 2 extra, which are the tests added in this PR. The new tests fail on the channel not being in the right state after deserialization of the chan mgr. Probably unrelated to the tests, and a more general problem that blankets specific issues.
joostjager
left a comment
Left some comments, but overall I think this PR is ready for reviewer two.
Force-pushed dfef41a to 77d6dfa
🔔 1st Reminder: Hey @tnull! This PR has been waiting for your review.
tnull
left a comment
AFAICT the changes look good (though I might ofc lack context on the overall plan/next steps).
Just some minor comments on the tests.
elnosh
left a comment
Reaching the limits of my knowledge of this part of the code. Took some time to build context on the problem and understand the change. Is this the problematic part https://github.com/valentinewallace/rust-lightning/blob/77d6dfa3c2170c768e01302081d3550920a836af/lightning/src/ln/channelmanager.rs#L16945-L17029 that we want to get rid of? So in the future that channel state will come from the monitors and avoid these out-of-sync issues.
As far as my understanding goes, I think the changes here look good.
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. As part of this, we plan to store at least parts of Channels in ChannelMonitors, and that Channel data will be used in rebuilding the manager. Once we store update_adds in Channels, we can use them on restart when reconstructing ChannelManager maps such as forward_htlcs and pending_intercepted_htlcs. Upcoming commits will start doing this reconstruction.
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. As part of rebuilding ChannelManager forward HTLCs maps, we will also add a fix that will regenerate HTLCIntercepted events for HTLC intercepts that are present but have no corresponding event in the queue. That fix will use this new method.
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. We'll use this new util when reconstructing the ChannelManager::decode_update_add_htlcs map from Channel data in upcoming commits. While the Channel data is not included in the monitors yet, it will be in future work.
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. Soon we'll be reconstructing these now-legacy maps from Channel data (that will also be included in ChannelMonitors in future work), so rename them as part of moving towards not needing to persist them in ChannelManager.
Makes an upcoming commit cleaner
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. Here we start this process by rebuilding ChannelManager::decode_update_add_htlcs from the Channels, which will soon be included in the ChannelMonitors as part of a different series of PRs. The newly built map is not yet used but will be in the next commit.
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. Here we start this process by rebuilding ChannelManager::decode_update_add_htlcs, forward_htlcs, and pending_intercepted_htlcs from Channel data, which will soon be included in the ChannelMonitors as part of a different series of PRs. We also fix the reload_node test util to use the node's pre-reload config after restart. The previous behavior was a bit surprising and led to one of this commit's tests failing.
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure. In the previous commit we started this process by rebuilding ChannelManager::decode_update_add_htlcs, forward_htlcs, and pending_intercepted_htlcs from the Channel data, which will soon be included in the ChannelMonitors as part of a different series of PRs. Here we test that HTLC forwards that were originally received on 0.2 can still be successfully forwarded using the new reload + legacy handling code that will be merged for 0.3.
Force-pushed 77d6dfa to a24dcff
Yes, that's right :) We're trying to avoid hitting "A ChannelManager is stale compared to the current ChannelMonitor!", basically.
joostjager
left a comment
Only non-blocking comments. Happy that you've managed to get a first step towards fixing the force closures landed!
HTLCForwardInfo::FailHTLC { htlc_id, .. }
| HTLCForwardInfo::FailMalformedHTLC { htlc_id, .. } => (*scid, *htlc_id),
};
if let Some(pending_update_adds) = decode_update_add_htlcs.get_mut(&prev_scid) {
get_mut -> get?
match decode_update_add_htlcs.entry(scid) {
	hash_map::Entry::Occupied(mut update_adds) => {
		for legacy_update_add in legacy_update_adds {
			if !update_adds.get().contains(&legacy_update_add) {
store .get_mut() in a var outside of the loop and reuse it inside?
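For illustration, one possible shape of that refactor, using a simplified element type (`UpdateAdd` and `merge_legacy` are hypothetical; the real map holds `msgs::UpdateAddHTLC`s keyed by SCID):

```rust
use std::collections::{hash_map, HashMap};

type UpdateAdd = u64; // hypothetical stand-in for `msgs::UpdateAddHTLC`

fn merge_legacy(
	decode_update_add_htlcs: &mut HashMap<u64, Vec<UpdateAdd>>, scid: u64,
	legacy_update_adds: Vec<UpdateAdd>,
) {
	match decode_update_add_htlcs.entry(scid) {
		hash_map::Entry::Occupied(mut entry) => {
			// Take the mutable borrow once, outside the loop, rather than
			// calling `.get()`/`.get_mut()` on every iteration.
			let update_adds = entry.get_mut();
			for legacy_update_add in legacy_update_adds {
				if !update_adds.contains(&legacy_update_add) {
					update_adds.push(legacy_update_add);
				}
			}
		},
		hash_map::Entry::Vacant(entry) => {
			entry.insert(legacy_update_adds);
		},
	}
}
```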
/// Used to rebuild `ChannelManager` HTLC state on restart. Previously the manager would track
/// and persist all HTLC forwards and receives itself, but newer LDK versions avoid relying on
/// its persistence and instead reconstruct state based on `Channel` and `ChannelMonitor` data.
update_add_htlc_opt: Option<msgs::UpdateAddHTLC>,
Just a check question: is the disk space requirement for this acceptable?
Should be, what with per-channel HTLC limits. If a node has a ton of forwards going on, their node is expected to be more resourced, I would think.
tnull
left a comment
LGTM, mod pending comments/questions.
Will address the tweaks in the follow-up!
to_forward_infos.push((forward_info, htlc.htlc_id));
htlc.state = InboundHTLCState::Committed;
htlc.state = InboundHTLCState::Committed {
	// HTLCs will only be in state `InboundHTLCResolution::Resolved` if they were
So should this be an assertion that gets validated more explicitly on deserialization? This comment seems to imply this code is not reachable and will somehow misbehave; the second isn't clear to me, but we should validate the first either way.
What I meant in this comment is that this code is reachable but it shouldn't matter that we set update_add_htlc_opt to None here because we already require this HTLC to not be here on restart, according to the changelog (and restart is the only time when update_add_htlc_opt is used).
We can start returning explicitly DecodeError::InvalidValue if this variant is present on read -- to be clear, is this what you're advocating for?
I unfortunately am not sure of the exact misbehavior that can occur; according to the changelog it comes from #3355, but it isn't explained in much detail that I can see.
we already require this HTLC to not be here on restart, according to the changelog (and restart is the only time when update_add_htlc_opt is used).
We can start returning explicitly DecodeError::InvalidValue if this variant is present on read -- to be clear, is this what you're advocating for?
Right, if there's some implicit assumption based on the changelog and upgrade behavior, we really need to be validating that before allowing an object to be deserialized.
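For concreteness, a sketch of the kind of explicit check being asked for; `InboundHTLCStateRead` and `validate_read` are hypothetical names, and only `DecodeError::InvalidValue` is the existing LDK type mentioned above:

```rust
use lightning::ln::msgs::DecodeError;

// Hypothetical stand-ins; the real per-HTLC state enum in `channel.rs` differs.
enum InboundHTLCStateRead {
	Committed,
	// A state that the changelog requires to be absent on restart.
	RequiresUpgradeDrain,
}

fn validate_read(state: &InboundHTLCStateRead) -> Result<(), DecodeError> {
	match state {
		// Reject the serialization outright rather than silently continuing
		// with `update_add_htlc_opt` set to `None`.
		InboundHTLCStateRead::RequiresUpgradeDrain => Err(DecodeError::InvalidValue),
		InboundHTLCStateRead::Committed => Ok(()),
	}
}
```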
&InboundHTLCState::Committed => {
&InboundHTLCState::Committed { ref update_add_htlc_opt } => {
	3u8.write(writer)?;
	inbound_committed_update_adds.push(update_add_htlc_opt.clone());
Can we not avoid cloning during serialization?
Addressed in #4289
/// Used to rebuild `ChannelManager` HTLC state on restart. Previously the manager would track
/// and persist all HTLC forwards and receives itself, but newer LDK versions avoid relying on
/// its persistence and instead reconstruct state based on `Channel` and `ChannelMonitor` data.
update_add_htlc_opt: Option<msgs::UpdateAddHTLC>,
Bleh, writing the full onion packet for every pending HTLC (not just ones waiting to be forwarded) on every write (today in ChannelManager but also eventually in ChannelMonitorUpdates) is pretty gross (especially if we add an extra onion in the HTLCs for extra data, which we kinda need to do). Maybe the solution is to remove this field once the HTLC has been (irrevocably, by persisting a monitor update) forwarded, though it might be a bit annoying code-wise.
Are we sure this will cause real performance issues? 1000 outstanding inbound HTLCs = 1.3 extra MB per manager persist. It seems potentially like a premature optimization, or at least like the LN might have to really take off in between the time we merge this change and when we release support for not having to persist the manager.
That said, it might be easy to remove these irrevocable forwards on timer_tick_occurred? Or possibly return them from the call to channel.monitor_updating_restored(..) and remove them then.
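As a rough sanity check of the 1.3 MB figure, assuming the dominant per-HTLC cost is the 1366-byte BOLT 4 onion packet carried in each update_add_htlc (the exact serialized overhead in LDK will differ a bit):

```rust
fn main() {
	// 1 version byte + 33-byte ephemeral pubkey + 1300-byte hop payload + 32-byte HMAC.
	let onion_packet_bytes: u64 = 1 + 33 + 1300 + 32; // 1366
	let pending_inbound_htlcs: u64 = 1000;
	let extra_bytes = onion_packet_bytes * pending_inbound_htlcs;
	// ~1.37 MB of extra data per ChannelManager persist, in line with the estimate above.
	println!("~{:.2} MB", extra_bytes as f64 / 1_000_000.0);
}
```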
An extra MiB (on an object that's usually smaller than, or close to for a large routing node, a single MiB or so - it's currently 255K on my node) is a lot of extra overhead for an object we write all the time. But I'm also interested in its size in ChannelMonitorUpdates - storing all pending HTLCs takes ChannelMonitorUpdates from small diffs to huge objects, adding a ton of write time...
That said, it might be easy to remove these irrevocable forwards on timer_tick_occurred?
That seems like a lot of overhead to iterate every HTLC and see if it's been forwarded on each timer tick.
Or possibly return them from the call to channel.monitor_updating_restored(..) and remove them then.
This seems simpler.
},
_ => None,
})
.collect()
Bleh, the common case as of this PR is that we don't actually use the newly-cloned onion; they're generally always forwarded already. Maybe we should just fix that part and not worry about it here, but it would be nice to avoid copying everything every time just to not need it.
decode_update_add_htlcs_legacy.unwrap_or_else(|| new_hash_map());
let mut pending_intercepted_htlcs_legacy =
	pending_intercepted_htlcs_legacy.unwrap_or_else(|| new_hash_map());
let mut decode_update_add_htlcs = new_hash_map();
Why do we have logic to decode into both simultaneously? If we have pending_intercepted_htlcs_legacy we can use it; if we don't, we can insert into it. Trying to handle both at the same time (i.e. preferring the new-style decoding over the old style) just seems like extra work for no reason?
Part of the reason was to get test coverage for the new paths and gain confidence in them with prod usage. Seems nice to have the same behavior in test and prod.
But we could instead always prefer the old maps in prod + avoid writing the old maps in tests only; would you prefer something along those lines?
That might be simpler. I'm fine with wanting to use the new code, but we have a lot of extra logic to handle both objects, which seems like we're trying too hard when we could just do what you describe here. Also nothing wrong with keeping the old logic in prod for now...
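A sketch of that simpler arrangement with hypothetical names (`choose_decode_map` is not an LDK function, and `Htlcs` stands in for the real per-SCID pending-HTLC lists):

```rust
use std::collections::HashMap;

type Htlcs = Vec<u64>; // hypothetical stand-in for the real per-SCID HTLC lists

// Prefer the legacy, explicitly-persisted map whenever it was written (keeping
// the old logic in prod for now); fall back to the map rebuilt from `Channel`
// data only when the legacy map is absent, e.g. in tests that deliberately
// skip writing it.
fn choose_decode_map(
	legacy: Option<HashMap<u64, Htlcs>>, reconstructed: HashMap<u64, Htlcs>,
) -> HashMap<u64, Htlcs> {
	match legacy {
		Some(map) => map,
		None => reconstructed,
	}
}
```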
Addressed in #4289
We have an overarching goal of (mostly) getting rid of ChannelManager persistence and rebuilding the ChannelManager's state from existing ChannelMonitors, due to issues when the two structs are out-of-sync on restart. The main issue that can arise is channel force closure.

Here we start this process by rebuilding ChannelManager::decode_update_add_htlcs, forward_htlcs, and pending_intercepted_htlcs from the Channels, which will soon be included in the ChannelMonitors as part of #4218.

Channels will not contain committed update_add_htlcs […] decode_update_add_htlcs map and ignoring the legacy maps. This may indicate missing test coverage, since in theory we need to re-forward these HTLCs sometimes so they go back in the forward_htlcs map for processing.