Merged
133 changes: 66 additions & 67 deletions lightning/src/ln/channelmanager.rs
@@ -2506,8 +2506,8 @@ where
// `total_consistency_lock`
// |
// |__`forward_htlcs`
-// |   |
-// |   |__`pending_intercepted_htlcs`
+// |
+// |__`pending_intercepted_htlcs`
Contributor

I searched a bit through the code to ensure that these locks are now never held together; seems good. I did wonder whether debug_sync could be used to check this type of requirement, but I don't know exactly how it works.

Collaborator

The nice thing about our mutex debug stuff is that it requires no configuration: it checks that lock usage is correct by detecting the ordering rather than being told what it should be. The comments here are advisory, to help developers fix issues they run into, rather than prescriptive.
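For those curious, the general shape of that kind of configuration-free order detection can be sketched in a few lines. This is a toy in the spirit of the idea, not LDK's actual debug_sync API (all names here are illustrative, and a real implementation also has to catch inversion cycles longer than two locks):

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Mutex;

// Toy lock-order tracker: on every acquisition, record "each currently
// held lock was taken before this one", and panic if the reverse pair
// was ever observed, i.e. a two-lock ordering inversion.
struct OrderTracker {
    // before[a] contains b if lock `a` was held while acquiring lock `b`.
    before: Mutex<HashMap<usize, HashSet<usize>>>,
}

impl OrderTracker {
    fn new() -> Self {
        Self { before: Mutex::new(HashMap::new()) }
    }

    fn acquire(&self, held: &[usize], lock_id: usize) {
        let mut before = self.before.lock().unwrap();
        for &h in held {
            // If `lock_id` was ever held while taking `h`, taking
            // `lock_id` while holding `h` now inverts that order.
            if before.get(&lock_id).map_or(false, |s| s.contains(&h)) {
                panic!("lock order inversion between {} and {}", h, lock_id);
            }
            before.entry(h).or_default().insert(lock_id);
        }
    }
}

fn main() {
    let tracker = OrderTracker::new();
    tracker.acquire(&[], 1); // take lock 1 with nothing held
    tracker.acquire(&[1], 2); // records the ordering 1 -> 2
    // Taking 1 while holding 2 is now flagged as an inversion:
    let result = std::panic::catch_unwind(std::panic::AssertUnwindSafe(|| {
        tracker.acquire(&[2], 1);
    }));
    assert!(result.is_err());
}
```

Because the ordering is learned from actual executions rather than declared up front, comments like the lock tree above stay advisory documentation rather than configuration.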

Contributor

Interesting. And that runs automatically in debug builds?

Collaborator

It does, though it runs more aggressively when you build with --features backtrace.

Contributor

I wouldn't expect there to be degrees of aggressiveness. So is there a grey area in detecting lock ordering problems?

// |
// |__`decode_update_add_htlcs`
// |
@@ -10696,77 +10696,76 @@ This indicates a bug inside LDK. Please report this error at https://github.com/
prev_user_channel_id,
forward_info,
};
match forward_htlcs.entry(scid) {
hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().push(HTLCForwardInfo::AddHTLC(pending_add));
},
hash_map::Entry::Vacant(entry) => {
if !is_our_scid
&& pending_add.forward_info.incoming_amt_msat.is_some()
&& fake_scid::is_valid_intercept(
&self.fake_scid_rand_bytes,
scid,
&self.chain_hash,
) {
let intercept_id = InterceptId(
Sha256::hash(&pending_add.forward_info.incoming_shared_secret)
.to_byte_array(),
);
let mut pending_intercepts =
self.pending_intercepted_htlcs.lock().unwrap();
match pending_intercepts.entry(intercept_id) {
hash_map::Entry::Vacant(entry) => {
new_intercept_events.push_back((
events::Event::HTLCIntercepted {
requested_next_hop_scid: scid,
payment_hash,
inbound_amount_msat: pending_add
.forward_info
.incoming_amt_msat
.unwrap(),
expected_outbound_amount_msat: pending_add
.forward_info
.outgoing_amt_msat,
intercept_id,
},
None,
));
entry.insert(pending_add);

if !is_our_scid
&& pending_add.forward_info.incoming_amt_msat.is_some()
&& fake_scid::is_valid_intercept(
&self.fake_scid_rand_bytes,
scid,
&self.chain_hash,
) {
let intercept_id = InterceptId(
Sha256::hash(&pending_add.forward_info.incoming_shared_secret)
.to_byte_array(),
);
let mut pending_intercepts = self.pending_intercepted_htlcs.lock().unwrap();
match pending_intercepts.entry(intercept_id) {
hash_map::Entry::Vacant(entry) => {
new_intercept_events.push_back((
events::Event::HTLCIntercepted {
requested_next_hop_scid: scid,
payment_hash,
inbound_amount_msat: pending_add
.forward_info
.incoming_amt_msat
.unwrap(),
expected_outbound_amount_msat: pending_add
.forward_info
.outgoing_amt_msat,
intercept_id,
},
hash_map::Entry::Occupied(_) => {
let logger = WithContext::from(
&self.logger,
None,
Some(prev_channel_id),
Some(payment_hash),
);
log_info!(
None,
));
entry.insert(pending_add);
},
hash_map::Entry::Occupied(_) => {
let logger = WithContext::from(
&self.logger,
None,
Some(prev_channel_id),
Some(payment_hash),
);
log_info!(
logger,
"Failed to forward incoming HTLC: detected duplicate intercepted payment over short channel id {}",
scid
);
let htlc_source = HTLCSource::PreviousHopData(
pending_add.htlc_previous_hop_data(),
);
let reason = HTLCFailReason::from_failure_code(
LocalHTLCFailureReason::UnknownNextPeer,
);
let failure_type =
HTLCHandlingFailureType::InvalidForward {
requested_forward_scid: scid,
};
failed_intercept_forwards.push((
htlc_source,
payment_hash,
reason,
failure_type,
));
},
}
} else {
let htlc_source = HTLCSource::PreviousHopData(
pending_add.htlc_previous_hop_data(),
);
let reason = HTLCFailReason::from_failure_code(
LocalHTLCFailureReason::UnknownNextPeer,
);
let failure_type = HTLCHandlingFailureType::InvalidForward {
requested_forward_scid: scid,
};
failed_intercept_forwards.push((
htlc_source,
payment_hash,
reason,
failure_type,
));
},
}
} else {
match forward_htlcs.entry(scid) {
Contributor

Taking the forward lock could be moved here, to make it clearer that both locks are never held at the same time. If you like the suggestion, it can be done in the follow-up too.
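The shape being suggested can be sketched with simplified stand-in types (hypothetical names and signatures, not LDK's actual fields): acquiring each lock inside its own branch makes it structurally impossible for the two guards to overlap.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Illustrative only: each lock is taken inside its own branch, so the
// two guards can never be live at the same time, regardless of what
// the compiler infers about guard lifetimes.
fn route_htlc(
    pending_intercepts: &Mutex<HashMap<u64, u32>>,
    forward_htlcs: &Mutex<HashMap<u64, Vec<u32>>>,
    is_intercept: bool, scid: u64, htlc: u32,
) {
    if is_intercept {
        // Only the intercept lock is taken on this path.
        pending_intercepts.lock().unwrap().insert(scid, htlc);
    } else {
        // The forward lock is only taken here, after the intercept
        // path has been ruled out.
        forward_htlcs.lock().unwrap().entry(scid).or_default().push(htlc);
    }
}

fn main() {
    let intercepts = Mutex::new(HashMap::new());
    let forwards = Mutex::new(HashMap::new());
    route_htlc(&intercepts, &forwards, true, 42, 1);
    route_htlc(&intercepts, &forwards, false, 42, 2);
    assert_eq!(intercepts.lock().unwrap().get(&42), Some(&1));
    assert_eq!(forwards.lock().unwrap().get(&42), Some(&vec![2]));
}
```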

Contributor Author

That's a good point. I'm sorry, but I think there was a rebase error, because this commit as-is does not remove the lock dependency, for the reason you're describing. I'm debating whether it's worth waiting on CI/re-acks again...

Contributor

ChatGPT says the Rust compiler is smart enough to see that the lock can be dropped when entering either of the first two branches. If it's wrong, it'll be corrected in the next PR within days.
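One caveat worth checking rather than taking on faith: in std Rust, when a MutexGuard is released is a question of Drop scope, not of borrow-checker inference (NLL shortens borrows, not destructor timing). This is easy to probe with a plain std::sync::Mutex and try_lock; a minimal sketch, unrelated to LDK's types:

```rust
use std::sync::Mutex;

// A MutexGuard is released when it is dropped, which happens at the
// end of its scope unless it is dropped explicitly. Returns true if
// `a` can be re-locked while `b`'s guard is still live.
fn a_released_while_b_held(a: &Mutex<u32>, b: &Mutex<u32>) -> bool {
    let guard_a = a.lock().unwrap();
    drop(guard_a); // without this, guard_a would live to the end of the fn
    let _guard_b = b.lock().unwrap();
    a.try_lock().is_ok() // succeeds only because guard_a was dropped above
}

fn main() {
    let (a, b) = (Mutex::new(0), Mutex::new(0));
    assert!(a_released_while_b_held(&a, &b));
}
```

So whether the two guards overlap depends on where the lock call and any explicit drop sit in the source, which is why moving the acquisition into the branch makes the non-overlap obvious to readers as well as to the compiler.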

Contributor Author

Alright, I'm going to roll with that, since it's not a huge deal. Although I'm curious what our policy is on merging without CI passing in a case like this; it seems we already do it sometimes.

hash_map::Entry::Occupied(mut entry) => {
entry.get_mut().push(HTLCForwardInfo::AddHTLC(pending_add));
},
hash_map::Entry::Vacant(entry) => {
entry.insert(vec![HTLCForwardInfo::AddHTLC(pending_add)]);
}
},
},
}
}
}
}