
trampoline: accumulate inbound trampoline htlcs#4493

Open
carlaKC wants to merge 14 commits into lightningdevkit:main from
carlaKC:2299-mpp-accumulation

Conversation

@carlaKC
Contributor

@carlaKC carlaKC commented Mar 18, 2026

This PR handles accumulation of inbound MPP trampoline parts, including handling of timeout and MPP validation. When all parts are successfully accumulated, we'll fail the MPP set backwards as we do not yet have support for outbound dispatch.

It does not include:

  • Handling trampoline replays / reload from disk (we currently refuse to read HTLCSource::TrampolineForward to prevent downgrade with trampoline in flight).
  • Interception of trampoline forwards, which I think we should add a separate flag for because it's difficult to map to our existing structure when we don't know the outbound channel at time of interception.

@ldk-reviews-bot

ldk-reviews-bot commented Mar 18, 2026

I've assigned @valentinewallace as a reviewer!
I'll wait for their review and will help manage the review process.
Once they submit their review, I'll check if a second reviewer would be helpful.

Comment thread lightning/src/events/mod.rs Outdated
@carlaKC carlaKC force-pushed the 2299-mpp-accumulation branch 2 times, most recently from 2f01cdc to 9d17783 Compare March 18, 2026 17:53
@valentinewallace
Contributor

I find it easier to be confident in smaller PRs, so happy to see this broken up as mentioned on the dev call!

@codecov

codecov bot commented Mar 24, 2026

Codecov Report

❌ Patch coverage is 91.61850% with 58 lines in your changes missing coverage. Please review.
✅ Project coverage is 86.24%. Comparing base (78df66d) to head (0df8fa3).
⚠️ Report is 22 commits behind head on main.

Files with missing lines Patch % Lines
lightning/src/ln/channelmanager.rs 91.87% 40 Missing and 5 partials ⚠️
lightning/src/ln/onion_payment.rs 76.66% 7 Missing ⚠️
lightning/src/ln/onion_utils.rs 64.28% 3 Missing and 2 partials ⚠️
lightning/src/blinded_path/payment.rs 98.33% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4493      +/-   ##
==========================================
- Coverage   86.99%   86.24%   -0.75%     
==========================================
  Files         163      161       -2     
  Lines      108744   109083     +339     
  Branches   108744   109083     +339     
==========================================
- Hits        94605    94084     -521     
- Misses      11658    12366     +708     
- Partials     2481     2633     +152     
Flag Coverage Δ
fuzzing ?
tests 86.24% <91.61%> (+0.15%) ⬆️

Flags with carried forward coverage won't be shown.


@carlaKC carlaKC removed this from Weekly Goals Mar 26, 2026
@valentinewallace
Contributor

Can you let me know your thoughts on the top two claude'd commits here? https://github.com/valentinewallace/rust-lightning/tree/2026-03-mpp-accumulation-wip

I think I prefer not entirely repurposing the existing claimable structs for trampoline. The top commit is admittedly pretty large, though it's super mechanical. The nice part is that there's no need to add PaymentPurpose::Trampoline, or to carry fields that don't apply in either claimable HTLCs or trampoline HTLCs, such as the skimmed fee.

@carlaKC
Contributor Author

carlaKC commented Mar 26, 2026

Can you let me know your thoughts on the top two claude'd commits here? https://github.com/valentinewallace/rust-lightning/tree/2026-03-mpp-accumulation-wip
I think I prefer not entirely repurposing the existing claimable structs for trampoline. The top commit is admittedly pretty large, though it's super mechanical.

Nice cleanup! Didn't think that repurposing was too bad because it's relatively contained, but def nice to not need an unused PaymentPurpose/few fields. Will incorporate in the prefactor 👍

@carlaKC carlaKC force-pushed the 2299-mpp-accumulation branch 3 times, most recently from 2a44215 to 86256af Compare April 7, 2026 17:21
@carlaKC carlaKC self-assigned this Apr 9, 2026
@carlaKC carlaKC force-pushed the 2299-mpp-accumulation branch 4 times, most recently from 0df8fa3 to 3b411ba Compare April 14, 2026 16:11
carlaKC added 8 commits April 14, 2026 12:13
We don't need to track a single trampoline secret in our HTLCSource
because it is already tracked in each of the previous hops contained
in the source. This field was unnecessarily added under the belief that
each inner trampoline onion we receive for inbound MPP trampoline would
have the same session key. Removing it breaks persistence, but that is
acceptable because we currently refuse to decode trampoline forwards,
and will not read HTLCSource::Trampoline to prevent downgrades.
When we receive a trampoline forward, we need to wait for MPP parts to
arrive at our node before we can forward the outgoing payment onwards.
This commit threads this information through to our pending htlc struct
which we'll use to validate the parts we receive.
For regular blinded forwards, it's okay to use the amount in our
update_add_htlc to calculate the amount that we need to forward onwards
because we're only expecting one HTLC in and one HTLC out.

For blinded trampoline forwards, multiple incoming HTLCs may need to
accumulate at our node to make up the total incoming amount, from which
we calculate the amount that we need to forward onwards to the next
trampoline. This commit updates our next-trampoline amount calculation
to use the total intended incoming amount for the payment so that the
next trampoline's amount is calculated correctly.

`decode_incoming_update_add_htlc_onion` is left unchanged because
the call to `check_blinded` will be removed in upcoming commits.
When we are a trampoline node receiving an incoming HTLC, we need access
to our outer onion's amount_to_forward to check that we have been
forwarded the correct amount. We can't use the amount in the inner
onion, because that contains our fee budget - somebody could forward us
less than we were intended to receive, and provided it is within the
trampoline fee budget we wouldn't know.

In this commit we set our outer onion values in PendingHTLCInfo to
perform this validation properly. In the commit that follows, we'll
start tracking our expected trampoline values in trampoline-specific
routing info.
When we're forwarding a trampoline payment, we need to remember the
amount and CLTV that the next trampoline is expecting.
When we receive trampoline payments, we first want to validate the
values in our outer onion to ensure that we've been given the amount/
expiry the sender intended us to receive, so that forwarding nodes
can't get away with sending us less than they should.
When we are a trampoline router, we need to accumulate incoming HTLCs
(if MPP is used) before forwarding the trampoline-routed outgoing
HTLC(s). This commit adds a new map in channel manager, and mimics the
handling done for claimable_payments.

We will rely on our pending_outbound_payments (which will contain a
payment for trampoline forwards) to complete MPP claims, and we do not
want to surface `PaymentClaimable` events for trampoline, so we do not
need pending_claiming_payments like we have for MPP receives.
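The amount derivation described in these commits can be sketched roughly as follows. This is an illustrative standalone helper, not LDK's actual code: the next-trampoline amount is derived from the total intended inbound amount across all MPP parts, inverting a base + proportional fee.

```rust
// Illustrative sketch only (not LDK's implementation): derive the amount to
// forward to the next trampoline from the *total* intended inbound amount.
// If fee(out) = base + out * ppm / 1e6 and in = out + fee(out), then
// out = (in - base) * 1e6 / (1e6 + ppm), truncating. Returns None on underflow.
fn amt_to_forward_msat(
    total_inbound_msat: u64, fee_base_msat: u64, fee_proportional_millionths: u64,
) -> Option<u64> {
    let after_base = total_inbound_msat.checked_sub(fee_base_msat)?;
    let out =
        (after_base as u128) * 1_000_000 / (1_000_000 + fee_proportional_millionths as u128);
    u64::try_from(out).ok()
}

fn main() {
    // No fee: forward exactly the total inbound amount.
    assert_eq!(amt_to_forward_msat(1_000_000, 0, 0), Some(1_000_000));
    // 1000 msat base fee, no proportional component.
    assert_eq!(amt_to_forward_msat(1_001_000, 1_000, 0), Some(1_000_000));
    // Inbound below the base fee: underflow, no valid forward amount.
    assert_eq!(amt_to_forward_msat(500, 1_000, 0), None);
}
```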
carlaKC added 6 commits April 14, 2026 12:15
Add our MPP accumulation logic for trampoline payments, but reject
them when they fully arrive. This allows us to test parts of our
trampoline flow without fully enabling it.

This commit keeps the same committed_to_claimable debug_assert behavior
as MPP claims, asserting that we do not fail our
check_claimable_incoming_htlc merge for the first HTLC that we add to a
set. This assert could also be hit if the intended amount exceeds
`MAX_VALUE_MSAT`, but we can't hit this in practice.
If we're a trampoline node and received an error from downstream that
we can't fully decrypt, we want to double-wrap it for the original
sender. Previously not implemented because we'd only focused on
receives, where there's no possibility of a downstream error.

While proper error handling will be added in a followup, we add the
bare minimum required here for testing.
While proper error handling will be added in a followup, we add the
bare minimum required here for testing.

Note that we intentionally keep the behavior of not setting
`payment_failed_permanently` for local failures because we can possibly
retry it.

For example, a local ChannelClosed error is considered to be permanent,
but we can still retry along another channel.
We can't perform proper validation because we don't know the outgoing
channel id until we forward the HTLC, so we just perform a basic CLTV
check.

Now that we've got rejection on inbound MPP accumulation, we relax this
check to allow testing of inbound MPP trampoline processing.
@carlaKC carlaKC force-pushed the 2299-mpp-accumulation branch from 3b411ba to 54d0f2b Compare April 14, 2026 16:15
@carlaKC carlaKC marked this pull request as ready for review April 14, 2026 17:57
Comment on lines +8584 to +8585
let _max_total_routing_fee_msat = match incoming_amt_msat
.checked_sub(our_forwarding_fee_msat + next_hop_info.amount_msat)
Collaborator


Nit: our_forwarding_fee_msat + next_hop_info.amount_msat could theoretically overflow u64 (e.g. a malicious trampoline onion with amount_msat near MAX_VALUE_MSAT and a high proportional fee). In practice, MAX_VALUE_MSAT is ~2.1e18 so overflow is unlikely but not impossible with pathological configs. Consider using checked_add to be defensive:

Suggested change

```diff
-let _max_total_routing_fee_msat = match incoming_amt_msat
-	.checked_sub(our_forwarding_fee_msat + next_hop_info.amount_msat)
+let _max_total_routing_fee_msat = match our_forwarding_fee_msat
+	.checked_add(next_hop_info.amount_msat)
+	.and_then(|total| incoming_amt_msat.checked_sub(total))
 {
```

Same concern on line 8594 with next_hop_info.cltv_expiry_height + cltv_delta.
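A runnable sketch of the defensive pattern suggested above (the standalone function and parameter names are hypothetical, not the PR's code):

```rust
// Hypothetical standalone version of the suggested pattern: checked_add guards
// the sum before the checked_sub, so both overflow and underflow yield None
// instead of a panic (debug) or silent wrap (release).
fn max_total_routing_fee_msat(
    incoming_amt_msat: u64, our_forwarding_fee_msat: u64, next_hop_amount_msat: u64,
) -> Option<u64> {
    our_forwarding_fee_msat
        .checked_add(next_hop_amount_msat)
        .and_then(|total| incoming_amt_msat.checked_sub(total))
}

fn main() {
    // 1000 in, 100 fee + 800 forwarded: 100 msat left over for routing fees.
    assert_eq!(max_total_routing_fee_msat(1_000, 100, 800), Some(100));
    // Underflow: incoming amount doesn't cover fee + next-hop amount.
    assert_eq!(max_total_routing_fee_msat(500, 100, 800), None);
    // Overflow in the addition is caught rather than wrapping.
    assert_eq!(max_total_routing_fee_msat(1_000, u64::MAX, 1), None);
}
```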

Comment on lines +8510 to +8512
// TODO: add restriction to specification that trampoline should be consistent across
// MPP parts? Currently, we'll accept a MPP trampoline payments that specify different
// next_node_id destinations (just forwarding to the last one that arrives).
Collaborator


This TODO has security implications worth calling out: if MPP parts carry different next_hop_info (onion packet, amount, cltv, next_node_id), only the last-arriving part's values are used for fee/cltv validation and forwarding. A malicious sender could exploit this by sending one part with legitimate values (to pass initial checks) and a final part with different values.

Since forwarding isn't implemented yet, this isn't exploitable today, but when it is, this needs to be addressed — either by requiring consistency across parts or by using the first part's values.

Comment on lines +192 to +193
let (next_hop_amount, next_hop_cltv) = check_blinded_forward(
outer_hop_data.multipath_trampoline_data.as_ref().map(|f| f.total_msat).unwrap_or(msg.amount_msat), msg.cltv_expiry, &next_trampoline_hop_data.payment_relay, &next_trampoline_hop_data.payment_constraints, &next_trampoline_hop_data.features
Collaborator


This changes the input to check_blinded_forward from msg.amount_msat (single HTLC amount) to the MPP total_msat. This is significant: the fee computation in amt_to_forward_msat and the check_blinded_payment_constraints (including htlc_minimum_msat check) now operate on the total MPP amount rather than the per-HTLC amount.

For the fee computation, this is correct — the blinded relay parameters are designed to be applied to the total amount, not per-part. But check_blinded_payment_constraints at line 69 of this file calls check_blinded_payment_constraints(inbound_amt_msat, ...) which checks against htlc_minimum_msat. Using the total here means a per-HTLC amount below htlc_minimum_msat would still pass if the total is above it. Is that the intended behavior for trampoline MPP?

If multipath_trampoline_data is None, this falls back to msg.amount_msat which is the per-HTLC amount (non-MPP case) — that's correct.
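The semantic difference in question can be shown with a toy check (the helper and values here are purely illustrative, not LDK code):

```rust
// Toy illustration of the review question above: when the MPP *total* is
// passed to the minimum check, individual parts below htlc_minimum_msat
// still pass, whereas a per-part check would reject each of them.
fn meets_htlc_minimum(amt_msat: u64, htlc_minimum_msat: u64) -> bool {
    amt_msat >= htlc_minimum_msat
}

fn main() {
    let htlc_minimum_msat = 1_000;
    let parts = [600u64, 700u64];
    let total: u64 = parts.iter().sum();
    // Checking the total: passes (1300 >= 1000).
    assert!(meets_htlc_minimum(total, htlc_minimum_msat));
    // Checking each part: both fail individually.
    assert!(parts.iter().all(|&p| !meets_htlc_minimum(p, htlc_minimum_msat)));
}
```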

Comment on lines +8558 to +8560
let proportional_fee = (forwarding_fee_proportional_millionths as u128
* next_hop_info.amount_msat as u128
/ 1_000_000) as u64;
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The as u64 truncation of the u128 result could silently lose data if forwarding_fee_proportional_millionths is very large. With MAX_VALUE_MSAT (~2.1e18) and a pathological proportional fee of u32::MAX (~4.3e9), the product before division is ~9e27 / 1e6 = ~9e21 which overflows u64 (~1.8e19). The as u64 would silently truncate, resulting in an understated fee, which lets the attacker get their payment forwarded at below the node's required fee.

While forwarding_fee_proportional_millionths is node-configured (not attacker-controlled), if it's ever set very high, next_hop_info.amount_msat from the trampoline onion is attacker-controlled. Consider capping or using checked conversion:

let proportional_fee: u64 = (forwarding_fee_proportional_millionths as u128
    * next_hop_info.amount_msat as u128
    / 1_000_000).try_into().unwrap_or(u64::MAX);
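A self-contained demonstration of the truncation hazard and the saturating fix (the standalone helper is illustrative; the values mirror the pathological case above):

```rust
// Illustrative only: compute a proportional fee in u128 and convert back to
// u64. `as u64` would silently truncate an oversized result; saturating keeps
// the fee overstated (safe, payment rejected) rather than understated
// (exploitable, payment forwarded below the required fee).
fn proportional_fee_msat(amount_msat: u64, fee_proportional_millionths: u32) -> u64 {
    let fee = (fee_proportional_millionths as u128) * (amount_msat as u128) / 1_000_000;
    u64::try_from(fee).unwrap_or(u64::MAX)
}

fn main() {
    // Pathological config: rate of u32::MAX ppm, amount near MAX_VALUE_MSAT.
    let amount = 2_100_000_000_000_000_000u64;
    // ~9e21 exceeds u64::MAX (~1.8e19), so the conversion saturates.
    assert_eq!(proportional_fee_msat(amount, u32::MAX), u64::MAX);
    // Sane config remains exact: 0.1% of 1_000_000 msat.
    assert_eq!(proportional_fee_msat(1_000_000, 1_000), 1_000);
}
```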

@ldk-claude-review-bot
Collaborator

Review Summary

Inline comments posted:

  1. lightning/src/ln/channelmanager.rs:8584-8585 — Unchecked addition our_forwarding_fee_msat + next_hop_info.amount_msat could theoretically overflow u64. Same pattern on line 8594 with CLTV values. Should use checked_add.

  2. lightning/src/ln/channelmanager.rs:8510-8512 — The TODO about inconsistent next_hop_info across MPP parts has security implications: only the last-arriving part's trampoline onion/amount/cltv is used for validation and (future) forwarding. This needs to be addressed before forwarding is enabled.

  3. lightning/src/ln/onion_payment.rs:192-193 — check_blinded_forward now uses MPP total_msat instead of per-HTLC msg.amount_msat. This changes the semantics of htlc_minimum_msat checking inside check_blinded_payment_constraints — individual HTLCs below the minimum could pass if the MPP total is sufficient. Needs confirmation this is intended.

  4. lightning/src/ln/channelmanager.rs:8558-8560 — The as u64 truncation of the proportional fee calculation could silently understate the fee with pathological (but valid) config values combined with attacker-controlled amount_msat. Should use try_into().unwrap_or(u64::MAX).

Cross-cutting observations:

  • The awaiting_trampoline_forwards state is not persisted to disk and not included in the ChannelManager serialization (confirmed by the Readable impl at line 20537 which initializes it as empty). This is acknowledged in the PR description but is a significant limitation — any restart during MPP accumulation will lose the pending HTLCs, which then can only be resolved by channel-level HTLC timeout. This is acceptable for the current "reject all forwards" state but will need persistence before forwarding is enabled.

  • The overall structure of the MPP accumulation is sound: reusing check_incoming_mpp_part for trampoline is clean, and the timeout/on-chain expiry paths correctly mirror the existing claimable payment timeout logic.

