20:02:02 <BlueMatt> #startmeeting Lightning Meeting Name
20:02:02 <lndev-bot> Meeting started Mon Jul 19 20:02:02 2021 UTC and is due to finish in 60 minutes.  The chair is BlueMatt. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:02:02 <lndev-bot> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:02:02 <lndev-bot> The meeting name has been set to 'lightning_meeting_name'
20:02:21 <BlueMatt> ok, first up https://github.com/lightningnetwork/lightning-rfc/pull/847
20:02:25 <BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/pull/847
20:02:39 <BlueMatt> mostly, probably, the discussion ending here: https://github.com/lightningnetwork/lightning-rfc/pull/847#discussion_r671873899
20:03:38 <BlueMatt> where I argued, and likely rusty will disagree, that we should simply drop the requirement that the suggested fee be below the channel feerate_per_kw for *both* anchor and non-anchor
20:03:43 <t-bast> BlueMatt IIUC you've started implementing this on the RL side?
20:04:11 <BlueMatt> yes
20:04:20 <t-bast> neat
20:04:30 <rusty> BlueMatt: yeah, I could buy that.  Let me check if we enforce that on recv though....
20:04:31 <BlueMatt> the proposal is technically an incompatibility with existing nodes, but at worst it causes force-closure when we intended to close the channel anyway
20:05:04 <BlueMatt> as an alternative, we could suggest dropping the receive-side check, and then say something generic like "on jan 1 2022 you can stop caring when you send it"
20:05:05 <rusty> BlueMatt: it's not an incompatibility if we make it so for the new-style quick close though.
20:05:20 <t-bast> and if you breach that by sending a fee higher than the commit feerate, it's probably okay-ish to force-close (aside from the csv delays)
20:05:27 <BlueMatt> you dont know if its new-style or not if you're the channel funder and speak first
20:05:54 <t-bast> (in terms of fees paid)
20:06:41 <rusty> BlueMatt: true.
20:07:22 <rusty> I know we didn't want to burn a feature bit here, but it would have been easier in transition.  Oh well.
20:07:50 <BlueMatt> yea, we could....yuck tho
20:08:08 <t-bast> agreed, yuck, I don't think we need a feature bit for that
20:08:16 <t-bast> I
20:08:25 <BlueMatt> at least personally I'm basically fine with some accidental force-closes during shutdown while nodes upgrade
20:08:31 <BlueMatt> like, you were already gonna shut down....eh
20:09:07 <niftynei_> but at what cost?
20:09:21 <t-bast> in that case the fees would be the same (force-close would even be cheaper)
20:09:29 <t-bast> it's just an additional csv delay
20:09:33 <BlueMatt> right, you'd *save* on fees, but pay the csv delay.
20:09:50 <BlueMatt> of course given you wanted to use a higher fee, you're probably pretty sad about the csv delay
20:10:00 <BlueMatt> cause you probably wanted to pay a higher fee *because* you didnt want to wait
20:10:10 <t-bast> but you're not the one waiting for the delay though
20:10:14 <BlueMatt> but, still, that's a risk the sender takes, the spec doesnt need to care about that
20:10:18 <t-bast> because it's your peer that will force-close on you
20:10:29 <t-bast> so no delay on your side
20:10:35 <BlueMatt> sure, but you still dont get to pay the higher fee that you wanted to pay, presumably to get into the Next Block
20:10:39 <t-bast> unless they just send an error and wait
20:10:45 <t-bast> true
20:10:47 <BlueMatt> (which nodes do do...)
20:11:01 <BlueMatt> but, in any case, votes in favor/against just saying the incompatibility is ok?
20:11:16 <t-bast> I won't be very helpful, I'm fine with both xD
20:11:39 <rusty> Hmm, seems like our enforcement is lax here, but the logic is a bit gnarly and I'll have to actually test.
20:12:01 <rusty> BlueMatt: I'm happy to remove the requirement; I expect it won't happen very often in practice.
20:12:02 <BlueMatt> are you staunchly against spec change even if you enforce it, rusty?
20:12:09 <BlueMatt> right, that's my other thinking
20:12:23 <BlueMatt> not many nodes are going to by default send a higher fee than the channel fee, cause otherwise they would have increased the channel fee
20:13:03 <t-bast> yes, in the case of eclair, we would have done an update_fee beforehand, so we shouldn't be in this case except for anchor outputs where we keep the commit feerate low
20:13:42 <rusty> OK, let's simply remove that requirement?
20:13:50 <BlueMatt> alright, sounds like maybe rough agreement. lets do it.
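A minimal sketch of what the relaxed receive-side check agreed above could look like, with invented names and a simplified closing_signed; the only thing still rejected is a fee the funder cannot actually pay from their own output, and nothing is compared against the commitment feerate anymore.

```rust
// Hedged sketch, not any implementation's real code.
struct ClosingSigned {
    fee_satoshis: u64,
}

fn validate_closing_fee(
    proposal: &ClosingSigned,
    funder_output_sat: u64, // the funder's balance, which pays the closing fee
) -> Result<(), &'static str> {
    // Old rule (dropped here): fee_satoshis must not exceed the fee implied by
    // the current commitment feerate. New rule: the fee just has to be payable.
    if proposal.fee_satoshis > funder_output_sat {
        return Err("proposed closing fee exceeds what the funder can pay");
    }
    Ok(())
}
```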
20:13:55 <BlueMatt> next topic...
20:13:56 <BlueMatt> #https://github.com/lightningnetwork/lightning-rfc/pull/880
20:13:59 <BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/pull/880
20:14:09 <BlueMatt> rusty: has the floor
20:14:26 <rusty> #action t-bast to remove cap-at-unilateral from 847 for all channel types
20:14:30 <rusty> (Needed for minutes)
20:15:00 <t-bast> ack
20:15:02 <rusty> OK, this is implemented in a PR.
20:15:10 <roasbeef> this is dropping the requirement that co-op close fee is below the commit fee rate?
20:15:16 <BlueMatt> roasbeef: yes.
20:15:19 <rusty> roasbeef: yeah, for any type.
20:16:15 <BlueMatt> so whats the status of channel types agreement/disagreement? rusty?
20:16:19 <rusty> Node MUST play audio of Hey Big Spender when close fee proposal is above commitment fee rate.
20:16:38 <rusty> BlueMatt: so it's now a simple "take it or leave it" proposal by opener.
20:16:42 <BlueMatt> echo "Hey Big Spender" > /dev/audio
20:17:01 <jkczyz> > /dev/null
20:17:09 <roasbeef> rusty: for all commits, or just anchors?
20:17:22 <t-bast> roasbeef: for all commits
20:17:34 <roasbeef> but would only apply for this new feature bit?
20:17:36 <ariard> rusty: what's the purpose of sending back the `channel_type` in accept_channel, if you don't like the channel_type just stay silent?
20:17:52 <t-bast> I have the latest version of channel_type implemented in a PR in eclair as well, I can test cross-compat if you want with the c-lightning version rusty
20:17:54 <roasbeef> ariard: just to echo I guess?
20:18:06 <BlueMatt> rusty: cool. whats the status of error codes to indicate "no, try another channel type"?
20:18:07 <t-bast> ariard: it's important
20:18:10 <rusty> ariard: yeah, I thought about that, but it's also nice that it's caught immediately, not later when they try to add an HTLC
20:18:21 <BlueMatt> roasbeef: no new feature bit.
20:18:27 <t-bast> it shows you selected it, if you don't mention it and it's different from the implicit one, the opener cannot know what you expect
20:18:41 <rusty> BlueMatt: that discussion is ongoing, let me check ml
20:18:48 <ariard> rusty: well they won't try to add an HTLC if the receiver never sends back sigs for the initial commitment
20:19:15 <ariard> echo sounds good, just a small bandwidth waste, and could be caught with error messages if we had them
20:19:17 <t-bast> oh I misunderstood, you mean in the case where you disagree and don't want that channel?
20:19:27 <roasbeef> BlueMatt: that breaks compat...
20:19:35 <BlueMatt> roasbeef: yes, that was the above discussion....
20:19:48 <BlueMatt> roasbeef: it was discussed in the scrollback :)
20:19:59 <ariard> t-bast: it's take it or leave it, so the opener has to figure out by itself what you expect?
20:20:09 <rusty> ariard: ah, yes, you need echo to know they understood.
20:20:12 <BlueMatt> t-bast/rusty: wait, then how do your current implementations decide when to suggest the next channel type in a new open_channel message?
20:20:21 <BlueMatt> or is it just "on any error with the same channel id"?
20:20:40 <rusty> BlueMatt: mine just sets the default (except there's an lnprototest which tries everything possible from the feature bits)
20:20:56 <rusty> BlueMatt: today, it's "meh, some error happened, me try again!"
20:21:01 <BlueMatt> ah, ok.
20:21:03 <BlueMatt> yes, makes sense.
20:21:04 <t-bast> the flow in eclair is that the node operator explicitly chooses what channel_type to try, and either the flow completes or they receive an `error`, and are free to analyze it and decide whether to try a different channel type or not
20:21:04 <ariard> rusty: ah okay, so in the case of talking with nodes not upgraded to `channel_type`, they'd be silently acking a channel type they don't understand at all
20:21:37 <rusty> ariard: yeah.
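An illustrative sketch, with invented types, of the "take it or leave it" flow and the echo check discussed above: the opener proposes a channel_type, the acceptor echoes it back in accept_channel (or stays silent if it predates channel_type), and a mismatching echo is caught immediately rather than later when an HTLC is added.

```rust
#[derive(PartialEq, Clone, Copy, Debug)]
enum ChannelType {
    StaticRemoteKey,
    AnchorsZeroFeeHtlcTx,
}

// Returns the agreed type, or an error the opener can surface before funding.
fn check_accept_channel_echo(
    proposed: ChannelType,
    implicit_from_features: ChannelType,
    echoed: Option<ChannelType>,
) -> Result<ChannelType, String> {
    match echoed {
        // Non-upgraded peer: no echo, fall back to the implicit (feature-derived) type.
        None => Ok(implicit_from_features),
        Some(t) if t == proposed => Ok(t),
        Some(t) => Err(format!("peer echoed unexpected channel type {:?}", t)),
    }
}
```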
20:22:43 <BlueMatt> alright, I mean sounds cool. glad its getting cross-node impl. will implement it when we get there at least from our end, but may not be immediately.
20:22:49 <BlueMatt> any further discussion that should happen live in it?
20:22:55 <rusty> But a side comment: this explicit use of channel types is a kind of latent concept which made our code nicer when we actually called it out (kudos, roasbeef); the spec could use a similar sweep to refer to channel types rather than "if option_static_remotekey applies to the channel..." language.
20:23:28 <rusty> I think if t-bast and I interop, we're good to apply?  Should we approve that now, or wait for another meeting?
20:23:49 <ariard> rusty: you mean should we pin the channel types board in bolt9 or somewhere else and reuse it across the spec?
20:23:52 <BlueMatt> I think that's fine, at least in concept. I'll read it over but yea go for it.
20:23:53 <t-bast> rusty: ACK on my side, I can test interop this week
20:24:15 <rusty> t-bast: great, thanks!
20:24:17 <t-bast> #action t-bast test channel_type interop with c-lightning
20:24:40 <t-bast> roasbeef, are you fine with that version of the proposal?
20:25:14 <rusty> ariard: more that we can now refer to "channel type" everywhere and know what we mean ("if channel type includes option_static_remotekey" for example).  But I'll have to see what it looks like when I actually sweep the spec.
20:25:26 <BlueMatt> rusty: nice!
20:25:28 <rusty> #action rusty to start spec cleanup to refer to channel type throughout.
20:25:55 <BlueMatt> ok, no news from roasbeef is good news :) next topic.
20:25:57 <BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/pull/834
20:26:01 <BlueMatt> warning messages
20:26:13 <BlueMatt> another rusty special. anything you want to get feedback on it live, rusty?
20:26:35 <BlueMatt> looks like the pr itself needs rebase, but t-bast ack'd
20:26:47 <rusty> BlueMatt: it Just Works.... though it'd be nice for debugging if other impls printed it out rather than unknown msg.
20:27:01 <BlueMatt> yea, we can do that pretty easy
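A rough sketch of "printing it out rather than unknown msg". The wire layout assumed here (32-byte channel_id, u16 length, then UTF-8 data) follows the warning message drafted in #834; the function name is made up.

```rust
fn log_peer_warning(payload: &[u8]) -> Option<()> {
    // payload is the warning message body, after the 2-byte message type.
    let channel_id = payload.get(..32)?;
    let len = u16::from_be_bytes([*payload.get(32)?, *payload.get(33)?]) as usize;
    let data = payload.get(34..34 + len)?;
    let chan_hex: String = channel_id.iter().map(|b| format!("{:02x}", b)).collect();
    println!(
        "peer warning on channel {}: {}",
        chan_hex,
        String::from_utf8_lossy(data)
    );
    Some(())
}
```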
20:27:08 <rusty> Weakening the error semantics is just recognizing reality, it's long overdue.
20:27:12 <t-bast> yes I've found warnings very useful, and I've got a concrete use-case related for #847 that I think is worth sharing
20:27:19 <BlueMatt> yep, cool!
20:27:33 <t-bast> in some cases our only current choice is "disconnect", but it's actually putting us in a deadlock in some situations
20:28:05 <rusty> #action rusty to rebase 834
20:28:08 <t-bast> in the closing fee_range negotiation, if your peer sends a fee_range you completely disagree with, disconnecting isn't helpful because at reconnection they must re-send the closing_signed
20:28:13 <t-bast> and you will still disagree
20:28:25 <t-bast> sending a warning and then staying silent is much better
20:28:38 <BlueMatt> wait, shouldnt you force-close if the fee rate suggested for close is insane?
20:28:44 <t-bast> the node operator can pick that up and send a different closing_signed with a fee_range you'd like, or force-close
20:29:05 <t-bast> you could, but there's no real reason to if you don't need the funds immediately
20:29:13 <t-bast> you can send a warning first
20:29:26 <BlueMatt> sure there is - the channel is useless, and most node operators dont sit there and read the logs carefully
20:29:28 <rusty> (Note there's a proposal on the ml to add some concrete semantics to errors, which could apply to warnings too, but Carla hasn't responded)
20:29:57 <t-bast> when they see that the node they sent shutdown is stuck, they will likely look at their logs though
20:30:19 <t-bast> and then they can decide to either force-close or try different fee_range
20:30:24 <BlueMatt> sure, but the non-shutdown-initiator is unlikely to read the logs, so that node should just force-close.
20:30:51 <rusty> BlueMatt, t-bast: this is true in general, that there should be some "we haven't made progress in X days, let's force close".
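A node-policy sketch (not spec text, names invented) of the pattern described here: on an unacceptable closing fee_range, send a warning and stay connected, and only force-close once negotiation has made no progress for some operator-chosen window.

```rust
use std::time::{Duration, Instant};

struct ClosingNegotiation {
    last_progress: Instant,
    stall_limit: Duration, // operator-chosen, e.g. a few days
}

enum Action {
    SendWarning(&'static str),
    ForceClose,
}

fn on_unacceptable_fee_range(neg: &ClosingNegotiation) -> Action {
    if neg.last_progress.elapsed() > neg.stall_limit {
        // No progress for too long: give up and claim funds unilaterally.
        Action::ForceClose
    } else {
        // Nudge the peer (and the operator's logs) without closing the channel.
        Action::SendWarning("closing_signed fee_range outside our acceptable bounds")
    }
}
```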
20:31:13 <BlueMatt> anyway, this is a node policy issue, which doesn't seem super relevant, we all agree we want warnings anyway :)
20:31:23 <BlueMatt> next topic!
20:31:25 <BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/issues/745
20:31:29 <t-bast> ACK!
20:31:51 <rusty> I think we all agreed on this one, want me to draft a clarification?
20:32:11 <BlueMatt> sounds good. I admit I didnt read it, but ariard thinks we do the thing everyone else does, so I'm happy ;)
20:32:15 <lnd-bot> [lightning-rfc] t-bast pushed 2 commits to relax-closing-fee-requirement: https://github.com/lightningnetwork/lightning-rfc/compare/f02916485c49...c99002013e6a
20:32:15 <lnd-bot> lightning-rfc/relax-closing-fee-requirement 8683525 t-bast: Use warning instead of disconnecting
20:32:15 <lnd-bot> lightning-rfc/relax-closing-fee-requirement c990020 t-bast: Remove fee below commit fee requirement
20:32:36 <BlueMatt> hmm, can we not use lightning-rfc branches for PRs? that seems a bit weird imo
20:33:58 <rusty> BlueMatt: yeah, it's weird (I usually use my personal copy) but I don't really mind.
20:35:09 <t-bast> I agree with the way rusty reframed the requirement at the end of the discussion: it's clear and concise
20:35:57 <BlueMatt> yes, agreed
20:36:12 <BlueMatt> note that implementing it any other way would actually be really quite annoying for us.
20:36:37 <rusty> Yeah, just added another comment.
20:37:10 <BlueMatt> correct, I agree with you rusty (and thats the way our code works, if I'm reading it correctly)
20:37:13 <BlueMatt> we would reject that add
20:38:07 <BlueMatt> any further discussion?
20:38:10 <ariard> yeah what i'm trying to understand is what lnd is doing on this behavior, crypt-iq comment is a bit unclear
20:38:16 <t-bast> I'd need to write that as a test to be 100% sure whether eclair would reject it or not...it's a simple test to write though I'll try that
20:38:17 <BlueMatt> It seems we're all in agreement, maybe t-bast wants to comment on the latest issue from the reporter
20:39:14 <BlueMatt> ariard: which comment was unclear?
20:39:18 <BlueMatt> I think I understood it
20:39:23 <t-bast> I'll need to write that test, I think eclair is quite conservative here and wouldn't "risk" sending that last add, but I'll need to verify
20:39:47 <BlueMatt> for the sake of time, lets move on and leave further discussion on the issue. in the mean time, rusty graciously offered to write up a spec clarification :)
20:39:55 <BlueMatt> #action rusty to clarify spec to resolve #745
20:40:06 <BlueMatt> #topic https://github.com/lightningnetwork/lightning-rfc/issues/873
20:40:24 <BlueMatt> rusty had proposed some wording in the issue
20:40:56 <BlueMatt> there was some concern over DoS in the previous meeting on may 24
20:42:24 <BlueMatt> roasbeef: had suggested there that he'd have crypt-iq write up a patch to test for cpu dos
20:42:28 <BlueMatt> did that happen?
20:42:30 <t-bast> Probably worth exchanging some kind of `max_accepted_dust_htlc` to limit that?
20:42:42 <ariard> i think we should introduce a new limit for dust htlc count and not let it be unbounded
20:42:47 <BlueMatt> t-bast: you already have a max total htlc in-flight limit
20:42:54 <BlueMatt> ariard: y tho
20:43:13 <t-bast> but that's what we want to override...?
20:43:25 <t-bast> we don't want these dust htlc to be included in that limit, right?
20:43:28 <BlueMatt> t-bast: no, not in-flight limit, but the htlc-count limit
20:43:45 <niftynei_> in the comments from last time i suggested a 'max_fee_from_dust' limit
20:43:53 <t-bast> right, but it's only an msat value, so it's probably a huge amount of dust htlcs, isn't it?
20:44:08 <BlueMatt> t-bast: right, the objection last time was that this could turn into a DoS issue
20:44:13 <crypt-iq> You don't need a max_fee_from_dust parameter I found out
20:44:17 <niftynei_> so putting a sats limit on the amount of extra fee you'd allow for "htlc escrow that's too small for its own htlc output"
20:44:17 <rusty> Yeah, I really don't want you to add 1M dust htlcs, though at 1msat that's only 1000sats in fees.
20:44:23 <BlueMatt> but you dont send signatures for them, so, really, I dont see why
20:44:51 <cdecker[m]> Still needs storage and memory though
20:44:59 <crypt-iq> With the network today, you can limit your exposure to dust htlcs by refusing to forward them if your inbound channel is dusted or if your outbound channel will be dusted
20:45:02 <BlueMatt> I mean, its a computer, if you dont send a signature, the cpu cost of like 100M little HTLCs should be pretty akin to 1 normal htlc
20:45:12 <BlueMatt> crypt-iq: define "dusted"
20:45:14 <crypt-iq> lnd will probably only handle ~10k htlc's
20:45:31 <t-bast> BlueMatt: good point, it's true that without the sig it's quite inexpensive, it's worth testing
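A toy illustration of why unsigned dust HTLCs are cheap to process: amounts below the dust limit never become outputs on the commitment transaction, so there is no per-HTLC signature to produce or verify, and their value just folds into the fee. (In the real protocol the threshold also accounts for the HTLC-claim transaction fee; names here are invented.)

```rust
struct Htlc {
    amount_sat: u64,
}

/// Returns (number of real HTLC outputs, satoshis of dust folded into the fee).
fn commitment_htlc_outputs(htlcs: &[Htlc], dust_limit_sat: u64) -> (usize, u64) {
    let mut outputs = 0usize;
    let mut dust_to_fee = 0u64;
    for htlc in htlcs {
        if htlc.amount_sat < dust_limit_sat {
            dust_to_fee += htlc.amount_sat; // no output, hence no HTLC signature
        } else {
            outputs += 1; // gets an output and a corresponding HTLC signature
        }
    }
    (outputs, dust_to_fee)
}
```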
20:45:33 <crypt-iq> As a safe limit in this case
20:45:39 <BlueMatt> crypt-iq: why?
20:45:52 <crypt-iq> We have a uint16 for forwarding htlc's and we don't want an overflow
20:45:54 <BlueMatt> crypt-iq: which cost are you optimizing for limiting?
20:46:03 <BlueMatt> crypt-iq: so swap it for a u32?
20:46:22 <BlueMatt> or a uint64, cause thats the same speed on most x86_64 processors :)
20:46:26 <crypt-iq> Yeah but it's a database upgrade, which we'd want to avoid. Could be something revisited though
20:46:52 <niftynei_> is there a network related reason for the 10k limit?
20:46:58 <crypt-iq> So when receiving an incoming HTLC, if either your or your counterparty's commitment has too much dust on it (defined by your dust threshold) you can just fail back
20:47:01 <crypt-iq> There's no network related reason no
20:47:25 <BlueMatt> crypt-iq: isn't that just....the htlc in-flight total value limit?
20:47:54 <crypt-iq> Well this dusted amount is stealable, htlc-in-flight applies to non-dust as well
20:47:55 <roasbeef> re compat of the co-op close fee thing: that'll end up borking a lot of channels in the wild, if you send one above the range, lnd won't like it and you'll have to force close the channels
20:48:18 <BlueMatt> crypt-iq: you mean it burns to fee? thats been part of the lightning security model forever :)
20:48:23 <roasbeef> why not tie it to a feature bit, if the negotiation logic is already gonna change?
20:48:23 <crypt-iq> Bumping antoine's ML post: https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-May/002714.html
20:49:04 <BlueMatt> roasbeef: yes, we're aware it was a concern, did you read the scrollback from the beginning of the meeting for the rationale proposed?
20:49:07 <niftynei_> crypt-iq: i think by 'stealable' you mean "gets paid to the miner"?
20:49:13 <crypt-iq> Right it's burned to fee, but you can limit this amount per-channel without having to negotiate any parameters. So it allows you to not have the max_dust_htlc_fee option in the option_dusty_htlcs proposal
20:49:23 <roasbeef> rusty: +1 re the chan types making certain sections of code nicer, this is what I was getting at w/ the like "mega switch statement for feature bits thing"
20:49:36 <niftynei_> but isn't that the "max fees from dust" limit ?
20:49:44 <BlueMatt> crypt-iq: no you cant, someone can add a ton and then create a new commitment transaction and then broadcast that. just because you fail the htlc back in the next commitment doesnt change this.
20:49:50 <roasbeef> t-bast: version w/ the echo of the type? yeah we have a PR we need to clean up, but we can all prob start doing interop on testnet pretty soon
20:49:51 <t-bast> roasbeef: I really think a feature bit would be wasted here, in practice no-one sends higher fees (we would all send an update_fee before that) so it shouldn't happen, and won't be a concern for anchor outputs channels where we have to remove that requirement anyway
20:49:58 <ariard> niftynei: if you're a miner with any chance to mine a block during the HTLC timelock it's quite a high-success attack
20:50:31 <crypt-iq> BlueMatt: you're not vulnerable to this because it's subtracted from the incoming's balance, you're vulnerable when you forward. I can expand on the cases in the issue itself
20:50:33 <t-bast> roasbeef: cool for the channel_type!
20:50:55 <BlueMatt> crypt-iq: ah, I see your point, ok.
20:50:56 <crypt-iq> niftynei: it is the same as max fees from dust but it doesn't have to be negotiated and can be deployed selectively by impl's right now
20:51:01 <rusty> I think our roasbeef is laggy; maybe we should reboot him? :)
20:51:17 <t-bast> crypt-iq: good point as well, it's worth highlighting that in the issue
20:51:38 <roasbeef> re the error msg stuff, don't see why to introduce a new message vs just re-using our existing one since that'll bridge compat, it also makes things simpler in that all nodes have one error pathway, and older nodes just ignore what they don't understand
20:51:46 <t-bast> crypt-iq: (the fact that as long as you don't forward, it doesn't open attack vectors)
20:52:08 <crypt-iq> t-bast: right, there are 4 cases to tackle and probably better to lay them out in the issue and not on irc
20:52:33 <roasbeef> I also think enumerating the initially defined set of error codes/pathways as Carla did in her proposal is important, otherwise it's just another blob that hasn't really been that useful in practice outside of trying to diagnose force close scenarios
20:52:33 <cdecker[m]> Sounds good to me
20:52:47 <niftynei_> crypt-iq: right i just wanted to point out that you're talking about the thing that i mentioned last time, that they're the same thing in terms of how to handle the issue lol
20:53:11 <roasbeef> could be used for the "i'm shutting down now" message we've talked about in the past, I've been trying to debug some p2p connectivity issues w/ pure tor nodes in the wild lately, but at times context is lacking beyond "EOF"
20:53:25 <crypt-iq> niftynei: gotcha
20:53:29 <BlueMatt> roasbeef: if you're gonna respond to something a half hour ago maybe just put it on the issue?
20:53:31 <t-bast> roasbeef: I find that warnings are more useful than errors, they act as a "nudge" that doesn't get your channels closed
20:53:38 <BlueMatt> we can't rehash the whole meeting a half hour late.
20:53:44 <BlueMatt> that would be a waste of everyone's time
20:54:34 <rusty> BTW, I'd like to discuss Turbo channels; it's not on agenda since there's not PR yet, but that can be fixed...
20:55:07 <BlueMatt> whats the concrete next-steps for dusty htlcs uncounted?
20:55:10 <t-bast> Sure, we can discuss turbo even without a PR
20:55:58 <t-bast> What about a proposal PR for dusty htlcs uncounted? If crypt-iq you think you've experimented and gathered enough feedback?
20:56:07 <rusty> BlueMatt: next steps: on issue, debate if we actually need a limit, and what it looks like if so.  crypt-iq to lead?
20:56:16 <rusty> (Or, if not, why not)
20:56:21 <BlueMatt> it sounds like maybe it can go ahead as-is, but with callouts of the ability to burn your own balance to fees and that nodes should limit their relaying of dust htlcs to limit their own total exposure
20:56:35 <BlueMatt> rusty: i believe I agree with crypt-iq that no in-spec limit is required/relevant
20:56:41 <BlueMatt> but instead at-forwarding-time limits apply
20:56:48 <niftynei_> ack
20:57:03 <crypt-iq> what limits are we talking about here? sum of dust limits?
20:57:17 <crypt-iq> I think a max_dust_htlcs that you can offer outstanding is necessary, no?
20:57:17 <t-bast> it would be up to each node's policy I guess
20:57:18 <BlueMatt> sum of outbound dust htlcs
20:57:21 <BlueMatt> but its not spec
20:57:29 <BlueMatt> crypt-iq: why?
20:57:34 <cdecker[m]> Both number and sum id guess
20:57:34 <crypt-iq> where max_dust_htlcs is the literal number
20:57:38 <BlueMatt> crypt-iq: didnt you just argue we *dont* need an in-spec limit?
20:57:52 <crypt-iq> I argued that we don't need a sum of dust limits
20:58:04 <cdecker[m]> Yeah, just fail forwarding it if it's above your internal limit I'd guess
20:58:08 <BlueMatt> because we can do it at forward-time (its the node *sending* the htlc that takes the risk)
20:58:24 <BlueMatt> and if its at forward-time it doesnt appear in the spec outside of rationale sections
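A sketch of the forward-time policy being argued for here, with invented names: before offering another dust HTLC outbound, cap the total dust already in flight on that channel, since that sum is what a unilateral close would burn to fees. This lives entirely in node policy rather than in negotiated channel parameters.

```rust
struct ChannelDustState {
    outbound_dust_in_flight_msat: u64,
}

// Operator-chosen exposure cap, e.g. 5000 sat expressed in msat (illustrative).
const MAX_DUST_EXPOSURE_MSAT: u64 = 5_000_000;

fn may_offer_dust_htlc(chan: &ChannelDustState, amount_msat: u64) -> bool {
    // Refuse to forward once our own burnable exposure would exceed the cap.
    chan.outbound_dust_in_flight_msat + amount_msat <= MAX_DUST_EXPOSURE_MSAT
}
```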
20:58:29 <crypt-iq> Well the problem here is that it will be locked into a commitment transaction before we can fail back
20:58:32 <niftynei_> i mean it sounds like there's two things you could decide to limit it on, and that's up to your node/impl to decide which to apply
20:58:34 <crypt-iq> And lnd has a uint16
20:59:08 <crypt-iq> So we want to communicate our max so that limit isn't reached
20:59:14 <BlueMatt> crypt-iq: no? because its outbound
20:59:16 <BlueMatt> you just...dont send it?
20:59:29 <rusty> BlueMatt: but you're still committed to it until you fail it.
20:59:30 <cdecker[m]> That's ok, the sender is out of pocket and lnd can apply a limit well below u16 max to avoid ever reaching it
20:59:39 <BlueMatt> oh, you mean that lnd will be unable to upgrade to *accept* new htlcs due to a coding issue?
20:59:42 <rusty> (This is the reason we *have* incoming limits).
20:59:53 <BlueMatt> that just seems like an issue where y'all can not set the feature bit until you add the relevant code?
21:00:04 <BlueMatt> or are you *also* making a security argument?
21:00:29 <crypt-iq> Well another problem is that we have in-memory buffers so we want to limit that exposure as well and these HTLCs all get stored with their blobs
21:00:43 <cdecker[m]> Wouldn't that require there to be more than `u16max-buffer` in a single commitment?
21:00:59 <crypt-iq> cdecker: right, which I don't see as likely except in an attack scenario
21:01:02 <BlueMatt> yes, do you have a specific limit in mind for that? that seems like something that could be a general spec-enforced limit
21:01:09 <BlueMatt> cause, like, the number can just be almost arbitrarily high
21:01:16 <BlueMatt> we use 64K for lots of things, lets just say that? :)
21:01:18 <ariard> you can limit the dusted balance implementation-wise, though at risk of silent force-close if it overrides negotiated `dust_limit_satoshis`
21:01:46 <crypt-iq> I was thinking 10000, but if we upgrade our database I don't see a reason to not have 64k
21:01:48 <BlueMatt> ariard: that's a largely-separate issue, though, thats the channel negotiation at creation?
21:02:03 <rusty> 32k!  And bike shed BLUE dammit BLUE
21:02:22 <crypt-iq> ariard: how can it lead to a silent force close error?
21:02:51 <ariard> crypt-iq: we don't have an upper bound on dust_limit_satoshis for now, at least not negotiated in the spec
21:02:54 <BlueMatt> rusty: ok! 32k it is :)
21:03:00 <cdecker[m]> It just has to be low enough so that a single commitment round doesn't risk going above the limit. Then we can reject the ones over the limit on the next commitment
21:03:12 <BlueMatt> cdecker: what limit?
21:03:19 <ariard> if you start to enforce one on an already-existing channel, your counterparty might have a stale view of what's a dust HTLC for you
21:03:28 <BlueMatt> cdecker[m]: cause the only relevant security concern, as I understand it, is how much *outbound* dust HTLC you have sent
21:03:29 <cdecker[m]> Whatever your node choses to enforce
21:03:32 <BlueMatt> which you can just...limit?
21:03:42 <BlueMatt> there's no in-channel force-close risk relevant here whatsoever
21:04:00 <cdecker[m]> Yep, can't see the force close risk here either
21:04:02 <crypt-iq> ariard: dust_limit_satoshis is negotiated and static, not sure I follow
21:04:41 <niftynei_> i think we have some next steps outlined here, i'd love to spend a few minutes chatting about turbos
21:04:46 <BlueMatt> sgtm
21:04:49 <ariard> crypt-iq: let's defer discussion to the issue? we might lack context here
21:04:51 <BlueMatt> last call for questions here.
21:04:51 <niftynei_> i know we're already over time a bit
21:04:55 <BlueMatt> alright
21:05:00 <BlueMatt> #topic turbossssssssssssssssssss
21:05:03 <cdecker[m]> Sgtm
21:05:03 <BlueMatt> rusty has the floor
21:05:10 <BlueMatt> and about 10 minutes until we call time
21:05:55 <t-bast> Do we need to include turbo in channel_type?
21:06:21 <rusty> t-bast: hmm, good q!
21:06:24 <t-bast> Or do we just react to `min_depth` being `0` in `accept_channel`?
21:06:41 <t-bast> Because what's a bit weird here is that open_channel doesn't have a way to specify min_depth
21:06:57 <t-bast> Unless we add it as a tlv?
21:07:22 <BlueMatt> if we're adding stuff anyway should it include a "will accept payments at 0conf" field on both ends so you know whether you can *also* send or just receive?
21:07:31 <rusty> t-bast: min_depth is totally advisory: you can always delay sending as long as you want.
21:08:03 <t-bast> rusty: true, but you wouldn't understand why the opener delays, it would be better to be explicit?
21:08:09 <roasbeef> re #873, missing the context here related to CPU DoS? in that if you have more dust HTLCs things are harder to process for certain implementations?
21:08:12 <t-bast> BlueMatt: that could be reasonable, yes
21:08:28 <rusty> BlueMatt: channel type would cover that?
21:08:37 <BlueMatt> presumably could, yes.
21:08:52 <roasbeef> seems you really need to clamp both the count, and total value
21:08:54 <roasbeef> for dust
21:09:13 <rusty> The difference is not send vs recv, it's "I will route for you even though your open is unconfirmed".
21:09:35 <roasbeef> BlueMatt: yeah I read it, I don't agree w/ the rationale, it's gonna cause a lot of force closes in the wild
21:09:35 <rusty> Which, I'm tempted to say "try it and see"?
21:09:38 <cdecker[m]> Well but that's just local policy again isn't it?
21:09:59 <roasbeef> t-bast: disagree that we need to worry about wasting feature bits, people are using feature bits in the 1000s range already today, there's a lot of room from 20 or so where we are rn to there
21:10:04 <cdecker[m]> Accept the HTLC, if it's forwarded and you don't want to just fail it immediately again
21:10:08 <roasbeef> and we'd already have a feature bit for the new fee stuff right?
21:10:22 <BlueMatt> roasbeef: again, its pretty rude to dig up a topic from an hour ago. wait till after the meeting.
21:11:06 <rusty> Yeah, roasbeef, comment on issue please.
21:11:08 <BlueMatt> rusty: hmmm, so I guess you'd just try to route, see if fail, and retry payment over another route if possible
21:11:16 <rusty> BlueMatt: I think so.
21:11:21 <BlueMatt> I guess that works, as long as you know your peer will accept the htlc to begin with
21:11:36 <t-bast> It's quite harmless, but why not be explicit to avoid a failed round-trip?
21:11:37 <roasbeef> BlueMatt: had something come up in meat space, was going thru the scrollback to reply where tagged
21:11:41 <rusty> So really, you only need to know "are you gonna get upset at me trying?" which is a feature negotiation.  I don't even think it needs to be a channel?
21:11:48 <rusty> ... type
21:12:02 <cdecker[m]> Right
21:12:08 <t-bast> Yeah I'm not sure either it needs to be a channel_type
21:12:19 <BlueMatt> t-bast: I mean you shouldnt hit a stuck payment in this case, hopefully, and its not like you have to wait for several hops of commitment_signed dances.
21:12:29 <rusty> IOW, all channels are turbo channels.  I think our analysis shows that (as long as you refuse to fwd) it's "why not, your funeral"?
21:12:39 <roasbeef> t-bast: fwiw we never close when we receive errors, always seemed like an unnecessary way to make users angry (by auto force closing), but it's possible to re-use the error message as is, using the all zero channel ID flag, then using a TLV field to pin point a channel and/or action
21:12:54 <t-bast> BlueMatt: that's true, but it's still something we could easily avoid by being explicit, can't we?
21:13:15 <cdecker[m]> Well even if you're the final recipient you should maybe not act on it until it's confirmed (ship the paid goods)
21:13:39 <BlueMatt> t-bast: yea, I guess its a question of protocol complexity
21:13:50 <BlueMatt> cdecker[m]: only if you're the direct counterparty, otherwise you get paid either way :)
21:13:54 <t-bast> roasbeef: yeah, maybe we should do the same, I'm not sure though, I like having the two distinct mechanisms (and it's really a tiny amount of work to support)
21:13:59 <BlueMatt> cdecker[m]: but, yea, thats basically a node-level api issue, no?
21:14:23 <cdecker[m]> Yep, but we may need to bubble that up to the user so they can decide
21:14:33 <t-bast> BlueMatt: of course, if it's tedious to do, I can definitely live with the try-and-see approach, but if it's really just including a tiny informative tlv it could be worth it
21:14:57 <rusty> Yeah, easier to add a warning msg than to extend error msg, tbh.
21:16:14 <rusty> t-bast: I was kinda assuming we would have some command for user to say "I trust this nodeid!".  That might happen after channel open though?
21:17:21 <rusty> t-bast: hmm, we could use funding_locked to indicate "I'm ready to fwd"?
21:17:23 <roasbeef> BlueMatt: ppl are free to read messages or not, this is async chat
21:17:23 <t-bast> rusty: we have per-node configuration overrides, it has been quite handy for that kind of things - you can declare in `eclair.conf` that you override some specific limits or features for specific node_ids
21:17:24 <BlueMatt> t-bast: eh, I'll implement it either way. I kinda like not having it, but I dont feel *that* strongly
21:17:49 <niftynei_> failure case on turbos seems kinda complex, no? like value has been exchanged but the accounting for it fell apart?
21:17:57 <rusty> ... ah, no you need that to send any HTLCs, ignore.
21:18:20 <rusty> niftynei_: yeah, you got money, but oh no not really.
21:18:44 <niftynei_> isn't turbo basically "send funding_locked at successful broadcast"?
21:18:46 <t-bast> when the channel completely disappears from under your feet, it's nasty
21:19:07 <BlueMatt> I mean some folks will want to do this, I dont think we should say no?
21:19:20 * niftynei_ does not want to be on that accounting team
21:19:27 <t-bast> niftynei: that's the way we've currently implemented it for Phoenix, yes
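A policy-level sketch (invented names, not any implementation's API) of what is described just above: a turbo channel sends funding_locked as soon as the funding transaction is broadcast, but only toward peers the operator has explicitly chosen to trust, instead of waiting for min_depth confirmations.

```rust
use std::collections::HashSet;

fn should_send_funding_locked(
    trusted_zero_conf_peers: &HashSet<[u8; 33]>, // node_ids the operator trusts
    peer_node_id: &[u8; 33],
    funding_confirmations: u32,
    min_depth: u32,
) -> bool {
    // Normal path: wait for the agreed depth. Turbo path: trusted peer, go now.
    funding_confirmations >= min_depth || trusted_zero_conf_peers.contains(peer_node_id)
}
```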
21:19:27 <rusty> BlueMatt: oh yeah, we should totally do it.  I think it's just *how*.
21:19:33 <BlueMatt> like, you trust your counterparty, great, software still has to handle the accounting, but the user shot themselves in the face
21:19:37 <BlueMatt> yea, fair
21:20:23 <niftynei_> i definitely see the use case for channels btw mobile units and their 'service provider' so to speak
21:20:29 <rusty> t-bast: hmm, ok, so where does the "I am prepared to fwd for you" msg go?
21:20:33 <roasbeef> rusty: is it though? we have a blob that has no structure atm, can either repurpose it or add the other field
21:21:09 <t-bast> rusty: we assume it by default, but that's also because in the Phoenix case we're always funders so we know we're not going to double-spend ourselves
21:21:12 <rusty> roasbeef: yes, yes it is.  It's backwards compatible.
21:21:26 <t-bast> rusty: so we have a simpler case than the general turbo channels mechanism
21:21:43 <roasbeef> re turbos, breez has a protocol they're using in the wild, and have revived a PR of it for lnd, it doesn't make a distinction w.r.t being able to route HTLCs or not, for them the whole point is they can route HTLCs to let users insta recv
21:21:45 <rusty> t-bast: hmm, so we could have a "... but be warned I'm not gonna fwd" tlv?
21:21:57 <BlueMatt> t-bast: but in the phoenix case the "just set the 'i will accept payment pre-lock-in' bit on both sides" just works as you expect
21:22:19 <BlueMatt> t-bast: cause, presumably, phoenix router will forward pre-lock-in?
21:22:28 <t-bast> rusty: yes we could, I guess
21:22:29 <BlueMatt> cause you wont double spend, but users wont forward at all, cause its private channels.
21:22:42 <rusty> t-bast: don't know if it's worth the complexity.
21:23:02 <t-bast> BlueMatt: yes exactly, we only open turbo channels to end nodes that won't forward, and we accept forwarding for them because we know we won't double-spend ourselves
21:23:11 <t-bast> BlueMatt: the trust is on the wallet user side
21:23:21 <t-bast> But for the general case, we probably need more configuration hooks?
21:23:43 <cdecker[m]> If someone receives an HTLC on a 0conf channel they're the recipient, since nobody else knows about that channel (6 conf broadcast limit)
21:24:01 <roasbeef> cdecker[m]: hop hints?
21:24:03 <t-bast> TBH I haven't thought it through yet, since our use-case is a simpler case
21:24:05 <BlueMatt> I guess I'm trying to understand the concrete use-case for "I'll accept payment, but not forward". because the ux is gonna need to display the same "payment pending" status to the user until lock-in either way, I dont see a ton of value in it
21:24:09 <cdecker[m]> Unless you do weird stuff with route hints
21:24:27 <rusty> BlueMatt: but you can send it out again instantly, via same channel?
21:24:28 <t-bast> cdecker[m]: it could be in routing hints though
21:24:34 <roasbeef> cdecker[m]: yeah afaik, ppl like breez always use hop hints, and have a scheme to generate a scid that works in the onion and the invoice
21:24:56 <roasbeef> BlueMatt: I think you're right here, ppl that do this in the wild always care about the forward aspect, since that's what improves UX
21:25:06 <BlueMatt> we need random scids/pubkeys in hop hints *anyway*, but that seems unrelated
21:25:12 <BlueMatt> or, can be unrelated
21:25:17 <cdecker[m]> If anything i think we need to say that a 0conf channel won't hold the pending HTLC until the forward depth is reached, otherwise I don't see how failing it can cause trouble
21:25:22 <BlueMatt> rusty: hmm, I dont quite get that?
21:25:34 <roasbeef> it's related since you need to identify a channel still, tho there's also the pubkey routing thing -- so put the pubkey in the onion instead of the scid
21:25:49 <roasbeef> since the mapping only needs to be known by the last two hops in the route
21:26:08 <t-bast> Maybe the simplest scheme is indeed "if we go turbo, let's go turbo all the way and just forward each other's htlc"?
21:26:25 <roasbeef> t-bast: turbo or bust
21:26:26 <t-bast> and note that this is the turbo trade-off?
21:26:33 <roasbeef> since that's what all the ppl in the wild that use it already do
21:26:42 <BlueMatt> roasbeef: I dont see how its related aside from "its kinda required for accepting 0conf payments"
21:26:56 <BlueMatt> t-bast: yea.
21:27:00 <roasbeef> BlueMatt: yeh that's it, ppl want to recv and send insta
21:27:01 <rusty> BlueMatt: if you open an unconf channel with me and send me some sats, I can send them through you out to anyone.
21:27:13 <BlueMatt> roasbeef: yea, ok
21:27:15 <rusty> There's no *routing* here, importantly.
21:27:22 <rusty> s/routing/forwarding/
21:27:24 <BlueMatt> rusty: but, like you wont *accept* the original payment
21:27:26 <BlueMatt> you may accept the htlc
21:27:28 <roasbeef> so you either need a way to craft a custom short channel ID, or you use pubkey based routing in the onion (since that's already in the invoice)
21:27:34 <BlueMatt> but from an api/ux perspective, you'll market it as "pending"
21:27:44 <roasbeef> iirc rn breez uses a scid mapping of heights below the segwit activation height
21:27:46 <BlueMatt> *unless* you're trusting the counterparty, at which point you'll also route
21:28:04 <BlueMatt> roasbeef: I think lets just create a way to create a custom scid, cause we want to do that anyway imo :)
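A hypothetical sketch of the "custom scid" idea: the node hands its private-channel peer a random alias to put in invoice route hints (and thus in the onion), so the real funding outpoint never has to be revealed and an unconfirmed channel still has a usable identifier. The crate and names are illustrative, not a defined scheme.

```rust
use rand::RngCore;

// Any locally-unique 64-bit value works as an alias: it only has to be
// resolvable back to the real channel by the node that issued it when it
// shows up in an incoming onion.
fn new_scid_alias() -> u64 {
    rand::thread_rng().next_u64()
}
```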
21:28:17 <rusty> BlueMatt: that's naive UX though.  You can still use the funds, just not out any other channel.
21:28:43 <BlueMatt> rusty: I guess I dont get why you'd display to the user "received a payment on 0conf channel" instead of just "payment pending waiting for sender to send"
21:28:51 <BlueMatt> like, that seems like a vaguely useless ux distinction
21:28:56 <BlueMatt> but, ok, if you want to do that, go for it :)
21:28:56 <roasbeef> sure, I mention this since ppl already have their own schemes in the wild, and will likely continue to use those still, but maybe they'll write them up in a bLIP or something if people want to interop (usually it's their software interacting w/ their software, so interop matters less in the wild)
21:29:05 <BlueMatt> in either case, it seems like not gonna be the most common use-case :)
21:29:26 <rusty> BlueMatt: AFAICT that's *exactly* what Phoenix does today?
21:29:27 <BlueMatt> roasbeef: that seems like something that could just be a bolt, no?
21:29:34 <BlueMatt> rusty: no, I believe they forward happily?
21:29:50 <BlueMatt> according to what t-bast seemed to say above? or am I wrong?
21:29:51 <rusty> BlueMatt: in theory, in practice most users have a single channel.
21:30:00 <BlueMatt> huh?
21:30:08 <t-bast> rusty: do you expect that people will want the safe-ish turbo? Instead of just doing full turbo-yolo when they do turbo? I'm not sure trying to half-protect the funds in case the channel is double-spent is really worth it
21:30:18 <ariard> to me, the problem seems to be "how do i signal to my counterparty forward-only-after-conf?"
21:30:27 <BlueMatt> ariard: I dont think we need to?
21:30:45 <t-bast> BlueMatt: yes, we forward happily and the phoenix user trusts that we won't double-spend the channel
21:30:54 <ariard> BlueMatt: that might be the forwarding policy you wish, like being both a merchant and a routing node
21:30:55 <BlueMatt> alright, lets discuss more on an issue, it seems like the big question is "Do We need to Tell our Counterparty that you wont forward, or do we just reject the htlcs"
21:31:03 <roasbeef> one other thing w/ the custom ID is: once the channel is confirmed, do you use the actual scid or keep using the custom one?
21:31:08 <BlueMatt> rusty: you wanna open an issue?
21:31:22 <rusty> t-bast: the only issue I can see is that invoices will get marked paid, even though they're not really.  That's hard!
21:31:23 <roasbeef> iirc rn, breez switches over to the real one after things are confirmed
21:31:26 <BlueMatt> roasbeef: i was thinking custom scid for privacy of private nodes, so you'd want it later.
21:31:45 <rusty> #action rusty to open an issue to discuss further.
21:31:48 <BlueMatt> roasbeef: *plus* custom fake pubkeys
21:31:53 <BlueMatt> #endmeeting