Dynamic PoW Scaling perversely encourages more ledger bloat

Consider an attacker who wants to publish N blocks. They compute work for N blocks at base PoW. If there is no PoW scaling, they publish N blocks, N blocks are confirmed, all done.
Now suppose PoW scales to 8x. The attacker has to spend, on average, 8x more effort finding work for those blocks. However, the base-PoW solutions they find along the way are still potentially useful to them as spam: not for their intended purpose, but as 'bonus' spam at near-zero marginal cost. They can't guarantee which specific transactions will be published at 8x, but they know that on average 1 in 8 of the base-PoW solutions they find will also satisfy 8x PoW, and for the attacker's purposes this is good enough. So they publish roughly 8N blocks, of which approximately N have the desired 8x PoW while the remaining ~7N come along as low-priority spam at nearly zero marginal cost. Even if nodes prioritize blocks by PoW perfectly, the attacker has not been deterred from publishing blocks relative to the average user. In fact, the attacker has been encouraged to publish otherwise-unnecessary low-PoW blocks, potentially leaving the end ledger state more unprunably bloated than if there were no PoW scaling at all: without scaling the attacker could have achieved their goal with only N base-PoW blocks, but with it they have published 8N.
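To make the numbers concrete, here is a rough simulation of the idealized model above (an illustration only, with made-up parameters, not tied to any particular node implementation):

```python
import random

MULTIPLIER = 8     # network has scaled PoW to 8x base difficulty
N_TARGET = 1000    # blocks the attacker actually wants at 8x

blocks_published = 0   # every base-PoW solution found is a publishable block
blocks_at_8x = 0

# Idealized model: each base-PoW solution independently clears the 8x
# threshold with probability 1/MULTIPLIER.
while blocks_at_8x < N_TARGET:
    blocks_published += 1
    if random.random() < 1.0 / MULTIPLIER:
        blocks_at_8x += 1

print(f"blocks at 8x difficulty: {blocks_at_8x}")                       # N
print(f"total blocks published:  {blocks_published}")                   # ~8N
print(f"'bonus' spam blocks:     {blocks_published - blocks_at_8x}")    # ~7N
```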

In summary, PoW scaling seems to perversely encourage an attacker to cause more ledger bloat than they otherwise would have.
@NanoOrca identified this problem (though perhaps not its implications for the attacker) in this thread and provided one solution, and @PlasmaPower provided an improved one: essentially, commit the intended difficulty into the nonce. It now seems necessary to me that some system of this kind be implemented if PoW scaling is to serve its intended purpose.
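As I understand the suggestion, the point is to bind the intended difficulty into the work itself, so a solution found against the base threshold cannot be reused against a higher one. A minimal sketch of such a scheme (hypothetical, not Nano's actual work algorithm; the function names and encoding are just illustrative):

```python
import hashlib

def work_value(nonce: int, root: bytes, target_difficulty: int) -> int:
    # Hypothetical: the intended difficulty is mixed into the hashed input,
    # so the same nonce produces a different value for every target.
    data = (nonce.to_bytes(8, "little")
            + root
            + target_difficulty.to_bytes(8, "little"))
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "little")

def is_valid(nonce: int, root: bytes, target_difficulty: int) -> bool:
    # A nonce solved against one target is (with overwhelming probability)
    # worthless against any other target.
    return work_value(nonce, root, target_difficulty) >= target_difficulty
```

Under a scheme like this, the ~7N base-difficulty solutions from the example above are worthless as blocks, because each solution only counts for the difficulty it was explicitly aimed at.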

I don't see how the two things are related. Even with the proposed solutions, the ledger bloat remains the same because the network capacity does not change.

The attacker is able, in aggregate, to publish additional blocks as a "free bonus", blocks for which valid PoW was never intentionally computed. There is zero incentive for a spammer to scale their PoW difficulty up, no matter what saturation threshold their spam is targeting.

There seem to be two main challenges here.

The first is a spam attack which saturates and slows down the network. To me, dynamic proof of work is a great solution here: no matter what difficulty the spammer chose when creating his millions of blocks, a good actor can do n+1 PoW and jump to the front of the queue.
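For illustration, that prioritization could be as simple as ordering pending blocks by their measured work difficulty (a rough sketch, not how any particular node actually implements it):

```python
import heapq

class WorkPriorityQueue:
    """Pops the highest-difficulty pending block first (illustrative only)."""
    def __init__(self):
        self._heap = []

    def push(self, block_hash: str, difficulty: float):
        # heapq is a min-heap, so negate the difficulty to pop the largest first.
        heapq.heappush(self._heap, (-difficulty, block_hash))

    def pop(self):
        neg_difficulty, block_hash = heapq.heappop(self._heap)
        return block_hash, -neg_difficulty

# A good actor doing slightly more work than the spam floor jumps the queue.
q = WorkPriorityQueue()
q.push("spam_block", 1.0)
q.push("legit_block", 1.1)   # the "n+1" PoW
print(q.pop())               # ('legit_block', 1.1)
```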

The second, to your point, is that any spam blocks will eventually be processed and, if valid, placed in the ledger. Controlling bloat is a different challenge to address. Pruning can be the solution, but we need to address controlling "unpruneable" blocks. There are lots of options here, many being discussed in this forum.

To me these two solutions go hand in hand.
