Making dPoW more dynamic

Dynamic PoW works great but does not discourage spamming. My aim would be that at times of network saturation we invalidate all blocks with pre-computed PoW, especially if they were fabricated en masse with the intent of spamming. Again, the solution should be simple and not time-based.

Here's the idea: why don't we change the base of the PoW dynamically along with the difficulty?
( H is the hashing function, b is the base of PoW calculation )

  • At base difficulty: H(nonce || b) ≥ threshold
  • At 'n' x difficulty: H(nonce || Blake2bⁿ(b)) ≥ threshold, where Blake2bⁿ(b) means that 'b' is hashed 'n' times with Blake2b
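
To make the proposal concrete, here is a minimal sketch in Python of what the rebasing could look like, assuming Nano-style work validation (an 8-byte Blake2b over the nonce and a 32-byte base); the function names and the brute-force loop are purely illustrative, not how the node implements it:

```python
import hashlib
import os
import struct

def derive_base(original_base: bytes, n: int) -> bytes:
    # At 'n' x difficulty, hash the original base 'n' times with Blake2b,
    # keeping the 32-byte length so it can be used in place of the original.
    b = original_base
    for _ in range(n):
        b = hashlib.blake2b(b, digest_size=32).digest()
    return b

def work_value(nonce: int, base: bytes) -> int:
    # Nano-style work value: 8-byte Blake2b over (nonce little-endian || base).
    h = hashlib.blake2b(digest_size=8)
    h.update(struct.pack('<Q', nonce))
    h.update(base)
    return int.from_bytes(h.digest(), 'little')

def generate_work(base: bytes, n: int, threshold: int) -> int:
    # Brute-force a nonce against the rebased target (illustration only);
    # any work pre-computed against the original base fails this check.
    rebased = derive_base(base, n)
    while True:
        nonce = int.from_bytes(os.urandom(8), 'little')
        if work_value(nonce, rebased) >= threshold:
            return nonce
```

Because the base itself changes with n, work generated at the base difficulty no longer validates once the network moves to a higher multiplier, which is the point of the proposal.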

Advantages: at times of network saturation, not only does PoW generation take longer, but all pre-computed PoW is rendered invalid.
Disadvantages: PoW validation takes slightly longer. Services would have to recalculate PoW at the higher difficulty, but they would have to do that anyway.

This proposal fits with the "exploring ideas of limiting pre-computing of work" mentioned by @zach-atx.

Ok, now let me know why this is stupid - I might have missed something crucial.
Thanks.

Thanks for your suggestion. Here is some more detail about this type of approach that may be helpful.

The difficulty threshold is currently hardcoded in order to ensure all nodes are invalidating the same set of blocks. Once that threshold becomes dynamic there is a risk that, due to the asynchronous design where blocks are gossiped and processed at potentially different times on different nodes, subsets of nodes on the network may consider a PoW invalid, while others may not. This would cause inefficiencies on the network and slow things down as nodes would then have to bootstrap and validate those blocks again once they realize others didn't consider them invalid (or alternatively those nodes that still have them would see their elections linger and eventually not reach consensus, thus requiring more resources to sort out the situation).

This points us toward the need for a method of agreement between nodes as to what the difficulty is, but there is currently no secure way of doing that on the network (telemetry gives an indication of network-wide difficulty but can be forged, etc.). Getting that agreement on difficulty requires a consensus-type activity, and thus more communication and complexity. Although this idea is feasible and we are open to it in the long term, it requires quite a bit of change and should be weighed against other options for improving spam protection/quality of service on the network.

Hope that helps explain the situation a bit further as you continue to explore ideas around this.


Thanks for the explanation. I wouldn't touch the threshold at all, only use a different base for each difficulty. Currently the base is the account's public key for the 'open' block (I guess) and the hash of the previous block for any subsequent block. I would just hash this (n times), which would yield bytes of the same length, and use that instead of the original.
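
For reference, here is a tiny sketch of that base selection, assuming Nano's convention that the work root is the account public key for the first block on a chain and the previous block hash afterwards (the function name is just for illustration):

```python
def work_root(account_public_key: bytes, previous_hash: bytes | None) -> bytes:
    # First ('open') block on a chain: work is computed against the account's
    # public key; every later block: against the previous block's hash.
    return account_public_key if previous_hash is None else previous_hash
```

Under the proposal, something like derive_base(work_root(...), n) from the earlier sketch would then be used in place of the raw root.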

How is the difficulty that gets mixed into the work generation made consistent between all the nodes?

This may be a blind spot in my knowledge. I thought that the difficulty is calculated by each node as the average of work difficulty values seen in transactions being confirmed on the network, so no consensus would be required on the difficulty level. It could also be just a language barrier, as English is not my mother tongue.

You are correct that each node calculates an average of the work difficulty values seen in transactions, but there are various conditions which cause nodes to see different numbers at the same global time due to processing blocks in different orders. One main factor is the hardware specs of the node: some can process blocks much faster than others, so their difficulty average will differ from that of a very slow node that is behind on blocks at that moment. And when nodes are under load, the chance of these calculated difficulties diverging increases.
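
To illustrate why the averages diverge, here is a rough sketch of a node-local rolling average of observed work difficulties, assuming a simple fixed-size window (the real node uses its own trending logic, so treat this purely as an illustration):

```python
from collections import deque

class ObservedDifficulty:
    # Node-local view: keep the most recent observed difficulty multipliers
    # and expose their average. Nothing here is agreed with other nodes.
    def __init__(self, window: int = 500):
        self.samples = deque(maxlen=window)

    def observe(self, multiplier: float) -> None:
        # Called for each block whose work this node has actually processed.
        self.samples.append(multiplier)

    def average(self) -> float:
        # Fall back to base difficulty (multiplier 1.0) with no observations.
        if not self.samples:
            return 1.0
        return sum(self.samples) / len(self.samples)
```

A fast node that has already processed a burst of high-difficulty blocks and a slow node that is still catching up will hold different samples at the same global time, so their averages (and any threshold derived locally from them) differ, which is exactly the concern with invalidating blocks based on such a value.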

Am I understanding this right in that the current dPoW set-up may create different PoW thresholds between nodes, but it only changes the order of processing (i.e. all received blocks are eventually processed, though over the long term some blocks could be dropped), whereas the proposed idea could create situations where some nodes see blocks as invalid while others see them as valid?

I assume so. The question is how it would affect block elections.

Yes, that is the difference. The concern is that resource usage could become more of a problem under certain conditions if blocks are invalidated rather than just de-prioritized.