CPS as limited resource subject to supply and demand

Let's give this a go and see if this proposal has any merit.

Let's recognise that the network has CPS limits, and that if you want the network to function efficiently and reliably, you need to ensure it remains within those limits.

CPS is a limited resource and should be treated as such: the more demand for CPS there is, the more (PoW) it should cost.

At the moment CPS is treated like an unlimited resource until a node hits saturation; only then does dynamic PoW kick in and a 'priority lane' get introduced. The base layer still consumes the most resources, but its cost never changes – and if the network hits saturation it becomes unreliable and practically unusable.

This proposal introduces some global values the network must reach consensus over:

  • global_CPS_limit: this value is based on the capabilities of the principal representatives. As the network improves this value goes up – which would also correlate with adoption and hardware improvements over time.
  • current_PoW: the minimum PoW required to make use of the network. As demand for CPS increases relative to global_CPS_limit, this value increases.

## global_CPS_limit

How would this be determined?

As we have seen, the reliability of the NANO network breaks down at the 'weakest link'. It doesn't really matter if there are some super nodes that can keep up with 2000 CPS if the rest of the network is left behind.

So we need to find a value that enough of the network can keep up with. What is enough? Up for debate, but I'd guess something like the CPS that 75% of the voting weight can reliably process.

Each representative node would include a max_CPS value in its telemetry information. How does a node determine this value? The node operator could initially set it manually in the configuration file – some safe value the operator is confident their node can handle. They could profile their node for a while, monitor resource usage, and extrapolate from that. Eventually, benchmarking tools could automate this.

So you rank nodes by their max_CPS, tally up the voting weight, and see where the 75% cutoff leaves you with the 'slowest' node. That node's max_CPS is the global_CPS_limit the protocol enforces.
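The ranking described above can be sketched roughly like this – a minimal illustration assuming each representative reports a (max_CPS, voting_weight) pair via telemetry; the function name and data shape are mine, not part of any node API:

```python
# Sketch: derive global_CPS_limit from telemetry reports.
# reps is a list of (max_CPS, voting_weight) tuples; weights sum to 1.0 here
# for simplicity, but the code only uses the total.

def global_cps_limit(reps, weight_threshold=0.75):
    """Return the max_CPS of the slowest rep inside the fastest
    weight_threshold fraction of total voting weight."""
    total_weight = sum(w for _, w in reps)
    covered = 0.0
    # Walk reps from fastest to slowest, accumulating voting weight.
    for max_cps, weight in sorted(reps, reverse=True):
        covered += weight
        if covered >= weight_threshold * total_weight:
            # This rep is the 'weakest link' still inside the cutoff.
            return max_cps
    return 0  # only reached if reps is empty

reps = [(2000, 0.10), (500, 0.40), (300, 0.30), (100, 0.20)]
print(global_cps_limit(reps))  # → 300
```

Note that in this example the 2000-CPS super node contributes its weight but does not set the limit; the 300-CPS node that pushes cumulative weight past 75% does.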

## current_PoW

This value scales with network usage. As CPS increases, so would current_PoW – any new transactions would be discarded if their PoW was below this value.

I would suggest the scaling not be fractional but rather in fixed steps – say something like this:

  • Network CPS ≥ 25% of global_CPS_limit: current_PoW = base_PoW * 2
  • ≥ 50% of global_CPS_limit: current_PoW = base_PoW * 4
  • ≥ 75%: current_PoW = base_PoW * 8
  • Then ramp it up steeply as you approach global_CPS_limit:
  • ≥ 80%: current_PoW = base_PoW * 16
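The stepped schedule above can be expressed as a simple lookup – a sketch only, where base_PoW and the tier breakpoints are the placeholder values from this proposal, not agreed protocol constants:

```python
# Sketch: stepped difficulty schedule from the proposal.
# Tiers are (load threshold, multiplier), checked from highest to lowest.

def current_pow(network_cps, global_cps_limit, base_pow=1):
    load = network_cps / global_cps_limit
    tiers = [(0.80, 16), (0.75, 8), (0.50, 4), (0.25, 2)]
    for threshold, multiplier in tiers:
        if load >= threshold:
            return base_pow * multiplier
    return base_pow  # below 25% load, only the base difficulty applies

print(current_pow(40, 100))   # 40% load → 2
print(current_pow(78, 100))   # 78% load → 8
```

Fixed steps rather than a continuous function mean reps only need to agree on which tier the network is in, not on an exact load figure.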

Each representative keeps track of average CPS over a period (let's say the last 10 minutes).

Let's say Rep X sees that CPS has gone above 25% of global_CPS_limit. It then calls for a vote to increase PoW to base_PoW * 2. If the other reps agree and the vote passes, current_PoW = base_PoW * 2 comes into effect after some 'grace period' following the vote.
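The rep-side trigger could look something like the following sketch – a rolling average over a 10-minute window that maps to a proposed multiplier. All names here (CpsMonitor, record, proposed_multiplier) are illustrative, not actual node APIs, and the actual vote/grace-period machinery is omitted:

```python
# Sketch: a representative tracks a rolling CPS average and derives the
# difficulty multiplier it would call a vote for.

from collections import deque

class CpsMonitor:
    def __init__(self, global_cps_limit, window_seconds=600):
        self.limit = global_cps_limit
        self.window = window_seconds
        self.samples = deque()  # (timestamp, observed CPS)

    def record(self, timestamp, cps):
        self.samples.append((timestamp, cps))
        # Drop samples older than the averaging window (10 minutes here).
        while self.samples and self.samples[0][0] < timestamp - self.window:
            self.samples.popleft()

    def average_cps(self):
        return sum(c for _, c in self.samples) / len(self.samples)

    def proposed_multiplier(self):
        load = self.average_cps() / self.limit
        for threshold, mult in [(0.80, 16), (0.75, 8), (0.50, 4), (0.25, 2)]:
            if load >= threshold:
                return mult
        return 1

monitor = CpsMonitor(global_cps_limit=100)
for t in range(10):
    monitor.record(t, 30)  # sustained 30 CPS = 30% load
print(monitor.proposed_multiplier())  # → 2
```

Averaging over a window is what lets bursts through: a short spike barely moves the 10-minute average, while sustained spam pushes it across a tier boundary and triggers a vote.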

This is the basis for Dynamic Proof of Work. Your proposal seems to try to limit demand before it exceeds supply, and I don't see a technical justification (or an economic one) for why this is necessary.

Are you saying DPoW is broken (based on the last week) and can't be fixed?

Because at the moment 'supply' is undefined and node dependent.

When a node reaches its own supply limit, it falls behind and becomes useless while the rest of the network continues at 110%... and as we've seen, if enough nodes go over the limit the network stops functioning.


I think the PoW should only kick in when the bandwidth has been overrun. If we need PoW to solve the crisis created by 70TPS, we are doomed.

In no way are we doomed.

At current adoption levels we don't need much more than 5-10 CPS. It's nice to brag that we can hit x number for 2 minutes. But it has been proven that 70 CPS sustained is too much.

Should we rather let it fail at 70 CPS or try and discourage the spammer from spamming and have a network that still functions at 50 CPS?

Have a network still function at 50 CPS, but without increasing the difficulty of the PoW. The fear with proof of work is that it creates economies of scale. If 6 million people are using Nano daily, 70 CPS with occasional bursts is a reality. And if you consider Nano as a global currency, 6 million daily users is still an early stage of adoption. If we have to rely on PoW at this stage – when ledger bloat isn't even a GB/day and the PRs have no problem confirming the transactions – the future just looks gloomy to me.

We are a long way from that point. When we reach it, we can safely assume representatives will be running on better hardware, which means global_CPS_limit will have moved higher.

We need a network that knows where its limits are and defends those limits, so that all parties know where they stand and can plan accordingly.

One of the great strengths of NANO is also that it's fast. Do you like your transactions fast and reliable, or do you want to have to retry a send multiple times, as this new backlog proposal will result in?

If you don't view network CPS as a limiting factor and you don't want to increase costs for the spammer, then sure, we keep pursuing the other options on the table. None of them increase the cost of spamming the network and using up CPS for no good reason.

And by the way, my proposal allows for burst CPS; it just protects against sustained spam and increases the cost of sustaining that spam.