Increasing minimum work difficulty with current PoW algorithm

As we continue to research new PoW algorithms to replace the current one (Medium Article, Forum topic: Equihash as a new PoW algorithm), we have received feedback from the community about interest in increasing the minimum required work difficulty with the existing Blake2b algorithm. The desire for this update is largely due to the increase in available computational power since the original minimum was set over two years ago, which makes transactions very cheap to produce today.

There are some considerations and decisions to be made around this change, which we’d like to collect feedback on from the community.

Epoch blocks
Increasing the difficulty requires a new round of epoch blocks to be distributed, which at this time would introduce over 1 million new blocks to the ledger. When a new PoW algorithm is added, it will also require epoch blocks, so making a difficulty increase now would ultimately result in a couple million additional blocks in the ledger. We are open to hearing concerns and opinions around this increase.

Although not in scope for this discussion, it is worth mentioning there is a forum topic about exploring automated network upgrades here: Automated Network Upgrades.

Level of difficulty increase
Community members have suggested difficulty increases between 5x and 100x the current minimum. Higher difficulty makes work generation on common CPUs increasingly less viable vs. GPUs, so this should be considered in the discussion. We can provide more specific benchmarks on the current algorithm, but generally speaking, typical modern GPUs are capable of generating around 3-5 PoWs/sec. Since work generation time scales linearly with difficulty, a 10x increase means these same GPUs would see around 0.3-0.5 PoWs/sec.
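To make the scaling concrete, here is a rough Python sketch of how a linear multiplier maps to a new 64-bit work threshold and to expected GPU throughput. It assumes the commonly referenced base threshold of 0xffffffc000000000 and uses 4 PoW/s as an illustrative GPU baseline (the midpoint of the range above); these figures are assumptions for illustration, not official benchmarks.

```python
# Rough sketch: how a linear difficulty multiplier maps to a new 64-bit work
# threshold and to expected GPU throughput. The base threshold and the
# 4 PoW/s GPU rate are illustrative assumptions, not official benchmarks.
BASE_THRESHOLD = 0xFFFFFFC000000000  # commonly referenced current minimum

def threshold_for_multiplier(multiplier: float, base: int = BASE_THRESHOLD) -> int:
    # Work is valid when the 64-bit work value >= threshold, so difficulty is
    # proportional to 1 / (2**64 - threshold); a multiplier of X shrinks the
    # valid range by a factor of X.
    return (1 << 64) - int(((1 << 64) - base) / multiplier)

def expected_rate(multiplier: float, base_pow_per_sec: float = 4.0) -> float:
    # Average solve rate falls linearly with the multiplier.
    return base_pow_per_sec / multiplier

for m in (1, 5, 10, 16, 75, 100):
    print(f"{m:>4}x  threshold={threshold_for_multiplier(m):#018x}  "
          f"~{expected_rate(m):.3f} PoW/s per GPU")
```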

As difficulty increases, so does the average cost per transaction, which will make “spam” transactions hitting the network more expensive. Some members have suggested modeling out the typical cost/transaction and choosing a target based on this information. We are open to exploring this approach but haven’t pulled together the necessary data to inform the decision from this angle. Any volunteers to help evaluate these scenarios would be appreciated.

We are interested in hearing additional thoughts and perspectives on this nuanced topic to help inform a final decision on whether a difficulty increase will be made and, if so, at what level. If the increase goes ahead, it will likely be included in the upcoming V21.0 release.

7 Likes

Until we have a consistent problem with the network being spammed to the point of constant transaction prioritization, I don't think we should go through the effort of increasing the base level of PoW required.

Continue the focus on developing a long-term plan; what we have right now is good enough for right now.

1 Like

I very much disagree. At the current level of spam, 7 tps or so, about half a gigabyte is added to the ledger per day. This is probably by a single person, and a few weeks ago they were adding roughly ten times that per day, at about 50-70 tps. The problem is not that the network can't handle that throughput right now, as it apparently can; it's that a single entity being able to bring the network to its knees is a massive problem. Also, if you've run a node, you'll note that the bandwidth costs at these levels become significant. Until some sort of pruning is available, the cost of spam should not be trivial. Since the network does about 0.1 tps without spam, there are very few good arguments against raising the PoW floor.
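For a rough sense of scale, here is a back-of-the-envelope sketch of ledger growth. The ~800 bytes of on-disk footprint per block is an assumption chosen to match the ~0.5 GB/day at 7 tps quoted above; real database overhead varies.

```python
# Back-of-the-envelope ledger growth at a sustained transaction rate.
# The ~800 bytes/block on-disk figure is an assumption picked to match the
# ~0.5 GB/day at 7 tps quoted above; real database overhead varies.
BYTES_PER_BLOCK = 800
SECONDS_PER_DAY = 86_400

def daily_growth_gb(tps: float) -> float:
    return tps * SECONDS_PER_DAY * BYTES_PER_BLOCK / 1e9

for tps in (0.1, 7, 50, 70):
    print(f"{tps:>5} tps -> ~{daily_growth_gb(tps):.2f} GB/day, "
          f"~{30 * daily_growth_gb(tps):.1f} GB/month")
```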

Also, at the current rate the ledger is growing, most of the nodes on the network running on cheap VPSes will stop working in a few months without upgrading. Argue if you want that "only robust nodes" should be operating the network, but honestly it just pushes away people who would want to use Nano if the cost to run a node is too high.

10 Likes

This is a non-issue for users. The impact is on third parties generating PoW. As a PR runner I approve of the change, but I think we should give a lot of weight to the opinions of third parties such as Natrium and give them time to evaluate the costs.

1 Like

If seven transactions per second is a problem for node owners, then Nano has no future.

As far as ledger size, I've always had the viewpoint that pruning should take priority over a new proof of work algorithm.

I think ledger bloat is a far greater immediate concern to the network than spam. Additionally, things like frontiers and bootstrapping are going to be challenging topics, and deserve more focus than they are getting now.

1 Like

Node pruning will take a while to implement. In the meantime, a short-term broadcast of roughly 1 million epoch blocks could minimise ledger bloat from the current spam, buying time for a proper pruning implementation and/or a new PoW algorithm.
I'm of the opinion that the short-term pain of a temporary base PoW difficulty increase via epoch blocks is worth the long-term benefit of more time to research pruning and PoW properly.

1 Like

I agree that we should increase PoW by around 10x. Right now it's too cheap to create blocks, as the "spam tests" of the last few weeks have shown. With a 10x increase, work would still take only a few seconds per block on a modern GPU, which is fine.

You are right that we need to listen to services, but we also have Distributed PoW, which can help services generate work. The community can help by contributing GPU time, and because Distributed PoW farms requests out in parallel, work can be returned very quickly.

1 Like

Are there currently any services doing multiple transactions per second that would require the use of Distributed PoW?

My FreeNanoFaucet site currently uses local PoW generation and can keep up with its occasional load so far, but I don't consider it a serious project.

I like the idea of small nodes like mine being able to do local PoW generation (low barrier to entry for non-devs like myself), but it's probably not worth keeping the PoW low for hobby projects like mine.

1 Like

I think that can easily be solved using Pippin and Distributed PoW, with the community supporting hobby projects. It would be very nice if TNC ran a node that people could use together with Pippin and Distributed PoW (with API keys to prevent spam and misuse). It's important to have a low barrier to entry for new projects, and this setup would encourage new developers to experiment with Nano.
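As a rough illustration of how low that barrier could be, here is a sketch of a hobby project requesting work from a shared work server over the standard work_generate RPC action. The endpoint, API-key header, and hash value are hypothetical, and services like Distributed PoW use their own request formats.

```python
# Minimal sketch of a hobby project requesting work from a shared work server.
# The endpoint, API-key header, and hash are hypothetical; "work_generate" and
# the optional "difficulty" field are part of the standard node RPC, while
# dedicated services such as Distributed PoW use their own request formats.
import requests

WORK_SERVER = "https://work.example.org"  # hypothetical shared endpoint
API_KEY = "my-project-key"                # hypothetical per-project key

payload = {
    "action": "work_generate",
    "hash": "718CC2121C3E641059BC1C2CFC45666C99E8AE922F7A807B7D07B62C995D79E2",
    "difficulty": "fffffff800000000",     # optional explicit threshold, e.g. 8x the old base
}
resp = requests.post(WORK_SERVER, json=payload,
                     headers={"Authorization": API_KEY}, timeout=300)
result = resp.json()
print(result.get("work"), result.get("multiplier"))
```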

3 Likes

Just to add some quick calculations here: we have used a cloud-hosted GPU server for work before (a single GTX 1080 Ti), which gets ~3 tps at $107/mo. This was one of the cheaper cloud-based GPU servers we could find (although it may not be the cheapest per transaction in the cloud), and it can help illustrate the combined infrastructure-for-rent and energy costs of various difficulty increases. On this setup (a short sketch reproducing these numbers follows the list):

  • Moving to 16x = ~0.1875 tps (~5.3 sec/tx), generating ~486,000 tx/month = ~$0.00022/tx
  • To get to a cost of $1k/1M transactions ($0.001/tx), as @sev threw out in Discord recently, difficulty would need to be increased by ~75x (~25 sec/tx)
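Here is a minimal sketch reproducing those figures from the $107/mo and ~3 PoW/s baseline (pure arithmetic on those two numbers; the target-cost calculation lands at roughly 73x, in line with the ~75x quoted above):

```python
# Reproducing the figures above from the $107/month and ~3 PoW/s baseline;
# everything here is straightforward arithmetic on those two numbers.
MONTHLY_COST_USD = 107.0
BASE_POW_PER_SEC = 3.0
SECONDS_PER_MONTH = 30 * 86_400

def cost_per_tx(multiplier: float) -> float:
    tx_per_month = (BASE_POW_PER_SEC / multiplier) * SECONDS_PER_MONTH
    return MONTHLY_COST_USD / tx_per_month

def multiplier_for_cost(target_usd_per_tx: float) -> float:
    tx_per_month = MONTHLY_COST_USD / target_usd_per_tx
    return BASE_POW_PER_SEC * SECONDS_PER_MONTH / tx_per_month

print(f"16x -> ${cost_per_tx(16):.5f}/tx ({16 / BASE_POW_PER_SEC:.1f} sec/tx)")
m = multiplier_for_cost(0.001)
print(f"$0.001/tx -> ~{m:.0f}x ({m / BASE_POW_PER_SEC:.0f} sec/tx)")
```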
2 Likes

Those numbers make me think that maybe we're tackling the wrong issue here (at least partially). A single GPU doing 3 TPS live doesn't sound too unreasonable, especially for high volume services like exchanges or light wallets. Wouldn't the bigger issue be precomputing millions of blocks and then dumping them all at once?

I imagine that the number of people with access to 30 GPUs to do 90 TPS is relatively small, especially compared to the number of people with access to 1 GPU and the ability to precompute for weeks or months. If somehow precomputing were almost completely removed, a single GPU doing 0.1-3 TPS might sound pretty reasonable.

3 Likes

I vote for the 75x :slight_smile: Even the Binance hot wallet only produces a few transactions per minute at most.

Most exchanges/wallets have to scale in bursts: e.g. during bull markets, when people wake up, or when payday arrives. Going to 75x means that a lot more thought (and work) would need to be put into services being able to scale PoW for global usage.

We also need to keep in mind the power usage and electricity costs that keep Nano competitive with its alternatives.

Agreed, no increase in work is going to slow down a dump of precomputed blocks onto the network...

Wasn't there a paper posted a while back about work generation in a binary tree, where it becomes increasingly unlikely that your work has followed the correct predicted path?

We need to disincentivize precomputing blocks.

2 Likes

Do you mean that?

1 Like

Cool numbers. A 16x difficulty increase seems reasonable (if needed).

I also agree with Ty's view that sending out epoch blocks should be done sparingly (if at all). It's like using infinity stones to alter the mechanics of a ledger. If we increase to 75x and find that it is not favorable, we shouldn't just decrease the difficulty with another set of epoch blocks.

However, Sev and others have been running nodes for far longer than I have, so I give strong weight to their suggestions. I'd prefer not to cripple high-volume throughput, but I'd also be able to work around it if need be.

Why not? The attacker would be able to precompute X times fewer blocks, where X is the proof-of-work multiplier.

1 million blocks is less than 3% of the current ledger size, so I'd vote for a dedicated set of epoch blocks to implement difficulty increases and PoW multipliers (PoW Multipliers (Anti-Spam Brainstorming)).

In addition, how about:

Implement controls in the node software to allow node operators to set transaction filters (a rough sketch of this kind of filter logic follows below), e.g.:

- Set a threshold for the lowest allowable transaction value.
- Enable and specify a delay time (ban time) between successive transactions sent by an account.
- Enable banning of associated accounts (to prevent ping-pong or malicious "hot potato" transfers along a chain of new accounts).
Obviously, whitelist known services.
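For illustration only, here is a rough sketch of what such operator-configurable filter logic could look like; this is not an existing node option, and every name in it is hypothetical.

```python
# Purely illustrative sketch of operator-configurable transaction filters;
# this is not an existing node option, and every name here is hypothetical.
import time
from typing import Dict, Optional, Set

class TxFilter:
    def __init__(self, min_amount_raw: int, min_interval_sec: float):
        self.min_amount_raw = min_amount_raw      # lowest allowable transaction value
        self.min_interval_sec = min_interval_sec  # required delay between sends per account
        self.last_send: Dict[str, float] = {}     # account -> time of last accepted send
        self.whitelist: Set[str] = set()          # known services, always accepted

    def accept(self, account: str, amount_raw: int, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        if account in self.whitelist:
            return True
        if amount_raw < self.min_amount_raw:      # value threshold
            return False
        last = self.last_send.get(account)
        if last is not None and now - last < self.min_interval_sec:
            return False                          # effectively a per-account "ban time"
        self.last_send[account] = now
        return True
```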

Beyond spam prevention (which only slows the growth of the database), consider active reduction of the database size, e.g.:
- Consolidate the ledger through snapshots and balance transfers to a fresh lattice.