Increasing minimum work difficulty with current PoW algorithm

I agree that we should increase PoW by around 10x. Right now it's too cheap to create blocks, as the "spam tests" over the last few weeks have shown. Even at 10x, work would still only take a few seconds on a modern GPU, which is fine.


You are right that we need to listen to services, but we also have Distributed PoW, which can help services compute work. The community can help here by providing GPUs for some time. Distributed PoW also computes work in parallel, so it can be generated very quickly.


Are there any services currently that are doing multiple transactions per second that would require the use of Dist PoW?

My FreeNanoFaucet site currently uses local PoW generation and can keep up with its occasional load so far, but I don't consider it a serious project.

I like the idea of small nodes like mine being able to do local PoW generation (low barrier to entry for non-devs like myself), but it's probably not worth keeping the PoW low for hobby projects like mine.


I think that can be easily solved using Pippin and Distributed PoW, where we as a community support hobby projects. So I think it would be very nice if TNC had a node that people could use together with Pippin and Distributed PoW (with API keys to prevent spam and misuse). It's important to have a low barrier for new projects, and having this setup would encourage new developers to experiment with Nano.


Just to add some quick calculations here: we have used a cloud-hosted GPU server for work before (single GTX 1080ti), which gets ~3tps @ $107/mo. This was one of the cheaper cloud-based GPU servers we could find (although it may not be the cheapest per transaction in the cloud), and it can help illustrate the combined infrastructure-for-rent and energy costs of difficulty increases. On this setup:

  • Moving to 16x = ~0.1875tps (~5.3 sec/tx), generating 486,000 tx/month = $0.00022/tx
  • To get to a cost of $1k/1m transactions ($0.001/tx) as @sev threw out in Discord recently, difficulty would need to be increased by ~75x (~25 sec/tx)
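The arithmetic behind those bullet points can be checked with a quick script. The figures (3 tps baseline, $107/month) come from the post above; the function name is just for illustration:

```python
# Rough cost-per-transaction math for PoW difficulty multipliers.
# Baseline figures from the post: single GTX 1080ti cloud server,
# ~3 tps at base difficulty, $107/month.
BASE_TPS = 3.0
MONTHLY_COST_USD = 107.0
SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2,592,000

def cost_per_tx(multiplier: float) -> tuple[float, float, float]:
    """Return (tps, seconds per tx, USD per tx) at a given difficulty multiplier."""
    tps = BASE_TPS / multiplier
    tx_per_month = tps * SECONDS_PER_MONTH
    return tps, 1 / tps, MONTHLY_COST_USD / tx_per_month

for m in (16, 75):
    tps, sec, usd = cost_per_tx(m)
    print(f"{m:>3}x: {tps:.4f} tps, {sec:.1f} s/tx, ${usd:.5f}/tx")
```

This reproduces both bullets: 16x gives 0.1875 tps (~5.3 s/tx, ~486,000 tx/month, ~$0.00022/tx), and ~75x lands at ~$0.001/tx (25 s/tx).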

Those numbers make me think that maybe we're tackling the wrong issue here (at least partially). A single GPU doing 3 TPS live doesn't sound too unreasonable, especially for high volume services like exchanges or light wallets. Wouldn't the bigger issue be precomputing millions of blocks and then dumping them all at once?

I imagine that the number of people with access to 30 GPUs to do 90 TPS is relatively small, especially compared to the number of people with access to 1 GPU and the ability to precompute for weeks or months. If somehow precomputing were almost completely removed, a single GPU doing 0.1-3 TPS might sound pretty reasonable.
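To put the precompute concern in numbers, here is a back-of-the-envelope calculation (my own arithmetic, using the same ~3 tps single-GPU baseline from the earlier post):

```python
# How many blocks a single GPU at ~3 tps can precompute ahead of time.
BASE_TPS = 3

def precomputed_blocks(days: int, tps: float = BASE_TPS) -> int:
    """Blocks one GPU can generate in `days` of continuous precomputation."""
    return int(tps * days * 86400)

for days in (7, 30, 90):
    print(f"{days:>2} days -> {precomputed_blocks(days):,} blocks")
```

A single GPU running for a month precomputes close to 8 million blocks, which is why one patient attacker can dwarf thirty GPUs working live.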


I vote for the 75x :slight_smile: Even Binance hot wallet is only producing a few transactions per minute at max.

Most exchanges/wallets have to scale in bursts: e.g. during bull markets, or when people wake up, or when pay day arrives. Going to 75x means that a lot more thought (and work) would need to be put into enabling services to scale PoW for global usage.

We also need to keep in mind the power usage and electricity costs that keep Nano competitive with its alternatives.

Agreed, no increase in work is going to slow down a dump of precomputed blocks onto the network...

Wasn't there a paper published a while back about work generation in a binary tree, where it becomes increasingly unlikely that your precomputed work has followed the correct predicted path?

We need to disincentivize precomputing blocks.


Do you mean that?


Cool numbers. A 16x difficulty increase seems reasonable (if needed).

I also agree with Ty's view that sending out epoch blocks should be used sparingly (if at all). It's like using infinity stones to alter the mechanics of a ledger. If we increase to 75x and find that it is not favorable, we shouldn't just decrease the difficulty again with another set of epoch blocks.

However, Sev and others have been running nodes for far longer than I have, so I give strong credibility to their suggestions. I'd rather not cripple high-volume throughput, but I'd also be able to work around it if need be.

Why not? The attacker will be able to precompute X times fewer blocks, where X is the proof-of-work multiplier.

1 million blocks is < 3% of the current ledger size, so I'd vote for a dedicated set of epoch blocks to implement difficulty increases and PoW multipliers (PoW Multipliers (Anti-Spam Brainstorming)).

In addition, how about:

Implement controls in the node software to allow node operators to set transaction filters:
- Set a threshold for the lowest allowable transaction value.
- Enable and specify a delay time (ban time) between successive transactions sent by an account.
- Enable banning of associated accounts (to prevent ping-pong or malicious "hot potato" transfers along a chain of new accounts).
- Obviously, whitelist known services.

Beyond spam prevention (which only slows the growth of the database), the database size could also be reduced:
-Consolidate the ledger through snapshots and balance transfers to a fresh lattice.


Some ideas to fight spam: instead of focusing only on transactions, also look at account balance and account creation:

  1. Require a minimum amount of Nano in the sender's account to process a send transaction (just like XRP and XLM). This amount could steadily rise or fall via node updates, which would then accept or reject transactions accordingly, instead of using epoch blocks, which would create more bloat.

  2. Require a high level of work for account creation; this effort should be significantly higher than for transactions, e.g. 1000x.
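A minimal sketch of how a node might enforce both proposals. All names and thresholds here are hypothetical, chosen for illustration, and not any actual Nano node API:

```python
# Hypothetical checks for the two proposals above (illustrative only).
MIN_SENDER_BALANCE_RAW = 10**30   # minimum balance, adjustable by node updates
OPEN_WORK_MULTIPLIER = 1000       # account creation vs. ordinary send difficulty

def can_process_send(sender_balance_raw: int) -> bool:
    """Proposal 1: reject sends from accounts below a minimum balance."""
    return sender_balance_raw >= MIN_SENDER_BALANCE_RAW

def required_work_multiplier(is_open_block: bool) -> int:
    """Proposal 2: account-opening (open) blocks need far more work than sends."""
    return OPEN_WORK_MULTIPLIER if is_open_block else 1
```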

  1. Has been discussed previously, and it would significantly reduce the value proposition of the protocol for many services (e.g. payment services that generate a unique new account for each payment).
  2. Also discussed, and it seems to make sense, but I'm not sure what the logistical barriers to implementation are.

I don't see how it reduces the value proposition of the protocol: there is currently no legitimate value in being able to send less than 0.001 NANO.
With a threshold that can be adjusted via node updates, the network could still support microtransactions (if they ever become useful) while not wasting resources on impractical transactions below the threshold.

They are presumably referring to a minimum account balance to minimise account-creation spam, as implemented in XRP and XLM.

Seems to work well for XRP and XLM...

It also reduces the feasibility of a number of use cases for account opening, including new addresses for each merchant transaction.