PoW Multipliers (Anti-Spam Brainstorming)

Some brainstorming below to generate discussion:


  • Minimize hard-coded limits that require node/protocol updates over time
  • Incentivize receives to make pruning easier
  • Prioritize human-to-human transactions (occasional + real world value)
  • Discourage "bad" behavior (hard to prune, ledger bloating, network saturation)


Significantly increase PoW for open transactions

  • Rationale: Typically happens infrequently; reduces ledger bloating; people understand opening accounts before using them (+harder to burn transactions by accident?)
  • Risks: Exchanges/light wallets (but they could charge users for more accounts); Can still send to unopened accounts (see next idea)

Significantly increase PoW for sending to unopened accounts

  • Rationale: Reduces bloating; Incentive to open accounts (easier to prune)
  • Risks: Uneducated users vs exchanges/light wallets? Less "privacy" due to incentivizing address reuse; recipients can still leave transactions unreceived after the initial open

Significantly increase PoW for every order of magnitude less than "unano" (10^18)

  • Rationale: Lowest GNI in the world is ~1/1000 of the US
  • Humans don't transact in thousandths of a cent. At 10 cents/Nano, 0.001 cents == 0.0001 nano; add three more orders of magnitude for the lowest-GNI country, so 0.0000001 is the lowest currently reasonable value. Add four more orders of magnitude in case Nano ever goes 10,000x (10x current world GDP), and 0.00000000001 becomes a good soft floor
  • Risks: "True" microtransactions/message encoding

Increase send PoW based on number of unreceived transactions CREATED BY the sending address:

  • Example: nano_a has sent 100 unreceived transactions == 2xPoW for next 100
  • Rationale: Encourage receiving transactions for pruning; harder to spam
  • Risks: Light wallets & exchanges that open accounts and send transactions for other people, BUT exchanges could check withdrawal addresses for pending receives first (and notify users to pocket their funds before they'll allow withdrawal); Light wallets have different sending addresses; Could also do a rolling window to account for light wallets/exchanges/sending to people that never receive?
  • Risk: requires extra DB reads/calculations to determine PoW validity
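As a rough illustration, the multiplier for this idea could step up with the sender's unreceived-send count. The function name and the 100-send step are hypothetical, taken only from the example above:

```python
def send_pow_multiplier(unreceived_sends: int, step: int = 100) -> int:
    """Double the required send PoW for each full `step` of unreceived sends.

    Illustrative only: 0-99 unreceived -> 1x, 100-199 -> 2x, 200-299 -> 4x.
    """
    return 2 ** (unreceived_sends // step)
```

A node would still need the DB reads mentioned above to know the unreceived-send count for a given address.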

Increase send PoW based on number of pending receives TO destination address:

  • Rationale: Encourage receives for pruning
  • Risk: Spam DoSing legitimate users

Increase PoW costs for each order-of-magnitude increase in sending volume over a rolling window:

  • Example: 100 vs 1000 vs 10000 transactions in an hour (increase PoW cost for each)
  • Rationale: Really high orders of magnitude almost always come from spammers
  • Risks: Lightwallets and exchanges

Per-node whitelisting of known-good services for lower PoW/prioritization under load:

  • Rationale: Differentiating between spam and high volume services; could potentially allow for increasing the base PoW without impacting good services as much (if enough PRs whitelist them)
  • Risks: Potentially anti-competitive for new Nano businesses

Limit pre-computing PoW (time-based nonce?):

  • Rationale: Prevent mass accumulating and dumping blocks
  • Risks: The Nano network has no sense of time

Types of spam to consider

  • Opening new accounts - Storage risk
  • Valid transactions (high or low value) in a loop (Epstein spam) - Network saturation risk + storage risk w/out pruning
  • Sending and never receiving - Hard to prune (storage risk)
  • Sending to unopened accounts - Hard to prune (storage risk)

Other thoughts

  • All of the above increases would be multipliers vs whatever the baseline send PoW is
  • There is a constant balance battle between legitimate services and spammers
  • Full node operators are almost never affected since they do their own PoW
  • If all services could enable client-side PoW, it would be a lot easier to prevent spam without impacting legitimate services
  • If receives were auto-received (no signature required), pruning would be a lot easier
  • Some of the above ideas are not intuitive; users would have to understand the protocol to understand why some sends/receives are harder than others

These are excellent ideas. However, I think we should be realistic about what could possibly be implemented; the simpler, the better. Apart from the already-implemented dPoW and the future redesigned PoW, I suggest differentiating PoW difficulty by block type.

PoW-difficulty from lowest to highest:

  1. Receive block: default network difficulty
  • Rationale: It's common sense; we want to make receives seamless. There is no danger of a spammer saturating the network with receive blocks alone, as they would be discarded by the nodes.
  • Risk: None

  2. Change block: 2x default
  • Rationale: Representative changes do not (or should not) happen frequently, so there is plenty of time to pre-calculate the PoW. However, as others have suggested before, a theoretical "change block attack" could happen, where a malicious node saturates the network with an extreme number of change blocks.
  • Risk: None

  3. Send block: 4x default

  • Rationale: If we design Nano to be a digital currency of choice between humans, it is unreasonable to assume that a human would want to do multiple transactions per second. A higher-difficulty PoW for send blocks would make spamming more expensive by default, even when the network is not yet close to saturation. Average users would not notice a difference, as their PoW would be pre-calculated.
  • Risk: Slightly increased confirmation time. Light wallet backends and exchanges (hot wallets) would suffer. Exchanges could mitigate this by distributing the funds across multiple hot wallets.

  4. Open block: 10x default

  • Rationale: Again, looking at things from the human user's perspective, you don't need to open multiple accounts per second. We must discourage high-frequency broadcasting of open blocks by the same node.
  • Risks: Services that create a new address for every transaction would suffer along with users...
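A minimal sketch of the proposed tiering, with the multipliers taken from the list above (the lookup itself is illustrative, not how any node actually implements difficulty):

```python
# Multipliers proposed above, applied on top of the default network difficulty.
BLOCK_TYPE_MULTIPLIER = {
    "receive": 1,   # default network difficulty
    "change": 2,
    "send": 4,
    "open": 10,
}

def required_multiplier(block_type: str) -> int:
    """Return the PoW multiplier a node would demand for a given block type."""
    return BLOCK_TYPE_MULTIPLIER[block_type]
```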

Regarding send transactions, the ideal situation for me would be having everyone use client-side PoW (only a few seconds per phone), and then restricting high volume/bad behavior in other ways. That's not really possible at the moment though, since PoW costs between high- and low-end devices are so different.

Regarding open blocks, it's possible to send to unopened accounts, so you'd have to pair the increased account-open cost with an increased cost for sending to unopened accounts.

Wouldn't checking those conditions make the protocol a lot slower? How hard is it to check whether an account is opened?


Since work is the first thing checked before using additional resources to further validate a block, checking whether an account was opened to determine the level of valid PoW required would mean a disk check is needed, opening up potential DoS attack vectors, unless optimizations can be designed to avoid that (an accounts list with open status in RAM, etc.).


So that would prevent us from having a PoW multiplier based on the volume of send transactions to an unopened account (since we don't know it's unopened unless we read from disk), but we could still increase the required work for open blocks themselves (i.e. if the "previous" value == 0, use 2x PoW). Would that help reduce ledger bloat from new-account spam at all? I guess if we're not careful this could actually incentivize spamming to unopened accounts. It almost has to be paired with increasing the cost to send to unopened accounts somehow.

I'll have to think about the other stuff some more. The next thought I had is to keep an in-memory rolling window count of sends from an address that only decrements if a receive is seen from the destination address. If no receive is seen after X transactions in Y time, the PoW required from the sender could be multiplied. That might incentivize receives and decrease spam. Not sure yet though, still thinking about it. Issues might include: 1) receive transactions linking to the previous block hash and not a sending address, 2) receives not related to the sender decrementing the counter, and 3) balancing sender PoW difficulty vs number of receives
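That rolling-window idea could be sketched as below, assuming hypothetical thresholds (X = 100 sends, Y = 1 hour) and assuming the node can map a receive back to the original sender, which, per issue (1), is itself non-trivial:

```python
import time
from collections import defaultdict, deque

class SendWindow:
    """Rolling count of a sender's recent sends, decremented on receives."""

    def __init__(self, max_unreceived: int = 100, window_secs: float = 3600.0):
        self.max_unreceived = max_unreceived
        self.window_secs = window_secs
        self.sends = defaultdict(deque)  # sender -> deque of send timestamps

    def _expire(self, sender, now):
        q = self.sends[sender]
        while q and now - q[0] > self.window_secs:
            q.popleft()

    def on_send(self, sender, now=None):
        now = time.monotonic() if now is None else now
        self._expire(sender, now)
        self.sends[sender].append(now)

    def on_receive(self, original_sender):
        # Caveat (1) above: receives reference a send block hash, so mapping
        # a receive back to its sender needs an extra lookup in practice.
        if self.sends[original_sender]:
            self.sends[original_sender].popleft()

    def pow_multiplier(self, sender, now=None) -> int:
        now = time.monotonic() if now is None else now
        self._expire(sender, now)
        return 2 if len(self.sends[sender]) >= self.max_unreceived else 1
```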

Wasn't there a discussion some time ago that there should be a relation between amount sent and PoW difficulty required? What happened to that idea?

Prioritizing high amount transfers doesn't help that much in preventing spam. You could send 100 nanos back and forth between two or more accounts. It does force the spammer to create a receive block though, so I guess it would make pruning more efficient.

Exactly, spammers would have to open accounts if they wanted to spam that way, and pruning would take care of the rest.

My view is that there isn't a single anti-spam measure that will save Nano; there needs to be a bundle of them to be effective.


I think that particular idea is less about prioritizing high-amount transfers, and more about increasing the cost of wasting network resources on transactions that provide no real-world value. It's about focusing on the human-to-human digital cash use case over anything else.

You could do something like increasing the required PoW by 33% for every order of magnitude below "nano" aka 10^24, and that would naturally discourage extremely low value transactions while not making them impossible. It would also act as a pseudo-auto scaling mechanism for hardware improvements vs PoW since that will loosely correspond with Nano market capitalization increases (hopefully)
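The 33%-per-order-of-magnitude rule could look like the following; the 10^24 raw threshold and 1.33 factor come from the post, while the function itself is a hypothetical sketch:

```python
NANO_RAW = 10 ** 24  # 1 "nano" expressed in raw units, per the post

def value_pow_multiplier(amount_raw: int, factor: float = 1.33) -> float:
    """PoW multiplier grows by `factor` per order of magnitude below 1 nano."""
    if amount_raw <= 0:
        raise ValueError("amount must be positive")
    orders_below = 0
    v = amount_raw
    while v < NANO_RAW:
        v *= 10
        orders_below += 1
    return factor ** orders_below
```

Very low-value transactions stay possible but get progressively more expensive, which is the "discourage without forbidding" behavior described above.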


1- Increase the baseDifficulty level 5x
2- Difficulty = baseDifficulty / repsWeightPercent
3- Min Difficulty limit (Difficulty = Difficulty>minDifficulty?Difficulty:minDifficulty)

If you want to make more transactions, you should either have a lot of nano or people should trust you.
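As a sketch of the three steps above, assuming `repsWeightPercent` is the delegated-weight percentage of the representative vouching for the account (the names and the zero-weight fallback are assumptions, not protocol behavior):

```python
def required_difficulty(base_difficulty: float,
                        rep_weight_percent: float,
                        min_difficulty: float) -> float:
    """Difficulty = baseDifficulty / repsWeightPercent, floored at a minimum."""
    if rep_weight_percent <= 0:
        return base_difficulty  # no delegated weight: pay the full (5x) base
    return max(base_difficulty / rep_weight_percent, min_difficulty)
```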

I first suggested this on discord and I was asked to bring it here:

First, this idea does not prevent spam and has nothing to do with PoW; its goal is to prioritize transactions once network capacity is reached. I'm in the philosophical camp that assumes spam is indistinguishable from usage except for intent. That's why I focus on how to deal with saturation instead of how to prevent it.

We will split transactions into two types: regular and de-prioritised. To do so, we use something akin to Bitcoin's days destroyed. Since Nano has no universal state, nodes could keep a list of all recent senders and destinations (addresses). This list does not have to be long; since we are talking about spam, a few seconds should do, so a 10s rolling list at 1,000 tps would mean 20,000 addresses.

With that list in hand, the node checks every transaction to see if either the sender or the destination is there. If so, it goes into the de-prioritised list; if not, the regular list. Transactions are then processed in the current order, except that the de-prioritised list is only worked once the regular list is empty (i.e. when the node is under full capacity).
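The two-list mechanism can be sketched roughly as below; the 10-second window matches the example, while everything else (names, dict-shaped transactions) is illustrative:

```python
import collections

class Prioritizer:
    """Two-queue scheme: recently seen addresses get de-prioritised."""

    def __init__(self, window_secs: float = 10.0):
        self.window_secs = window_secs
        self.recent = collections.deque()  # (timestamp, address) pairs
        self.regular, self.deprioritized = [], []

    def _expire(self, now):
        while self.recent and now - self.recent[0][0] > self.window_secs:
            self.recent.popleft()

    def submit(self, tx, now):
        self._expire(now)
        seen = {addr for _, addr in self.recent}
        queue = (self.deprioritized
                 if tx["sender"] in seen or tx["destination"] in seen
                 else self.regular)
        queue.append(tx)
        self.recent.append((now, tx["sender"]))
        self.recent.append((now, tx["destination"]))

    def next_tx(self):
        # De-prioritised work only runs once the regular queue is empty.
        if self.regular:
            return self.regular.pop(0)
        return self.deprioritized.pop(0) if self.deprioritized else None
```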

Can the spammer circumvent the system by messing around with accounts? Not initially. A spam attack starts by acquiring X nano in a single account (bought, faucet, stolen, whatever). If I just try to ping-pong (send from A to B, from B to C, from C to D, etc.), the system sees account B trying to send right after receiving and puts it on the de-prioritized list. If I try to send to different accounts (A -> B, A -> C, A -> D, etc.), it is still detected by the constant sends from A. As far as I can tell, every other possible scenario is a variation of this, unless the X nano starts out already spread across different accounts. Since all Nano started in the genesis account, that requires someone else to do the initial PoW used to distribute the nano across all those accounts; this would mean, for example, buying nano on an exchange and making several withdrawals to different accounts, or using a faucet the same way. This does not seem viable to me: to circumvent the prioritization system he would need to spread the nano across far too many accounts, one for each transaction, until the window expires (10 seconds in the example).

  • Advantages:
  1. Most services would not be affected. Exchanges would, since all their sends come from the hot wallet, but only if their send rate exceeds one send per window, and only during a saturation period (so, in our example, more than one send every 10 seconds during a spam attack).

  2. Agnostic for value of transaction, so no bad effects on weaker economies.

  • Problems:
  1. The list must be kept in RAM to make the checks viable, and I don't know how to calculate the resource impact of maintaining and checking all blocks over this moving window.

  2. Those lists are not equal across the nodes on the network. But since we are talking about spam, the guilty parties should be agreed upon even if the lists as a whole are not identical.

  3. The network does not have a "single" capacity; each node has its own. How will the network react once some nodes start to process de-prioritized transactions while others are still at full capacity?

  4. More importantly, this only works once. If the spam attack is designed to spread the nano over several accounts, the first attack would be de-prioritized and have no effect, but subsequent attacks would not be (now his accounts can send without repetition of sender or receiver). Using higher PoW for open blocks increases the cost of the second attack, but the issue persists for future attacks: the first is ineffective, the second is effective but more expensive, and the third and so on are now cheaper and effective. It's almost like I'm rewarding persistence :smiley:


Ok, to extend on that suggestion I think we can add this to close the system:

We make prioritization of transactions by the amount being moved the default on nodes. As I've posted, the first system above works as a means of pushing an eventual spammer towards a particular form of spam:

Transactions from independent accounts, until the running window resets and he is able to reuse his accounts without being tagged as spam. That means the number of accounts required for this attack is a function of the running-window size and the spammer's desired tps. Since I'm concerned about saturation here, spam attacks that don't reach saturation don't matter, so the desired tps will always be the network's full capacity minus currently used capacity.

So for the current state of the network, assuming a 10s window and 500 tps capacity, that attack would require 5,000 accounts. With prioritization based on amount, low-amount transactions would stay pending until the network is under capacity again and they can be processed; that requires the spam transactions to carry significant amounts in order to be impactful and take priority over regular transactions. This would force the attacker to acquire non-trivial amounts of Nano to perform the attack in any meaningful way. Of course, he would still own the funds at the end of the attack, so this is not a loss but an opportunity cost.

  • Advantages
  1. Forcing the attacker to either pay a lot in PoW or hold a stake in the Nano network reduces the interest in attacking, since successful attacks should reduce the usability of the network, and therefore its value and his investment. It's the same argument as for consensus, except here it is far weaker since the stake is a lot smaller. This system, in conjunction with the one described in the original post, forces the attacker to choose between three basic options:
    1. Have virtually no stake and take priority over most transactions on the network, but pay a lot in PoW (50 Nano ping-ponging between 2 accounts, triggering the exponential PoW increase for his accounts).
    2. Pay default amounts of PoW and have virtually no stake in the network, but lose priority against almost all regular transactions (0.000001 Nano spread over thousands of accounts to avoid the PoW increase). The spam transactions would always be at the end of the prioritization line. I would hardly call this a spam attack, since it does not really affect the transactions being made; it's more of a ledger-bloat attack.
    3. Pay default amounts of PoW and hold a meaningful stake in order to jump ahead in the prioritization queue (thousands of accounts, each with enough Nano to take priority over regular transactions).

To know how well the third option works as a deterrent, we can estimate the stake required for different levels of success. Currently the median Bitcoin transaction is almost 400 dollars; in Nano that would naturally be lower (no fees), but even at 1% of that value, it amounts to 4 dollars per transaction to be in the top 50% of priority, so 4 dollars per account used in the spam (just to take priority over half of the regular transactions). That would amount to only 20,000 USD for 500 tps with a 10s window.

This is not a lot, but the number is a function of network saturation, which is ever expanding, even if only due to hardware/bandwidth/technological advancements unrelated to Nano. Also, if the attacker chooses to cash out the Nano immediately after attacking, he cannot attack again without redistributing the Nano, with the associated cost (problem 4 from the previous post). If he chooses to keep the Nano so he can attack again cheaply, he keeps his money exposed in Nano, which makes the opportunity cost larger (one can assume an attacker does not desire exposure to Nano).

There's also a way to make the required number of accounts increase significantly: increase the running time window. We can make the window dynamic, changing it predictably but not deterministically. Say, for example, that on average every 5,000 blocks we change the window from 10 seconds to 100 seconds, and then change back to 10 seconds after, on average, 1,000 blocks. Any attack from multiple accounts would then need to size its account count for the 100-second window, because midway through the attack the window is likely to change and the attacker does not know when (that's where the non-deterministic part matters).

So while nodes would incur the extra resource requirements of the 100-second window only about 16% of the time on average, the attacker would need to account for it over his whole attack, or risk being caught as a spammer and having his attack halted and his PoW thrown away (all his subsequent blocks would be invalid due to insufficient PoW). But I have to admit I have not thought this part through yet, and there could be more pitfalls I have not yet seen.

By the way, to synchronize the nodes on the current running window, we could just use the hashes of blocks being added to the ledger. Every time a node adds a block whose hash matches a pattern that should occur only once every 5,000 blocks, it increases its own window; once it adds a block matching a pattern that should show up only once every 1,000, it decreases it back to 10s. This would guarantee synchronicity (enough of it, at least) for agreement across all nodes on which window to use.
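The hash-pattern idea could be sketched as follows; modeling "a pattern that occurs once every N blocks" as the hash (as an integer) being divisible by N is an illustrative choice, as are the window constants:

```python
SHORT_WINDOW, LONG_WINDOW = 10, 100  # seconds, per the example in the post

def update_window(current_window: int, block_hash: bytes) -> int:
    """Flip the rolling-window size based on patterns in ledger block hashes.

    All nodes see the same confirmed hashes, so they flip (nearly) in sync.
    """
    h = int.from_bytes(block_hash[:8], "big")
    if current_window == SHORT_WINDOW and h % 5000 == 0:
        return LONG_WINDOW   # pattern expected ~once every 5,000 blocks
    if current_window == LONG_WINDOW and h % 1000 == 0:
        return SHORT_WINDOW  # pattern expected ~once every 1,000 blocks
    return current_window
```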

  • Disadvantages
  1. Advantage 2 from the previous post is gone. Prioritization based on amount will always hurt the smaller economies: every time the network is saturated, it's their transactions that will suffer (spam attack or not). But I simply can't think of a way to avoid punishing the poor while also hurting a potential spammer. In any case, their transactions would be slower, but they would still require the same amount of PoW and remain feeless (they would be punished with delays, not cost increases).

  2. As usual, the devil is in the details. Even if there are no logical flaws and the incentives are enough to reduce spam, implementing this might be challenging. I honestly don't know.

There's one thing I forgot to mention: in the original system, a transaction would be tagged as "spam" and have its PoW increased by appearing repeatedly within the moving window. This severely affects some providers who aggregate funds (exchanges). Originally this would not have been a problem, because the only punishment was de-prioritization; but now that we are actively increasing PoW requirements, it's a lot more damaging. The solution is to not flag a transaction as spam unless two conditions hold:

  1. The previous one (multiple showings in the running window)

  2. The node is working at max capacity. The node would only look at itself, of course, and I honestly don't know how to measure whether a node is running at its limits (maybe the time between a block being received and being processed?)

So transactions would only have their PoW increased if they push the network to capacity. Otherwise they would just be de-prioritized.

This way, services that happen to transact at a frequency that tags them in the moving window won't have their PoW increased unless the nodes reach capacity; they would just be processed last.


Here's an idea for a workable time-based pre-computation limit within the existing network's limitations, which I recently expressed on Discord.

Require the PoW to be hashed with the sender's PR's frontier.
Current hash: blake2b(transaction_nonce||prev_block_hash)
Proposed hash: blake2b(transaction_nonce||prev_block_hash||pr_frontier_hash)

Whenever your PR sends a transaction with their PR node, all work previously computed would be rendered invalid. No one can predict any particular PR's transactions except that PR.
This allows each PR to send a transaction every few hours, days or even minutes to 'reset' any conceivable pre-computed attack. It does not centralize the existing network, because each PR is allowed to set their own 'refresh rate' so to speak, and senders can use any PR they wish.
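A sketch of the proposed check using Python's `hashlib.blake2b`. Real Nano work uses an 8-byte nonce and compares a 64-bit blake2b output against a threshold; binding in the PR frontier is the new part here, and the exact byte layout and endianness are illustrative assumptions:

```python
import hashlib

def work_valid(nonce: bytes, prev_block_hash: bytes,
               pr_frontier_hash: bytes, threshold: int) -> bool:
    """Proposed: blake2b(transaction_nonce || prev_block_hash || pr_frontier_hash)."""
    digest = hashlib.blake2b(
        nonce + prev_block_hash + pr_frontier_hash, digest_size=8
    ).digest()
    return int.from_bytes(digest, "little") >= threshold
```

The moment the PR publishes a new block, `pr_frontier_hash` changes and any nonce pre-computed against the old frontier stops validating.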


Pros:

  • Limits possible pre-computed PoW at the PR's discretion
  • Does not require increased network bandwidth or other resources; only one hash needs to be completed to verify the transaction.
  • Does not require frequent fork resolution like adding a degree of true randomness or a time-based measure would.
  • Still allows precaching and does not put a hard limit on it.


Cons:

  • Requires clients to occasionally pull the frontier from their node if their transaction is not rebroadcast, similar to how they currently resubmit a transaction with higher PoW after a few seconds if it is not rebroadcast.
  • Precached PoW would now have a shelf life, and occasionally need to be recomputed.
  • The lowest common denominator of PRs with open APIs sets the effective time limit; so if any PRs choose to never send transactions to 'reset' PoW, this measure is rendered ineffective.
  • Possible edge cases where transactions sent immediately after a PR publishes a block of its own might be rejected by other PRs that have not yet received the new frontier, since the network cannot guarantee processing transactions from different accounts (in this case, a PR and its reliant accounts) in a particular order.

This is an interesting idea. To help mitigate some of the drawbacks mentioned you could consider a small number of hashes beyond just the frontier as valid for PoW. That way proper distribution of a new frontier can be allowed to happen with minimal issues (wallets don't start using a new frontier for a while to allow for the distribution time). Some additional things to consider:

  • With PoW as one of the first levels of DoS protection it is helpful to have all inputs for work verification to be included in the block for easier access. Given the limited scope of how many PR frontiers there could be at any given time though, this seems like a reasonable set of values to keep refreshed in memory to use for validation.
  • As you mentioned there would have to be consideration for PRs not adding new blocks often enough. This would be an incentive to re-delegate votes to those "lazy" PRs for accounts who don't want their work invalidated - which is basically anyone on the network. So some changes would have to be put in place to discourage that.
  • Also, as a spammer, I could set up a rep, get to PR status (or buy someone's PR key), prepare a bunch of transactions to get the hashes needed for PoW, pre-compute PoW off of those, and then schedule out an attack. It is a longer attack requiring more effort and planning, but one to consider.

Something to add to this brainstorming session: two approaches, one based on block-speed negotiation, the other based on priority processing using transaction PoW difficulty and transaction value. The latter simply builds on DPoW and prioritizes based on an additional key (i.e. transaction value).

Changing node behavior from FIFO processing to priority processing should help during network saturation events. Priority based on transaction value should be considered alongside transaction PoW difficulty, given that a transaction with a higher value has a higher probability of being valid.

First, the simplest approach to help control network congestion is to allow nodes to use the handshake packet to negotiate an acceptable rate of exchange with peers: Max_Rate / Total_Reps = rate of exchange.

Max Rate = 1000 blocks/sec
Total Reps = 100

The expected rate negotiated with each peer is then 10 blocks/sec max (values will differ between peers based on differences in hardware, network bandwidth, etc.). This limit applies to all block types (transaction blocks, vote blocks, etc.), but it creates a very stable and predictable platform during high network activity. Block propagation can be adjusted to honor the negotiated rate; slower peers will get blocks more slowly than higher-performing nodes.

This serves the same purpose used by many systems such as line speed for digital communication, traffic speed for a more uniform traffic flow on highways (although multi-purpose), etc... Although appropriate, this idea may not be suited for this type of service.

The 2nd approach would be to introduce priority queues to limit rate of block processing/propagation based upon block's PoW difficulty and transaction value. Priority is currently being done by DPOW, my suggestion is to basically add in transaction value as another constraint.

Transaction value thresholds should be derived through block sampling to identify min/max values within a user-defined period (or use the minimum_receive value defined in the node's config file). The min/max values can be used to generate an average transaction threshold value (e.g. sampled every 300 seconds). A rolling average for transaction value should be computed, such that blocks that fall below this threshold are simply queued and processed at a slower rate (e.g. 10 blocks/sec).

**Average_trx_value_threshold = (Current_trx_value_threshold + ((MAX + MIN) / 2)) / 2**

This should allow the average transaction value threshold to adjust dynamically based on sampled network activity.
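The bolded formula is a simple exponential smoothing of the threshold, re-sampled (e.g.) every 300 seconds. As code (names follow the post; the function is just a restatement):

```python
def update_threshold(current_threshold: float,
                     sampled_min: float, sampled_max: float) -> float:
    """Average_trx_value_threshold = (Current + ((MAX + MIN) / 2)) / 2."""
    return (current_threshold + (sampled_max + sampled_min) / 2) / 2
```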

Transaction Priority Order:

  • High PoW difficulty and high transaction value (No Delay)
  • High PoW difficulty and low transaction value (No Delay if difficulty >= DPOW active difficulty during period, otherwise rate limited)
  • Low PoW difficulty and high transaction value (No Delay if value >= ((max+min)/2) trx value during period, otherwise rate limited)
  • Low PoW difficulty and low transaction value (Rate limit based on sampling threshold detection)
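The four tiers above could be classified as follows; `active_difficulty` stands in for the DPoW active difficulty and `value_threshold` for the sampled (max+min)/2, both hypothetical names:

```python
def priority_tier(difficulty: float, value: float,
                  active_difficulty: float, value_threshold: float) -> int:
    """Return 0 (highest priority) through 3 (most rate-limited)."""
    high_pow = difficulty >= active_difficulty
    high_value = value >= value_threshold
    if high_pow and high_value:
        return 0  # no delay
    if high_pow:
        return 1  # no delay while difficulty holds, else rate limited
    if high_value:
        return 2  # no delay while value holds, else rate limited
    return 3      # rate limited based on the sampling threshold
```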

This does add additional memory cost for a node operator (the queues). Creating two new user-defined config entries, one to control the size of the backlog and one to enable discarding of new blocks that breach the threshold, should alleviate this memory concern.

PoW difficulty increases are better suited to reducing transaction spam and ledger bloat, though they shift costs in time and money onto the client and/or rep owner. Rate-limiting priority processing/propagation further discourages spam, since it increases the cost in time (not money) of saturating the network. It does not eliminate ledger bloat.

Unfortunately, the only thing that helps reduce ledger bloat is pruning. I believe separating the solutions for ledger bloat vs. network congestion is the best way to identify an optimal solution all parties can agree on, since focusing on a single solution will create an imbalance for users and/or node operators.