Below are some of the top areas of friction with work generation, worth discussing and expanding on so that appropriate solutions can be targeted.
Searching for more egalitarian work generation
It has always been a goal to keep work generation as egalitarian as possible, so that those with access to specialized, more expensive hardware have limited scaling potential compared to those running consumer and commodity hardware.
Although many projects have been reaching for this goal over the years, our goals and requirements are a bit different than most, as covered in our new PoW algorithm update. We believe moving toward a memory-hard algorithm will bring some of the desired benefits, hence the continued research into options using a new configuration of Equihash.
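As a simplified sketch of what memory-hardness means (this is not Equihash, whose solver is built on the generalized birthday problem; it is only a toy illustration), a function can be forced to keep a large buffer resident by reading it back in a data-dependent order, which blunts the advantage of small, fast, specialized circuits:

```python
import hashlib

def toy_memory_hard_hash(data: bytes, mem_blocks: int = 1 << 12) -> bytes:
    """Toy memory-hard construction (illustrative only, NOT Equihash)."""
    # Phase 1: sequentially fill a buffer with chained hashes.
    buf = []
    h = hashlib.blake2b(data, digest_size=32).digest()
    for _ in range(mem_blocks):
        h = hashlib.blake2b(h, digest_size=32).digest()
        buf.append(h)
    # Phase 2: data-dependent reads mean the whole buffer must stay
    # available; trading memory for recomputation gets expensive fast.
    acc = h
    for _ in range(mem_blocks):
        idx = int.from_bytes(acc[:8], "little") % mem_blocks
        acc = hashlib.blake2b(acc + buf[idx], digest_size=32).digest()
    return acc
```

Because commodity hardware already ships with plenty of RAM, memory-bound designs like this narrow the gap between specialized and consumer hardware more than purely compute-bound ones.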
Fixed shape of work requirements
Static thresholds in the node provide little flexibility to adjust difficulty levels for valid transactions. Given the different nature of transactions in relation to network activity, their position on related account chains, and other factors, there may be better shapes of work requirements to ensure consistent operation of the network long term.
For ideas previously discussed around this, the PoW multipliers brainstorming topic is a good one to visit. To make effective use of such dynamic requirements, reliance on top-down bootstrapping methods is being investigated.
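One way difficulty shapes are expressed in these discussions is relative to a base threshold. A sketch using the multiplier relationship from the Nano documentation (difficulties are 64-bit values close to 2^64, so relative difficulty compares the remaining headroom below 2^64; the base value here is an example):

```python
def multiplier_from_difficulty(difficulty: int, base_difficulty: int) -> float:
    # Relative difficulty of a 64-bit threshold versus the base:
    # values above 1.0 are harder, below 1.0 easier.
    return ((1 << 64) - base_difficulty) / ((1 << 64) - difficulty)

def difficulty_from_multiplier(multiplier: float, base_difficulty: int) -> int:
    # Inverse: the threshold corresponding to a desired multiplier.
    return (1 << 64) - int(((1 << 64) - base_difficulty) / multiplier)

# Example base threshold for illustration.
BASE = 0xFFFFFFC000000000
```

Flexible shapes of work requirements could then be described as multipliers over the base (e.g. 1/8x for a receive, 2x during congestion) rather than as fixed thresholds baked into the node.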
Advantages of spammer pre-computing abilities
The current design favors spammers over regular users and services due to an information asymmetry around transactions. A spammer has no unknowns in the work generation process: they define the amounts and destinations themselves and control every part of the transaction.
Services and regular users, on the other hand, rarely know the amount or destination ahead of time, so at most they can pre-compute a single work value per account. For many cases this is sufficient, but it also puts more emphasis on services having resources ready for on-demand work, because there are limits to how much batching can be done.
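The asymmetry comes down to what each party must know before generating work: work is computed over the account's frontier (or the account's public key for its first block), not over the amount or destination. A sketch following the blake2b-based scheme Nano uses, with a toy threshold far below the real network's so the search completes quickly:

```python
import hashlib
import itertools

def work_valid(root: bytes, nonce: int, threshold: int) -> bool:
    # Nano-style validation: 8-byte blake2b over (nonce LE || root),
    # interpreted as a little-endian integer, must meet the threshold.
    h = hashlib.blake2b(nonce.to_bytes(8, "little") + root, digest_size=8)
    return int.from_bytes(h.digest(), "little") >= threshold

def generate_work(root: bytes, threshold: int) -> int:
    # Only the root is needed -- not the amount or destination. This is
    # exactly the spammer's edge: knowing every field in advance, they can
    # pre-compute work for long chains, while a service can pre-compute
    # at most one value per account frontier.
    for nonce in itertools.count():
        if work_valid(root, nonce, threshold):
            return nonce

# Toy threshold for demonstration; real network thresholds are much higher.
TOY_THRESHOLD = 0xF000000000000000
```

Once a block is published, the frontier changes and the next work value depends on the new frontier hash, which is why batching ahead of time only goes one block deep per account.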
Limiting of pre-computing
One major inflection point in discussions around work generation is whether solutions for limiting the pre-computation of work should be explored. Throttling pre-computation could change the rate of throughput on the network, but because it would be enforced at the account level, its effect may be limited as spammers spread horizontally across many accounts.
One result of a shift in this direction would be more reliance on on-demand work generation for certain use cases, since any solution that pushes multiple transactions onto a single account faster than its current work generation capability would be throttled to that rate.
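One hypothetical way to reason about account-level throttling is a per-account token bucket: pre-computed work lets a sender burst briefly, but sustained throughput on a single account falls back to the refill rate. (The class and parameters below are illustrative only, not a feature of the Nano node.)

```python
import time

class AccountWorkThrottle:
    """Illustrative per-account token bucket for throttling transactions."""

    def __init__(self, rate_per_sec: float = 1.0, burst: float = 4.0):
        self.rate = rate_per_sec    # sustained transactions per second
        self.burst = burst          # pre-computed work allows this burst
        self.buckets = {}           # account -> (tokens, last timestamp)

    def allow(self, account: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(account, (self.burst, now))
        # Refill credit earned since the last transaction on this account.
        tokens = min(self.burst, tokens + max(0.0, now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[account] = (tokens - 1.0, now)
            return True
        self.buckets[account] = (tokens, now)
        return False
```

Spreading horizontally simply means holding many buckets at once, which is why account-level throttling alone has limited effect on a determined spammer while still constraining burst behavior per chain.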
What other areas exist with work generation that are causing friction with your integrations or participation on the Nano network?