So, background: ORV has some long-term issues that we are now better prepared to address.
- Voter passivity and lost Nano. This one is simple: some coins are lost, and some people never choose a representative or never replace a representative that is long gone. While voting is done according to some % of online weight, there is also an absolute quorum limit (~60 million Nano). The rationale is that in case of a network split, forking is never possible: both parts simply go offline until connectivity is restored.
The thesis I will defend is that this absolute limit should be removed; arguments follow.
- Inherent censorship vulnerability via big representatives. This is widely discussed right now, mostly in a hysterical tone (e.g. "withdraw your funds from Binance"). Now, a bit of game theory on this matter.
On-chain incentives have been discussed a lot in the context of pure PoS vs DPoS. Apparently, DPoS systems are (1) very prone to voter bribery, which was actually observed in EOS, and (2) very prone to validator collusion and censorship once the incentives arise, again observed in EOS in the Chinese vs Western validators conflict.
ORV is largely exempt from these problems for now, because there is no pie to share: validators are not incentivised, which naturally removes most of the issues. However, in the long term (at a scale of adoption where altruistic validators start to fall off and off-chain incentives begin to dominate), ORV is subject to the same problems. Above all, it is prone to political collusion of validators aiming to censor parts of the network (say, by freezing funds).
Assumptions under which the following propositions are made:
1 - We have signed timestamps.
2 - We have a node discovery algorithm such that an attacker cannot arbitrarily disconnect the network. In my opinion, the most robust approach is to have each node choose trusted connections (3-4 peers) that are always prioritized for bootstrapping, and to use standard Kademlia node discovery for other connections. While this is a trust assumption (the node runner must manually choose these connections), it is roughly similar to the trust assumptions we already rely on, and it only matters in case of a network split.
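The peer-prioritization idea in assumption 2 can be sketched as follows. This is an illustrative sketch only; `bootstrap_order`, the peer lists, and the `limit` parameter are all hypothetical names, and real Kademlia discovery is abstracted away as a plain list of discovered peers.

```python
# Hypothetical sketch: manually chosen trusted peers are always contacted
# first during bootstrapping; remaining slots are filled from peers found
# by ordinary discovery (e.g. Kademlia), skipping duplicates.

def bootstrap_order(trusted, discovered, limit=16):
    """Return the peers to contact, trusted connections first."""
    ordered = list(trusted)
    for peer in discovered:
        if len(ordered) >= limit:
            break
        if peer not in ordered:
            ordered.append(peer)
    return ordered
```

Because the trusted peers are always at the front of the list, a partitioned node reconnects through them first once connectivity is restored, which is exactly the case where this assumption matters.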
3 - The change-representative mechanism is made more conservative. Users can still change their representative instantly, but we have "epochs" roughly a day long, during which representative weight is constant; it only changes between epochs. Epoch time is determined by signed timestamps, and at each epoch change the network halts all voting for a time of ~4T, where T is the network sync time plus the allowed local clock offset. It should also guarantee determinism of block confirmation; see this question: Does asynchronous processing of "change" blocks mean that block confirmation is non-deterministic?
Proposition 1: Create an on-chain method of validating liveness. A basic algorithm could be the following:
(step 1) The node checks what % of the blocks that are not contradicted by anything it sees (locally correct) remain unvalidated, both globally and per account.
(step 2) If some account gets a large % of its blocks censored (say, the node sees 10 attempts in a row of different locally correct blocks, none of which get through), or the global confirmation % gets too low, the node starts tracking which validators are not accepting the blocks. It is easy to see that even if the attacker controls almost all weight, each individual censoring validator has to reject at least half of the targeted blocks. Validators not accepting blocks get a reduced trust level (tracked separately for repeated censorship of the same account and for global censorship).
(step 3) If the trust level of a potentially malicious validator on the node reaches some threshold, say 0, it issues a "call to arms". The call to arms is rebroadcast only by nodes whose trust level in the same potentially malicious validator is below 0.5, and the calling node includes the transactions it witnessed that were not confirmed. Every node checks whether it witnessed these transactions too; if it witnessed at least 90% of them, it concludes that censorship is going on and the validator is malicious.
Rationale: a call to arms cannot propagate unless the validator is actually malicious (because the trust level on most nodes would be too high for rebroadcast), and if it does propagate, every honest node will agree on the matter.
Exception: a node should still have a "legal" way to go offline; this method targets nodes that keep telling the network they are online but do not vote.
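The three steps above can be sketched as follows. All names and numeric values here are illustrative assumptions: the trust scale (starting at 1.0), the per-observation penalty, the 0.5 rebroadcast threshold, and the 90% witness requirement are placeholders for whatever parameters the real protocol would choose.

```python
# Hypothetical sketch of Proposition 1's per-validator trust tracking.

class ValidatorTrust:
    def __init__(self):
        # validator id -> trust level; unseen validators default to 1.0
        self.trust = {}

    def observe_withheld_vote(self, validator, penalty=0.1):
        # Step 2: reduce trust each time a validator is seen refusing to
        # vote on a locally correct block.
        self.trust[validator] = self.trust.get(validator, 1.0) - penalty

    def should_call_to_arms(self, validator):
        # Step 3: trust reaching the threshold (here 0) triggers the call.
        return self.trust.get(validator, 1.0) <= 0.0

    def should_rebroadcast(self, validator):
        # A call to arms is only rebroadcast by nodes whose own trust in
        # the accused validator is already below 0.5.
        return self.trust.get(validator, 1.0) < 0.5


def accepts_call_to_arms(witnessed, claimed_censored, threshold=0.9):
    # A node accepts the accusation only if it witnessed at least 90% of
    # the transactions the caller claims were censored.
    seen = sum(1 for tx in claimed_censored if tx in witnessed)
    return seen / len(claimed_censored) >= threshold
```

The key property is the asymmetry: a single node cannot make the accusation spread, because rebroadcasting requires independently lowered trust on each relaying node, and final acceptance requires independent witnessing of the censored transactions.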
Proposition 2: Censorship prevention through colored delegation weight.
Now, after the call to arms, all nodes that agreed the validator is malicious treat it as offline and ignore its voting weight. To prevent an attacker from moving delegation and replaying the attack in the next epoch, we introduce an additional concept: "colored" delegation power. Each account has a parameter, defined at the start of the epoch, which is the fraction of its votes considered legitimate. By default this parameter covers all of the account's voting weight, and a call to arms sets it to 0. Going offline should also slightly reduce this parameter, but not by much. Now, when Nano gets redelegated, the legitimate and illegitimate votes are fused: if account A had L_A legitimate weight and F_A illegitimate weight, and B had L_B and F_B respectively, then redelegating some Nano from A to B moves legitimate or illegitimate weight so as to make the proportions L_A/F_A and L_B/F_B as close as possible.
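One simple reading of the fusion rule above is that the moved amount carries legitimate and illegitimate weight in the source account's current proportion; that keeps the mix of A unchanged and pulls B's mix toward A's. The function name and this proportional-split interpretation are my assumptions, not a specification.

```python
# Hypothetical sketch of colored-weight fusion on redelegation, assuming
# the transferred amount is split in the source account's current
# legitimate/illegitimate proportion.

def redelegate(L_A, F_A, L_B, F_B, amount):
    """Move `amount` of delegation weight from account A to account B.

    Returns the new (L_A, F_A, L_B, F_B) tuple.
    """
    total_A = L_A + F_A
    if amount > total_A:
        raise ValueError("cannot move more weight than A holds")
    # Split the moved amount in A's current proportion.
    legit_moved = amount * L_A / total_A
    illegit_moved = amount - legit_moved
    return (L_A - legit_moved, F_A - illegit_moved,
            L_B + legit_moved, F_B + illegit_moved)
```

For example, moving 50 weight out of an account holding 60 legitimate and 40 illegitimate weight moves 30 legitimate and 20 illegitimate weight, so the attacker gains nothing by shuffling tainted weight between accounts.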
That means we do not need to freeze the attacker's funds or anything similar (which would be suboptimal because of dusting attacks); we just color them non-voting, and dusting does not lower the voting power of legitimate users. It makes Nano slightly non-fungible (some people might not like receiving payment in non-voting Nano), so there should be a decay mechanism, slowly turning non-voting Nano back into voting Nano over time.