Nano Vote Compression

I had some ideas over the last week about more efficiently representing votes on the network for less bandwidth usage.

It has some small tradeoffs but could represent a lot of savings in the usual case and seems easy enough to implement.



I hope I don't come across as ignorant, but why do votes generate so much traffic?


For every block there are X number of Principal Representative nodes voting. So if there are 60 active PR nodes, that's 60 votes per block.

Votes are generally smaller than blocks per hash, and there are some savings from combining 12 hashes into a single vote at high TPS to reduce overhead, but because votes are multiplied by the number of PRs, they use more bandwidth than block propagation.

This would reduce bandwidth from votes by about 17% at low TPS and up to 65% at high TPS I think.
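As a rough sanity check on those percentages, here's a sketch of the arithmetic. The constants are assumptions (roughly: representative account 32 bytes + signature 64 + timestamp 8 of per-vote overhead, 32-byte block hashes truncated to 4-byte prefixes); the exact figures depend on the real wire format.

```python
OVERHEAD = 104   # assumed per-vote overhead in bytes (account + signature + timestamp)
FULL_HASH = 32   # bytes per full block hash
PREFIX = 4       # bytes per truncated hash prefix

def savings(hashes_per_vote: int) -> float:
    # fraction of vote bandwidth saved by truncating each hash to a prefix
    full = OVERHEAD + hashes_per_vote * FULL_HASH
    compressed = OVERHEAD + hashes_per_vote * PREFIX
    return 1 - compressed / full

print(f"low TPS  (1 hash/vote):   {savings(1):.0%} saved")
print(f"high TPS (12 hashes/vote): {savings(12):.0%} saved")
```

With these assumptions the savings land near 21% at one hash per vote and near 69% at twelve, in the same ballpark as the figures above.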


When paranoid bootstrapping nodes are asking for their unconfirmed ledger to be voted on by the network, they would request it for the full hash, yes? Not requesting a vote on only the 4 bytes?

It's only the returning votes that would be compressed?

The vote signature is always on the full hash.

The only change is that the wire representation of the hash is truncated, on the assumption that the correct full hash can easily be found.

It would be possible for confirm_req to specify the prefix length they want to make re-assembly easy.
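A hypothetical sketch of that re-assembly step: the receiving node indexes the hashes of its active elections by prefix and expands an incoming truncated hash only when the match is unambiguous. All names here are illustrative, not actual node code.

```python
from typing import Optional

class PrefixIndex:
    """Maps truncated vote-hash prefixes back to full block hashes."""

    def __init__(self, prefix_len: int = 4):
        self.prefix_len = prefix_len
        self.by_prefix: dict[bytes, list[bytes]] = {}

    def add_active(self, full_hash: bytes) -> None:
        # index a hash the node is currently running an election for
        self.by_prefix.setdefault(full_hash[: self.prefix_len], []).append(full_hash)

    def expand(self, prefix: bytes) -> Optional[bytes]:
        # unambiguous match -> recovered full hash; miss or local collision
        # -> None, and the node would fall back to requesting the full hash
        matches = self.by_prefix.get(prefix, [])
        return matches[0] if len(matches) == 1 else None
```

The vote signature would then be verified against the recovered full hash, never the prefix.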


Is there any way to calculate the likelihood of a prefix collision for a prefix of a given length (e.g. 4 bytes)? I asked a friend, and his guess was 2^prefixLength/2^hashLength (for completely randomized hashes), but I haven't found any papers that might confirm that. Maybe some variation of birthday problem math?

That would at least give us an idea of potential collision frequency


One other general question I have is, are we optimizing the right thing? Given that saturated beta test nodes seem to be hitting CPU constraints now (not bandwidth), how much would the decompression scheme impact CPU load?


Yep there's a pretty easy way to estimate this, if you have N bits: After 2^(N/2) different random hashes there's a 50% chance of a collision.

So for 4 bytes (32 bits), after 2^16 = 65536 hashes there's a 50% chance of at least one collision.
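For reference, that rule of thumb comes from the birthday bound, p ≈ 1 − e^(−k(k−1)/2^(N+1)) for k uniformly random N-bit values; at k = 2^(N/2) this works out to about 39%, which is usually rounded up to "about 50%". A small sketch:

```python
import math

def collision_probability(k: int, prefix_bits: int) -> float:
    # birthday bound: probability of at least one collision among k
    # uniformly random values of the given bit length
    return 1 - math.exp(-k * (k - 1) / (2 * 2**prefix_bits))

# 4-byte (32-bit) prefix after 2^16 = 65536 observed hashes
p = collision_probability(2**16, 32)   # ~0.39, the "about 50%" rule of thumb
```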


From a light node's perspective, this breaks. Remember that these nodes (light/pruned) don't store all blocks, so it's not possible to reconstruct the full hash that the signature covers.

Every light node interested in the block will have to send a confirm_req again, to all nodes, just because it can't decompress that "compressed vote". Currently, a light node watching the real-time data can parse the votes and validate only the one block it is interested in (a receive/send of account X).


That's a good point. Maybe nodes can state, as part of their peering handshake, the minimum prefix size they want to accept. Since nodes won't forward votes unless they can be properly decompressed, they'll know the full vote contents before they'd forward them on.

I think DoS attacks based on this would be a big problem. You can efficiently generate collisions using Pollard's rho algorithm, as we've discussed before with nanopow. Generating a large number of collisions could drastically slow the network down, since the signature of each would need to be verified against the hash of each, about O(n^2), and this needs to be done for each vote.
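To illustrate why memoryless collision finding is cheap, here's a toy sketch of Pollard's rho (Floyd cycle-finding variant) against a truncated Blake2b hash. It uses 3-byte prefixes so the demo finishes in a few thousand steps; a 4-byte prefix would take ~2^16 steps, still trivial for an attacker. This is illustrative only, not actual attack code against any node.

```python
import hashlib

def h(x: bytes, nbytes: int = 3) -> bytes:
    # truncated hash, standing in for a vote-hash prefix
    return hashlib.blake2b(x, digest_size=32).digest()[:nbytes]

def rho_collision(seed: bytes = b"seed", nbytes: int = 3):
    # Floyd cycle detection on the iterated truncated hash:
    # ~2^(bits/2) work and O(1) memory -- no table of seen hashes.
    slow = h(seed, nbytes)
    fast = h(slow, nbytes)
    while slow != fast:
        slow = h(slow, nbytes)
        fast = h(h(fast, nbytes), nbytes)
    # rewind one pointer to the seed; the first time the two pointers'
    # images coincide, the pointers themselves are two distinct inputs
    # whose truncated hashes collide
    x, y = seed, slow
    while h(x, nbytes) != h(y, nbytes):
        x, y = h(x, nbytes), h(y, nbytes)
    return x, y

x, y = rho_collision()
assert x != y and h(x) == h(y)   # two distinct inputs, same 3-byte prefix
```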


Yea, that's definitely the biggest issue; there can't be a degenerate case.

After some quick napkin math, I'd recommend a 16 byte (128 bit) prefix, which gives 64 bits of security against collision attacks and should be enough to prevent DoS. This is about double what you proposed, which I think is reasonable because collision attacks effectively halve the bits of security.
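The napkin math in code form (illustrative only): a birthday attack finds a collision in roughly 2^(bits/2) hash evaluations, so a prefix keeps only half its bits as security against deliberately manufactured collisions.

```python
def collision_security_bits(prefix_bytes: int) -> int:
    # birthday attack cost is ~2^(bits/2), so collision resistance
    # is half the prefix length in bits
    return prefix_bytes * 8 // 2

assert collision_security_bits(4) == 16    # ~2^16 hashes: trivially attackable
assert collision_security_bits(16) == 64   # the 64 bits of security cited above
```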