Replacing vote sequence numbers with a timestamp

This change is to replace vote sequence numbers with unique timestamps.

Currently, each vote contains a sequence number that identifies the most recent vote from each representative. This number starts at 0 and is incremented for each vote generated. The sequence number prevents vote replays from being processed: only the vote with the highest sequence number for each election is considered when confirming.
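As a minimal sketch of this replay filter (the class and method names here are illustrative, not the node's actual API), a node only needs to remember the highest sequence number seen from each representative:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Sketch of the replay filter described above: a vote is only
// processed if its sequence number is higher than the last one
// seen from that representative.
class vote_filter
{
public:
	// Returns true if the vote is fresh and should be processed.
	bool accept (std::string const & representative, uint64_t sequence)
	{
		auto it = highest_.find (representative);
		if (it != highest_.end () && sequence <= it->second)
			return false; // replayed or stale vote: ignore
		highest_[representative] = sequence;
		return true;
	}

private:
	std::unordered_map<std::string, uint64_t> highest_;
};
```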

Sequence numbers are simple; however, the number needs to be maintained for the lifetime of the representative. The sequence number is periodically saved to disk so a representative can recover it when the node is restarted. If a representative key is moved to a new ledger, or the stored sequence number is otherwise lost, nodes on the network replay the most recent vote they've seen from this representative back to it, so the sequence number can be resynchronized.

Using a timestamp as the vote sequence number lets nodes rely on local clock synchronization to generate a unique value, without needing to record the latest sequence number to disk and without needing functionality to replay votes when a representative key has been moved.


Besides what you've mentioned here for sequence vs timestamps, are there other benefits you see down the road that these timestamps could be used for?

Seems the precision should be sufficient to prevent overlapping timestamps between forks, but I'm curious whether you anticipate any issues with either the process of generating the timestamps (performance) or potential overlaps if two forks happen to arrive at identical times, etc.?

Thanks for sharing, it's nice to see more topics like this pushed out to the forum/community to participate in the discussion!

I think there could be future benefits to having a timestamp somewhere in the system. While I don't see a need for it in blocks themselves, votes are mostly ephemeral and these timestamps are the same size as existing sequence numbers. This new functionality could be built on the timestamps in votes to provide what we need.

The generated timestamp is always unique from the perspective of the representative. Even if two timestamps are requested in the same millisecond, the lower 20 bits are a monotonically increasing, atomically updated counter that guarantees uniqueness.
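A hedged sketch of how such a generator could work, assuming the upper bits hold milliseconds since the epoch and the lower 20 bits hold the per-millisecond counter (the node's actual layout and code may differ):

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>

// Illustrative unique-timestamp generator. Upper 44 bits:
// milliseconds since epoch; lower 20 bits: counter, allowing up to
// 2^20 unique timestamps per millisecond. Values are strictly
// increasing even when the clock has not advanced.
class timestamp_generator
{
public:
	uint64_t now ()
	{
		auto ms = static_cast<uint64_t> (
			std::chrono::duration_cast<std::chrono::milliseconds> (
				std::chrono::system_clock::now ().time_since_epoch ())
				.count ());
		uint64_t candidate = ms << 20;
		uint64_t last = last_.load ();
		while (true)
		{
			// If the clock hasn't advanced past the last issued value,
			// bump the lower-bits counter instead so the result stays
			// unique and monotonic.
			uint64_t next = candidate > last ? candidate : last + 1;
			if (last_.compare_exchange_weak (last, next))
				return next;
		}
	}

private:
	std::atomic<uint64_t> last_{ 0 };
};
```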

Notably, there isn't agreement on the notion of time; the timestamp is from the perspective of the representative's machine.

One additional use of the vote timestamps would be to get an objective proof of confirmation. Right now our confirmation is interactive, requiring observing and/or requesting votes from representatives.

An objective proof would be durable and could be checked trustlessly by anyone, without risk of replays.

I'm working on a formal verification of this method, which would require these timestamps, but since timestamps are better in other ways it makes sense to put them in regardless.

You could make the timestamps "more or less" universal by taking the weighted average of the timestamps of all the votes until you reach the cementing threshold.

It's obviously not universal, but there will be agreement with some precision. Might be useful for some stuff...
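The idea above could be sketched as follows; the types and threshold handling are illustrative assumptions, not node APIs:

```cpp
#include <cstdint>
#include <vector>

// Hedged sketch of the weighted-average idea: fold in votes until
// the summed weight reaches the cementing threshold, then take the
// weighted mean of their timestamps.
struct vote_info
{
	uint64_t timestamp; // milliseconds since epoch
	uint64_t weight;    // representative voting weight
};

// Returns the weighted mean timestamp of the first votes whose
// cumulative weight reaches `threshold`, or 0 if it never does.
uint64_t consensus_time (std::vector<vote_info> const & votes, uint64_t threshold)
{
	unsigned __int128 weighted_sum = 0; // avoid overflow of ts * weight
	uint64_t total_weight = 0;
	for (auto const & v : votes)
	{
		weighted_sum += static_cast<unsigned __int128> (v.timestamp) * v.weight;
		total_weight += v.weight;
		if (total_weight >= threshold)
			return static_cast<uint64_t> (weighted_sum / total_weight);
	}
	return 0;
}
```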

I think where most projects run into issues is when they try to reach consensus on time in order to reach consensus on transactions, which is circular. We'll need to make sure we avoid that situation.

Probably a dumb question, but could these vote timestamps somehow be incorporated into potential PoW expiration to prevent long-term precomputing? E.g. average the vote timestamps, and if the PoW is older than some threshold (24 hours, 7+ days, or whatever is decided), it must be recomputed.

It's not stored, so it couldn't be used for bootstrap validation of PoW, but maybe that doesn't matter? Real-time validation seems more important.

Hoping this could be a possibility to solve the precomputation situation and provide a limit!

Non-expiring precomputing appears to be quite important for the continued viability of ultra low cost services built on nano. Especially with the recent difficulty increase, it is far preferable for the good of these services to limit precompute-ability to a small number of transactions than to expire PoW.

Are there any legitimate reasons to precompute for more than a few days? It seems to me that the risk of allowing 7+ day precomputes is greater than the benefits it might bring. Anyone can precompute for a few weeks to build a stash of blocks to spam, but far fewer people have the resources to saturate the network in real time.

A straight average is probably not a great way to do it, though, since it could easily be shifted by one or two participants sending extreme timestamps (e.g. 0 or the maximum value).
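One common way to resist such outliers (a suggestion in the spirit of this concern, not something specified in the thread) is a weight-weighted median rather than a mean; an extreme value only shifts the result by its own share of the ordering. The types below are illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hedged sketch: weight-weighted median of vote timestamps.
struct weighted_ts
{
	uint64_t timestamp;
	uint64_t weight;
};

uint64_t weighted_median (std::vector<weighted_ts> votes)
{
	std::sort (votes.begin (), votes.end (),
		[] (weighted_ts const & a, weighted_ts const & b) {
			return a.timestamp < b.timestamp;
		});
	uint64_t total = 0;
	for (auto const & v : votes)
		total += v.weight;
	uint64_t cumulative = 0;
	for (auto const & v : votes)
	{
		cumulative += v.weight;
		// Return the first timestamp at which half the total weight
		// is reached; outliers with small weight cannot move this far.
		if (2 * cumulative >= total)
			return v.timestamp;
	}
	return 0; // empty input
}
```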

@Qwahzi There are some reasons to keep at least one precomputed work value for a long duration, though an alternative would be to let it expire and re-precompute when the user's wallet activates or there's other activity from the user.

It's possible these requirements could be built on this timestamp. One of our first goals is to eliminate the need to store the work field alongside the block, which would require it to not be necessary during bootstrapping.

One thing that may break that scheme is chosen-time-point precomputation: an attacker picks the date they intend to release well in advance, so the timestamps will coincide and circumvent this as a limiter.

We discussed removing work/signatures from non-frontier blocks before, and other than requiring top-down bootstrapping, I don't remember it being an issue.

What prevents us from removing the work field today?

It might be doable today. If someone makes a PR I'll take a look; otherwise I have a few things ahead of it before I'd get to it.