(DRAFT) Latency Chains - A Nano Weather Map Proposal.
During the latest test of Nano, it became apparent that the community relies on data sources being honest and trustworthy. Great sites like NanoTicker.info were reporting telemetry data, and it occurred to me that this data is crucial to the network: its construction and availability should follow the same trustless rule that Nano does. Telemetry is useful, but AFAIK it relies on nodes reporting information honestly, which an attacker would probably not want to do. Regardless, here is an idea that may improve the situation.
I am a Nano novice and this proposal may have some big inaccuracies; please point them out. I am about to begin building a simple prototype, a reverse proxy in Rust, to test these assumptions. Your feedback and criticism are welcome.
For this example we will use 6 steps in a latency chain. The example below is also for a node joining the network; established nodes will regularly be running requests, such as performing a periodic random 6-step consecutive ping, or a 6-step random trace-route around the network, where each step is signed.
A portion of Nano bandwidth would be dedicated to revealing the physical (speed) topology of the network. Nodes would be allocated a category based on their performance in the network, 'stable' or 'unstable', which becomes the node 'state'. State is measured against a threshold starting at a practical ~0.65c (a reasonable speed-of-light measure of network latency). We cannot know how slow a communication might be, but we can know the fastest communication that is physically possible. We also know that when a node drifts beyond a normalised range over time, it can move from a stable state to an unstable one.
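To make the ~0.65c bound concrete, here is a small sketch of the arithmetic. The propagation factor is this proposal's assumption, not a measured or protocol-defined constant, and the function names are my own:

```rust
// Illustrative only: the 0.65c propagation factor is an assumption from
// this proposal, not a Nano protocol value.
const LIGHT_KM_PER_MS: f64 = 299.792_458; // speed of light, km per millisecond
const PROPAGATION_FACTOR: f64 = 0.65;     // practical fraction of c over real links

/// Upper bound on how far away a peer can be, given a measured round-trip time.
fn max_distance_km(rtt_ms: f64) -> f64 {
    (rtt_ms / 2.0) * PROPAGATION_FACTOR * LIGHT_KM_PER_MS
}

/// Lower bound on RTT to a peer at a known distance; a reply arriving
/// faster than this is physically impossible, which flags a false report.
fn min_rtt_ms(distance_km: f64) -> f64 {
    2.0 * distance_km / (PROPAGATION_FACTOR * LIGHT_KM_PER_MS)
}

fn main() {
    // A 100 ms RTT caps the peer's distance at roughly 9,743 km.
    println!("max distance for 100 ms RTT: {:.0} km", max_distance_km(100.0));
    // A peer 1,000 km away cannot answer in under ~10.3 ms.
    println!("min RTT for 1,000 km: {:.1} ms", min_rtt_ms(1_000.0));
}
```

This is the "fastest possible" half of the bound: we can never prove a node is far away, but we can prove a claimed latency is impossibly low.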
Think of it like a parcel. If you know the parcel was sent to the next city, regardless of carrier, it is likely to arrive the next day. If it was sent to a city on the other side of the world, your expectations would be different: maybe a week could elapse and you would not think there was a problem. But if your parcel did not arrive in the next city within a week, you know, without calling anyone, that there is a problem with the delivery network. Since we cannot know whether packets are being sent to a particular geographic location, we must rely on the time packets take to arrive and return as a measure of network effectiveness. Over time, this would reveal a latency map of the network. If you send a parcel every day and it arrives the next day every time, you know the destination is likely to be nearby.
A stable node is one that is performing as expected, based on its connectivity to other nodes and its provision of other services as requested. The base calculation is one of reported latency over time to that node, i.e. how far it is from other nodes in the latency chains it becomes part of. Importantly, a node cannot set its own state; it is given a state by being a party in a successful latency chain, which it can then report to the network. Success of a chain is measured by all nodes being reachable within a reasonable range of their reported latency. It is a service-level agreement with the network, enforced on nodes by making them react to random queries.
Latency chains measure state by providing latency data around the network that will either validate the chain and all the nodes within it (reporting them stable), or invalidate the chain and thus report on nodes that form an unstable latency chain. A network operator can also form their own opinion by manually querying node connectivity, or by asking a node to confirm that a portion of a path is correct.
- Every node is in a state and no nodes are in both states.
- A node joining the network is in an unstable state by default and must remain on the network long enough to attain a stable state.
- Nodes moving to an unstable state can become stable again by being included in another successful latency chain.
- Nodes can move between states or return to an unstable state if they drop off the network.
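The four rules above describe a small state machine. A minimal Rust sketch (type and method names are mine, not from any Nano codebase):

```rust
// Sketch of the two-state model: every node is in exactly one state,
// and only network events (not the node itself) move it between states.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum NodeState {
    Stable,
    Unstable,
}

struct Node {
    state: NodeState,
}

impl Node {
    /// A joining node starts unstable by default.
    fn join() -> Self {
        Node { state: NodeState::Unstable }
    }
    /// Being part of a successful (validated) latency chain grants stability.
    fn on_chain_validated(&mut self) {
        self.state = NodeState::Stable;
    }
    /// An invalidated chain returns every participant to unstable.
    fn on_chain_invalidated(&mut self) {
        self.state = NodeState::Unstable;
    }
    /// Dropping off the network also returns a node to unstable.
    fn on_disconnect(&mut self) {
        self.state = NodeState::Unstable;
    }
}

fn main() {
    let mut a = Node::join();
    println!("after joining: {:?}", a.state);    // Unstable
    a.on_chain_validated();
    println!("after valid chain: {:?}", a.state); // Stable
    a.on_disconnect();
    println!("after dropping off: {:?}", a.state); // Unstable
}
```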
Once we have state as a measure, we can construct prediction tools (e.g. Markov chains) and other tools, such as choosing more local voting nodes, i.e. not asking nodes in Australia to vote on forks in Alabama. Nodes could cluster by proximity, providing better service to each other and setting lower expectations for nodes that are further away.
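As a sketch of that proximity-based selection (illustrative only, assuming latency observations are already available from chains), choosing the k nearest voting peers reduces to a sort over observed latency:

```rust
/// Pick the k lowest-latency peers from (peer id, observed latency in ms)
/// pairs. Illustrative: a real selection would also weigh stability state.
fn nearest_peers(mut peers: Vec<(String, u32)>, k: usize) -> Vec<String> {
    peers.sort_by_key(|p| p.1);
    peers.into_iter().take(k).map(|(id, _)| id).collect()
}

fn main() {
    let peers = vec![
        ("sydney".to_string(), 280),
        ("frankfurt".to_string(), 35),
        ("virginia".to_string(), 90),
    ];
    // The two nearest peers win the vote requests.
    println!("{:?}", nearest_peers(peers, 2)); // ["frankfurt", "virginia"]
}
```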
The problems that the latency chain would address:
- How does a node know ‘where’ it is in the network without asking a watchtower?
- How does a node know if it is ahead or behind (or within some reasonable bound) the state of the network?
- How can the network reliably report on that node’s activity?
- How can the network reliably aggregate node information, and therefore get a snapshot of the network as a whole, without creating watchtowers which are open to attack?
Here is my thinking on the process so far:
- Node A wants to join the Nano network and contribute to consensus through representation; it advertises voting and other services useful to the network, like helping to construct a network latency map :).
- Node A can be honest or malicious, so we cannot trust any information the node presents, but we can place more trust in information the network tells us about the node.
- By making connections to Node A from the network we can establish
a. The latency of the node
b. Potentially a benchmark, where we ask the node to compute and return a response that cannot be known in advance, for example being presented with a nonce to be hashed
- Because the latency chain may be formed entirely of attacking nodes who wish to lie about the state of the network, chains must be available to be queried by any node at any time
- On finding an invalid chain (any request/response outside the calculated latency bounds given in the latency chain report), a node may broadcast its findings and then cast a vote to invalidate the chain; other nodes will test the latency of the chain participants, moving all nodes in the chain into an unstable state if they agree.
- A stable node is a node that holds a valid latency chain report, not invalidated by a vote
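The invalidation test above reduces to a per-hop bounds check: a chain is valid only while every participant answers within tolerance of its reported latency. A minimal sketch (the 20% tolerance and the struct are my own assumptions, not anything specified here):

```rust
/// One hop in a latency chain report: the latency the chain claims
/// versus what an auditing node just measured. Illustrative structure.
struct HopCheck {
    reported_ms: f64,
    measured_ms: f64,
}

/// A chain is valid only if every hop answers within tolerance of its
/// reported latency; one out-of-bounds hop invalidates the whole chain.
fn chain_valid(hops: &[HopCheck], tolerance: f64) -> bool {
    hops.iter()
        .all(|h| h.measured_ms <= h.reported_ms * (1.0 + tolerance))
}

fn main() {
    let hops = vec![
        HopCheck { reported_ms: 12.0, measured_ms: 13.0 },  // within bounds
        HopCheck { reported_ms: 80.0, measured_ms: 240.0 }, // way out of bounds
    ];
    // With a 20% tolerance the second hop fails, so the chain is invalid
    // and a vote to invalidate it could be triggered.
    println!("valid: {}", chain_valid(&hops, 0.20));
}
```

Note the check is one-sided on purpose: a slow reply is suspicious, while a faster-than-reported reply only tightens the node's latency bound.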
- Node A sends a message to an existing PR (Node B) to ask for an invitation to contribute
- Node B maintains a capped table of previous invitations (IP address & timestamp) which it looks up to ensure it has not seen this node before, to avoid spam requests
- Node B will trigger a 6 step latency chain, ending with Node A.
- Node B broadcasts the invitation message, which Node A picks up and stores.
- The first node to respond to that invitation becomes Node C, who starts the chain
- Node C will then sign the latency chain, creating the next block, and then broadcast it to the network
- …and so on until the chain has the required number of steps
- Each step will create a nonce that will ensure the chain of requests has a finite length and integrity.
- Each latency chain will be published by all participating nodes and can be audited from any point in the chain
- It is in Node A's interest to ensure this report contains a very low latency as they will be judged on it in some further ranking.
- Node N (the final node in the latency chain) will report back to Node A with the latency report (all blocks in the chain).
- Node A signs the final message (Mn) from Node N and uses it to request recognition as a 'stable' node.
- Nodes will randomly come in to test Node A and follow its chain in an attempt to invalidate it
- Any node reporting or providing a timestamp out of an acceptable network performance range will trigger a vote to invalidate the chain that Node A holds.
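The nonce linkage and "auditable from any point" properties in the steps above can be sketched as a hash-linked list of steps. This uses Rust's non-cryptographic `DefaultHasher` purely as a stand-in for real key signing, and all field names are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// One signed step in a latency chain. The nonce is derived from the step's
// contents plus the previous step's nonce, giving the chain a finite,
// tamper-evident structure.
#[derive(Debug)]
struct ChainStep {
    node_id: String,
    latency_ms: u32,
    prev_nonce: u64, // links this step to the one before it
    nonce: u64,      // derived from this step's contents
}

// Stand-in for a signature: a real system would sign with the node's key.
fn step_nonce(node_id: &str, latency_ms: u32, prev_nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    node_id.hash(&mut h);
    latency_ms.hash(&mut h);
    prev_nonce.hash(&mut h);
    h.finish()
}

fn append_step(chain: &mut Vec<ChainStep>, node_id: &str, latency_ms: u32) {
    let prev_nonce = chain.last().map_or(0, |s| s.nonce);
    let nonce = step_nonce(node_id, latency_ms, prev_nonce);
    chain.push(ChainStep { node_id: node_id.to_string(), latency_ms, prev_nonce, nonce });
}

/// Audit the chain: recompute every nonce and check each link. Any edited
/// latency or broken link makes the audit fail.
fn audit(chain: &[ChainStep]) -> bool {
    let mut prev = 0u64;
    for s in chain {
        if s.prev_nonce != prev {
            return false;
        }
        if step_nonce(&s.node_id, s.latency_ms, s.prev_nonce) != s.nonce {
            return false;
        }
        prev = s.nonce;
    }
    true
}

fn main() {
    let mut chain = Vec::new();
    append_step(&mut chain, "node-c", 12);
    append_step(&mut chain, "node-d", 34);
    append_step(&mut chain, "node-a", 95);
    println!("chain audits clean: {}", audit(&chain));
}
```

Because each nonce commits to everything before it, an auditor can start from any step it trusts and verify forward, which is what lets any node test Node A's chain after the fact.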
Nodes can be queried to form clusters and are incentivised to provide low latency to their neighbours and to the network as a whole. Equally, nodes that are over-performing, i.e. saturating the network, can be identified by location and throttled.
Latency chains can be implemented as a proxy system that works alongside the nodes and has no direct bearing on the network; rather, it could act as an advisory system that leverages the key-signing capabilities of the existing network.
As I said, I am a novice, and I am looking to build this in Rust as a reverse proxy that sits alongside the node (probably not on the same machine). This is very much a work in progress and the whole idea might be unnecessary or overly complex. I am looking for feedback, revisions, and a sanity check before starting to code.
Hopefully this may spark some discussion about this at least.