Representative selections for delegating Nano Foundation Development Fund voting weight

On Friday, November 22, 2019 the Nano Foundation split up the Development Fund across 48 accounts - 47 of which contain 50,000 Nano and the last holding the remaining 32,284 Nano. This was done for security reasons, and the voting weight was spread across different Principal Representatives run by the community to add redundancy for voting. This was covered in the announcements the prior week.
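For anyone checking the numbers, the split above can be verified with a couple of lines (a minimal sketch; the 2,382,284 total is implied by the figures in the post, not stated directly):

```python
# Reproduce the account split described above:
# 47 accounts of 50,000 Nano plus one account holding the remainder.
ACCOUNT_COUNT = 48
PER_ACCOUNT = 50_000
REMAINDER = 32_284

amounts = [PER_ACCOUNT] * (ACCOUNT_COUNT - 1) + [REMAINDER]
total = sum(amounts)
print(total)  # 47 * 50,000 + 32,284 = 2,382,284
```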

After splitting things up and delegating weight to representatives, there were questions about how we chose the representatives to delegate weight to. We consulted with a group of community members who have a history of tracking representatives through various public and private tools, to better inform our decisions. Below is a table with those delegations:

One of the requirements to be considered for weight delegation was already being a Principal Representative - our aim was to strengthen existing PRs, not create new ones. This was done not only to avoid any confusion about whether we were trying to create the appearance of increased decentralization by raising the PR count, but also to avoid a situation where a PR's efforts to gain weight above the PR threshold would be slowed and its status would become too heavily reliant on weight delegated by the Nano Foundation.

Here are some of the considerations we included during selection:

  • Used results from Srayman's (Discord) dropped-messages testing to help identify possibly under-spec'd nodes and focus on better-performing ones.
  • Ruled out Nano Foundation representatives, any representatives run by individual Nano Foundation members, representatives with inappropriate public names, and those whose business model already drives growth in their delegations.
  • Kept voting weight increases to under 60% of existing delegated amounts.
  • Favored a higher number of delegators, which reduces the likelihood that a small number of delegation changes forces a node to rely on NF weight to maintain PR status.
  • Considered the PR owner's level of activity within the community.
  • Favored those registered publicly with MyNanoNinja, along with the metrics that registration brings:
    • Length of time the PR has been running
    • Uptime of the node over the past year
    • Ability to contact the owner via the social media links on their profile and other contact points known to community members
    • Voting latency (a feature currently in beta in MyNanoNinja)
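To illustrate the 60% cap from the list above, the largest fund delegation a given PR could receive can be bounded like this (a minimal sketch; the helper name and example weights are hypothetical, not from the announcement):

```python
def max_fund_delegation(existing_weight: float, cap_ratio: float = 0.6) -> float:
    """Largest fund delegation that keeps the increase at or under
    cap_ratio (60% by default) of the PR's existing delegated weight."""
    return existing_weight * cap_ratio

# Hypothetical example: a PR with 1,000,000 Nano of existing delegated
# weight could receive at most 600,000 Nano under this rule, so a
# 50,000 Nano fund account is well within the bound.
print(max_fund_delegation(1_000_000))  # 600000.0
```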

As some of these values are quantitative and some qualitative, the reviews weren't summed into a single score; instead, each metric was compared across the nodes on the list of potential candidates. We are open to feedback on future approaches to deciding how dev fund voting weight should be delegated. We plan to review this list and adjust the delegations quarterly, or sooner as situations arise, to ensure stable voting is done with the delegated weight.

A big thanks goes out to those who helped provide data for evaluation and to the sites hosting awesome details about Nano representatives, including MyNano.Ninja and more. We are interested in hearing ideas and excited to see the community keep pushing forward toward even better representative evaluation - those efforts not only help us decide where to delegate the dev fund voting weight but also help inform all Nano users about their best options for representatives.


Thanks for the transparency!

Maybe my data is not reliable, but according to NanoTicker, during the recent stress tests several reps on the list performed very poorly in terms of median confirmation time. If you have much more confidence in your data than in what NanoTicker reported during the stress tests, then I'm all for it. I just want to make sure there wasn't an error in your analysis.


From my perspective, performance was the lowest-weighted category in the selection, as the data is still lacking to easily identify underperforming nodes. Luckily, 50k doesn't have a significant impact on quorum, and if there are issues with performance, the weight could be re-delegated relatively easily to better-performing nodes.


Ok that makes sense, and again highlights the need for establishing a standardized, reliable method to measure performance.

We are open to any additional data that will help us make the best decisions. As Srayman pointed out, the dropped-messages testing was only one piece of data, and it was used to help narrow the full list of PRs down to a subset for evaluation in other respects. We may have missed some prior performance issues with some reps, so if you have specific cases you'd like to highlight, please let us know and we can take a closer look.
