This is a description of the change to bound the number of unconfirmed blocks in the ledger. Adding a bandwidth cap has effectively bounded the rate at which the network confirms blocks, but a large number of unconfirmed blocks can still accumulate in the ledger.
A new database table called 'backlog' will track the hashes of unconfirmed blocks.
In memory, a sorted container mapping difficulty to block hash is kept and used to look up unconfirmed blocks by difficulty.
When a block is inserted, its hash is added to the backlog table and the in-memory mapping is updated. If the mapping grows beyond a node-configurable limit, the node finds the block hash with the lowest difficulty and removes that block from the ledger.
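The insert-and-evict behavior above can be sketched with a sorted container. This is a minimal illustration, not the node's actual implementation; the names (backlog_index, max_backlog) and the use of std::multimap are assumptions, and the real index would hold block hash and difficulty types rather than strings and integers.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Illustrative sketch of the in-memory index and eviction policy.
// std::multimap keeps entries sorted by difficulty ascending, so the
// lowest-difficulty block is always at begin ().
class backlog_index
{
public:
	explicit backlog_index (std::size_t max_backlog) :
		max_backlog_ (max_backlog)
	{
	}

	// Insert an unconfirmed block. If the cap is exceeded, evict the
	// lowest-difficulty entry and report its hash so the caller can
	// remove that block from the ledger and the backlog table.
	bool insert (uint64_t difficulty, std::string const & hash, std::string & evicted)
	{
		entries_.emplace (difficulty, hash);
		if (entries_.size () > max_backlog_)
		{
			auto lowest = entries_.begin ();
			evicted = lowest->second;
			entries_.erase (lowest);
			return true;
		}
		return false;
	}

	std::size_t size () const
	{
		return entries_.size ();
	}

private:
	std::size_t max_backlog_;
	std::multimap<uint64_t, std::string> entries_; // difficulty -> hash, ascending
};
```

A node-configurable limit would set max_backlog; the caller uses the evicted hash to roll the block back from the ledger.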
Because there is a cap on the number of blocks a node keeps in the backlog, a block with low enough difficulty may eventually be removed from every ledger in the network. In that case the block's creator must increase the block's difficulty and publish it again; the functionality to increase difficulty and republish already exists.
This strategy keeps the number of unconfirmed blocks accepted into the ledger bounded. It also offers a direct way to find unconfirmed blocks, instead of scanning account frontiers as the confirmation height processor currently does.
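To illustrate the direct lookup, the sketch below pulls unconfirmed block hashes straight from a sorted difficulty-to-hash container, here walking from highest difficulty down. The function name and the highest-first ordering are assumptions for illustration; the point is only that a sorted index makes candidates available without a frontier scan.

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Return up to `count` unconfirmed block hashes directly from the
// sorted backlog index, highest difficulty first (an assumed ordering).
std::vector<std::string> next_candidates (std::multimap<uint64_t, std::string> const & backlog, std::size_t count)
{
	std::vector<std::string> result;
	// Reverse iteration walks the multimap from highest difficulty down
	for (auto it = backlog.rbegin (); it != backlog.rend () && result.size () < count; ++it)
	{
		result.push_back (it->second);
	}
	return result;
}
```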