I have an idea on how to limit the impact of spam attacks. This idea may be flawed, so I hope some of you clever guys could help me brainstorm it and point out any problems.
If I wanted to spam the network, I would spin up a node, calculate PoW for thousands of transactions, and then publish them all through my node as fast as possible.
Looking at my node's block-publishing history, it would be clearly visible that this was spam and not organic transactions: a short, big spike in transactions and then back to zero.
Nodes that publish organic transactions (like an exchange) will usually have a long history of publishing blocks at a somewhat steady rate.
My idea is to extend node monitoring to record the average daily publishing rate (blocks per second) for each node, and then use this to estimate whether a node is following its normal pattern (within an allowed tolerance).
Nodes that suddenly break their normal publishing pattern (say 10x their usual transaction rate) would have their max publishing rate throttled down to their average bps rate.
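To make the idea concrete, here is a minimal sketch of what such a monitor could look like. This is purely hypothetical, not part of any existing node software; the class name, the rolling window length, and the tolerance factor are all assumptions for illustration:

```python
from collections import defaultdict, deque

class PublishRateMonitor:
    """Hypothetical per-node publish-rate tracker (illustration only).

    Keeps a rolling window of daily block counts per node and flags a
    node for throttling when today's count exceeds its historical daily
    average by more than `tolerance` (e.g. the 10x from the idea above).
    """

    def __init__(self, window_days=30, tolerance=10.0):
        self.window_days = window_days
        self.tolerance = tolerance
        # past daily counts per node, oldest entries dropped automatically
        self.history = defaultdict(lambda: deque(maxlen=window_days))
        self.today = defaultdict(int)  # blocks published so far today

    def end_of_day(self):
        """Roll today's counts into each node's history."""
        for node, count in self.today.items():
            self.history[node].append(count)
        self.today.clear()

    def daily_average(self, node):
        hist = self.history[node]
        return sum(hist) / len(hist) if hist else 0.0

    def record_publish(self, node):
        """Return True if the block is accepted at full rate,
        False if the node should be throttled to its average rate."""
        self.today[node] += 1
        avg = self.daily_average(node)
        if avg == 0.0:
            # No history yet: an unknown node would need some global
            # fallback rate limit, not covered in this sketch.
            return True
        return self.today[node] <= avg * self.tolerance
```

For example, a node with a week of steady ~100 blocks/day that suddenly tries to publish 5,000 blocks would have everything past the first 1,000 (100 average x 10 tolerance) throttled.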
A spammer would no longer be able to precalculate the spam: they would need a high daily average transaction rate to be able to send at an even higher rate.
I'm sure there is something obvious that I'm missing here, because it seems like a very simple solution that is easy to implement.
One problem that I see is that if a service is down for a few days, it will come back online to an average daily transaction rate of zero. Using weekly or monthly averages instead could combat this.
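The downtime point can be shown numerically. Assuming a simple windowed average (the same kind of calculation a monitor might use), a short outage barely moves a monthly average but zeroes out a very short one:

```python
def windowed_average(daily_counts, window):
    """Average blocks/day over the last `window` days (illustrative helper)."""
    recent = daily_counts[-window:]
    return sum(recent) / len(recent)

# A node publishing 100 blocks/day, then offline for the last 3 days.
counts = [100] * 27 + [0, 0, 0]

short = windowed_average(counts, 1)    # 0.0  -> a 1-day view zeroes the allowance
monthly = windowed_average(counts, 30) # 90.0 -> a 30-day view barely moves
```

So with a monthly window, a service returning from a three-day outage would still be allowed roughly 90% of its usual rate rather than being locked out entirely.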