Right now, to the extent that network performance needs attention, and considering typical load, we have a lot more opportunity in standard code improvements than in protocol changes.
The protocol changes, specifically the most difficult part to change (the signing payload), are being considered as part of the new state block design. We can iterate on these kinds of changes, but they need to be prioritized.
Late to this party since I never saw the thread, but... I have also thought in the past about batching transactions.
It would allow for implicit receives (basically you batch previous receives together with a send, effectively taking them in automagically).
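A rough sketch of what such a batched block could look like. Every field and function name here is hypothetical; Nano's actual state blocks have no batching structure like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BatchedBlock:
    """Hypothetical block folding several pending receives into one send.

    One signature covers the whole batch, so the previously separate
    receive blocks become implicit."""
    account: str
    previous: str                       # hash of this account's prior block
    receive_sources: List[str] = field(default_factory=list)  # pending send hashes being pocketed
    send_destination: str = ""
    send_amount: int = 0                # raw units

def implicit_receive_then_send(pending: List[str], dest: str, amount: int) -> BatchedBlock:
    # All pending sends are taken in "automagically" as part of this one send.
    return BatchedBlock(
        account="nano_example_account",
        previous="PREV_HASH",
        receive_sources=pending,
        send_destination=dest,
        send_amount=amount,
    )

block = implicit_receive_then_send(["HASH_A", "HASH_B", "HASH_C"], "nano_dest", 100)
print(len(block.receive_sources))  # 3 pending receives folded into one signed block
```

The point is that the three receives never exist as separate ledger entries: they ride along with the send under a single signature.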
It would also play very nicely with TaaC. Gone are the days of "you only have 1 Nano so you can only do 0.001 TPS" criticisms; instead it's more of a "you need to make 15 transactions but can only do 0.001 TPS, so you had better batch wisely." Basically, it lets low-balance users batch their requests while attackers (who don't want to batch) get penalized by anti-spam measures (whatever those turn out to be).
I also disagree with Jay that 300 TPS is enough (and my memory puts the figure an order of magnitude lower...). Nano is DOA for any real large-scale global use if we aren't at least aiming for 10k TPS, IMO.
Plus there would be ledger benefits. The other thread is raging about how a (compressible) memo field would bloat the ledger uncontrollably (by about 3%), but nobody is batting an eye at the fact that batching could reduce ledger bloat by 50-80% if implemented aggressively.
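The 50-80% range falls out of simple arithmetic, assuming one block per receive today versus one batched block for several receives plus a send (illustrative only, ignoring per-block size differences):

```python
def ledger_reduction(num_receives: int) -> float:
    """Fraction of blocks saved by folding `num_receives` pending receives
    plus one send into a single batched block, versus num_receives + 1
    separate blocks. Illustrative arithmetic, not measured data."""
    separate = num_receives + 1   # one block per receive, plus the send itself
    batched = 1                   # everything under one signature
    return 1 - batched / separate

print(round(ledger_reduction(1), 2))  # 0.5 -> 50% fewer blocks
print(round(ledger_reduction(4), 2))  # 0.8 -> 80% fewer blocks
```

So batching even one receive with each send halves the block count, and heavier batching approaches the top of that range.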
The biggest concern, though, is that it can make forks a nightmare: a single double-spend could cascade into needing to "roll back" a dozen transactions. I actually think that if that security problem could be solved, there's no reason not to batch.