Speed test on RPC v1/2 and IPC encoding 1/2/4

I recently ran a speed test to get an overall view of RPC v1/2 and IPC encoding 1/2/4 performance.

I used the PHP library https://github.com/mikerow/php4nano on a Raspberry Pi 4 (Ubuntu 64-bit)

This is what I got (100k calls for each test, using the account_weight/AccountWeight call)

[image: speed test results]
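For reference, the shape of the benchmark can be sketched in Python (the original test used PHP; the function names and the dummy call below are illustrative assumptions, not the actual test code):

```python
import json
import time

def benchmark(call, payload, n=100):
    """Time n invocations of call(payload) and return calls per second."""
    start = time.perf_counter()
    for _ in range(n):
        call(payload)
    elapsed = time.perf_counter() - start
    return n / elapsed

# Hypothetical stand-in for the real RPC call; a real test would POST the
# JSON payload to the node's RPC endpoint (default http://[::1]:7076), e.g.
#   requests.post("http://[::1]:7076", data=json.dumps(payload))
def fake_rpc_call(payload):
    return json.loads(json.dumps(payload))  # encode/decode round trip only

payload = {"action": "account_weight", "account": "nano_1_hypothetical_account"}
rate = benchmark(fake_rpc_call, payload, n=1000)
print(f"{rate:.0f} calls/sec (client-side JSON overhead only)")
```

Swapping `fake_rpc_call` for a real HTTP or IPC call gives the per-transport numbers compared below.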

Observations:

  • RPC v2 (message envelope) is 100% slower than RPC v1 (classic), i.e. it takes roughly twice as long
  • IPC unix domain socket encoding 4 is a bit slower than encoding 2 (not by much, but visible)
  • IPC TCP encoding 4 is much slower (almost 900%) than encoding 2

Conclusions:

I may have missed something about optimizations, but it seems to me that RPC v2 and IPC TCP encoding 4 are lacking in speed.

I'm available for more tests

Hey mike, thanks so much for testing the available endpoints!

Apologies in advance for lacking accuracy in some of the terms here.

The fastest setup should end up being FlatBuffers (encoding 3) used directly with IPC 2 over a domain socket, that is, not going through the HTTP gateway. I am resuming work on this quite soon, and we'll end up needing to develop, with the help of community devs, some client libraries in various languages - js, go, python, php mostly - to facilitate using it. The IPC 2 server is full duplex, so there are several advantages if the client is async.
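For anyone who wants to talk to the IPC server directly, the framing can be sketched like this in Python. This is a hedged sketch based on my understanding of the node's IPC preamble ('N', an encoding byte, two reserved bytes, then a big-endian 32-bit payload length); the socket path is configurable on the node and is an assumption here:

```python
import socket
import struct

def frame_request(encoding, payload):
    """Frame a payload for the node IPC: a 4-byte preamble ('N', encoding,
    0, 0) followed by a big-endian uint32 length and the payload itself."""
    preamble = b"N" + bytes([encoding, 0, 0])
    return preamble + struct.pack(">I", len(payload)) + payload

def ipc_call(sock_path, encoding, payload):
    """Send one framed request over a unix domain socket and return the
    length-prefixed response body (sketch; no error handling)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)  # e.g. "/tmp/nano" -- depends on node config
        s.sendall(frame_request(encoding, payload))
        (length,) = struct.unpack(">I", s.recv(4))
        body = b""
        while len(body) < length:
            body += s.recv(length - len(body))
        return body

# Encoding 1 is legacy JSON; a request would look like:
#   ipc_call("/tmp/nano", 1, b'{"action": "account_weight", "account": "..."}')
```

With encoding 3, the payload bytes would be a serialized FlatBuffers envelope instead of JSON.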

It starts with generating FlatBuffers code for your target language. After compiling the node normally, you'll see a flatc binary in your build folder. You can use it with the following bash script to generate the FlatBuffers code for your target language (this example does so for several). Run it from the repository root directory (it assumes you built in a folder called build).

# by cryptocode
declare -a target_languages=("js" "go" "python" "php")
declare -a files=("nanoapi")

for language in "${target_languages[@]}"
do
        for file in "${files[@]}"
        do
                ./build/flatc --force-empty-vectors --reflect-names --gen-mutable --gen-name-strings --gen-object-api --strict-json --no-fb-import --"$language" -o api/generated/flatbuffers/"$language" api/flatbuffers/"$file".fbs
        done
done

You will then see the generated code in api/generated/flatbuffers/php/nanoapi/ , and this interface can be used in your application, along with Google's FlatBuffers library, to properly generate the messages.

How you use these will depend on the language, and I don't have experience with PHP. We're working on a Python client, and some more information is available at https://github.com/cryptocode/notes/wiki/IPC-Flatbuffers-API . You can likely find information elsewhere on using FlatBuffers with PHP.

Please let me know if you'd like more specific information.

I'll let you know with the next test, after implementing FlatBuffers.


Just in case someone finds the bash code useful but it's not working: use the dos2unix tool to make it work.

Anyway, I just executed the script; it's very useful and generates very comprehensive code. It will make implementation easier!

Here is the test I ran comparing encodings 2, 3, and 4 on IPC calls.
100k calls for each row, account_weight/AccountWeight

It seems like FlatBuffers aren't worth using in PHP for now. This is probably due to the fact that json_encode() and json_decode() are C extensions and end up much faster than the Google-provided library.

[image: test results]

@mikerow I merged the other topic you made in here.
Would you be able to run the same test but only instantiating the flatbuffer envelope once? This would ensure we're testing the speed of the server, not the client.
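To illustrate the difference (a hypothetical Python sketch, since I don't have the PHP test code in front of me; the builder and sender below are stand-ins):

```python
import time

def bench_rebuild(build, send, n):
    """Rebuild the request envelope on every iteration: this times the
    client's serialization work on top of the round trip."""
    start = time.perf_counter()
    for _ in range(n):
        send(build())
    return time.perf_counter() - start

def bench_reuse(build, send, n):
    """Build the envelope once and reuse the bytes, so the timed loop
    measures only the server/transport round trip."""
    envelope = build()
    start = time.perf_counter()
    for _ in range(n):
        send(envelope)
    return time.perf_counter() - start

# Hypothetical builder/sender for illustration only
build = lambda: b"\x00" * 64   # stands in for flatbuffer envelope construction
send = lambda data: len(data)  # stands in for the IPC round trip
t_rebuild = bench_rebuild(build, send, 1000)
t_reuse = bench_reuse(build, send, 1000)
```

If the two timings diverge significantly, the gap is client-side envelope construction rather than server speed.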

Possible. But I'm thinking about a code optimization first. I'll update.

I did 2 tests

  1. envelope instantiated once, returned output (FB) parsed
  2. envelope instantiated once, returned output (FB) raw

These are the results (I changed the loop from 100k to 1k), so every row appears faster than in previous outputs, but the proportions still hold.

So it seems to me that, at least in PHP, without compiling ad hoc extensions, json_encode() and json_decode() are still the best solution.

Same as above, but 100k calls per row


Tested on a callback-based Python client with full deserialization of the envelope, re-using the same envelope when sending.

100k calls of IsAlive took 15 seconds, over TCP on Windows.

I'll try to publish this client tomorrow

On which machine did you run the test? Could you run the same test using AccountWeight?

@Dotcom I've checked my code again and really can't find any speed flaws.

RPC v1.0 calls are still faster than both JSON over FlatBuffers and plain FlatBuffers (at least in PHP).

When you have time, if you can post your Python code, I can take a look to see whether your algorithm is better (as you stated, the call performance is better).