Microservices application architecture is very popular nowadays, but it is important to understand that everything has its advantages and drawbacks. I fully understand the advantages of microservices architecture; however, there is at least one significant drawback. There are certainly more, but let's look at least at the potential impact on performance. The performance impact in question is latency.
A monolithic application calls functions (aka procedures) locally within a single compute node's memory (RAM). The latency of RAM is approximately 100 ns (0.0001 ms), and a Python function call on a decent computer has a latency of ~370 ns (0.00037 ms). Note: You can test Python function call latency on your own computer with the code available at https://github.com/davidpasek/function-latency/tree/main/python
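A minimal sketch of such a measurement using only the standard library's `timeit` module (the function name `add` and the iteration count are illustrative, not taken from the repository above):

```python
import timeit

def add(a, b):
    # A trivial local function; its body is cheap, so the measured time
    # is dominated by Python's function call overhead itself.
    return a + b

n = 1_000_000
# timeit runs the callable n times and returns total wall-clock seconds.
# Note: the lambda wrapper adds a second call per iteration, so this
# slightly overstates the per-call cost.
total = timeit.timeit(lambda: add(1, 2), number=n)
print(f"local Python function call latency: {total / n * 1e9:.0f} ns")
```

On a typical machine this prints a value in the low hundreds of nanoseconds, consistent with the ~370 ns figure above.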
A microservices application uses remote procedure calls (aka RPC) over the network, typically as REST or gRPC calls over HTTPS, so every call has to traverse the network. While the raw latency of a modern 25GE Ethernet network is approximately 480 ns (0.00048 ms, still ~5x slower than RAM latency), and RDMA over Converged Ethernet latency can be ~3,000 ns (0.003 ms), the latency of a real microservice gRPC function call is somewhere between 40 and 300 ms. [source]
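You can observe the same effect without any real network by comparing a local call against an HTTP round trip over loopback. The sketch below is an assumption-laden illustration, not a faithful gRPC benchmark: it uses the standard library's `http.server` and `urllib` over 127.0.0.1, which skips TLS, serialization, and real network hops, so it understates the gap you would see between two services in production.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """A trivial HTTP 'remote procedure' that answers every GET with 'pong'."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "4")
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):
        pass  # silence per-request logging so it does not skew the timing

# Port 0 asks the OS for any free port; run the server in a daemon thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def local_call():
    return b"pong"

n = 200
start = time.perf_counter()
for _ in range(n):
    local_call()
local_ns = (time.perf_counter() - start) / n * 1e9

start = time.perf_counter()
for _ in range(n):
    with urllib.request.urlopen(url) as resp:
        resp.read()
remote_ns = (time.perf_counter() - start) / n * 1e9

print(f"local: {local_ns:.0f} ns, remote (loopback HTTP): {remote_ns:.0f} ns")
server.shutdown()
```

Even on loopback, the "remote" call is typically several orders of magnitude slower than the local one; adding a real network, TLS, and service-mesh hops only widens the gap.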
Python local function call latency is ~370 ns. Python remote function call latency is ~280 ms.
That is roughly six orders of magnitude (10^6) higher latency for a microservices application. RPC in low-level programming languages like C++ can be 10x faster, but it is still ~10^5 times slower than a local Python function call.