Distributed Computing Fallacies

Samir Behara takes us through a few fallacies of distributed computing:

The network is reliable
Service calls made over the network might fail. There can be congestion in the network or a power failure impacting your systems. The request might reach the destination service, but the response might fail to make it back to the primary service. The data might get corrupted or lost during transmission over the wire. While architecting distributed cloud applications, you should assume that these types of network failures will happen and design your applications for resiliency.

To handle this scenario, you should implement automatic retries in your code when such a network error occurs. If, say, one of your services is not able to establish a connection because of a network issue, retry logic can automatically re-establish the connection.
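
To make the idea concrete, here is a minimal sketch of that retry pattern in Python, using exponential backoff with jitter. The names call_with_retries and fetch_orders are hypothetical, not anything from Samir's post.

import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5):
    """Call operation(), retrying on transient network errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except (ConnectionError, TimeoutError):
            if attempt == max_attempts:
                raise  # give up and surface the error after the final attempt
            # exponential backoff plus a little jitter so retries don't stampede the service
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)

# hypothetical usage: fetch_orders stands in for any service call made over the network
# result = call_with_retries(fetch_orders)

In practice you would also cap the total retry window and only retry errors that are actually transient, so a persistent outage fails fast instead of hammering a struggling service.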

There are some very good points in here.
