Patterns
Resilience is very important in composition patterns.
Compose information from multiple sources.
Downsides:
Availability: the composer becomes a single point of failure.
Data consistency across the composed sources.
Higher latency, since multiple services are called per request.
In API Composition, a Service Composer receives data from multiple services, runs business rules over it, and returns the resulting data.
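A minimal sketch of such a composer, assuming two hypothetical services (`orders-service`, `customers-service`) and an invented VIP rule, just to illustrate the fetch-merge-return flow:

```typescript
// Minimal Service Composer sketch: service names, URLs and the VIP
// rule are hypothetical, used only to illustrate the composition flow.
type Order = { id: string; customerId: string; total: number };
type Customer = { id: string; name: string };

async function getOrderSummary(orderId: string) {
  // Fetch from two independent sources in parallel.
  const [orderRes, customerRes] = await Promise.all([
    fetch(`http://orders-service/orders/${orderId}`),
    fetch(`http://customers-service/customers/by-order/${orderId}`),
  ]);
  if (!orderRes.ok || !customerRes.ok) {
    // Single point of failure: if either source is down, the whole
    // composition fails, so handle this explicitly.
    throw new Error("one of the source services is unavailable");
  }
  const order = (await orderRes.json()) as Order;
  const customer = (await customerRes.json()) as Customer;

  // Business rule applied over the composed data before returning it.
  const isVip = order.total > 1000;
  return { orderId: order.id, customerName: customer.name, isVip };
}
```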
A way to decompose a monolith into microservices with two rules:
Every new feature will be a microservice.
Segregate small pieces of the monolith as microservices.
At each iteration, the monolithic system shrinks until it itself turns into a microservice.
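A sketch of the routing facade this usually implies, assuming Express with http-proxy-middleware and hypothetical service URLs; each extracted piece gets its own route, and the catch-all to the monolith shrinks over time:

```typescript
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// A new feature already extracted as a microservice (hypothetical URL).
app.use(
  "/payments",
  createProxyMiddleware({ target: "http://payments-ms:8080", changeOrigin: true })
);

// Everything else still goes to the monolith; routes move out of this
// catch-all as each piece is segregated, until the monolith is gone.
app.use(
  "/",
  createProxyMiddleware({ target: "http://legacy-monolith:8080", changeOrigin: true })
);

app.listen(3000);
```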
Difficulties in executing this:
Communication with the monolith.
Team maturity. (DevOps culture)
Database. (Segregating/Migrating) (Tip: use an APM to help)
Each microservice needs an APM from the start.
Metrics. (Which metrics you expect from each microservice)
Just like in DDD, microservices should use an Anti-Corruption Layer (ACL) to make services more independent.
In DDD, an ACL is an interface; in microservices, an ACL is another microservice in the middle.
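A minimal sketch of the DDD form of an ACL, with hypothetical legacy field names, where an interface plus a translator keeps the external model from leaking into the domain (in microservices the same translation would live in a separate service in the middle):

```typescript
// Hypothetical external (legacy) shape vs. the internal domain shape.
type LegacyCustomer = { CUST_ID: string; CUST_NM: string; STATUS_CD: number };
type Customer = { id: string; name: string; active: boolean };

// The domain only depends on this interface, never on the legacy model.
interface CustomerGateway {
  findById(id: string): Promise<Customer>;
}

class LegacyCustomerAcl implements CustomerGateway {
  async findById(id: string): Promise<Customer> {
    const res = await fetch(`http://legacy-system/customers/${id}`);
    const raw = (await res.json()) as LegacyCustomer;
    // Translate the legacy model so it doesn't "corrupt" the domain.
    return { id: raw.CUST_ID, name: raw.CUST_NM, active: raw.STATUS_CD === 1 };
  }
}
```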
This is a technique used to serve data to specific frontend types, since each type of frontend expects different outputs from the APIs.
To do this, an ACL (Anti-Corruption Layer) is used to transform the data returned from the APIs, depending on the frontend that requested it.
Since API services are big and complex, with a BFF they don't have to worry about these per-frontend optimizations.
But this new layer of BFFs adds latency to requests.
Writing your own BFF microservice to handle this is a solution if you plan ahead of time.
Or use a solution like GraphQL, since each frontend client can choose the exact data it requests.
The downside here is GraphQL's own limitations and trade-offs.
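A sketch of the hand-written BFF variant, assuming a hypothetical `catalog-service` upstream: the mobile BFF trims the full API payload down to what a small screen needs (a web BFF would keep the full description and image list):

```typescript
import express from "express";

// Hypothetical upstream response, richer than any single frontend needs.
type Product = {
  id: string;
  name: string;
  description: string;
  images: string[];
  price: number;
};

const mobileBff = express();

mobileBff.get("/products/:id", async (req, res) => {
  const upstream = await fetch(`http://catalog-service/products/${req.params.id}`);
  const product = (await upstream.json()) as Product;
  // The mobile BFF reshapes the payload: only a thumbnail, no long text.
  res.json({
    id: product.id,
    name: product.name,
    price: product.price,
    thumbnail: product.images[0],
  });
});

mobileBff.listen(3001);
```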
This is a pattern used in a microservice architecture to greatly increase #resilience and ensure requests don't get lost when microservices in the chain are unavailable. (Or even the communication between services and queues)
Mainly used for transactions/requests that are not sync, where you don't need real-time responses.
To guarantee that a message won't be lost at any step of the chain if a certain microservice is down (even with #retries policies), you will have to persist the message in a database or queue.
So, in the example of the image above, we could handle this with:
Database
MS 1 will have a database specific for persisting these messages.
When MS 1's action starts, the message is persisted; when MS 1's action ends, it is deleted from the database.
Then, if communication to MS 2 fails, the message stays persisted and MS 1 can still retry.
The idea would be one database per MS, but the database could also be global and shared between all of them.
Just think about resilience and the repercussions of that choice.
Examples of databases to use:
RDBMS
KV -> DynamoDB
Cache -> Redis
It is important not to mix this database with the MS's own data database.
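A hedged sketch of this persist-then-delete flow; `MessageStore` and `sendToMs2` are hypothetical stand-ins for the dedicated messages database (RDBMS, DynamoDB, Redis, ...) and the call to the next service:

```typescript
// Abstraction over the dedicated messages DB, separate from MS 1's data DB.
interface MessageStore {
  save(msg: { id: string; payload: unknown }): Promise<void>;
  delete(id: string): Promise<void>;
}

declare function sendToMs2(msg: { id: string; payload: unknown }): Promise<void>;

async function handleInMs1(store: MessageStore, msg: { id: string; payload: unknown }) {
  await store.save(msg); // persist as soon as the action starts

  try {
    await sendToMs2(msg);
    await store.delete(msg.id); // action finished, message can go
  } catch {
    // MS 2 is down: the message stays in the store, so a background
    // job in MS 1 can pick it up later and retry.
  }
}
```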
Queue
Just like in the solution above, a message is produced when it arrives at each microservice, and a consumer runs on each of them, removing the message as soon as processing finishes.
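A sketch of that consumer with RabbitMQ via amqplib, assuming a hypothetical queue name, URL and handler; the broker keeps the message until it is explicitly acked:

```typescript
import amqp from "amqplib";

declare function handleMessage(payload: unknown): Promise<void>; // hypothetical handler

async function consume() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("ms1-inbox", { durable: true });

  ch.consume("ms1-inbox", async (msg) => {
    if (!msg) return;
    try {
      await handleMessage(JSON.parse(msg.content.toString()));
      ch.ack(msg); // only removed from the queue after the work finishes
    } catch {
      ch.nack(msg, false, true); // failed: requeue so it isn't lost
    }
  });
}
```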
A mechanism to store credentials inside a microservice environment, so that they are not stored in environment variables or inside the microservices themselves.
This solution makes it much easier to automatically rotate these passwords.
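A sketch of fetching a credential at startup from HashiCorp Vault's KV v2 HTTP API, assuming hypothetical addresses, secret path and a token obtained elsewhere (e.g. via the platform's auth method):

```typescript
declare function getServiceToken(): Promise<string>; // hypothetical: obtained via the platform's Vault auth

async function getDbPassword(): Promise<string> {
  // KV v2 secrets are read from /v1/secret/data/<path>.
  const res = await fetch("http://vault:8200/v1/secret/data/my-service/db", {
    headers: { "X-Vault-Token": await getServiceToken() },
  });
  const body = await res.json();
  // KV v2 nests the secret payload under data.data.
  return body.data.data.password as string;
}
```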
All logs from Microservices should be standardized.
Use OTEL (OpenTelemetry) to mask and decouple which tools you use for logs, metrics and telemetry (like New Relic, Datadog, Elastic, etc.) from your architecture.
Unify log lines to a single line:
Depending on the language or tools, multi-line stack traces or similar output may be generated.
But a standardized log entry should be on a single line only.
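A minimal sketch of collapsing an error (normally a multi-line stack trace) into one structured JSON log line; the field names are just an illustration:

```typescript
function logError(service: string, err: Error) {
  console.log(
    JSON.stringify({
      timestamp: new Date().toISOString(),
      level: "error",
      service,
      message: err.message,
      // Newlines are escaped inside the JSON string, so the whole
      // entry still occupies exactly one line on stdout.
      stack: err.stack,
    })
  );
}

logError("orders-service", new Error("payment gateway timeout"));
```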
More information, best practices and examples can be seen in #link_to_OTEL.