This page describes the non-functional requirements (NFRs) for the services within the bank. New NFRs will be added here over time to form a complete list.

Domain Authorization of requests

The domain services are responsible for performing authorization of incoming requests (the access decision). See https://safibank.atlassian.net/wiki/spaces/ITArch/pages/9109614/IAM+Domain#Authentication-and-authorization-logic-in-Domain-Managers-and-Backoffice-Manager for more information about the responsibility split between the domain services (managers), Backoffice and IAM.

The services should leverage the authentication library to obtain information about the request issuer and make the appropriate allow/deny decisions based on the access rules of their specific domain.
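As an illustration only (not the authoritative pattern), the sketch below shows what such a domain-level decision could look like in a Micronaut controller. The Micronaut Security annotations are real; the AccountService, AccountDto and the "BACKOFFICE" role are hypothetical placeholders for the domain's own access rules and for the caller information provided by the authentication library.

```kotlin
import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get
import io.micronaut.http.annotation.PathVariable
import io.micronaut.http.exceptions.HttpStatusException
import io.micronaut.security.annotation.Secured
import io.micronaut.security.authentication.Authentication
import io.micronaut.security.rules.SecurityRule

// Hypothetical domain types, standing in for the service's own model.
data class AccountDto(val id: String, val ownerCustomerId: String)

interface AccountService {
    fun isOwnedBy(accountId: String, callerId: String): Boolean
    fun getAccount(accountId: String): AccountDto
}

@Controller("/accounts")
@Secured(SecurityRule.IS_AUTHENTICATED) // authentication is required for every endpoint
class AccountController(private val accountService: AccountService) {

    @Get("/{accountId}")
    fun getAccount(@PathVariable accountId: String, authentication: Authentication): AccountDto {
        // Domain-level authorization decision: the rule below (back-office users or the
        // owning customer may read the account) is illustrative only.
        val callerId = authentication.name
        val isBackoffice = authentication.roles.contains("BACKOFFICE")
        if (!isBackoffice && !accountService.isOwnedBy(accountId, callerId)) {
            throw HttpStatusException(HttpStatus.FORBIDDEN, "Access denied")
        }
        return accountService.getAccount(accountId)
    }
}
```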

Segregation of Kafka message types

Services should publish messages only to Avro topics of the three types specified in Kafka message types (this applies to internal topics; the TM topics are exempt from this rule). No other message types should exist.

Snapshot publishing

As per https://safibank.atlassian.net/wiki/spaces/ITArch/pages/25362436/Kafka+message+types#Snapshots, all services which maintain an entity should publish a Snapshot representation of that entity every time it is updated.
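A minimal sketch of the pattern follows; the entity, the snapshot record and the publisher interface are hypothetical stand-ins (the real Avro record and publisher come from the domain's own schemas and the generated Kafka helpers).

```kotlin
import jakarta.inject.Singleton

// Hypothetical stand-ins: the real entity, the snapshot Avro record and the publisher
// come from the domain's own model and the generated Kafka helpers.
data class Account(val id: String, val customerId: String, val status: String)

data class AccountSnapshot(val id: String, val customerId: String, val status: String)

interface AccountRepository {
    fun updateStatus(accountId: String, newStatus: String): Account
}

interface AccountSnapshotPublisher {
    fun publish(key: String, snapshot: AccountSnapshot)
}

@Singleton
class AccountUpdateService(
    private val repository: AccountRepository,
    private val snapshotPublisher: AccountSnapshotPublisher,
) {
    // Every state change is followed by publishing the full, current representation
    // of the entity as a snapshot message keyed by the entity id.
    fun updateStatus(accountId: String, newStatus: String): Account {
        val updated = repository.updateStatus(accountId, newStatus)
        snapshotPublisher.publish(
            key = updated.id,
            snapshot = AccountSnapshot(updated.id, updated.customerId, updated.status),
        )
        return updated
    }
}
```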

Scalability

All services should be built with the expectation that they will be scaled to multiple instances connecting to a single DB instance.

Contract-first approach (REST API)

All services which expose a REST API should provide an OpenAPI contract and generated Kotlin client libraries to access this API conveniently. Leverage the generated-client-publisher-plugin to generate the OpenAPI spec and the client libraries.
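For illustration only, the currently generated Kotlin clients roughly resemble a Micronaut declarative HTTP client; the interface name, service id, path and DTO below are hypothetical, the real ones come from the service's OpenAPI contract and the plugin.

```kotlin
import io.micronaut.http.annotation.Get
import io.micronaut.http.annotation.PathVariable
import io.micronaut.http.client.annotation.Client

// Hypothetical DTO; the real one is generated from the OpenAPI contract.
data class CustomerDto(val id: String, val fullName: String)

// Hypothetical shape of a generated client interface.
@Client(id = "customer-manager")
interface CustomerManagerClient {

    @Get("/api/v1/customers/{customerId}")
    fun getCustomer(@PathVariable customerId: String): CustomerDto
}
```

A consuming service then injects and calls this interface instead of hand-writing HTTP calls against the contract.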

The services should then follow the rules specified in REST API versioning approach when evolving their REST API.

Contract-first approach (Kafka messages)

All asynchronous communication between internal services is done via Kafka messaging. All internal topics (except for TM topics) use Avro. The topics and schemas follow the guidelines defined in Kafka topic schema management.

Use generated Kafka publishers and listeners

SAF-1278 - [NFR] Use common generated Kafka helpers Backlog

Instead of specifying a topic and schema within the code and having to ensure that these actually match, the service should use the classes autogenerated from the topic configuration in topicSchemasDefinitions.json. This ensures that publishers always publish the correct schema to the correct topic and, vice versa, that listeners expect the correct schema from the correct topics. In addition, it comes with a common logging mechanism for publishing and consuming messages.

More information on how to use the proper class per publisher and listener is in the readme.
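As a sketch only (the actual generated base classes and their names come from topicSchemasDefinitions.json and the readme above), a listener built on a generated helper could look like this:

```kotlin
import jakarta.inject.Singleton

// Hypothetical stand-ins: in reality the event class and the listener base class are
// generated from topicSchemasDefinitions.json, fixing topic name and Avro schema in one place.
data class TransactionCreatedEvent(val transactionId: String, val amount: String)

abstract class TransactionCreatedEventListener { // generated base class (hypothetical name)
    abstract fun handle(key: String, event: TransactionCreatedEvent)
}

@Singleton
class TransactionCreatedHandler : TransactionCreatedEventListener() {

    override fun handle(key: String, event: TransactionCreatedEvent) {
        // Only the business logic lives here; topic binding, deserialization and the
        // common consume logging are provided by the generated base class.
    }
}
```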

Idempotency (WIP)

SAF-1273 - [NFR] Idempotency in services Backlog

REST endpoints which offer idempotent behavior should follow the REST API guidelines.
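One possible shape of this, assuming the guidelines use an Idempotency-Key request header; the header name, DTOs and in-memory store below are illustrative only, and a real service would persist the processed keys with the retention rules mentioned below.

```kotlin
import io.micronaut.http.HttpResponse
import io.micronaut.http.annotation.Body
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Header
import io.micronaut.http.annotation.Post
import jakarta.inject.Singleton
import java.util.UUID
import java.util.concurrent.ConcurrentHashMap

data class CreatePaymentRequest(val fromAccount: String, val toAccount: String, val amount: String)

data class PaymentDto(val paymentId: String, val status: String)

// Hypothetical store: in a real service this would be a DB table with a retention period.
@Singleton
class IdempotencyStore {
    private val processed = ConcurrentHashMap<String, PaymentDto>()
    fun find(key: String): PaymentDto? = processed[key]
    fun save(key: String, result: PaymentDto) { processed[key] = result }
}

@Controller("/payments")
class PaymentController(private val idempotencyStore: IdempotencyStore) {

    // Repeated calls with the same Idempotency-Key return the stored result
    // instead of creating a second payment.
    @Post
    fun createPayment(
        @Header("Idempotency-Key") idempotencyKey: String,
        @Body request: CreatePaymentRequest,
    ): HttpResponse<PaymentDto> {
        idempotencyStore.find(idempotencyKey)?.let { return HttpResponse.ok(it) }
        val created = PaymentDto(paymentId = UUID.randomUUID().toString(), status = "CREATED")
        idempotencyStore.save(idempotencyKey, created)
        return HttpResponse.created(created)
    }
}
```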

TODO: A common library which would help with making the processes idempotent is in development.

TODO: Rules regarding the retention period are about to be defined

Messages in Kafka are also subject to idempotency, as described in the “Idempotency” section of Kafka message types for each of the message types.

By default, all endpoints should be idempotent. However, it is up to the tech lead to prioritize making idempotent those endpoints which pose the highest risk when they are not.

Data privacy (WIP)

All services should be compliant with the Data privacy requirements. The linked page contains more detail about the different aspects of data privacy:

Logging

SAF-1272 - [NFR] Data privacy - logging Backlog

There should be no PII data logged in system logs in the production environment.
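A small sketch of the intent (the names are illustrative): log stable identifiers, never the PII fields themselves.

```kotlin
import org.slf4j.LoggerFactory

// Hypothetical domain type containing PII fields.
data class Customer(val id: String, val fullName: String, val email: String)

object CustomerLogging {
    private val log = LoggerFactory.getLogger(CustomerLogging::class.java)

    fun logCustomerUpdated(customer: Customer) {
        // Log only the internal id; never the name, email or other PII.
        log.info("Customer updated, customerId={}", customer.id)
    }
}
```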

Data exposure to 3rd parties

SAF-1271 - [NFR] Data privacy - endpoint exposure Backlog

The 3rd parties connecting to the backend systems should receive only the data they explicitly require to function. This should be achieved by:

For REST:

  • Dedicated REST endpoints provided to the 3rd party, with an API response tailored exactly to the purpose of the connecting 3rd party

  • Egress calls to the 3rd party contain only the data which the 3rd party requires for that endpoint to function properly

For Kafka:

  • Dedicated topics which are created exclusively for the given 3rd party to consume or to produce

Kafka retries and DLQ management (WIP)

SAF-1280 - [NFR] Implement Kafka DLQ management Backlog

All services which consume Kafka messages should respect the DLQ management approach when handling errors during processing.
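A rough sketch of the intent, with hypothetical types and without the common retry configuration that is still TODO below: retry processing a bounded number of times, then hand the message over to a dead-letter topic instead of blocking the partition or dropping the message silently.

```kotlin
import jakarta.inject.Singleton
import org.slf4j.LoggerFactory

// Hypothetical event and DLQ publisher; the real retry policy and DLQ handling
// will come from the common KafkaClient configuration.
data class TransactionEvent(val transactionId: String, val payload: String)

interface DeadLetterPublisher {
    fun publish(originalTopic: String, key: String, event: TransactionEvent, error: String)
}

@Singleton
class TransactionEventHandler(private val deadLetterPublisher: DeadLetterPublisher) {
    private val log = LoggerFactory.getLogger(TransactionEventHandler::class.java)
    private val maxAttempts = 3

    fun handle(topic: String, key: String, event: TransactionEvent) {
        repeat(maxAttempts) { attempt ->
            try {
                process(event)
                return
            } catch (e: Exception) {
                log.warn("Processing failed, attempt={}, key={}", attempt + 1, key, e)
            }
        }
        // After exhausting retries, send the original message to the DLQ for later inspection.
        deadLetterPublisher.publish(topic, key, event, error = "retries exhausted")
    }

    private fun process(event: TransactionEvent) {
        // domain-specific processing
    }
}
```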

TODO: Create a common configuration for KafkaClient regarding retry policy.

Input sanitization (WIP)

SAF-1274 - [NFR] Input sanitization Backlog

All inputs which reach the backend should be sanitized to prevent attacks such as SQL injection. Such attacks can happen when a query is built by concatenating user input with the query template.

These issues can be prevented automatically by using prepared statements (parameterized queries). The SQL query is compiled before the user input is added, hence preventing a change in the semantics of the query.

Prepared statements are used by default whenever the @Query annotation is used to fetch data.
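For example, a Micronaut Data repository with a named parameter binds the user input as a statement parameter rather than concatenating it into the SQL (the entity and repository names are illustrative):

```kotlin
import io.micronaut.data.annotation.Id
import io.micronaut.data.annotation.MappedEntity
import io.micronaut.data.annotation.Query
import io.micronaut.data.jdbc.annotation.JdbcRepository
import io.micronaut.data.model.query.builder.sql.Dialect
import io.micronaut.data.repository.CrudRepository

@MappedEntity("account")
data class AccountEntity(
    @field:Id val id: Long,
    val status: String,
)

@JdbcRepository(dialect = Dialect.POSTGRES)
interface AccountJdbcRepository : CrudRepository<AccountEntity, Long> {

    // :status is bound as a query parameter of a prepared statement,
    // so user input cannot change the semantics of the query.
    @Query("SELECT * FROM account WHERE status = :status")
    fun findByStatus(status: String): List<AccountEntity>
}
```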

Generic error messaging (WIP)

SAF-1275 - [NFR] Generic error messaging Backlog

End-users should not be exposed to sensitive details about an error which occurred on the backend (the specific internal error, stack trace, etc.). Instead, end-users should always receive (from the BE) an error code which maps to a human-readable, customer-safe message that briefly explains the error.

In addition, the error codes should be generic enough that end-users cannot “guess” or deduce the meaning of the error from the error code alone.

E.g. an error code ERR_CUSTOMER_POTENTIAL_FRAUDSTER exposes information about the meaning of the error, while ERR_CUST_001 does not.
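One possible way to enforce this in a Micronaut service is a dedicated exception handler that maps internal exceptions to generic codes; the exception class, error body and code below are illustrative only.

```kotlin
import io.micronaut.http.HttpRequest
import io.micronaut.http.HttpResponse
import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Produces
import io.micronaut.http.server.exceptions.ExceptionHandler
import jakarta.inject.Singleton

// Hypothetical error body and internal exception.
data class ApiError(val errorCode: String, val message: String)

class PotentialFraudsterException(message: String) : RuntimeException(message)

@Produces
@Singleton
class PotentialFraudsterExceptionHandler :
    ExceptionHandler<PotentialFraudsterException, HttpResponse<ApiError>> {

    override fun handle(
        request: HttpRequest<*>,
        exception: PotentialFraudsterException,
    ): HttpResponse<ApiError> {
        // The generic code ERR_CUST_001 maps to a customer-safe message; the internal
        // exception details stay in the backend logs and are never returned to the client.
        return HttpResponse.status<ApiError>(HttpStatus.UNPROCESSABLE_ENTITY)
            .body(ApiError(errorCode = "ERR_CUST_001", message = "The request could not be processed."))
    }
}
```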

Generated client APIs not depending on Micronaut (WIP)

SAF-1281 - [NFR] Generated client APIs not depending on Micronaut Backlog

The currently generated client APIs are dependent on Micronaut and cause incompatibilities when consumed by a service that is dependent on another Micronaut version. In turn, this requires that any Micronaut upgrade be made across all services, to keep them all in sync. To remove this coupling, the client APIs will be generated using a pure Java generator, not dependent on Micronaut.

This upgrade will happen gradually, with a time period where clients will be generated in both types of implementation, thus reducing the risk of any bugs being introduced by consuming the new clients and allowing consumers to upgrade at their own pace.