User Authentication / Authorization

User authentication is based on PKI (Public Key Infrastructure).

The main idea is to separate authentication and authorization in time:

  • In the access token case, authentication and part of the authorization happen at the same time (in the same auth sequence): the user authenticates themselves and then receives an access token representing their rough rights (manifested as claims in the token). Note that these rights are broad and most likely represent just a role, a type of user, etc.

  • In the PKI case only authentication is done up front (via the digital signature); all authorization policies are checked when the request is executed.

Note: in order to avoid replay attacks, the request data should include a (precise enough) timestamp, sequence number, request ID (or a challenge in the online case).
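As a minimal sketch of such a replay-resistant request (all field names here are illustrative assumptions, not part of any spec), using Node's built-in crypto for ECDSA P-256:

```typescript
import { generateKeyPairSync, createSign, createVerify, randomUUID } from "node:crypto";

interface SignedRequest {
  keyId: string;        // lets the verifier look up the user's public key
  timestampMs: number;  // must be fresh at the verifier to block replays
  requestId: string;    // unique per request; the verifier rejects duplicates
  body: string;         // the actual request parameters, canonically encoded
  signature: string;    // base64 ECDSA signature over the fields above
}

// Stand-in for the key pair created on-device (in the real flow it lives in
// the device's secure store and never leaves it).
const { privateKey, publicKey } = generateKeyPairSync("ec", { namedCurve: "P-256" });

function signRequest(keyId: string, body: string): SignedRequest {
  const timestampMs = Date.now();
  const requestId = randomUUID();
  const payload = `${keyId}|${timestampMs}|${requestId}|${body}`;
  const signature = createSign("SHA256").update(payload).sign(privateKey, "base64");
  return { keyId, timestampMs, requestId, body, signature };
}

function verifyRequest(req: SignedRequest, maxSkewMs = 60_000): boolean {
  // Stale timestamp -> possibly a replayed message, drop it.
  if (Math.abs(Date.now() - req.timestampMs) > maxSkewMs) return false;
  const payload = `${req.keyId}|${req.timestampMs}|${req.requestId}|${req.body}`;
  return createVerify("SHA256").update(payload).verify(publicKey, req.signature, "base64");
}
```

A real verifier would also keep a window of recently seen request IDs to reject exact duplicates inside the freshness window.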

A simplified request flow is the following:

Properties

  • PKI makes it possible that no secret user credentials are ever sent to any server (only the public keys are stored in the “auth server”), making server-side database-harvesting attacks essentially worthless.

  • If the client has proper biometrics-based authentication and secure storage, it allows a highly secure, purely passwordless implementation. (The lack of biometric auth and/or a secure store requires a slightly modified version and a proper risk assessment. See Device Capabilities and Risk Assessment.)

  • A passcode is available as a fallback or even as a primary unlock mechanism, to handle non-biometric devices (or a temporary biometrics failure).

  • This solution is inherently MFA (2FA): the first factor is possession of the private key (in the secure store on the device), the second is the unlock credential.

Attestation

Attestation can be used to receive data about the hardware that created the key pair, and also about the key itself. This makes it possible to assess the trust in the key and its storage and to refuse unreliable and/or weak devices. Attestation is part of the WebAuthn standard and is supported both by platforms (e.g. Android key attestation) and on the server side (e.g. Yubico’s server-side attestation check library). There is also a document about FIDO’s Certified Authenticator Levels.

Signature Check

Signature check is required at the execution point (BE endpoint) and may also be needed at the firewall / API gateway to filter out replayed / invalid requests (called the “perimeter guard (filter)” in the diagram below). To make this possible, there are two options:

  • The user’s public key can be signed by our auth server, so the resulting cert can be used to validate the request offline (i.e. without further requests). This is basically the same mechanism used in the access token check; the difference is that the signature is not checked directly against the (almost static) auth server public key: first the cert is checked, then the verified public key is used to check the signed request. TODO: check it with CloudFlare!

    • Note that although this cert transfer seems verbose, the same thing happens during TLS channel setup, so it is not that bad.

    • We need to handle cert revocation (perhaps in push mode: whenever the revocation list is updated, it is pushed to the perimeter guard, if the guard can store it somehow).

  • The other option (the one depicted in the diagram below) is to fetch the public key from the auth server (via the key ID included in the request) and check the signature with it. TODO: check it with CloudFlare!
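The offline-cert variant above can be sketched as follows (names and the "cert" shape are illustrative assumptions; a real deployment would use X.509): the auth server signs the user's public key once, and the perimeter guard verifies that cert against the static, cached auth server key before trusting the embedded user key.

```typescript
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

// Stand-ins for the two long-lived key pairs in the design.
const authServer = generateKeyPairSync("ec", { namedCurve: "P-256" });
const user = generateKeyPairSync("ec", { namedCurve: "P-256" });

// Auth server issues a minimal "cert": the user's public key plus the auth
// server's signature over it.
const userPubPem = user.publicKey.export({ type: "spki", format: "pem" }).toString();
const certSig = createSign("SHA256").update(userPubPem).sign(authServer.privateKey, "base64");

// Perimeter guard: validates requests with no round-trip to the auth server.
function guardCheck(requestPayload: string, requestSig: string): boolean {
  // 1. Check the cert with the (almost static) auth server public key.
  const certOk = createVerify("SHA256")
    .update(userPubPem)
    .verify(authServer.publicKey, certSig, "base64");
  if (certOk === false) return false;
  // 2. Check the request with the now-verified user public key.
  return createVerify("SHA256")
    .update(requestPayload)
    .verify(userPubPem, requestSig, "base64");
}
```

The same two-step structure is what makes revocation necessary: step 1 stays valid until the cert itself is invalidated.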

Device Capabilities and Risk Assessment

If the device does not support secure biometrics, we can fall back to using the secure store only (with passcode unlock), which should be protected by the OS/HW against brute-force attacks (i.e. trying all possible passcodes). This is secure enough (note that even biometric phones allow typing in a passcode), just slightly weaker because the passcode can be observed while being typed or simply guessed (many users use a significant date as their passcode, for example). We still have 2FA.

If no secure store is available, we can fall back to an encrypted file, but this has an inherent problem: if the encrypted file gets stolen, the attacker can brute-force it. (The key can obviously be revoked, so this is only a problem if the theft happens without the user’s knowledge.) Allowing this use case is therefore more of a business decision: do we allow such low-security phones? Note that this is similar to allowing rooted / cracked-OS phones: in both cases there are very few guarantees about security. We may disallow certain functionality in this case, or even fall back to the original access token auth, but that does not help much if the user credentials (password) can also be stolen due to the low security guarantees of the device / OS. Some kind of phone ID could also be incorporated into the passcode used to encrypt the key store; then stealing the encrypted file and using it on another device is harder, because this phone ID would also have to be mimicked.
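A minimal sketch of the phone-ID idea (the key-derivation input, the device-ID source, and all names are illustrative assumptions): derive the file-encryption key from the passcode plus a device identifier, so the stolen file alone is not enough to mount a passcode-only brute force.

```typescript
import { scryptSync, createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function deriveKey(passcode: string, deviceId: string, salt: Buffer): Buffer {
  // scrypt is deliberately slow/memory-hard, raising offline brute-force cost.
  return scryptSync(passcode + "|" + deviceId, salt, 32);
}

function encryptKeyFile(plaintext: Buffer, passcode: string, deviceId: string) {
  const salt = randomBytes(16);
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", deriveKey(passcode, deviceId, salt), iv);
  const data = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { salt, iv, data, tag: cipher.getAuthTag() };
}

function decryptKeyFile(
  file: ReturnType<typeof encryptKeyFile>,
  passcode: string,
  deviceId: string,
): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", deriveKey(passcode, deviceId, file.salt), file.iv);
  decipher.setAuthTag(file.tag);
  // GCM authentication fails (throws) on a wrong passcode OR wrong device ID.
  return Buffer.concat([decipher.update(file.data), decipher.final()]);
}
```

Note that this only raises the bar: an attacker who also learns the device ID is back to brute-forcing the passcode alone, which is why the secure-store variants remain preferable.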

The user should understand that it is their own money they put at risk by using a low-quality phone.

Checking device capabilities may be based on attestation (see above) or on OS-level functionality (most probably in a challenge-response way to avoid replay attacks). TODO: research phone capability check possibilities, and also the capabilities of the phone models covering the majority of the Philippine market.

Comparison with Access Token Auth

Authentication vs authorization in OAuth 2.0 context: https://oauth.net/articles/authentication/

Let’s add the classic access token auth variant to the above diagram: the black modules are the access token variant; the app, the user policy service, and the MS are common; the other grey modules are the PKI variant.

Comparison:

  • App

    • Onboarding

      • There is no difference (WebAuthn registration is the same).

    • Login

      • PKI case: the WebAuthn login is the same, no data is returned.

      • Access token case: the WebAuthn login is the same, access token is returned.

    • Request

      • PKI case: sign the request parameters (add key ID into it) and call the endpoint.

      • Access token case: call the endpoint with the request parameters together with the access token.

  • Perimeter guard (signature check → drop msg if failed)

    • PKI case: fetch user public key based on key ID in the message (different for requests made by different users), check signature.

    • Access token case: use auth server public key (same for all requests).

  • Auth server

    • Onboarding

      • There is no difference (WebAuthn registration is the same).

    • Login

      • PKI case: the WebAuthn login is the same, no data is returned.

      • Access token case: the WebAuthn login is the same, access token is returned.

    • Request

      • PKI case: endpoint for fetching the user public key based on key ID.

      • Access token case: endpoint for fetching / receiving the auth server public key.

  • MS

    • PKI case: check request data’s authenticity by fetching the user’s key from the Auth server.

    • Access token case: check the access token with the Auth server’s public key.

Traceability and Auditing

Since the request data is signed by the user, we are not just relying on the bank’s statement: the bank can prove that the user (and only the user) could have generated the request. (Since the keys are fairly long-lived, if the user denies one request, many other requests are also invalidated, which makes it hard for a fraudulent user to press false claims.)

Comparison with WebAuthn

WebAuthn (as the name suggests) is an authentication protocol, so it is independent of OAuth (and they work together perfectly: https://oauth.net/webauthn/). Since the main idea is to separate authentication and authorization in time, we only need WebAuthn for registering the key (and for logging in, but that part is optional: since we do not use a token, there is no need to obtain one at login).

More details: https://developer.chrome.com/blog/webauthn/

Images are from https://webauthn.me/introduction.

Registration

Our implementation’s key registration procedure is the same.

To make the process even more seamless, we utilize resident credentials when the authenticator supports them (https://auth0.com/blog/a-look-at-webauthn-resident-credentials/).

Login

If we implement the login procedure, it is the same; the only difference is that after login, no access token is returned.

Resources

Use Cases

Parallel Logged in Users

Solution A) Multiple keys per user are not supported, so parallel logins are impossible.

Solution B) We perform a Login request upon opening the app (e.g. signing a login challenge) and send a push notification to the other logged-in apps forcing them to log out automatically. Meanwhile, we only allow requests signed with the logged-in key.

We’ll implement Solution B, but Solution A is OK for the MVP.
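The server-side rule of Solution B can be sketched like this (the data structure and names are illustrative assumptions): the auth server tracks one active key per user, a login with a different key supersedes the previous one (triggering the "log out" push to its app), and requests signed with a superseded key are rejected.

```typescript
// userId -> keyId of the most recent login
const activeKey = new Map<string, string>();

// Returns the superseded key ID, if any, so the caller can push a
// forced-logout notification to the app holding that key.
function onLogin(userId: string, keyId: string): string | undefined {
  const previous = activeKey.get(userId);
  activeKey.set(userId, keyId);
  return previous !== undefined && previous !== keyId ? previous : undefined;
}

// Perimeter guard / BE check: only the currently logged-in key may act.
function isRequestAllowed(userId: string, keyId: string): boolean {
  return activeKey.get(userId) === keyId;
}
```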

Re-authentication (Step-up & Transactional Auth)

This is all about making sure that the user is still there and the consent is given to the request.

The reason for the re-authentication can be one of the following:

  1. The original (last) authentication was not strong enough for the current request and we want to re-authenticate with another (stronger) factor.

  2. The strong authentication was not done recently and we want to make sure the user is still present (so we want to make sure the request was made by the user).

  3. Risk assessment detects anomalies, so we want to re-authenticate with another factor (most probably AAI face comparison).

In the biometrics case the authentication is strong enough, so only case #2 calls for re-authentication. TODO: research phone platform support for forcing re-authentication (or for allowing use of the keys without re-auth, depending on which is the default), e.g. https://developer.android.com/training/articles/keystore#UserAuthentication

In the passcode case, both #1 and #2 may apply, so we may want to apply a second factor.

AAI Face Comparison

Summary: send the AAI check ID in the request for step-up or transactional auth; the BE verifies the result associated with the ID. For transactional auth this is exactly the procedure that was planned with the authorization server. For the step-up case, the difference is that we need to reuse the ID and invalidate it after some time.

Offline Mode

Offline → Online: the app needs to sign and queue requests and send them when the network is restored. Most probably offline transactions will need an expiration time after which their execution is denied.
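A minimal sketch of that queue (field names and the TTL mechanism are illustrative assumptions): each request signed while offline carries an expiry, and anything past its deadline is dropped rather than sent when the network returns.

```typescript
interface QueuedRequest {
  body: string;       // the signed request payload
  signedAtMs: number; // when the user authorized it, offline
  expiresAtMs: number;// after this, execution must be denied
}

const queue: QueuedRequest[] = [];

function enqueueOffline(body: string, ttlMs: number, now = Date.now()): void {
  queue.push({ body, signedAtMs: now, expiresAtMs: now + ttlMs });
}

// Called when connectivity is restored. Expired items are returned separately
// so the app can surface them to the user instead of silently losing them.
function flushQueue(now = Date.now()): { sent: QueuedRequest[]; expired: QueuedRequest[] } {
  const sent = queue.filter((r) => r.expiresAtMs > now);
  const expired = queue.filter((r) => r.expiresAtMs <= now);
  queue.length = 0;
  return { sent, expired };
}
```

The BE must enforce the same expiry check independently, since a client-side check alone can be bypassed by a replayed message.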

Forgotten Passcode

Since the passcode cannot be recovered, the only option is registering a new key. The onboarding process can be repeated, possibly with some changes once the system realises the user already has an account (so this results in the same level of user identification as the original onboarding).

Phone number change

If only the phone number needs to be changed, it is out of scope for IAM, since IAM neither stores the phone number nor uses it for auth purposes. The request itself can of course be initiated by the app (sending the new number as a parameter).

New Device

A simple solution is to redo the onboarding process.

If the user still has their old device, we can implement a QR-code / swarm point style authentication: the new phone displays the QR/swarm code (containing the new public key), and the old phone scans it, signs it, and sends it in. So basically this is just a “change pubkey” request. (After this, the SMS validation should also be passed, and maybe the face comparison as well.)
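The “change pubkey” request can be sketched as follows (the payload format and names are illustrative assumptions): the old device signs the new device’s public key with the still-registered old key, and the auth server accepts the new key only if that signature checks out against the key currently on file.

```typescript
import { generateKeyPairSync, createSign, createVerify } from "node:crypto";

// Stand-ins for the key pairs held by the two devices.
const oldDevice = generateKeyPairSync("ec", { namedCurve: "P-256" });
const newDevice = generateKeyPairSync("ec", { namedCurve: "P-256" });

// New device: exports its public key (this is what the QR code would carry).
const newPubPem = newDevice.publicKey.export({ type: "spki", format: "pem" }).toString();

// Old device: scans the QR code and signs a "change pubkey" request over it.
const changeReqSig = createSign("SHA256")
  .update("change-pubkey|" + newPubPem)
  .sign(oldDevice.privateKey, "base64");

// Auth server: accept the new key only if the request is signed by the key
// currently registered for this user (the old device's key).
function acceptKeyChange(pubPem: string, sig: string): boolean {
  return createVerify("SHA256")
    .update("change-pubkey|" + pubPem)
    .verify(oldDevice.publicKey, sig, "base64");
}
```

The subsequent SMS validation and face comparison mentioned above act as additional factors on top of this signature, not as replacements for it.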