SaFi Bank Space : (Deprecated) - Meiro to Kubernetes

NOTE: The project to migrate Meiro to Kubernetes was discontinued because it created too much complexity and too many maintainability issues. The solution was instead to provide an isolated project to the Meiro team, to be installed/supported/maintained by their internal engineers, while SaFi-Org provides only the infrastructure.

Task

Jira Tickets

Story

SM-3371 - Deploy Meiro Resolved

Research

SM-3372 - Research how Meiro can be deployed in our environment Resolved

Sandbox

SM-3373 - Deploy Meiro (Sandbox) Resolved

General Architecture

Meiro is composed of three major logical services, each of which consists of multiple microservices:

Integration (short name MI), Events (short name ME), and Business Explorer (short name CDP). CockroachDB and OpenSearch are decoupled from the platform and considered external systems.

Current Supported Installation

The Meiro team currently supports only the official installation using docker-compose. We tried the docker-compose installation during the Sandbox phase and documented the process here: Install Meiro Platform (Sandbox using docker-compose).

Docker compose files:

Proposed Kubernetes Architecture

Project: Common
Env: Production and minimal Staging

Shared Volume Complexity

Microservices within the logical services are somewhat stateful and share volumes and data between services and replicas. The example below shows the shared volume structure; the proposed solution is to use an NFS-style service, e.g. https://link.medium.com/DHlIndsQ7sb. No PoC for this one yet.
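One way the shared-volume requirement could be met in Kubernetes is an NFS-backed PersistentVolume mounted ReadWriteMany by all replicas. A minimal sketch, under the assumption that an NFS server is available (the server address, paths, sizes, and names below are placeholders, not actual Meiro or SaFi infrastructure):

```yaml
# Sketch only: nfs server address, capacity, and names are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: meiro-shared-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany        # multiple pods/replicas mount the same data
  nfs:
    server: nfs.example.internal
    path: /exports/meiro_filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: meiro-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
  volumeName: meiro-shared-pv
  storageClassName: ""
```

Each microservice Deployment would then mount `meiro-shared-pvc` where the compose files currently bind-mount `./meiro_filesystem`.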

Blockers:

Docker-Compose File Complexity

The provided docker-compose files are not exactly straightforward to convert to Kubernetes manifest YAML files. Some issues can be dealt with using extra parameters, but other issues cannot be solved without modifying the container image.

  1. Hardcoded hostname/service calls written in container entry point

    /app/me_producer # cat run.sh
    #!/bin/bash
    
    ...
    # Waiting for RabbitMQ initialization.
    until curl -f http://rmq:15672/ 1>/dev/null 2>&1; do
      >&2 echo "RabbitMQ is unavailable - sleeping"
      sleep 5
    done

    This cannot be achieved in our Kubernetes implementation, as we expose our applications using Services/Ingresses/domains.
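A hypothetical rework of the upstream `run.sh` (this would require a change from the Meiro side; the `RMQ_HOST`/`RMQ_PORT` variable names are our suggestion, not an existing Meiro setting) would take the endpoint from environment variables, keeping the current hardcoded value as the default for docker-compose compatibility:

```shell
#!/bin/bash
# Sketch: derive the RabbitMQ URL from env vars so Kubernetes can inject
# the Service DNS name; "rmq" stays the default for docker-compose.
RMQ_HOST="${RMQ_HOST:-rmq}"
RMQ_PORT="${RMQ_PORT:-15672}"
RMQ_URL="http://${RMQ_HOST}:${RMQ_PORT}/"
echo "will wait for RabbitMQ at ${RMQ_URL}"
# run.sh's existing wait loop would then use "${RMQ_URL}" instead of
# the hardcoded http://rmq:15672/:
#   until curl -f "${RMQ_URL}" 1>/dev/null 2>&1; do
#     >&2 echo "RabbitMQ is unavailable - sleeping"
#     sleep 5
#   done
```

In Kubernetes, `RMQ_HOST` could then be set to the RabbitMQ Service name in the Deployment's `env` section, with no image change per environment.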

  2. Application and Resource configuration are coupled.

        frontend:
            image: "images.meiro.solutions/meiro_cdp/frontend:pr747"
            volumes:
             - ./config.ini:/config.ini


    config.ini contains both application and resource configuration, and it is not clear how the app manages this; in Kubernetes these are decoupled and declared separately. It also contains hardcoded hostname calls:

      "api": {
        "rest_url": "http://me.meiro.local/api",
        "ws_url": "ws://me.meiro.local"
      },
      "docker_registry": {
        "url": "https://images.meiro.solutions/v2",
        "name": "images.meiro.solutions",
        "user": "",
        "password": ""
      },
      "traefik": {
        "custom_config": false,
        "deploy": {
          "cpus": 1,
          "memory": "2048M"
        },
        "user": "blabhal",
        "password": "blabhal"
      },

  3. Given the issues from 1 and 2, turning application configuration and parameters into variables and exposing them as container environment variables would be a more sensible approach.
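As a sketch of what that could look like (the ConfigMap name and keys are illustrative, not existing Meiro settings), the values currently in config.ini would move into a ConfigMap and be injected as container environment variables:

```yaml
# Hypothetical ConfigMap carrying what config.ini holds today.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cdp-frontend-config
data:
  REST_API_URL: "http://me.meiro.local/api"
  WS_URL: "ws://me.meiro.local"
---
# The container consumes the whole ConfigMap as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: cdp-frontend
spec:
  containers:
    - name: frontend
      image: images.meiro.solutions/meiro_cdp/frontend:pr747
      envFrom:
        - configMapRef:
            name: cdp-frontend-config
```

This keeps application configuration declarative and per-environment without touching the image, but it only works if the application actually reads these values from the environment, which is the feature being requested here.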

    # FEATURE REQUEST: items 4 and 5 have workarounds and are not really blockers to moving to K8s

  4. The path of config files such as config.ini should be configurable, and mounting files in the "/" directory should be avoided if possible. Kubernetes does not allow mounting in "/"; the workaround is to mount the file somewhere else and then create a symlink to the "/" directory. It would be better if the path could be set via an ENV variable instead.

        frontend:
            image: "images.meiro.solutions/meiro_cdp/frontend:pr747"
            volumes:
             - ./config.ini:/config.ini


    Workaround:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      ...
    spec:
      ...
      template:
        ...
        spec:
          containers:
          - env:
            ...
            image: images.meiro.solutions/meiro_integrations/frontend:2022.08.24.1
            name: mi-frontend
            command: ["/bin/sh", "-c"]
            args:
              - echo starting;
                ln -s /tmp/config.ini /config.ini &&
                sh /frontend/run.sh
            resources: {}
            volumeMounts:
              - mountPath: /tmp
                name: integration-config
          # the pod spec must also declare the volume backing the mount;
          # the ConfigMap name here is an assumption for illustration
          volumes:
            - name: integration-config
              configMap:
                name: integration-config

  5. Target ports of containers should be visible in the docker-compose file.
    A sample simplified docker-compose file from Meiro:

    version: '3'
    services:
        frontend:
            image: "images.meiro.solutions/meiro_cdp/frontend:pr747"
            volumes:
             - ./config.ini:/config.ini
             - ./meiro_filesystem/frontend:/var/www/img/client
            environment:
             - REACT_APP_MEIRO_LOCATION=DigitalOcean, Singapore
             - REACT_APP_MEIRO_VERSION=pr747
             - REACT_APP_CLIENT_NAME=yellowcanary
        api:
            image: "images.meiro.solutions/meiro_cdp/api:pr747"
            volumes:
             - ./config.ini:/config.ini
             - /var/run/docker.sock:/var/run/docker.sock
             - ./meiro_filesystem:/meiro_filesystem
             - ./export_destinations.json:/cdp_installator/cdp_installator/external_export_destinations.json
            environment:
             - WORKERS_COUNT=4
        websocket_api:
            image: "images.meiro.solutions/meiro_cdp/websocket_api:pr747"
            volumes:
              - ./config.ini:/config.ini
              - ./meiro_filesystem:/meiro_filesystem
            environment:
              - APP_NAME=CDP websocket api
              - REST_API_URL=api
        workers:
            image: "images.meiro.solutions/meiro_cdp/workers:pr747"
            volumes:
             - ./config.ini:/config.ini
             - /var/run/docker.sock:/var/run/docker.sock
             - ./meiro_filesystem:/meiro_filesystem
            environment:
             - RSA_KEY_FILES_PATH=/meiro_filesystem/rsa
             - API_URL=api
             - APP_NAME=CDP workers

No ports are exposed, so we need to shell into each container to check which ports it is listening on. It would be better if this were visible in the compose file so we can create an accurate mapping for target ports in Kubernetes Services.
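Until upstream documents the ports, they have to be discovered manually (e.g. `kubectl exec` into the pod and run `ss -tlnp`, or inspect `/proc/net/tcp`) and then mapped by hand. A sketch of the resulting Service, where the `targetPort` value is a placeholder to be replaced with the discovered port, not a confirmed Meiro port:

```yaml
# Hypothetical Service for the CDP frontend; targetPort 8080 is a guess
# and must be replaced with the port found inside the container.
apiVersion: v1
kind: Service
metadata:
  name: cdp-frontend
spec:
  selector:
    app: cdp-frontend
  ports:
    - name: http
      port: 80
      targetPort: 8080
```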