Argo CD is a declarative GitOps continuous delivery tool for Kubernetes.
GitOps Objectives
Repeatable – Apply changes the same way in every environment
Predictable – Comprehensive understanding of deployment impact
Auditable – Traceable changes to infrastructure and applications
Accessible – Changes only require a pull request
How it works
Argo CD is implemented as a Kubernetes CRD which continuously monitors running applications and compares the current, live state against the desired target state (as specified in the git repo). A deployed application whose live state deviates from its target state is considered out-of-sync. Argo CD reports & visualizes any deviation as well as provides mechanisms to automatically or manually sync the live state to the desired target state. Any modifications made to the desired target state in the git repo can be automatically applied and reflected in the specified target environments.
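For illustration, the desired target state is typically declared as an Argo CD Application resource pointing at a path in a Git repo. The names and repository URL below are hypothetical, not part of this installation:

```yaml
# Hypothetical Application: Argo CD keeps the cluster in sync with the
# manifests found under the given path in the Git repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app                 # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/repo.git   # hypothetical repo
    targetRevision: HEAD
    path: manifests/example-app
  destination:
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:        # automatically sync when live state drifts from Git
      prune: true
      selfHeal: true
```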
There are a couple of different ways of doing declarative continuous delivery. In the pull model, the CD system (Argo CD) continuously monitors the application’s state on the Kubernetes cluster and updates it to the target state defined in Git. In the push model, a user initiates the update from an external system, typically via a CI pipeline.
Argo CD supports both the pull and the push-based GitOps model to sync target environments with desired application state.
Argo CD follows a microservice architecture and consists of several components.
To learn more about GitOps and Argo CD, you can take a free course from Codefresh.
Installation guide
Requirements
GKE cluster
Ingress controller (we use Traefik)
Cert-manager (to generate TLS certificates for UI and CLI connections)
Argo CD installed via Helm (using Terraform provider)
argocd.tf (https://github.com/SafiBank/SaFiMono/blob/main/devops/terraform/tf-cicd/argocd.tf)
resource "kubernetes_namespace" "argocd" {
  provider = kubernetes.app_cluster
  metadata {
    name = "argocd"
  }
}

# Need to wait a few seconds when removing the resource to give helm
# time to finish cleaning up.
resource "time_sleep" "wait_30_seconds_argocd" {
  depends_on       = [kubernetes_namespace.argocd]
  destroy_duration = "30s"
}

resource "kubernetes_secret" "vault_address" {
  provider = kubernetes.app_cluster
  metadata {
    name      = "argocd-vault-replacer-credentials"
    namespace = "argocd"
  }
  data = {
    VAULT_ADDR = "http://vault.vault.svc.cluster.local:8200"
  }
}

resource "random_password" "argocd_github_api" {
  length  = 16
  special = true
}

resource "github_repository_webhook" "argocd" {
  repository = "SaFiMono"
  configuration {
    url          = "https://argocd.safibank.online/api/webhook"
    content_type = "json"
    insecure_ssl = false
    secret       = random_password.argocd_github_api.result
  }
  active = true
  events = ["push"]
}

resource "helm_release" "argo-cd" {
  provider   = helm.app_cluster
  name       = "argocd"
  namespace  = "argocd"
  lint       = true
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  version    = "5.13.9"
  depends_on = [time_sleep.wait_30_seconds_argocd]

  values = [
    file("argocd_values.yaml")
  ]

  set_sensitive {
    name  = "configs.cm.oidc\\.config"
    value = <<EOT
name: Okta
issuer: https://safibank.okta.com
clientID: ${okta_app_oauth.argocd.client_id}
clientSecret: ${okta_app_oauth.argocd.client_secret}
requestedScopes:
  - openid
  - profile
  - email
  - groups
requestedIDTokenClaims: {\"groups\": {\"essential\": true}}
EOT
  }

  set_sensitive {
    name  = "configs.secret.githubSecret"
    value = random_password.argocd_github_api.result
  }
}

resource "helm_release" "argocd-apps" {
  provider   = helm.app_cluster
  name       = "argocd-apps"
  namespace  = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argocd-apps"
  version    = "0.0.1"
  depends_on = [
    time_sleep.wait_30_seconds_argocd,
    helm_release.argo-cd
  ]

  values = [
    file("argocd_apps_values.yaml")
  ]
}
argocd_values.yaml (https://github.com/SafiBank/SaFiMono/blob/main/devops/terraform/tf-cicd/argocd_values.yaml)
global:
  logging:
    format: json
    level: warn
  securityContext:
    runAsUser: 999
    runAsGroup: 999
    fsGroup: 999

configs:
  cm:
    url: "https://argocd.safibank.online"
    exec.enabled: true
    statusbadge.enabled: true
    accounts.github: apiKey, login
    configManagementPlugins: |-
      - name: argocd-vault-replacer
        generate:
          command: ["argocd-vault-replacer"]
      - name: kustomize-argocd-vault-replacer
        generate:
          command: ["sh", "-c"]
          args: ["kustomize build . | argocd-vault-replacer"]
      - name: helm-argocd-vault-replacer
        init:
          command: ["/bin/sh", "-c"]
          args: ["helm dependency build"]
        generate:
          command: [sh, -c]
          args: ["helm template -n $ARGOCD_APP_NAMESPACE $ARGOCD_APP_NAME . | argocd-vault-replacer"]
      - name: argocd-lovely-plugin
        generate:
          command: ["argocd-lovely-plugin"]
  params:
    controller.status.processors: 50
    controller.operation.processors: 25
    controller.repo.server.timeout.seconds: 180
    server.insecure: true
    reposerver.parallelism.limit: 10
    redis.compression: gzip
  rbac:
    policy.csv: |
      g, argocd-viewer, role:developers
      g, argocd-admin, role:devops
      p, role:developers, applications, get, env-dev-apps/*, allow
      p, role:developers, applications, get, env-stage-apps/*, allow
      p, role:developers, applications, get, env-brave-apps/*, allow
      p, role:devops, applications, *, */*, allow

dex:
  enabled: false

redis-ha:
  enabled: true
  exporter:
    enabled: false
  priorityClassName: important
  redis:
    config:
      min-replicas-to-write: 0

controller:
  replicas: 3
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  priorityClassName: important
  resources:
    requests:
      cpu: 500m
      memory: 1024Mi

server:
  replicas: 2
  autoscaling:
    enabled: true
    minReplicas: 2
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300
        policies:
          - type: Pods
            value: 1
            periodSeconds: 180
      scaleUp:
        stabilizationWindowSeconds: 300
        policies:
          - type: Pods
            value: 2
            periodSeconds: 60
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  resources:
    requests:
      cpu: 50m
      memory: 256Mi
  priorityClassName: important

repoServer:
  replicas: 2
  autoscaling:
    enabled: true
    minReplicas: 2
    behavior:
      scaleDown:
        stabilizationWindowSeconds: 300
        policies:
          - type: Pods
            value: 1
            periodSeconds: 180
      scaleUp:
        stabilizationWindowSeconds: 300
        policies:
          - type: Pods
            value: 2
            periodSeconds: 60
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  priorityClassName: important
  resources:
    requests:
      cpu: 500m
      memory: 1024Mi
  env:
    - name: ARGOCD_ENV_LOVELY_PLUGINS
      value: argocd-vault-replacer
  envFrom:
    - secretRef:
        name: argocd-vault-replacer-credentials
  initContainers:
    - name: argocd-vault-replacer-install
      image: ghcr.io/crumbhole/argocd-vault-replacer
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools-vault
    - name: argocd-lovely-plugin-download
      image: ghcr.io/crumbhole/argocd-lovely-plugin:stable
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools-lovely
  # -- Additional volumeMounts to the repo server main container
  volumeMounts:
    - name: custom-tools-vault
      mountPath: /usr/local/bin/argocd-vault-replacer
      subPath: argocd-vault-replacer
    - name: custom-tools-lovely
      mountPath: /usr/local/bin/argocd-lovely-plugin
      subPath: argocd-lovely-plugin
  # -- Additional volumes to the repo server pod
  volumes:
    - name: custom-tools-vault
      emptyDir: {}
    - name: custom-tools-lovely
      emptyDir: {}

notifications:
  enabled: false

applicationSet:
  replicaCount: 2
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
  resources:
    requests:
      cpu: 5m
      memory: 64Mi
To connect to Argo CD, you can port-forward to the server service:
kubectl port-forward services/argocd-server 8000:80 -n argocd
Alternatively, if you have a DNS record, you can set up an ingress.
Cert-manager definition to issue a valid TLS certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
spec:
  commonName: argocd.safibank.online
  dnsNames:
    - argocd.safibank.online
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  secretName: argocd-tls
IngressRoute definition for both HTTP (UI connection) and gRPC (CLI connection):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`argocd.safibank.online`)
      services:
        - name: argo-cd-argocd-server
          port: 80
    - kind: Rule
      match: >-
        Host(`argocd.safibank.online`) &&
        Headers(`Content-Type`, `application/grpc`)
      priority: 11
      services:
        - name: argo-cd-argocd-server
          port: 80
          scheme: h2c
  tls:
    secretName: argocd-tls
Argo CD generates an admin password if one is not specified during installation. To get the password from the secret:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Custom installation configuration
Projects and Applications
https://argo-cd.readthedocs.io/en/stable/user-guide/projects/
Projects provide a logical grouping of applications, which is useful when Argo CD is used by multiple teams. Projects provide the following features:
restrict what may be deployed (trusted Git source repositories)
restrict where apps may be deployed to (destination clusters and namespaces)
restrict what kinds of objects may or may not be deployed (e.g. RBAC, CRDs, DaemonSets, NetworkPolicy etc...)
defining project roles to provide application RBAC (bound to OIDC groups and/or JWT tokens)
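A minimal AppProject sketch showing these restrictions; the project name and allowed destinations below are illustrative, not this installation's actual definitions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: env-dev-apps                 # illustrative project name
  namespace: argocd
spec:
  sourceRepos:                       # trusted Git source repositories
    - https://github.com/SafiBank/SaFiMono.git
  destinations:                      # allowed destination clusters/namespaces
    - server: https://kubernetes.default.svc
      namespace: "*"
  clusterResourceWhitelist:          # kinds of cluster-scoped objects allowed
    - group: "*"
      kind: "*"
```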
In our installation, we set up an additional project for deploying applications, so we can later restrict developers' access to Argo CD.
We also deploy initial applications (app-of-apps pattern) that share their project's name.
Applications and projects are deployed with the argocd-apps Helm chart.
argocd_apps_values.yaml (https://github.com/SafiBank/SaFiMono/blob/main/devops/terraform/tf-cicd/argocd_apps_values.yaml)
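For illustration, a minimal sketch of what argocd_apps_values.yaml might contain, assuming the chart's top-level projects and applications value lists; all names and paths below are illustrative, not the actual file contents:

```yaml
projects:
  - name: env-dev-apps               # illustrative
    namespace: argocd
    sourceRepos:
      - https://github.com/SafiBank/SaFiMono.git
    destinations:
      - server: https://kubernetes.default.svc
        namespace: "*"

applications:
  - name: env-dev-apps               # app-of-apps, named after its project
    namespace: argocd
    project: env-dev-apps
    source:
      repoURL: https://github.com/SafiBank/SaFiMono.git
      targetRevision: HEAD
      path: devops/argocd/env-dev    # illustrative path
    destination:
      server: https://kubernetes.default.svc
      namespace: argocd
    syncPolicy:
      automated: {}
```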
RBAC Configuration
The RBAC feature enables restriction of access to Argo CD resources. Argo CD does not have its own user management system and has only one built-in user, admin. The admin user is a superuser with unrestricted access to the system. RBAC requires an SSO configuration or one or more local users. Once SSO or local users are configured, additional RBAC roles can be defined, and SSO groups or local users can then be mapped to roles.
In our installation, we set up a new role, developers, assigned to the group argocd-viewer (read access to all applications deployed in the environment projects), and a role devops assigned to the group argocd-admin (full access to all projects and applications).
rbac:
  policy.csv: |
    g, argocd-viewer, role:developers
    g, argocd-admin, role:devops
    p, role:developers, applications, get, env-dev-apps/*, allow
    p, role:developers, applications, get, env-stage-apps/*, allow
    p, role:developers, applications, get, env-brave-apps/*, allow
    p, role:devops, applications, *, */*, allow
OIDC provider Okta (SSO)
Once installed, Argo CD has one built-in admin user with full access to the system. It is recommended to use the admin user only for initial configuration and then switch to local users or configure SSO integration.
In our installation, we connect Okta (SSO) to the in-cluster Argo CD by adding the following configuration:
set_sensitive {
  name  = "configs.cm.oidc\\.config"
  value = <<EOT
name: Okta
issuer: https://safibank.okta.com
clientID: ${okta_app_oauth.argocd.client_id}
clientSecret: ${okta_app_oauth.argocd.client_secret}
requestedScopes:
  - openid
  - profile
  - email
  - groups
requestedIDTokenClaims: {\"groups\": {\"essential\": true}}
EOT
}
Okta configuration is done in Terraform: creation of the OAuth app, and creation of the groups argocd-admin and argocd-viewer, which are later mapped to roles by the Argo CD RBAC configuration.
Git Webhook Configuration
Argo CD polls Git repositories every three minutes to detect changes to the manifests. To eliminate this delay from polling, the API server can be configured to receive webhook events. Argo CD supports Git webhook notifications from GitHub, GitLab, Bitbucket, Bitbucket Server and Gogs.
resource "random_password" "argocd_github_api" {
  length  = 16
  special = true
}

resource "github_repository_webhook" "argocd" {
  repository = "SaFiMono"
  configuration {
    url          = "https://argocd.safibank.online/api/webhook"
    content_type = "json"
    insecure_ssl = false
    secret       = random_password.argocd_github_api.result
  }
  active = true
  events = ["push"]
}
To lower the number of applications refreshed on every commit, we annotate applications so they refresh only when files under their manifest paths change. https://argo-cd.readthedocs.io/en/stable/operator-manual/high_availability/#webhook-and-manifest-paths-annotation
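The annotation in question is argocd.argoproj.io/manifest-generate-paths; a minimal sketch, with a hypothetical application name ("." is resolved relative to the application's spec.source.path):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app                  # hypothetical
  namespace: argocd
  annotations:
    # Webhook pushes that touch only files outside spec.source.path
    # no longer trigger a refresh of this application.
    argocd.argoproj.io/manifest-generate-paths: .
```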
Secret management
Argo CD is un-opinionated about how secrets are managed. There are many ways to do it, and there is no one-size-fits-all solution.
In our case we use argocd-vault-replacer.
Requirements
Installed Vault
Configured Kubernetes authentication for repo-server Service Account
argocd-vault-replacer is a plugin that is installed via an init container. For the plugin to work, you need to specify the Vault address (mandatory), for example: http://vault.vault.svc.cluster.local:8200
It is specified in argocd.tf:
resource "kubernetes_secret" "vault_address" {
  provider = kubernetes.app_cluster
  metadata {
    name      = "argocd-vault-replacer-credentials"
    namespace = "argocd"
  }
  data = {
    VAULT_ADDR = "http://vault.vault.svc.cluster.local:8200"
  }
}
To install the plugin, we specify in the Argo CD values file:
configs:
  cm:
    configManagementPlugins: |-
      - name: argocd-vault-replacer
        generate:
          command: ["argocd-vault-replacer"]
      - name: kustomize-argocd-vault-replacer
        generate:
          command: ["sh", "-c"]
          args: ["kustomize build . | argocd-vault-replacer"]
      - name: helm-argocd-vault-replacer
        init:
          command: ["/bin/sh", "-c"]
          args: ["helm dependency build"]
        generate:
          command: [sh, -c]
          args: ["helm template -n $ARGOCD_APP_NAMESPACE $ARGOCD_APP_NAME . | argocd-vault-replacer"]
      - name: argocd-lovely-plugin
        generate:
          command: ["argocd-lovely-plugin"]

repoServer:
  envFrom:
    - secretRef:
        name: argocd-vault-replacer-credentials
  initContainers:
    - name: argocd-vault-replacer-install
      image: ghcr.io/crumbhole/argocd-vault-replacer
      imagePullPolicy: Always
      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools-vault
  volumeMounts:
    - name: custom-tools-vault
      mountPath: /usr/local/bin/argocd-vault-replacer
      subPath: argocd-vault-replacer
  volumes:
    - name: custom-tools-vault
      emptyDir: {}
Examples of how to work with secrets: https://github.com/crumbhole/argocd-vault-replacer/tree/main/examples
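As a sketch of the idea (the examples link above is the authoritative reference for the placeholder syntax), the plugin scans rendered manifests for Vault placeholders and substitutes the referenced values; the secret path and key below are hypothetical:

```yaml
# Sketch based on the argocd-vault-replacer examples. The plugin replaces
# the <vault:...~key> placeholder with the value stored in Vault at that
# path before the manifest is applied to the cluster.
apiVersion: v1
kind: Secret
metadata:
  name: example-secret               # hypothetical
stringData:
  password: <vault:secret/data/example-app~password>
```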
Connect to private Git repository/Helm registry
To connect to a private Git repository, you can use SSH-based authentication. https://argo-cd.readthedocs.io/en/stable/user-guide/private-repositories/#ssh-private-key-credential
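Besides the CLI, a private Git repository can be registered declaratively with a Secret carrying the repository secret-type label; the secret name and repository URL below are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-repo                 # illustrative
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:example/private-repo.git   # illustrative
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```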
To connect to a private Helm registry, you can use the CLI or the UI:
argocd repo add https://charts.helm.sh/stable --type helm --name stable --username test --password test
Connect external clusters to Argo CD
To manage external clusters, Argo CD stores the credentials of the external cluster as a Kubernetes Secret in the argocd namespace. This secret contains the K8s API bearer token associated with the argocd-manager ServiceAccount created during argocd cluster add, along with connection options to that API server (TLS configuration/certs, AWS role-arn, etc.). The information is used to reconstruct a REST config and kubeconfig to the cluster used by Argo CD services.
To add an external cluster, you must have the cluster's context in your kubeconfig.
Then you can add it with:
argocd cluster add CONTEXTNAME
On the target cluster, a ClusterRole argocd-manager-role, a ClusterRoleBinding argocd-manager-role-binding, and a ServiceAccount argocd-manager will be created.
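Under the hood, argocd cluster add stores the credentials as a Secret with the cluster secret-type label in the argocd namespace; all values below are illustrative placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster-secret       # illustrative
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: example-cluster              # illustrative
  server: https://1.2.3.4            # illustrative API server address
  config: |
    {
      "bearerToken": "<token of the argocd-manager ServiceAccount>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64-encoded CA certificate>"
      }
    }
```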