We run several dispatchers, each of which creates the specific resources needed either to access our existing resources or to create new GCP Projects and TFC Workspaces.

Dispatcher

This dispatcher was created manually as a starting point of our TFC infrastructure.

Its purpose is to create additional TFC Workspaces and GCP Projects. It also distributes the necessary variables to other workspaces and maintains access to them.

Creation of new Workspaces and GCP Projects is documented in Create New GCP & TFC projects via dispatcher.

Variables are added manually to the tf-dispatcher workspace and their distribution is managed by 20_environments_variables.tf.

Example Code:

locals {
  safi_env_new_projects = yamldecode(file("${path.module}/_files/new_projects.yaml"))["projects"]
  common_vars = { for env, environment in local.safi_environments : env => [
    {
      key         = "env_name"
      value       = env
      category    = "terraform"
      sensitive   = false
      description = local.managed_by_terraform
    },
    {
      key         = "github_token"
      value       = var.github_token
      category    = "terraform"
      sensitive   = true
      description = local.managed_by_terraform
    },
    {
      key         = "tfc_agent_token"
      value       = tfe_agent_token.this[env].token
      category    = "terraform"
      sensitive   = true
      description = local.managed_by_terraform
    },
    {
      key         = "vault_address"
      value       = var.vault_address
      category    = "terraform"
      sensitive   = false
      description = "CI/CD vault address"
    },
    {
      key         = "vault_token"
      value       = var.vault_token
      category    = "terraform"
      sensitive   = true
      description = "CI/CD vault token"
    }]
  }
}


The keys defined in the file above are then combined with new_projects.yaml to decide which variables each workspace receives.

Example of a project structure:

projects:
  vpn:
    subprojects:
      - infra
      - config
    variables:
      infra: [argo, shared_vpc_creds]
      config: [slack]
    tfc:
      enabled: true
    gcp:
      enabled: true
      apis:
        - iam.googleapis.com
        - dns.googleapis.com
        - compute.googleapis.com
        - servicenetworking.googleapis.com
        - cloudresourcemanager.googleapis.com
        - container.googleapis.com
        - iap.googleapis.com
        - monitoring.googleapis.com
        - cloudkms.googleapis.com
        - logging.googleapis.com
      sa_roles:
        - roles/owner
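Given the structure above, the variable keys for one subproject can be looked up directly from the decoded YAML. A minimal sketch (`vpn_infra_var_keys` is a hypothetical local used only for illustration):

```hcl
locals {
  # For the vpn project's infra subproject this yields ["argo", "shared_vpc_creds"].
  vpn_infra_var_keys = local.safi_env_new_projects["vpn"]["variables"]["infra"]
}
```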


The variables chosen in the previous example are then applied in the environment terraforms; for example, in 20_new_environments.tf we have this code:

workspace_variables = concat(
    local.common_vars[each.value.env],
    local.safibank_online_var,
    flatten([for k in local.safi_env_new_projects[each.value.project]["variables"][each.value.subproject] : local.specific_vars[each.value.env][k]]),

    flatten([for item in [for k, v in local.safi_env_new_projects : k if v.gcp.enabled] :
      [{
        key         = format("google_credentials_%s", item)
        value       = sensitive(replace(base64decode(module.env_gcp_projects_new["${each.value.env}-${item}"].service_account_private_key[0]), "\n", ""))
        category    = "terraform"
        sensitive   = true
        description = local.managed_by_terraform
        },
        {
          key         = format("google_project_id_%s", item)
          value       = module.env_gcp_projects_new["${each.value.env}-${item}"].project_name
          category    = "terraform"
          sensitive   = false
          description = local.managed_by_terraform
    }] if item == each.value.project]),

    ## Generate variables for Google ServiceAccounts
    flatten([for item in [for k, v in local.safi_env_new_projects : k if v.gcp.enabled] :
      [{
        key         = format("google_sa_%s_name", item)
        value       = module.env_gcp_projects_new["${each.value.env}-${item}"].service_account[0].name
        category    = "terraform"
        sensitive   = false
        description = local.managed_by_terraform
        },
        {
          key         = format("google_sa_%s_email", item)
          value       = module.env_gcp_projects_new["${each.value.env}-${item}"].service_account[0].email
          category    = "terraform"
          sensitive   = false
          description = local.managed_by_terraform
    }] if item == each.value.project])

  )
}


Terraform workspace access is managed in code for all workspaces. For tf-dispatcher we added the workspace ID manually; all other workspaces have their IDs read from the Terraform state.

locals {
  ## List of all TFC Workspaces. Because they are not created in a loop,
  ## we need one variable where all IDs are grouped together.
  ## At this time we use it to grant read-only access to all TFC members.
  ## They are merged explicitly (instead of via a for expression) so it is
  ## clear that all of them are included.
  safi_tfc_workspaces = merge(
    { dispatcher = { workspace_id = "ws-ypYfj41BteF3NJwk" } }, # dispatcher
    module.environment_tfe_workspace,
    module.data_environment_tfe_workspace,
    { cicd = { workspace_id = module.cicd_tfe_workspace.workspace_id } },
    { repos = { workspace_id = module.repos_tfe_workspace.workspace_id } },
    module.dns_tfe_workspace,
    module.sandbox_tfe_workspace,
    module.environment_tfe_workspace_new,
    module.environment_tfe_dispatcher_workspaces
  )
}
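The comment above mentions granting read-only access to all TFC members. A minimal sketch of such a rule, iterating over the merged map (the actual resource lives in tfc_users.tf and may differ):

```hcl
resource "tfe_team_access" "all_readonly" {
  for_each = local.safi_tfc_workspaces

  access       = "read"
  team_id      = tfe_team.teams["all-readonly"].id
  workspace_id = each.value.workspace_id
}
```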

The permissions are then created in tfc_users.tf.

The most important resource in that file is the block below; every workspace you create has to be added to it so that access can then be granted.

resource "tfe_team" "teams" {
  for_each = { 
                for item in concat(
                  ["dispatcher", "cicd", "repos", "all-readonly", "dispatcher-okta-config", "dispatcher-gcp-binding-config", "dispatcher-confluent-binding-config"],
                  [for d in local.safi_dns_domains : format("dns-%s", d)],
                  [for e in keys(local.safi_environments) : format("env-%s", e)],
                  [for g in keys(local.safi_environments) : format("env-%s-data", g)],
                  [for i in keys(local.safi_environments) : format("env-%s-vpn-config", i)],
                  [for j in keys(local.safi_environments) : format("env-%s-vpn-infra", j)],
                  [for k in keys(local.safi_environments) : format("env-%s-hcv_secrets-config", k)],
                  [for l in keys(local.safi_environments) : format("env-%s-monitor-config", l)],
                  [for m in keys(local.safi_environments) : format("env-%s-monitor-infra", m)],
                  [for n in keys(local.safi_environments) : format("env-%s-cloudflare-config", n)],
                  [for n in keys(local.safi_environments) : format("env-%s-cloudcomposer-infra", n)],
                  [for n in keys(local.safi_environments) : format("env-%s-tms-infra", n)],
                  [for n in keys(local.safi_environments) : format("env-%s-tms-config", n)],
                  [for n in keys(local.safi_environments) : format("env-%s-hcvault-infra", n)],
                  [for n in keys(local.safi_environments) : format("env-%s-hcvault-config", n)],
                  [for o in keys(local.safi_environments) : format("env-%s-data-config", o)],
                  [for p in keys(local.safi_environments) : format("env-%s-data-infra", p)],
                  [for q in keys(local.safi_environments) : format("env-%s-meiro-infra", q)],
                  [for r in keys(local.safi_environments) : format("env-%s-tyk-a-config", r)],
                  [for s in keys(local.safi_environments) : format("env-%s-tyk-a-infra", s)],
                  [for t in keys(local.safi_environments) : format("env-%s-applications-config", t)],
                  [for u in keys(local.safi_environments) : format("env-%s-applications-infra", u)],
                ) : item => ""
              }

  name         = format("%s-%s", local.prefix, each.key)
  organization = var.tfe_organization
}
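Teams grant nothing until members are attached. Assuming membership is also managed in this file, a sketch using the tfe_team_member resource could look like this (the username is a hypothetical placeholder):

```hcl
resource "tfe_team_member" "example" {
  team_id  = tfe_team.teams["all-readonly"].id
  username = "jane.doe" # hypothetical user
}
```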


Then you can grant access with code similar to this:

resource "tfe_team_access" "vpn_config" {
  for_each = toset(keys(local.safi_environments))

  access       = "write"
  team_id      = tfe_team.teams[format("env-%s-vpn-config", each.key)].id
  workspace_id = module.environment_tfe_workspace_new[format("%s-vpn-config", each.key)].workspace_id
}

resource "tfe_team_access" "vpn_infra" {
  for_each = toset(keys(local.safi_environments))

  access       = "write"
  team_id      = tfe_team.teams[format("env-%s-vpn-infra", each.key)].id
  workspace_id = module.environment_tfe_workspace_new[format("%s-vpn-infra", each.key)].workspace_id
}
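The two resources above differ only in the subproject name. If desired, they could be collapsed into a single resource using setproduct (a sketch only; renaming resources like this requires terraform state mv for existing state):

```hcl
resource "tfe_team_access" "vpn" {
  for_each = {
    for pair in setproduct(keys(local.safi_environments), ["config", "infra"]) :
    "${pair[0]}-${pair[1]}" => pair
  }

  access       = "write"
  team_id      = tfe_team.teams[format("env-%s-vpn-%s", each.value[0], each.value[1])].id
  workspace_id = module.environment_tfe_workspace_new[format("%s-vpn-%s", each.value[0], each.value[1])].workspace_id
}
```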

Main files:

  • 0_dispatcher.tf creates the specific dispatchers used for account creation and role binding

  • 20_environments.tf creates the workspaces for the common_resources and shared_vpc projects

  • 20_new_environments.tf creates all other workspaces and projects; what it creates is defined in new_environments.yaml

  • 20_variables_environments.tf declares all variables that can be distributed to other workspaces

Create new dispatcher:

If you want to create a new dispatcher you have to edit this file. The structure is the same as when we create new projects/workspaces.

projects:
  dispatcher-okta:
    subprojects:
      - config
  dispatcher-gcp-binding:
    subprojects:
      - config
  dispatcher-confluent-binding:
    subprojects:
      - config

Dispatcher-confluent-binding-config

This workspace is used to create role bindings for Confluent Cloud; it is created through code by our dispatcher. Its configuration can be found in this folder.

Dispatcher-gcp-binding-config

This workspace is used to create role bindings for GCP; it is created through code by our dispatcher. Its configuration can be found in this folder.

Dispatcher-okta-config

This workspace is used to create users in Okta; it is created through code by our dispatcher. Its configuration can be found in this folder.

The users created in Okta are then assigned to groups and applications by other workspaces; for example, access to Grafana is configured in the tf-env-monitor-config folder in okta_grafana.tf.
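As a hedged sketch of what such an assignment might look like with the Okta Terraform provider (the group name and app ID variable are hypothetical; the real okta_grafana.tf may differ):

```hcl
resource "okta_group" "grafana_users" {
  name        = "grafana-users" # hypothetical group name
  description = "Users with access to Grafana"
}

resource "okta_app_group_assignment" "grafana" {
  app_id   = var.grafana_app_id # hypothetical variable holding the Okta app ID
  group_id = okta_group.grafana_users.id
}
```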