
Zitadel

Zitadel is an Identity Management solution that includes acting as an OIDC provider.

screenshot of Argo CD web interface's tree view of a zitadel app of apps. The main app of apps branches off into the following appsets: external secrets, postgres, s3 provider, s3 PVC, and zitadel web app. Each of those then branches off into a similarly named app.

Zitadel web app (official zitadel helm chart): screenshot of Argo CD web interface's tree view of the zitadel web app. Includes the following child resources: zitadel config map, zitadel service, zitadel service account, zitadel deployment, zitadel init job, zitadel setup job, zitadel service monitor, zitadel ingress, zitadel role, zitadel role binding. The zitadel service then branches off into a zitadel endpoint and endpointslice. The zitadel deployment branches off into a zitadel replica set, which branches off into a zitadel pod. The zitadel init and setup jobs also branch off into their own completed pods, and finally, the zitadel ingress resource branches off into a zitadel TLS certificate
Argo CD Zitadel Postgresql cluster: screenshot of Argo CD web interface's tree view of the zitadel postgresql cluster. It shows the following secrets and corresponding certificates: client cert, postgres cert, server secret, zitadel cert. Each of those then has its own cert request resource. After that, there are 3 TLS issuers: client ca, selfsigned, and server ca. Next there is the cluster, which branches off into a pvc, pod, secret for the app, secret for the super user, service for read, service for read only, service for read write, service account, pod disruption budget for the primary, role, and role binding

Zitadel is one of the more complex apps that smol-k8s-lab supports out of the box. For initialization, you need to pass in the following info:

  • username - name of the first admin user to create
  • email - email of the first admin user
  • first name - first name of the first admin user
  • last name - last name of the first admin user
  • gender - optional - the gender of the first admin user
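In YAML form, those bullets map onto keys under init.values (a minimal sketch with placeholder values; the key names match the full example further down):

```yaml
apps:
  zitadel:
    init:
      enabled: true
      values:
        username: 'admin'            # name of the first admin user
        email: 'admin@example.com'   # email of the first admin user
        first_name: 'Ada'
        last_name: 'Admin'
        gender: GENDER_UNSPECIFIED   # optional
```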

The above values are used to create an initial user. We also create Argo CD admin and users groups to be used with the Argo CD OIDC app that we prepare. If Vouch is enabled, we also create an OIDC app and a user group for it. Your initial user is automatically added to all of the groups we create.

Finally, we create a groupsClaim so that all auth queries also return the user's groups.

In addition to those one time init values, we also require a hostname to use for the Zitadel API and web frontend.
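The hostname lives under argo.secret_keys (shown here with a placeholder domain; see the full example at the bottom of this page):

```yaml
apps:
  zitadel:
    argo:
      secret_keys:
        hostname: 'zitadel.example.com'
```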

Sensitive values

Sensitive values can be provided via environment variables using a value_from map on any value under init.values or backups. Here's an example that provides s3 credentials and a restic repo password, as well as smtp credentials, via sensitive values:

apps:
  zitadel:
    init:
      # Switch to false if you don't want to create initial secrets or use the
      # API via a service account to create the above described resources
      enabled: true
      values:
        # mail server, must include port! e.g. mymailserver.com:587
        smtp_host:
          value_from:
            env: ZITADEL_SMTP_HOST
        # mail user
        smtp_user:
          value_from:
            env: ZITADEL_SMTP_USER
        # mail password
        smtp_password:
          value_from:
            env: ZITADEL_SMTP_PASSWORD
        # mail from address
        smtp_from_address:
          value_from:
            env: ZITADEL_SMTP_FROM_ADDRESS
        # mail from name
        smtp_from_name:
          value_from:
            env: ZITADEL_SMTP_FROM_NAME
        # mail reply to address
        smtp_reply_to_address:
          value_from:
            env: ZITADEL_SMTP_REPLY_TO_ADDRESS
    backups:
      s3:
        secret_access_key:
          value_from:
            # can be any env var
            env: ZITADEL_S3_BACKUP_SECRET_KEY
        access_key_id:
          value_from:
            # can be any env var
            env: ZITADEL_S3_BACKUP_ACCESS_ID
      restic_repo_password:
        value_from:
          # can be any env var
          env: ZITADEL_RESTIC_REPO_PASSWORD

Backups

Backups, a new feature in v5.0.0, enable backing up your postgres cluster and PVCs via restic to a configurable remote S3 bucket. Backups require init.enabled set to true, and you must ensure you're using our pre-configured argo.repo. We support both instant and scheduled backups.

When running a zitadel backup, we first initiate a CloudNativePG (cnpg) backup to the local seaweedfs cluster that we set up for you, and then wait until the last WAL archive associated with that backup is complete. After that, we start a k8up backup job to back up all of your important PVCs to your configured s3 bucket.

To use the backups feature, you'll need to configure the values below.

apps:
  zitadel:
    backups:
      # cronjob syntax schedule to run zitadel seaweedfs pvc backups
      pvc_schedule: 10 0 * * *
      # cronjob syntax (with SECONDS field) for zitadel postgres backups
      # must happen at least 10 minutes before pvc backups, to avoid corruption
      # due to missing files: the cnpg backup shows as completed before it
      # actually is, because the wal archive it lists as its end is not yet
      # in the backup
      postgres_schedule: 0 0 0 * * *
      s3:
        # these are for pushing remote backups of your local s3 storage, for speed and cost optimization
        endpoint: s3.eu-central-003.backblazeb2.com
        bucket: my-zitadel-backup-bucket
        region: eu-central-003
        secret_access_key:
          value_from:
            env: ZITADEL_S3_BACKUP_SECRET_KEY
        access_key_id:
          value_from:
            env: ZITADEL_S3_BACKUP_ACCESS_ID
      restic_repo_password:
        value_from:
          env: ZITADEL_RESTIC_REPO_PASSWORD
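
The 10-minute gap between the two schedules can be sanity-checked with a few lines of Python (an illustrative helper, not part of smol-k8s-lab; it only handles plain numeric minute and hour fields, not ranges or steps):

```python
def minutes_past_midnight(cron: str, has_seconds: bool = False) -> int:
    """Return the minute of the day a simple numeric cron schedule fires."""
    fields = cron.split()
    if has_seconds:
        fields = fields[1:]  # drop the leading SECONDS field
    minute, hour = int(fields[0]), int(fields[1])
    return hour * 60 + minute

# the schedules from the example above
pvc = minutes_past_midnight("10 0 * * *")                           # fires at 00:10
postgres = minutes_past_midnight("0 0 0 * * *", has_seconds=True)   # fires at 00:00
assert pvc - postgres >= 10, "pvc backup should run >= 10 minutes after postgres backup"
```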

Restores

Restores, a new feature in v5.0.0, enable restoring your cluster via restic from a configurable remote S3 bucket. This feature was finally tested with Zitadel in v5.6.0. If you have init.enabled set to true and you're using our pre-configured argo.repo, we support restoring both your Postgresql cluster and Persistent Volume Claims.

A restore is a kind of initialization process, so it lives under the init section of the config for your application, in this case, Zitadel. Here's an example you could use in your ~/.config/smol-k8s-lab/config.yaml:

apps:
  zitadel:
    init:
      enabled: true
      restore:
        enabled: false
        cnpg_restore: true
        restic_snapshot_ids:
          # these can all be any restic snapshot ID, but default to latest
          seaweedfs_volume: latest
          seaweedfs_filer: latest

The restore process will put your secrets into place, restore your seaweedfs cluster first, then your postgresql cluster, and finally install your zitadel Argo CD app as normal.

Sensitive values before v5.0.0

smol-k8s-lab did not originally support the value_from map. If you're using a version before v5.0.0, to avoid having to provide sensitive values every time you run smol-k8s-lab with zitadel enabled, set up the following environment variables:

  • ZITADEL_S3_BACKUP_ACCESS_ID
  • ZITADEL_S3_BACKUP_SECRET_KEY
  • ZITADEL_RESTIC_REPO_PASSWORD
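
For example, in your shell profile (the values here are placeholders; substitute your real credentials):

```shell
# placeholders -- substitute your real credentials
export ZITADEL_S3_BACKUP_ACCESS_ID="my-access-key-id"
export ZITADEL_S3_BACKUP_SECRET_KEY="my-secret-access-key"
export ZITADEL_RESTIC_REPO_PASSWORD="my-restic-password"
```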

Example config

Here's a full working config for zitadel. (If this isn't working, please submit an issue on our GitHub!)

apps:
  zitadel:
    enabled: false
    description: |
      [link=https://zitadel.com/opensource]ZITADEL[/link] is an open source self-hosted IAM platform for the cloud era

      smol-k8s-lab supports initialization of:
        - an admin service account
        - a human admin user (including an autogenerated password)
        - a project with a name of your choosing
        - 2 OIDC applications for Argo CD and Vouch
        - 2 Argo CD groups (admins and users)
        - 1 Vouch group
        - groupsClaim action to enforce group roles on authentication
        - updates your appset_secret_plugin secret and refreshes the pod

      The default app will also deploy SeaweedFS to backup your database which in turn is backed up to a remote s3 provider of your choice.

      To provide sensitive values via environment variables to smol-k8s-lab use:
        - ZITADEL_S3_BACKUP_ACCESS_ID
        - ZITADEL_S3_BACKUP_SECRET_KEY
        - ZITADEL_RESTIC_REPO_PASSWORD
        - ZITADEL_SMTP_HOST
        - ZITADEL_SMTP_USER
        - ZITADEL_SMTP_PASSWORD
        - ZITADEL_SMTP_FROM_ADDRESS
        - ZITADEL_SMTP_FROM_NAME
        - ZITADEL_SMTP_REPLY_TO_ADDRESS
    init:
      # Switch to false if you don't want to create initial secrets or use the
      # API via a service account to create the above described resources
      enabled: true
      values:
        # login username of admin user
        username: 'certainlynotadog'
        # email of admin user
        email: 'notadog@humans.com'
        # first name of admin user
        first_name: 'Dogsy'
        # last name of admin user
        last_name: 'Dogerton'
        # options: GENDER_UNSPECIFIED, GENDER_MALE, GENDER_FEMALE, GENDER_DIVERSE
        # more coming soon, see: https://github.com/zitadel/zitadel/issues/6355
        gender: GENDER_UNSPECIFIED
        # name of the default project to create OIDC applications in
        project: core
        # mail server, must include port! e.g. mymailserver.com:587
        smtp_host:
          value_from:
            env: ZITADEL_SMTP_HOST
        # mail user
        smtp_user:
          value_from:
            env: ZITADEL_SMTP_USER
        # mail password
        smtp_password:
          value_from:
            env: ZITADEL_SMTP_PASSWORD
        # mail from address
        smtp_from_address:
          value_from:
            env: ZITADEL_SMTP_FROM_ADDRESS
        # mail from name
        smtp_from_name:
          value_from:
            env: ZITADEL_SMTP_FROM_NAME
        # mail reply to address
        smtp_reply_to_address:
          value_from:
            env: ZITADEL_SMTP_REPLY_TO_ADDRESS
      restore:
        enabled: false
        cnpg_restore: true
        restic_snapshot_ids:
          seaweedfs_volume: latest
          seaweedfs_filer: latest
    backups:
      # cronjob syntax schedule to run zitadel seaweedfs pvc backups
      pvc_schedule: 10 0 * * *
      # cronjob syntax (with SECONDS field) for zitadel postgres backups
      # must happen at least 10 minutes before pvc backups, to avoid corruption
      # due to missing files: the cnpg backup shows as completed before it
      # actually is, because the wal archive it lists as its end is not yet
      # in the backup
      postgres_schedule: 0 0 0 * * *
      # these are for pushing backups of your local s3 storage to a remote s3 bucket,
      # which is separate from your postgresql backups, so that postgresql can back up
      # wal archives every 5 minutes for speed, and then, for cost optimization, all
      # archives gathered during the day are only backed up to the remote s3 store
      # AFTER the nightly postgresql backups.
      s3:
        endpoint: s3.eu-central-003.backblazeb2.com
        bucket: my-zitadel-backup-bucket
        region: eu-central-003
        secret_access_key:
          value_from:
            env: ZITADEL_S3_BACKUP_SECRET_KEY
        access_key_id:
          value_from:
            env: ZITADEL_S3_BACKUP_ACCESS_ID
      restic_repo_password:
        value_from:
          env: ZITADEL_RESTIC_REPO_PASSWORD
    argo:
      # secrets keys to make available to ArgoCD ApplicationSets
      secret_keys:
        # FQDN to use for zitadel
        hostname: 'zitadel.gooddogs.com'
        # type of database to use: postgresql or cockroachdb
        database_type: postgresql
        # set the local s3 provider for zitadel's database backups. can be minio or seaweedfs
        s3_provider: seaweedfs
        # local s3 endpoint for postgresql backups, backed up constantly
        s3_endpoint: 'zitadel-s3.gooddogs.com'
        # capacity for the PVC backing your local s3 instance
        s3_pvc_capacity: 2Gi
      # git repo to install the Argo CD app from
      repo: https://github.com/small-hack/argocd-apps
      # path in the argo repo to point to. Trailing slash very important!
      path: zitadel/app_of_apps/
      # either the branch or tag to point at in the argo repo above
      revision: main
      # kubernetes cluster to install the k8s app into, defaults to Argo CD default
      cluster: https://kubernetes.default.svc
      # namespace to install the k8s app in
      namespace: zitadel
      # recurse directories in the provided git repo
      directory_recursion: false
      # source repos for Argo CD App Project (in addition to argo.repo)
      project:
        name: zitadel
        source_repos:
          - https://charts.zitadel.com
          - https://zitadel.github.io/zitadel-charts
          - https://small-hack.github.io/cloudnative-pg-cluster-chart
          - https://operator.min.io/
          - https://seaweedfs.github.io/seaweedfs/helm
        destination:
          namespaces: []

You can learn more about our Zitadel Argo CD Application at small-hack/argocd-apps/zitadel.