Install Gluu on Microk8s using Helm

This guide helps you install the Gluu Server on a local MicroK8s cluster using Helm, with MySQL as the persistence backend.

Uninstall Existing Gluu Deployments

  • Uninstall the Gluu Helm release in the gluu namespace.
      helm uninstall gluu -n gluu
    
  • Uninstall the MySQL Helm release (installed later in this guide as my-release) in the sql namespace.
      helm uninstall my-release -n sql
    

Update System and Install Required Packages

  • Update and upgrade system packages.
      sudo apt update && sudo apt upgrade -y
    
  • Install required packages, such as CA certificates and curl.
      sudo apt-get install -y ca-certificates curl
    

Set Up Docker Repository

  • Create a directory for storing APT keyrings.

      sudo install -m 0755 -d /etc/apt/keyrings
    
  • Download the Docker GPG key and save it in the keyrings directory.

      sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    
  • Set read permissions on the Docker GPG key.

     sudo chmod a+r /etc/apt/keyrings/docker.asc
    
  • Add the Docker repository to the apt sources list.

      echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    
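The long echo command above simply assembles a one-line apt source entry from the machine's architecture and Ubuntu codename. Hardcoding sample values (amd64 and jammy are assumptions for illustration) shows the resulting line:

```shell
# Illustration only: the real command derives these from dpkg and /etc/os-release.
arch=amd64          # from: dpkg --print-architecture
codename=jammy      # from: . /etc/os-release && echo "$VERSION_CODENAME"
echo "deb [arch=${arch} signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu ${codename} stable"
```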

Install Docker and Docker Components

  • Update the package index so apt picks up the newly added Docker repository.

      sudo apt-get update
    
  • Install Docker and associated plugins.

      sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    

Install MicroK8s and Helm

  • Install MicroK8s.
      sudo snap install microk8s --classic
    
  • Install Helm.
      sudo snap install helm --classic
    

Enable Essential MicroK8s Add-ons

  • Enable community add-ons in MicroK8s.
      microk8s.enable community
    
  • Enable Helm 3 support within MicroK8s.
      microk8s.enable helm3
    
  • Enable dynamic storage provisioning.
      microk8s.enable storage
    
  • Enable the NGINX Ingress Controller for routing traffic.
      microk8s.enable ingress
    
  • Enable the DNS service for internal service discovery in Kubernetes.
      microk8s.enable dns
    

Create Required Kubernetes Namespaces

  • Create the namespace for Gluu.
      microk8s.kubectl create namespace gluu
    
  • Create the namespace for SQL.
      microk8s.kubectl create namespace sql
    

Configure kubectl Access

  • Configure kubectl to use MicroK8s by saving its configuration to the default kubeconfig location.
      mkdir -p ~/.kube
      microk8s.kubectl config view --raw > ~/.kube/config
    

Create Docker Registry Secret for Gluu

  • Create a Kubernetes secret for pulling images from Docker Hub. Replace yyyy with your Docker Hub username and zzzz with your password.
      microk8s.kubectl -n gluu create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=yyyy \
      --docker-password=zzzz
    
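Under the hood, a docker-registry secret stores base64("username:password") in the auth field of its .dockerconfigjson. A quick way to see what gets stored for the placeholder credentials:

```shell
# The auth field of .dockerconfigjson is base64("username:password").
# yyyy/zzzz are the placeholders from the command above, not real credentials.
auth=$(printf 'yyyy:zzzz' | base64)
echo "$auth"    # eXl5eTp6enp6
```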

Add Gluu Helm Repository

  • Add the Gluu Helm repository and refresh the local chart index.
      helm repo add gluu https://gluufederation.github.io/gluu4/pygluu/kubernetes/templates/helm
      helm repo update
    

Deploy MySQL Using Helm

  • Install the MySQL Helm chart in the sql namespace, setting the database, user, and passwords that Gluu will use. Replace the passwords as needed.
      helm install my-release \
      --set auth.password=Test1234#,auth.database=gluu,auth.username=gluu,auth.rootPassword=Test1234# \
      -n sql oci://registry-1.docker.io/bitnamicharts/mysql
    
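The --set flags map one-to-one onto the Bitnami chart's auth values. If you prefer, the same install can be expressed with a values file (my-values.yaml is a hypothetical file name for illustration):

```yaml
# my-values.yaml -- equivalent to the --set flags above.
# Install with:
#   helm install my-release -f my-values.yaml -n sql oci://registry-1.docker.io/bitnamicharts/mysql
auth:
  username: gluu
  password: Test1234#
  database: gluu
  rootPassword: Test1234#
```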

Retrieve MySQL Root Password

  • Store the MySQL root password in an environment variable and print it.
      MYSQL_ROOT_PASSWORD=$(microk8s.kubectl get secret --namespace sql my-release-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
      echo "Username: root, Password: $MYSQL_ROOT_PASSWORD"
    
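Secret values come back base64-encoded from the Kubernetes API, which is why the command pipes through base64 -d. A standalone round trip (using the sample password from this guide) illustrates the decoding step:

```shell
# Round-trip the sample root password the way Kubernetes stores it:
encoded=$(printf 'Test1234#' | base64)     # how the secret stores the value
printf '%s' "$encoded" | base64 -d         # prints Test1234#
```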

Connect to MySQL Client Pod

  • Launch a MySQL client pod and connect interactively.
      microk8s.kubectl run my-release-mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.4.4-debian-12-r4 --namespace sql --env  MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
    

Connect to MySQL Server

  • Connect to the MySQL server from the client pod using this command.
      mysql -h my-release-mysql.sql.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
    

Create a values.yaml file for the Gluu deployment with the following content.

```
  global:
    usrEnvs:
      normal: {}
      secret: {}
    istio:
      ingress: false
      enabled: false
      gateways: []
      namespace: istio-system
      additionalLabels: {}
      additionalAnnotations: {}
    alb:
      ingress:
        enabled: false
        adminUiEnabled: true
        openidConfigEnabled: true
        uma2ConfigEnabled: true
        webfingerEnabled: true
        webdiscoveryEnabled: true
        scimConfigEnabled: false
        scimEnabled: false
        u2fConfigEnabled: true
        fido2Enabled: false
        fido2ConfigEnabled: false
        authServerEnabled: true
        casaEnabled: false
        passportEnabled: true
        shibEnabled: false
        additionalLabels: {}
        additionalAnnotations:
          kubernetes.io/ingress.class: alb
          alb.ingress.kubernetes.io/scheme: internet-facing
          alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxx:certificate/xxxxxx
          alb.ingress.kubernetes.io/auth-session-cookie: custom-cookie
    cloud:
      testEnviroment: true
    upgrade:
      enabled: false
      image:
        repository: gluufederation/upgrade
        tag: 4.5.8-1
      sourceVersion: "4.5"
      targetVersion: "4.5"
    storageClass:
      allowVolumeExpansion: true
      allowedTopologies: []
      mountOptions:
      - debug
      parameters: {}
      provisioner: microk8s.io/hostpath
      reclaimPolicy: Retain
      volumeBindingMode: WaitForFirstConsumer
    gcePdStorageType: pd-standard
    azureStorageAccountType: Standard_LRS
    azureStorageKind: Managed
    casa:
      gluuCustomJavaOptions: ""
    lbIp: 172.31.23.40
    domain: tmpcn2.gluu.info
    isDomainRegistered: "false"
    enableSecurityContextWithNonRegisteredDomain: "true"
    ldapServiceName: opendj
    gluuPersistenceType: sql
    gluuJackrabbitCluster: "false"
    configAdapterName: kubernetes
    configSecretAdapter: kubernetes
    sslCertFromDomain: "false"
    cnGoogleApplicationCredentials: /etc/gluu/conf/google-credentials.json
    cnAwsSharedCredentialsFile: /etc/gluu/conf/aws_shared_credential_file
    cnAwsConfigFile: /etc/gluu/conf/aws_config_file
    cnAwsSecretsReplicaRegionsFile: /etc/gluu/conf/aws_secrets_replica_regions
    oxauth:
      enabled: true
      gluuCustomJavaOptions: ""
      appLoggers:
        enableStdoutLogPrefix: "true"
        authLogTarget: "STDOUT"
        authLogLevel: "INFO"
        httpLogTarget: "FILE"
        httpLogLevel: "INFO"
        persistenceLogTarget: "FILE"
        persistenceLogLevel: "INFO"
        persistenceDurationLogTarget: "FILE"
        persistenceDurationLogLevel: "INFO"
        ldapStatsLogTarget: "FILE"
        ldapStatsLogLevel: "INFO"
        scriptLogTarget: "FILE"
        scriptLogLevel: "INFO"
        auditStatsLogTarget: "FILE"
        auditStatsLogLevel: "INFO"
        cleanerLogTarget: "FILE"
        cleanerLogLevel: "INFO"
    fido2:
      enabled: false
      gluuCustomJavaOptions: ""
      appLoggers:
        enableStdoutLogPrefix: "true"
        fido2LogTarget: "STDOUT"
        fido2LogLevel: "INFO"
        persistenceLogTarget: "FILE"
        persistenceLogLevel: "INFO"
    scim:
      enabled: false
      gluuCustomJavaOptions: ""
      appLoggers:
        enableStdoutLogPrefix: "true"
        scimLogTarget: "STDOUT"
        scimLogLevel: "INFO"
        persistenceLogTarget: "FILE"
        persistenceLogLevel: "INFO"
        persistenceDurationLogTarget: "FILE"
        persistenceDurationLogLevel: "INFO"
        scriptLogTarget: "FILE"
        scriptLogLevel: "INFO"
    config:
      enabled: true
    jobTtlSecondsAfterFinished: 300
    jackrabbit:
      enabled: false
      appLoggers:
        jackrabbitLogTarget: "STDOUT"
        jackrabbitLogLevel: "INFO"
    persistence:
      enabled: true
    oxtrust:
      enabled: true
      gluuCustomJavaOptions: "-XshowSettings:vm -XX:MaxRAMPercentage=80"
      appLoggers:
        enableStdoutLogPrefix: "true"
        oxtrustLogTarget: "STDOUT"
        oxtrustLogLevel: "INFO"
        httpLogTarget: "FILE"
        httpLogLevel: "INFO"
        persistenceLogTarget: "FILE"
        persistenceLogLevel: "INFO"
        persistenceDurationLogTarget: "FILE"
        persistenceDurationLogLevel: "INFO"
        ldapStatsLogTarget: "FILE"
        ldapStatsLogLevel: "INFO"
        scriptLogTarget: "FILE"
        scriptLogLevel: "INFO"
        auditStatsLogTarget: "FILE"
        auditStatsLogLevel: "INFO"
        cleanerLogTarget: "FILE"
        cleanerLogLevel: "INFO"
        velocityLogLevel: "INFO"
        velocityLogTarget: "FILE"
        cacheRefreshLogLevel: "INFO"
        cacheRefreshLogTarget: "FILE"
        cacheRefreshPythonLogLevel: "INFO"
        cacheRefreshPythonLogTarget: "FILE"
        apachehcLogLevel: "INFO"
        apachehcLogTarget: "FILE"
    opendj:
      enabled: false
    oxshibboleth:
      enabled: false
      gluuCustomJavaOptions: ""
      appLoggers:
        enableStdoutLogPrefix: "true"
        idpLogTarget: "STDOUT"
        idpLogLevel: "INFO"
        scriptLogTarget: "FILE"
        scriptLogLevel: "INFO"
        auditStatsLogTarget: "FILE"
        auditStatsLogLevel: "INFO"
        consentAuditLogTarget: "FILE"
        consentAuditLogLevel: "INFO"
        ldapLogLevel: ""
        messagesLogLevel: ""
        encryptionLogLevel: ""
        opensamlLogLevel: ""
        propsLogLevel: ""
        httpclientLogLevel: ""
        springLogLevel: ""
        containerLogLevel: ""
        xmlsecLogLevel: ""
    oxd-server:
      enabled: false
      gluuCustomJavaOptions: ""
      appLoggers:
        oxdServerLogTarget: "STDOUT"
        oxdServerLogLevel: "INFO"
    nginx-ingress:
      enabled: true
    oxauth-key-rotation:
      enabled: true
    cr-rotate:
      enabled: true
  config:
    usrEnvs:
      normal: {}
      secret: {}
    orgName: Gluu
    email: support@gluu.org
    adminPass: P@ssw0rd
    ldapPass: P@ssw0rd
    redisPass: P@assw0rd
    countryCode: US
    state: TX
    city: Austin
    salt: ""
    configmap:
      cnSqlDbSchema: ""
      cnSqlDbDialect: mysql
      cnSqlDbHost: my-release-mysql.sql.svc.cluster.local
      cnSqlDbPort: 3306
      cnSqlDbName: gluu
      cnSqlDbUser: gluu
      cnSqlDbTimezone: UTC
      cnSqlPasswordFile: /etc/gluu/conf/sql_password
      cnSqldbUserPassword: Test1234#
      gluuOxdApplicationCertCn: oxd-server
      gluuOxdAdminCertCn: oxd-server
      gluuCouchbaseCrt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURlakNDQW1LZ0F3SUJBZ0lKQUwyem5UWlREUHFNTUEwR0NTcUdTSWIzRFFFQkN3VUFNQzB4S3pBcEJnTlYKQkFNTUlpb3VZMkpuYkhWMUxtUmxabUYxYkhRdWMzWmpMbU5zZFhOMFpYSXViRzlqWVd3d0hoY05NakF3TWpBMQpNRGt4T1RVeFdoY05NekF3TWpBeU1Ea3hPVFV4V2pBdE1Tc3dLUVlEVlFRRERDSXFMbU5pWjJ4MWRTNWtaV1poCmRXeDBMbk4yWXk1amJIVnpkR1Z5TG14dlkyRnNNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUIKQ2dLQ0FRRUFycmQ5T3lvSnRsVzhnNW5nWlJtL2FKWjJ2eUtubGU3dVFIUEw4Q2RJa1RNdjB0eHZhR1B5UkNQQgo3RE00RTFkLzhMaU5takdZZk41QjZjWjlRUmNCaG1VNmFyUDRKZUZ3c0x0cTFGT3MxaDlmWGo3d3NzcTYrYmlkCjV6Umw3UEE0YmdvOXVkUVRzU1UrWDJUUVRDc0dxVVVPWExrZ3NCMjI0RDNsdkFCbmZOeHcvYnFQa2ZCQTFxVzYKVXpxellMdHN6WE5GY0dQMFhtU3c4WjJuaFhhUGlva2pPT2dyMkMrbVFZK0htQ2xGUWRpd2g2ZjBYR0V0STMrKwoyMStTejdXRkF6RlFBVUp2MHIvZnk4TDRXZzh1YysvalgwTGQrc2NoQTlNQjh3YmJORUp2ZjNMOGZ5QjZ0cTd2CjF4b0FnL0g0S1dJaHdqSEN0dFVnWU1oU0xWV3UrUUlEQVFBQm80R2NNSUdaTUIwR0ExVWREZ1FXQkJTWmQxWU0KVGNIRVZjSENNUmp6ejczZitEVmxxREJkQmdOVkhTTUVWakJVZ0JTWmQxWU1UY0hFVmNIQ01Sanp6NzNmK0RWbApxS0V4cEM4d0xURXJNQ2tHQTFVRUF3d2lLaTVqWW1kc2RYVXVaR1ZtWVhWc2RDNXpkbU11WTJ4MWMzUmxjaTVzCmIyTmhiSUlKQUwyem5UWlREUHFNTUF3R0ExVWRFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHTUEwR0NTcUcKU0liM0RRRUJDd1VBQTRJQkFRQk9meTVWSHlKZCtWUTBXaUQ1aSs2cmhidGNpSmtFN0YwWVVVZnJ6UFN2YWVFWQp2NElVWStWOC9UNnE4Mk9vVWU1eCtvS2dzbFBsL01nZEg2SW9CRnVtaUFqek14RTdUYUhHcXJ5dk13Qk5IKzB5CnhadG9mSnFXQzhGeUlwTVFHTEs0RVBGd3VHRlJnazZMRGR2ZEN5NVdxWW1MQWdBZVh5VWNaNnlHYkdMTjRPUDUKZTFiaEFiLzRXWXRxRHVydFJrWjNEejlZcis4VWNCVTRLT005OHBZN05aaXFmKzlCZVkvOEhZaVQ2Q0RRWWgyTgoyK0VWRFBHcFE4UkVsRThhN1ZLL29MemlOaXFyRjllNDV1OU1KdjM1ZktmNUJjK2FKdWduTGcwaUZUYmNaT1prCkpuYkUvUENIUDZFWmxLaEFiZUdnendtS1dDbTZTL3g0TklRK2JtMmoKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
      gluuCouchbasePass: P@ssw0rd
      gluuCouchbaseSuperUserPass: P@ssw0rd
      gluuCouchbaseSuperUser: admin
      gluuCouchbaseUrl: cbgluu.default.svc.cluster.local
      gluuCouchbaseBucketPrefix: gluu
      gluuCouchbaseUser: gluu
      gluuCouchbaseIndexNumReplica: 0
      gluuCouchbasePassFile: /etc/gluu/conf/couchbase_password
      gluuCouchbaseSuperUserPassFile: /etc/gluu/conf/couchbase_superuser_password
      gluuCouchbaseCertFile: /etc/certs/couchbase.crt
      gluuPersistenceLdapMapping: ''
      gluuCacheType: NATIVE_PERSISTENCE
      gluuSyncShibManifests: false
      gluuSyncCasaManifests: false
      gluuMaxRamPercent: "75.0"
      containerMetadataName: kubernetes
      gluuRedisUrl: redis:6379
      gluuRedisUseSsl: "false"
      gluuRedisType: STANDALONE
      gluuRedisSslTruststore: ""
      gluuRedisSentinelGroup: ""
      gluuOxtrustConfigGeneration: true
      gluuOxtrustBackend: oxtrust:8080
      gluuOxauthBackend: oxauth:8080
      gluuOxdServerUrl: oxd-server:8443
      gluuOxdBindIpAddresses: "*"
      gluuLdapUrl: opendj:1636
      gluuJackrabbitPostgresUser: jackrabbit
      gluuJackrabbitPostgresPasswordFile: /etc/gluu/conf/postgres_password
      gluuJackrabbitPostgresDatabaseName: jackrabbit
      gluuJackrabbitPostgresHost: postgresql.postgres.svc.cluster.local
      gluuJackrabbitPostgresPort: 5432
      gluuJackrabbitAdminId: admin
      gluuJackrabbitAdminPassFile: /etc/gluu/conf/jackrabbit_admin_password
      gluuJackrabbitSyncInterval: 300
      gluuJackrabbitUrl: http://jackrabbit:8080
      gluuJackrabbitAdminIdFile: /etc/gluu/conf/jackrabbit_admin_id
      gluuDocumentStoreType: DB
      cnGoogleServiceAccount: SWFtTm90YVNlcnZpY2VBY2NvdW50Q2hhbmdlTWV0b09uZQo=
      cnGoogleProjectId: google-project-to-save-config-and-secrets-to
      cnGoogleSpannerInstanceId: ""
      cnGoogleSpannerDatabaseId: ""
      cnGoogleSpannerEmulatorHost: ""
      cnSecretGoogleSecretVersionId: "latest"
      cnSecretGoogleSecretNamePrefix: gluu
      cnAwsAccessKeyId: ""
      cnAwsSecretAccessKey: ""
      cnAwsSecretsEndpointUrl: ""
      cnAwsSecretsNamePrefix: gluu
      cnAwsDefaultRegion: us-west-1
      cnAwsProfile: gluu
      cnAwsSecretsReplicaRegions: []
      lbAddr: ""
      gluuOxtrustApiEnabled: true
      gluuOxtrustApiTestMode: false
      gluuScimProtectionMode: "OAUTH"
      gluuPassportEnabled: true
      gluuPassportFailureRedirectUrl: ""
      gluuCasaEnabled: false
      gluuSamlEnabled: false
      gluuPersistenceType: sql
    image:
      repository: gluufederation/config-init
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    volumes: []
    volumeMounts: []
    lifecycle: {}
    dnsPolicy: ""
    dnsConfig: {}
    migration:
      enabled: false
      migrationDir: /ce-migration
      migrationDataFormat: ldif
    resources:
      limits:
        cpu: 300m
        memory: 300Mi
      requests:
        cpu: 300m
        memory: 300Mi
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  nginx-ingress:
    certManager:
      certificate:
        enabled: false
        issuerKind: ClusterIssuer
        issuerName: ""
        issuerGroup: cert-manager.io
    ingress:
      enabled: true
      legacy: false
      path: /
      adminUiEnabled: true
      adminUiLabels: {}
      adminUiAdditionalAnnotations: {}
      openidConfigEnabled: true
      openidConfigLabels: {}
      openidAdditionalAnnotations: {}
      deviceCodeEnabled: true
      deviceCodeLabels: {}
      deviceCodeAdditionalAnnotations: {}
      firebaseMessagingEnabled: true
      firebaseMessagingLabels: {}
      firebaseMessagingAdditionalAnnotations: {}
      uma2ConfigEnabled: true
      uma2ConfigLabels: {}
      uma2AdditionalAnnotations: {}
      webfingerEnabled: true
      webfingerLabels: {}
      webfingerAdditionalAnnotations: {}
      webdiscoveryEnabled: true
      webdiscoveryLabels: {}
      webdiscoveryAdditionalAnnotations: {}
      scimConfigEnabled: false
      scimConfigLabels: {}
      scimConfigAdditionalAnnotations: {}
      scimEnabled: false
      scimLabels: {}
      scimAdditionalAnnotations: {}
      u2fConfigEnabled: true
      u2fConfigLabels: {}
      u2fAdditionalAnnotations: {}
      fido2ConfigEnabled: false
      fido2ConfigLabels: {}
      fido2ConfigAdditionalAnnotations: {}
      fido2Enabled: false
      fido2Labels: {}
      authServerEnabled: true
      authServerLabels: {}
      authServerAdditionalAnnotations: {}
      casaEnabled: false
      casaLabels: {}
      casaAdditionalAnnotations: {}
      passportEnabled: true
      passportLabels: {}
      passportAdditionalAnnotations: {}
      shibEnabled: false
      shibLabels: {}
      shibAdditionalAnnotations: {}
      additionalLabels: {}
      additionalAnnotations: {}
      ingressClassName: public
      hosts:
      - tmpcn2.gluu.info
      tls:
      - secretName: tls-certificate # DON'T change
        hosts:
        - tmpcn2.gluu.info
  jackrabbit:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: 1
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/jackrabbit
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: 1
    resources:
      limits:
        cpu: 1500m
        memory: 1000Mi
      requests:
        cpu: 1500m
        memory: 1000Mi
    secrets:
      gluuJackrabbitAdminPass: Test1234#
      gluuJackrabbitPostgresPass: P@ssw0rd
    service:
      jackRabbitServiceName: jackrabbit
      name: http-jackrabbit
      port: 8080
    clusterId: "first"
    storage:
      size: 5Gi
    livenessProbe:
      tcpSocket:
        port: http-jackrabbit
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    readinessProbe:
      tcpSocket:
        port: http-jackrabbit
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  opendj:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: 1
    backup:
      enabled: true
      cronJobSchedule: "*/59 * * * *"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/opendj
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    persistence:
      size: ''
    ports:
      tcp-admin:
        nodePort: ""
        port: 4444
        protocol: TCP
        targetPort: 4444
      tcp-ldap:
        nodePort: ""
        port: 1389
        protocol: TCP
        targetPort: 1389
      tcp-ldaps:
        nodePort: ""
        port: 1636
        protocol: TCP
        targetPort: 1636
      tcp-repl:
        nodePort: ""
        port: 8989
        protocol: TCP
        targetPort: 8989
      tcp-serf:
        nodePort: ""
        port: 7946
        protocol: TCP
        targetPort: 7946
      udp-serf:
        nodePort: ""
        port: 7946
        protocol: UDP
        targetPort: 7946
    replicas: ''
    resources:
      limits:
        cpu: 1500m
        memory: 2000Mi
      requests:
        cpu: 1500m
        memory: 2000Mi
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
      failureThreshold: 20
    readinessProbe:
      tcpSocket:
        port: 1636
      initialDelaySeconds: 60
      timeoutSeconds: 5
      periodSeconds: 25
      failureThreshold: 20
    volumes: []
    volumeMounts: []
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "python3 /app/scripts/deregister_peer.py 1>&/proc/1/fd/1"]
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
    gluuRedisEnabled: false
  persistence:
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/persistence
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    resources:
      limits:
        cpu: 300m
        memory: 300Mi
      requests:
        cpu: 300m
        memory: 300Mi
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  oxauth:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: "90%"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/oxauth
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: 1
    resources:
      limits:
        cpu: 2500m
        memory: 2500Mi
      requests:
        cpu: 2500m
        memory: 2500Mi
    service:
      oxAuthServiceName: oxauth
      name: http-oxauth
      port: 8080
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  oxtrust:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: 1
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/oxtrust
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: 1
    resources:
      limits:
        cpu: 2500m
        memory: 2500Mi
      requests:
        cpu: 2500m
        memory: 2500Mi
    service:
      name: http-oxtrust
      port: 8080
      clusterIp: None
      oxTrustServiceName: oxtrust
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
    istioDestinationRuleCookieTTL: 60s
  fido2:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: "90%"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/fido2
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: ''
    resources:
      limits:
        cpu: 500m
        memory: 500Mi
      requests:
        cpu: 500m
        memory: 500Mi
    service:
      fido2ServiceName: fido2
      name: http-fido2
      port: 8080
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  scim:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: "90%"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/scim
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: ''
    resources:
      limits:
        cpu: 1000m
        memory: 1000Mi
      requests:
        cpu: 1000m
        memory: 1000Mi
    service:
      scimServiceName: scim
      name: http-scim
      port: 8080
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  oxd-server:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: "90%"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/oxd-server
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: 1
    resources:
      limits:
        cpu: 1000m
        memory: 400Mi
      requests:
        cpu: 1000m
        memory: 400Mi
    service:
      oxdServerServiceName: oxd-server
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
  casa:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: "90%"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/casa
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: ''
    resources:
      limits:
        cpu: 500m
        memory: 500Mi
      requests:
        cpu: 500m
        memory: 500Mi
    service:
      casaServiceName: casa
      port: 8080
      name: http-casa
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
    istioDestinationRuleCookieTTL: 60s
  oxpassport:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: "90%"
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal:
        NODE_ENV: production
        NODE_CONFIG_DIR: /opt/gluu/node/passport/config
        NODE_LOGS: /opt/gluu/node/passport/logs
        DEBUG: "*"
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/oxpassport
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: ''
    resources:
      limits:
        cpu: 700m
        memory: 900Mi
      requests:
        cpu: 700m
        memory: 900Mi
    service:
      oxPassportServiceName: oxpassport
      port: 8090
      name: http-passport
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
      failureThreshold: 20
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
      failureThreshold: 20
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
    istioDestinationRuleCookieTTL: 60s
  oxshibboleth:
    topologySpreadConstraints: {}
    pdb:
      enabled: true
      maxUnavailable: 1
    hpa:
      enabled: true
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50
      metrics: []
      behavior: {}
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/oxshibboleth
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    replicas: ''
    resources:
      limits:
        cpu: 1000m
        memory: 1000Mi
      requests:
        cpu: 1000m
        memory: 1000Mi
    service:
      sessionAffinity: ClientIP
      port: 8080
      oxShibbolethServiceName: oxshibboleth
      name: http-oxshib
    livenessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 30
      periodSeconds: 30
      timeoutSeconds: 5
    readinessProbe:
      exec:
        command:
        - python3
        - /app/scripts/healthcheck.py
      initialDelaySeconds: 25
      periodSeconds: 25
      timeoutSeconds: 5
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []
    istioDestinationRuleCookieTTL: 60s
  cr-rotate:
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/cr-rotate
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 200Mi
    service:
      crRotateServiceName: cr-rotate
      port: 8084
      name: http-cr-rotate
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
  oxauth-key-rotation:
    usrEnvs:
      normal: {}
      secret: {}
    dnsPolicy: ""
    dnsConfig: {}
    image:
      pullPolicy: IfNotPresent
      repository: gluufederation/certmanager
      tag: 4.5.8-1
      pullSecrets:
      - name: regcred
    cronJobSchedule: ""
    keysLife: 48
    keysStrategy: NEWER
    keysPushDelay: 0
    keysPushStrategy: NEWER
    resources:
      limits:
        cpu: 300m
        memory: 300Mi
      requests:
        cpu: 300m
        memory: 300Mi
    volumes: []
    volumeMounts: []
    lifecycle: {}
    additionalLabels: {}
    additionalAnnotations: {}
    tolerations: []
    affinity: {}
    nodeSelector: {}
    customScripts: []


```
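Most of this file can stay at its defaults for a local test install. The values below are the ones that typically need editing; the snippet simply repeats them from the full file above for emphasis (same keys, same sample values):

```yaml
# Values most installs need to change (excerpted from the full file above):
global:
  lbIp: 172.31.23.40            # IP of your MicroK8s node/VM
  domain: tmpcn2.gluu.info      # FQDN you will use to reach Gluu
config:
  adminPass: P@ssw0rd           # oxTrust admin password
  configmap:
    cnSqlDbHost: my-release-mysql.sql.svc.cluster.local   # MySQL service DNS name
    cnSqldbUserPassword: Test1234#   # must match auth.password from the MySQL install
```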

Install Gluu Using Helm and Monitor the Deployment

After customizing the values.yaml file, use the following commands to install Gluu, verify the status of the pods, and troubleshoot any issues if necessary.

Install Gluu Using Helm

  • Install the Gluu Server in the gluu namespace using the Helm chart and the customized values.yaml configuration file.
      helm -n gluu install gluu gluu/gluu -f values.yaml

Check All Running Pods

  • List all running pods across all namespaces.
    microk8s.kubectl get pods --all-namespaces
    
    
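While the deployment settles, some pods will sit in Pending or Init states. A small awk filter over the get pods output lists only the pods that are not yet Running or Completed; the sample output below is canned for illustration (with --all-namespaces, STATUS moves to column 4, so use $4 there):

```shell
# Against a live cluster you would pipe real output through the same filter:
#   microk8s.kubectl get pods --all-namespaces --no-headers | awk '$4!="Running" && $4!="Completed" {print $2}'
# Demonstration on canned single-namespace output (STATUS is column 3):
sample='NAME READY STATUS RESTARTS AGE
gluu-oxauth-54665db4b-wpb4w 1/1 Running 0 5m
gluu-config-xv9kk 0/1 Pending 0 5m'
printf '%s\n' "$sample" | awk 'NR>1 && $3!="Running" && $3!="Completed" {print $1}'
```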

View Logs of All Containers in the oxAuth Pod

  • Stream the logs of all containers in the oxAuth pod within the gluu namespace. Replace the pod name with the one reported by the previous command.
    microk8s.kubectl logs -f gluu-oxauth-54665db4b-wpb4w --all-containers -n gluu
    

Describe Gluu oxAuth Pod

  • Inspect the details of the oxAuth pod. Replace the pod name with the one from your deployment.
    microk8s.kubectl describe pod gluu-oxauth-54665db4b-wpb4w -n gluu
    
    

View MySQL Pod Logs

  • Stream the real-time logs of the MySQL database pod running in the sql namespace.
    microk8s.kubectl logs -f my-release-mysql-0 -n sql
    
    

Describe MySQL Pod

  • Display detailed information about the MySQL pod.
    microk8s.kubectl describe pod my-release-mysql-0 -n sql
    
    