Use K8s Only

INFO

This guide is intended for users who are already familiar with Kubernetes. It does not cover how to set up a Kubernetes cluster or provide tutorials on using kubectl, and it uses some advanced Kubernetes terminology, so a certain level of familiarity is assumed.

This article focuses on deploying GZCTF in a Kubernetes cluster. For configuration instructions specific to GZCTF itself, please refer to the Quick Start guide.

Deployment Notes

  1. GZCTF supports multi-instance deployment, but testing shows that the most stable approach is currently a single-instance deployment with the database on the same node. Therefore, this article focuses on single-instance deployment as its example.

  2. For multi-instance deployment, all instances need to use S3/object storage to ensure file consistency and write concurrency (see #365). Additionally, Redis needs to be deployed so that the cache stays consistent across instances. Refer to the appsettings documentation to configure both (a hedged snippet follows this list); multiple providers are supported, such as Azure, AWS, and MinIO.

  3. For multi-instance deployment, the load balancer needs to be configured with sticky sessions so that real-time data can be delivered over WebSockets.

  4. If you prefer a simpler deployment, go for a single-instance deployment!

  5. Choosing to deploy with Kubernetes usually means you expect to run a large number of Pods, so pay attention to the following configuration:

    • Specify --kube-controller-manager-arg=node-cidr-mask-size=16 during installation. The default node CIDR mask is /24, which supports at most about 254 Pods per node, and it cannot be changed after installation.

      INSTALL_K3S_EXEC="--kube-controller-manager-arg=node-cidr-mask-size=16"
    • Adjust the kubelet's maxPods value accordingly; otherwise you may hit the per-node Pod limit and be unable to schedule more Pods (a k3s install sketch combining both settings follows this list).

      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      maxPods: 800
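
If you install with k3s, one way to apply both settings above at install time is to pass the controller-manager flag through INSTALL_K3S_EXEC and point the kubelet at the KubeletConfiguration file via --kubelet-arg. This is only a sketch: the config file path is an example, and the exact flags should be checked against the k3s documentation for your version.

    # The path below is an example; save the KubeletConfiguration above there first.
    export INSTALL_K3S_EXEC="--kube-controller-manager-arg=node-cidr-mask-size=16 --kubelet-arg=config=/etc/rancher/k3s/kubelet.config"
    curl -sfL https://get.k3s.io | sh -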
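
For note 2, the shared cache and object storage are configured in appsettings.json. The snippet below is only a hedged sketch: the Database and RedisCache connection strings match the manifests later in this guide, while the Storage key and the format of its value are assumptions that you should verify against the appsettings documentation.

    {
      "ConnectionStrings": {
        "Database": "Host=gzctf-db:5432;Database=gzctf;Username=postgres;Password=<database password>",
        // Shared Redis cache for multiple instances (the Redis deployed below has no password)
        "RedisCache": "gzctf-redis:6379",
        // S3-compatible object storage for shared files -- assumed example, verify the exact format in the appsettings docs
        "Storage": "minio.s3://keyId=<key id>;key=<secret key>;bucket=<bucket>;serviceUrl=<endpoint>"
      }
    }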

Deploying GZCTF

  1. Create the namespace and configuration files. See appsettings. (A kubectl sketch for applying these manifests follows the deployment steps.)

    apiVersion: v1
    kind: Namespace
    metadata:
      name: gzctf-server
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gzctf-config
      namespace: gzctf-server
    data:
      appsettings.json: |
        { ... } # content of appsettings.json
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: gzctf-sa
      namespace: gzctf-server
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: gzctf-crb
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin # used to access the Kubernetes API
    subjects:
      - kind: ServiceAccount
        name: gzctf-sa
        namespace: gzctf-server
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: gzctf-tls
      namespace: gzctf-server
    type: kubernetes.io/tls
    data:
      tls.crt: ... # base64 encoded TLS certificate
      tls.key: ... # base64 encoded TLS private key
  2. Create local PVs (if you need to share storage among multiple instances, adjust this configuration accordingly)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gzctf-files-pv
      namespace: gzctf-server
    spec:
      capacity:
        storage: 2Gi # If S3 storage is configured, this PV is only used for logs, so 2Gi is more than enough
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /mnt/path/to/gzctf/files # local path
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gzctf-db-pv
      namespace: gzctf-server
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /mnt/path/to/gzctf/db # local path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gzctf-files
      namespace: gzctf-server
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi # If S3 storage is configured, this PV is only used for logs, so 2Gi is more than enough
      volumeName: gzctf-files-pv
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gzctf-db
      namespace: gzctf-server
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      volumeName: gzctf-db-pv
  3. Create the Deployment manifest of GZCTF

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf
      namespace: gzctf-server
      labels:
        app: gzctf
    spec:
      replicas: 1
      strategy:
        type: RollingUpdate
      selector:
        matchLabels:
          app: gzctf
      template:
        metadata:
          labels:
            app: gzctf
        spec:
          serviceAccountName: gzctf-sa
          nodeSelector:
            kubernetes.io/hostname: xxx # Specify the deployment node, forcing it to be on the same node as the database
          containers:
            - name: gzctf
              image: gztime/gzctf:latest
              imagePullPolicy: Always
              env:
                - name: GZCTF_ADMIN_PASSWORD
                  value: xxx # Admin password
                # choose your backend language `en_US` / `zh_CN` / `ja_JP`
                - name: LC_ALL
                  value: en_US.UTF-8
              ports:
                - containerPort: 8080
                  name: http
              volumeMounts:
                - name: gzctf-files
                  mountPath: /app/files
                - name: gzctf-config
                  mountPath: /app/appsettings.json
                  subPath: appsettings.json
              resources:
                requests:
                  cpu: 1000m
                  memory: 384Mi
          volumes:
            - name: gzctf-files
              persistentVolumeClaim:
                claimName: gzctf-files
            - name: gzctf-config
              configMap:
                name: gzctf-config
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf-redis
      namespace: gzctf-server
      labels:
        app: gzctf-redis
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gzctf-redis
      template:
        metadata:
          labels:
            app: gzctf-redis
        spec:
          containers:
            - name: gzctf-redis
              image: redis:alpine
              imagePullPolicy: Always
              ports:
                - containerPort: 6379
                  name: redis
              resources:
                requests:
                  cpu: 10m
                  memory: 64Mi
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf-db
      namespace: gzctf-server
      labels:
        app: gzctf-db
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gzctf-db
      template:
        metadata:
          labels:
            app: gzctf-db
        spec:
          nodeSelector:
            kubernetes.io/hostname: xxx # Specify the deployment node; it needs to be on the same node as GZCTF
          containers:
            - name: gzctf-db
              image: postgres:alpine
              imagePullPolicy: Always
              ports:
                - containerPort: 5432
                  name: postgres
              env:
                - name: POSTGRES_PASSWORD
                  value: xxx # Database password, needs to be consistent with the database password in appsettings.json
              volumeMounts:
                - name: gzctf-db
                  mountPath: /var/lib/postgresql/data
              resources:
                requests:
                  cpu: 500m
                  memory: 512Mi
          volumes:
            - name: gzctf-db
              persistentVolumeClaim:
                claimName: gzctf-db
  4. Create Service and Ingress

    apiVersion: v1
    kind: Service
    metadata:
      name: gzctf
      namespace: gzctf-server
      annotations: # Enable Traefik Sticky Session
        traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
        traefik.ingress.kubernetes.io/service.sticky.cookie.name: "LB_Session"
        traefik.ingress.kubernetes.io/service.sticky.cookie.httponly: "true"
    spec:
      selector:
        app: gzctf
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gzctf-db
      namespace: gzctf-server
    spec:
      selector:
        app: gzctf-db
      ports:
        - protocol: TCP
          port: 5432
          targetPort: 5432
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: gzctf-redis
      namespace: gzctf-server
    spec:
      selector:
        app: gzctf-redis
      ports:
        - protocol: TCP
          port: 6379
          targetPort: 6379
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: gzctf
      namespace: gzctf-server
      annotations: # Some TLS settings for Traefik; modify them as needed
        kubernetes.io/ingress.class: "traefik"
        traefik.ingress.kubernetes.io/router.tls: "true"
        ingress.kubernetes.io/force-ssl-redirect: "true"
    spec:
      tls:
        - hosts:
            - ctf.example.com # Domain name
          secretName: gzctf-tls # Name of the TLS Secret created in step 1
      rules:
        - host: ctf.example.com # Domain name
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: gzctf
                    port:
                      number: 8080
  5. Additional Configuration for Traefik

In order for GZCTF to obtain users' real IP addresses via the X-Forwarded-For (XFF) header, Traefik must add the XFF header correctly. Please note that the following content may not be up to date or applicable to every version of Traefik; it is an example of Helm values, so look up the latest configuration method yourself. A sketch of applying these values with Helm follows the block.

service:
  spec:
    externalTrafficPolicy: Local # To make XFF work properly, set externalTrafficPolicy to Local
deployment:
  kind: DaemonSet
ports:
  web:
    redirectTo: websecure # Redirect HTTP to HTTPS
additionalArguments:
  - "--entryPoints.web.proxyProtocol.insecure"
  - "--entryPoints.web.forwardedHeaders.insecure"
  - "--entryPoints.websecure.proxyProtocol.insecure"
  - "--entryPoints.websecure.forwardedHeaders.insecure"

Deployment Tips

  1. If you want GZCTF to automatically create an admin account during initialization, make sure to pass the GZCTF_ADMIN_PASSWORD environment variable. Otherwise, you will need to manually create the admin account.
  2. Verify on the system log page that GZCTF correctly records users' real IP addresses. If not, check whether the Traefik configuration is correct.
  3. If you want to monitor the resource usage, please deploy Prometheus and Grafana on your own, enable Prometheus support in Traefik, and monitor the resource usage of the challenge containers using node exporter.
  4. If you need the GZCTF deployment to update automatically when the configuration file changes, please refer to Reloader (an annotation sketch follows this list).
  5. Inside the cluster, you can use https://kubernetes.default.svc.cluster.local:443 as the server field in the cluster configuration file (a kubeconfig sketch follows this list).
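
For tip 4, Reloader can roll the GZCTF Deployment whenever a ConfigMap or Secret it references changes. Assuming Reloader is already installed in the cluster, enabling it is typically just an annotation on the Deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gzctf
      namespace: gzctf-server
      annotations:
        reloader.stakater.com/auto: "true" # restart the Deployment when referenced ConfigMaps/Secrets change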
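
For tip 5, a cluster entry in a kubeconfig-style configuration file might look like the sketch below; how you supply user credentials (service account token, client certificates, etc.) depends on your setup and is omitted here.

    apiVersion: v1
    kind: Config
    clusters:
      - name: in-cluster
        cluster:
          server: https://kubernetes.default.svc.cluster.local:443
          # CA bundle mounted into Pods that use a ServiceAccount (e.g. gzctf-sa)
          certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt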