Use Docker and K8s
This tutorial uses the lightweight K3s as an example. If you have already deployed a K8s cluster, you can adapt the configuration steps below to it.
Please read the entire tutorial before proceeding, otherwise the deployment result may not meet expectations.
Install Docker
You can find the installation instructions on the Docker official website, and use the basic configuration in Quick Start to run GZCTF.
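As a minimal sketch for a common Linux distribution, Docker can be installed with the official convenience script (for production you may prefer the distribution-specific packages described in the Docker docs):

```bash
# Install Docker via the official convenience script
curl -fsSL https://get.docker.com | sh
# Start Docker and enable it on boot
sudo systemctl enable --now docker
```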
Install K3s
K3s is a lightweight k8s distribution that can be deployed quickly on a single machine or across multiple machines. Official documentation: https://docs.k3s.io/
If you only have one machine, choose the deployment method according to the scale of your competition:
- For private intranet competitions with fewer than 200 teams and 30 challenges, you can deploy with Docker directly, without k8s.
- For public or larger competitions, it is strongly recommended to use k3s for an integrated deployment; do not use Docker.
Finally, if you really want to run k3s with Docker as its container runtime, you can specify the Docker backend by adding the following parameter during installation. We strongly do NOT recommend this: you may have to fix various compatibility issues, which will cause a lot of trouble.
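If you still choose this path, a sketch of the installation, assuming the `--docker` server flag is available in your k3s version (it tells k3s to use Docker as the container runtime):

```bash
# NOT recommended: install k3s with Docker as the container runtime
curl -sfL https://get.k3s.io | sh -s - --docker
```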
If you need to run more than 255 challenge containers on a single k3s instance, you need to specify INSTALL_K3S_EXEC during k3s installation and change node-cidr-mask-size to the desired subnet size.
The above configuration cannot be easily changed after installation. We recommend using /22 (about 1024 Pods per node), or /20 (about 4096 Pods per node) if needed. Increasing the block further (e.g., /16) just reserves an oversized address space per node and reduces the cluster's ability to scale out workers.
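A sketch of such an installation, assuming the subnet size is passed to the kube-controller-manager via `INSTALL_K3S_EXEC` (the value `22` reserves a `/22`, about 1024 Pods per node):

```bash
# Install k3s with a /22 Pod subnet per node (~1024 Pods)
curl -sfL https://get.k3s.io | \
  INSTALL_K3S_EXEC="--kube-controller-manager-arg=node-cidr-mask-size=22" sh -
```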
The above configuration only changes the IP address range available to a node's Pods. If you also need to change the node's Pod limit, see the section on changing the container limit of K3s below.
Then install k3s as follows. For more information, refer to the k3s installation configuration documentation:
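```bash
# Standard single-node k3s installation from the official install script
curl -sfL https://get.k3s.io | sh -
```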
Chinese users can use a mirror site to speed up installation.
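As a sketch, the Rancher China mirror is commonly used for this; verify the URL against the current mirror documentation before relying on it:

```bash
# Install k3s via the China mirror of the install script
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
```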
For multi-node installation and cluster setup, please refer to the official documentation.
Configure GZCTF
The connection configuration file for k3s is located at /etc/rancher/k3s/k3s.yaml, and it can be exported using the following command:
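For example, print the file to the terminal (root is required to read it):

```bash
# Print the k3s kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml
```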
Use the following command to obtain the IP address of the k3s control plane machine:
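A sketch of one way to check this; the bundled kubectl reports the API server address:

```bash
# Prints the address of the Kubernetes control plane (API server)
sudo k3s kubectl cluster-info
```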
If it shows 127.0.0.1, the k3s control plane is on the current machine; use `ip a` to check the machine's IP address.
You can use the IP address directly or use a domain name, but make sure the domain name resolves to the machine where the k3s control plane is located, and ensure that the machine running GZCTF can reach port 6443 on it.
Save the above output as kube-config.yaml and change the server field to the IP address of the machine where the k3s control plane is located, for example:
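A sketch of the relevant part of the file; 192.168.1.100 is a placeholder for your control plane's address, and the rest of the exported file stays unchanged:

```yaml
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: <base64 data>
      server: https://192.168.1.100:6443   # was https://127.0.0.1:6443
    name: default
# ... contexts, users, and the remaining fields stay as exported
```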
Place this file on the machine where GZCTF is deployed, in the same folder as compose.yml, for example as ./kube-config.yaml.
Then modify the mount configuration in compose.yml:
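A sketch of the volume mount, assuming GZCTF reads the kubeconfig from /app/kube-config.yaml inside the container; the rest of the service definition comes from your existing compose.yml:

```yaml
services:
  gzctf:
    # ... existing image, ports, and environment settings ...
    volumes:
      - ./kube-config.yaml:/app/kube-config.yaml:ro   # k3s connection config
```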
Also, change the appsettings.json file and set the ContainerProvider field:
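A minimal sketch, assuming the backend is selected via a Type value of "Kubernetes"; verify the exact field names against your existing appsettings.json and keep its other settings unchanged:

```json
{
  "ContainerProvider": {
    "Type": "Kubernetes"
  }
}
```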
Restart GZCTF, and you can now use k3s as the container backend. Users already running k8s can follow the same configuration process to integrate GZCTF into an existing k8s cluster.
Change NodePort Port Range
The default NodePort port range for k3s is 30000-32767, which may not meet your requirements. Therefore, you can modify the NodePort port range of k3s according to your needs.
Run the following commands on the machine where the k3s control plane is located:
- `sudo nano /etc/systemd/system/k3s.service`
- Edit the `ExecStart` setting to specify `service-node-port-range` (see the example after this list)
- `sudo systemctl daemon-reload`
- `sudo systemctl restart k3s`
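A sketch of the edited excerpt of k3s.service; the exact ExecStart line on your machine may carry additional arguments that should be kept:

```ini
# /etc/systemd/system/k3s.service (excerpt)
ExecStart=/usr/local/bin/k3s \
    server \
    --service-node-port-range=20000-40000
```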
Change the container limit of K3s
The default container limit of K3s is 110, which may not be suitable for a large number of small containers in a competition. Therefore, you can change the container limit of K3s according to your needs.
On the machine where the k3s control plane is located, use the built-in config file `/etc/rancher/k3s/config.yaml`:
- `sudo nano /etc/rancher/k3s/config.yaml`
- Add or update the Pod limit setting (the example after this list raises the per-node limit to 800)
- Restart the service to apply: `sudo systemctl restart k3s`
- Optional: if you also need a larger Pod subnet per node, configure `kube-controller-manager-arg` in the same file (also shown in the example below). We recommend `/22` (~1024 Pods per node), or `/20` if required
- Note: using a very small mask (e.g., `/16`) allocates an oversized block per node and hurts horizontal scaling by limiting the number of workers you can add
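A sketch of /etc/rancher/k3s/config.yaml combining both settings, assuming the Pod limit is raised through the kubelet's max-pods argument:

```yaml
# /etc/rancher/k3s/config.yaml
kubelet-arg:
  - "max-pods=800"                 # raise the per-node Pod limit (default 110)
# Optional: larger Pod subnet per node (/22 ≈ 1024 Pods per node)
kube-controller-manager-arg:
  - "node-cidr-mask-size=22"
```

Apply the changes with `sudo systemctl restart k3s`.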
Add Container Image Registry
Directly using an external container image registry is not supported by k3s out of the box; you need to add the registry to the k3s configuration.
Run the following commands on the machine where the k3s control plane is located:
- `sudo nano /etc/rancher/k3s/registries.yaml`
- Edit the `mirrors` setting to specify the address of the image registry you need (see the example after this list)
- `sudo systemctl restart k3s`
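A sketch of /etc/rancher/k3s/registries.yaml; registry.example.com and the credentials are placeholders for your own registry:

```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  "registry.example.com":
    endpoint:
      - "https://registry.example.com"
configs:
  "registry.example.com":
    auth:
      username: your-username   # only needed for a private registry
      password: your-password
```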