Node Management
A node is a machine that runs containerized applications. Every cluster must have at least one worker node to be able to deploy applications. Nodes are managed through node pools: nodes belonging to a node pool must have the same server type, size, and OS image, and by creating multiple node pools, a flexible deployment strategy can be established.
After creating a Kubernetes Engine cluster, add a node pool and modify or delete it as needed.
- It is recommended not to use the OS firewall on Kubernetes Engine nodes that use Calico.
- The firewall settings of Samsung Cloud Platform are set to Inactive by default.
- As recommended in the reference link below, keep the OS firewall disabled in environments using Calico.
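For reference, on RHEL-family nodes the OS firewall state can be checked and left disabled with commands like the following. This is a sketch assuming firewalld is the firewall in use; validate it against your own security policy before applying it.

```shell
# Check whether the OS firewall (firewalld) is active on a RHEL-family node.
sudo systemctl status firewalld

# Keep it disabled, in line with the Calico recommendation above.
sudo systemctl disable --now firewalld
```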
- If a node is designated as a Backup service target, the node cannot be deleted, so the following functions cannot be used.
- Node pool reduction (including auto-scaling)
- Node Pool Upgrade
- Node pool auto recovery
- Delete node pool
Add node pool
A node refers to a machine that runs containerized applications, and at least one node is required to deploy applications in a Kubernetes cluster. After the Kubernetes Engine cluster has been created, add a node pool on its details page.
- You can define and use node pools, which are sets of nodes, in Kubernetes Engine. Nodes belonging to a node pool use the same server type, size, and OS image, so users can establish flexible deployment strategies by using multiple node pools.
You can create a node pool using a Custom Image registered in the Virtual Server menu. To create a node pool using a Custom Image, follow these steps.
- Create a Virtual Server that uses the Kubernetes Engine image of Samsung Cloud Platform.
- Create an image from that Virtual Server using its Image creation function.
- Select the registered Custom Image to create a node pool.
- For more details, please refer to Virtual Server > Image Creation.
To add a node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster you want to add a node pool to. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab and click the Add Node Pool button. The Add Cluster Node Pool page will be displayed.
- On the Add Cluster Node Pool page, enter the information required to create a node pool and select detailed options.
- In the Service Information Input area, enter or select the required information.
Category Required or not Detailed description
Node Pool Name Required Name of the node pool - Start with a lowercase English letter and use lowercase English letters, numbers, and the hyphen (-) within 3 - 20 characters
- The hyphen (-) cannot be used at the end of the name
Node Pool > Server Type Required Virtual Server server type of worker node - Standard: Standard specifications commonly used
- High Capacity: Large-capacity server specifications above Standard
- GPU: GPU specifications available when securing resources for special requirements such as AI/ML
- For detailed information on server types provided by Virtual Server, refer to Virtual Server Server Type
Node Pool > Server OS Required Worker node’s Virtual Server OS image - Standard: RHEL 8.10, Ubuntu 22.04
- Custom: Custom image for Kubernetes created from Virtual Server product (RHEL, Ubuntu)
Node Pool > Block Storage Required Block Storage settings used by the worker node’s Virtual Server - SSD: High-performance general volume
- HDD: General volume
- SSD/HDD_KMS: Additional encrypted volume using Samsung Cloud Platform KMS (Key Management System) encryption key
- Encryption can only be applied at initial creation and cannot be changed after service creation
- Performance degradation occurs when using the SSD_KMS disk type
- Enter capacity in Units, with a value between 13 and 125
- Since 1 Unit is 8 GB, a volume of 104 to 1,000 GB will be created
Node Pool > Server Group Select Apply the pre-created Server Group in Virtual Server service to worker nodes - Click Use to set Server Group usage
- When usage is set, select Server Group
- Supports Affinity or Anti-Affinity policies
- Partition policy not supported
- Cannot modify after node pool creation
- GPU server type cannot be selected
Node Pool Auto Scaling Required Automatically adjust the number of nodes in the node pool - For the configuration method, refer to Node Pool Auto Scaling
Number of Nodes Required Number of worker nodes to create within a single node pool - Enter a value within the range 1 - 100
Node Auto Recovery Required When an abnormal node is found in the node pool, automatically delete it and create a new one - For the configuration method, refer to Node Pool Auto Recovery
Keypair Required User authentication method used to connect to the worker node’s Virtual Server - Create new: Create new if a new Keypair is needed
- Refer to Create Keypair
- List of default login accounts by OS
- Alma Linux: almalinux
- RHEL: cloud-user
- Rocky Linux: rocky
- Ubuntu: ubuntu
- Windows: sysadmin
Label Select Optionally schedule workloads to nodes - Click the Add button to enter label key and value
- Refer to Setting Node Pool Labels
Taint Select Prevent workloads from being scheduled onto nodes - Click the **Add** button to input taint effect, key, and value
- For configuration method, see [Node Pool Taint Settings](#노드-풀-테인트-설정하기)
Advanced Settings Select Whether to apply advanced settings to the node pool to be created - Click **Use** to enable advanced settings
- Refer to [Configure Node Pool Advanced Settings](#노드-풀-고급-설정하기) for the configuration method
Table. Kubernetes Engine node pool service information input items
- In the Summary panel, check the detailed information and estimated billing amount, and click the Create button.
- When creation is complete, check the created resources on the Cluster Details > Node Pool tab > Node Pool List page.
- If the notification popup opens, click the Confirm button.
Edit Node Pool
If needed, modify the number of nodes in the node pool on the Kubernetes Engine details page.
To modify the number of nodes, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster whose node count you want to modify. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- On the Node Pool Details page, click the Edit icon on the right of Node Pool Information. The Node Pool Edit popup window will open.
- In the Node Pool Edit popup window, modify the node pool information and click the Confirm button.
Upgrade Node Pool
If the Kubernetes version of the control plane and the version of the node pool are different, you can upgrade the node pool to synchronize the versions.
- After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
- When performing a node pool upgrade, a rolling update is carried out on the nodes belonging to the node pool. At this time, a momentary service interruption may occur, but this is a normal phenomenon due to the rolling update and will automatically normalize after a certain period.
- The server OS version may differ depending on the Kubernetes version of the node pool.
To upgrade the node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- On the Service Home page, click the Cluster menu. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster on which you want to perform a node pool version upgrade. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click More > Node Pool Upgrade at the far right of the Node Pool row. The Node Pool Version Upgrade popup will open.
- You can only upgrade the node pool when the node’s status is Running.
- After checking the information in the Node Pool Version Upgrade popup window, click the Confirm button.
Node pool auto scaling/scale-down
Node pool auto scaling is a feature that automatically adjusts the number of nodes in a node pool by adding new nodes or removing existing nodes according to workload demands. This feature operates per node pool.
- Auto scaling/scale-down is adjusted based on the resource requests of pods running on the node pool’s nodes rather than actual resource usage; the status of pods and nodes is checked periodically and scaling tasks are executed.
To set up the node pool auto-scaling/scale-down feature, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster on which you want to use the node auto-scaling/scale-down feature. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- Click the Edit icon on the right of Node Pool Information on the Node Pool Details page. The Edit Node Pool popup window opens.
- In the Edit Node Pool popup window, set Node Pool Auto Scaling to Enable.
- After entering the minimum and maximum number of nodes, click the Confirm button.
Node pool auto-scaling settings can also be configured on the cluster node pool creation page.
- Node pool expansion conditions
- When pod fails to run on the cluster due to insufficient resources (Pending pod occurs)
- Node pool reduction condition (when all satisfied)
- If the sum of resource requests (CPU/Memory) of all pods running on a node is less than 50% of the node’s allocatable resources
- If all pods running on the node can be run on another node (there must be no pods with PDB restrictions, etc.)
- While using node pool auto scaling, add the following annotation to a node to prevent it from being deleted during scale-down.
`cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"`
- Node pool auto-scaling cautions
- Node pool auto-scaling works only when the NotReady nodes in the cluster are 45% or less of all nodes and number no more than 3.
- If nodes that were not created by the Kubernetes Engine service are attached directly to the cluster, using this feature may cause malfunctions.
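The scale-down protection annotation above can be applied to a running node with kubectl; the node name in this sketch is a placeholder.

```shell
# Prevent the cluster autoscaler from removing this node during scale-down.
# "my-node" is a placeholder; use a name from `kubectl get nodes`.
kubectl annotate node my-node \
  cluster-autoscaler.kubernetes.io/scale-down-disabled="true"

# Allow scale-down again by removing the annotation (trailing minus sign).
kubectl annotate node my-node \
  cluster-autoscaler.kubernetes.io/scale-down-disabled-
```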
Auto-recover node pool
Node auto-recovery is a feature that, when an abnormal node is detected in the cluster, automatically deletes it and creates a new node so that the number of nodes in the node pool returns to a normal state. This feature operates per node pool.
According to the node auto-recovery conditions, node auto-recovery deletes the existing node and creates a new one when communication with the Kubernetes control plane fails due to node (Virtual Server) issues, a stopped state, network issues, etc., so use it with caution.
- Nodes are restored according to the conditions set when the node pool was created; custom settings made after node creation are not restored.
- If nodes that were not created by the Kubernetes Engine service are attached directly to the cluster, using this feature may cause malfunctions.
To set up the node auto-recovery feature, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster on which you want to use the node auto-recovery feature. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- Click the Edit icon on the right of Node Pool Information on the Node Pool Details page. The Edit Node Pool popup window opens.
- In the Edit Node Pool popup window, set Node Auto Recovery to Enable, then click the Confirm button.
Node auto-recovery settings can also be configured on the cluster node pool creation page.
- Node auto-recovery targets
- If a node reports NotReady status in consecutive checks for a certain time threshold (about 10 minutes)
- If the node does not report any status for a certain time threshold (about 10 minutes)
- Not node auto-recovery targets
- Node that remains in Creating state and does not become Running when initially created
- When five or more abnormal nodes occur simultaneously in the same node pool
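The NotReady condition that auto-recovery reacts to can also be checked manually with kubectl; the node name below is a placeholder.

```shell
# List nodes and their readiness; auto-recovery targets nodes stuck in NotReady.
kubectl get nodes

# Inspect the Ready condition of a single node ("my-node" is a placeholder).
kubectl get node my-node \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```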
Setting Node Pool Labels
Node pool labels are a feature for selectively scheduling workloads onto nodes.
- A node pool label is not applied to existing nodes; it is applied only to newly created nodes.
- If you need to apply a label to an existing node, set it directly with kubectl.
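As noted above, labels for existing nodes must be set directly with kubectl; a sketch with a placeholder node name and label:

```shell
# Add a label to an existing node (node name and label are illustrative).
kubectl label node my-node workload-type=batch

# Verify the label.
kubectl get node my-node --show-labels

# Remove the label by appending a minus sign to the key.
kubectl label node my-node workload-type-
```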
To set the node pool label, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to set the node pool label. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- On the Node Pool Details page, click the Edit icon next to the label. The Edit Label popup window opens.
- In the Edit Label popup window, click the Add button to add as many labels as needed.
- Enter the label information and click the Confirm button.
Setting Node Pool Taint
Node pool taint is a feature to prevent workloads from being scheduled onto nodes.
- If you set a taint on all node pools, pods required for normal cluster operation may not run.
- A node pool taint is not applied to existing nodes; it is applied only to newly created nodes.
- If you need to apply a taint to an existing node, set it directly with kubectl.
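As noted above, taints for existing nodes must be set directly with kubectl; a sketch with a placeholder node name and key:

```shell
# Add a NoSchedule taint to an existing node (names are illustrative).
kubectl taint node my-node dedicated=gpu:NoSchedule

# Only pods that declare a matching toleration will be scheduled onto the node.
# Remove the taint by appending a minus sign.
kubectl taint node my-node dedicated=gpu:NoSchedule-
```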
To set the node pool taint, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to set the node pool taint. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- On the Node Pool Details page, click the Edit icon next to a taint. The Edit Taint popup window opens.
- In the Edit Taint popup window, click the Add button to add as many taints as needed.
- Enter the taint information and click the Confirm button.
Advanced Node Pool Settings
Node pool advanced settings is a feature to apply detailed settings within a worker node, such as the maximum number of pods, PID limits, log rotation, and image GC.
Each setting corresponds to the kubelet configuration as follows.
- Maximum pods per node: maxPods
- Image GC high threshold percent: imageGCHighThresholdPercent
- Image GC low threshold percent: imageGCLowThresholdPercent
- Container log maximum size (MB): containerLogMaxSize
- Container log maximum file count: containerLogMaxFiles
- Pod PID limit: podPidsLimit
- Unsafe Sysctl allowed: allowedUnsafeSysctls
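For reference, the console items above correspond to fields in the kubelet's KubeletConfiguration; a minimal fragment with purely illustrative values (not service defaults) might look like this:

```yaml
# Illustrative kubelet configuration fragment; values are examples only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110                     # Maximum pods per node
imageGCHighThresholdPercent: 85  # Disk usage % at which image GC starts
imageGCLowThresholdPercent: 80   # Disk usage % image GC frees down to
containerLogMaxSize: "10Mi"      # Maximum size of a container log file
containerLogMaxFiles: 5          # Maximum number of rotated log files
podPidsLimit: 4096               # Maximum PIDs per pod
allowedUnsafeSysctls:            # Unsafe sysctls pods may request
  - "net.ipv4.tcp_keepalive_time"
```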
To perform advanced settings for the node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to configure node pool advanced settings. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click Create Node Pool. You will be taken to the Create Node Pool page.
- On the Create Node Pool page, set Advanced Settings to Enable.
- After selecting Use, enter the required information for the items that appear.
- On the Summary tab, after confirming that the required information has been entered correctly, click the Create button.
Delete node pool
If necessary, delete the node pool from the Kubernetes Engine details page.
To delete the node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster whose node pool you want to delete. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the More button at the far right of the node pool row. In the More menu, click Delete Node Pool.
- In the Delete Node Pool popup window, select the checkbox, enter the name of the node pool to delete, then click the Confirm button.
- You must select the checkbox of the node deletion confirmation message for the Confirm button to be enabled.
Check node details
A node is a working machine used in a Kubernetes cluster, containing essential services required to run Pods. Each node is managed by the master components, and depending on the cluster configuration, virtual machines or physical machines can be used as nodes.
After creating the cluster, you can view information such as metadata and object information of the added nodes, and edit the resource file with a YAML editor.
To view detailed information of a node, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Node menu on the Service Home page. You will be taken to the Node List page.
- On the Node List page, select the cluster you want to view using the gear button at the top left, then click the Confirm button.
- Click the node whose detailed information you want to view. You will be taken to the Node Details page.
Category Detailed description
Status Display Displays the current status of the node
Detailed Information Check the node’s Account information, metadata, and object information
YAML Node resources can be edited in the YAML editor - Click the Edit button, modify the resource, then click the Save button to apply changes
- While editing, click the Diff button to view the changes
Event Check events that occurred on the node
Pod Check the node’s pod information - A Pod is the smallest compute unit that can be created, managed, and deployed in Kubernetes Engine
Account Information Check basic information about the Account, such as Account name, location, and creation date
Metadata Information Check metadata information such as node labels, annotations, and taints
Object Information Displays object information of the created node, such as internal IP, machine ID, capacity, and resources - If GPU resources are present, check the number of GPUs in the Capacity > nvidia.com/gpu column
Table. Node Detailed Information Items
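The node information shown in the console (labels, taints, capacity, events) can also be inspected with kubectl; the node name here is a placeholder.

```shell
# List nodes with status, version, and internal IP.
kubectl get nodes -o wide

# Show detailed information for one node ("my-node" is a placeholder):
# labels, taints, capacity (including nvidia.com/gpu if present), and events.
kubectl describe node my-node
```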