1 - Overview

Service Overview

Kubernetes Engine is a service that provides lightweight virtual computing and containers, as well as a Kubernetes cluster to manage them. Because the service installs, operates, and maintains the Kubernetes Control Plane on your behalf, you can use a Kubernetes environment without complex preparation.

Features

  • Standard Kubernetes Environment Configuration: The provided default Kubernetes Control Plane lets you use a standard Kubernetes environment without separate configuration. It is compatible with applications from other standard Kubernetes environments, so you can run standard Kubernetes applications without modifying their code.

  • Easy Kubernetes Deployment: Provides secure communication between worker nodes and managed control planes, and quickly provisions worker nodes, allowing users to focus on building applications on the provided container environment.

  • Convenient Kubernetes Management: Provides various management features to conveniently use the created Kubernetes cluster, such as cluster information inquiry and cluster management, namespace management, and workload management through the dashboard for enterprise environments.

Service Composition Diagram

Configuration Diagram
Figure. K8s Engine Configuration Diagram

Provided Features

Kubernetes Engine provides the following features.

  • Cluster Management: You can create and manage clusters to use the Kubernetes Engine service. After creating a cluster, you can add services necessary for operation, such as nodes, namespaces, and workloads.
  • Node Management: A node is a machine that runs containerized applications. Every cluster must have at least one worker node to deploy applications. Nodes are defined and used through node pools. Nodes belonging to a node pool must have the same server type, size, and OS image, and multiple node pools can be created to establish a flexible deployment strategy.
  • Namespace Management: Namespace is a logical separation unit within a Kubernetes cluster, and is used to specify access permissions or resource usage limits by namespace.
  • Workload Management: Workload is an application running on Kubernetes Engine. You can create a namespace, then add or delete workloads. Workloads are created and managed item by item, such as deployments, pods, stateful sets, daemon sets, jobs, and cron jobs.
  • Service and Ingress Management: Service is an abstraction method that exposes applications running in a set of pods as a network service, and Ingress is used to expose HTTP and HTTPS paths from outside the cluster to the inside. After creating a namespace, you can create or delete services, endpoints, ingresses, and ingress classes.
  • Storage Management: When using Kubernetes Engine, you can create and manage the storage to be used. Storage is created and managed by items such as PVC, PV, and storage class.
  • Configuration Management: When values inside a container must change across multiple environments such as Dev/Prod, maintaining a separate image per environment just for environment variables is inconvenient and wasteful. In Kubernetes, you can manage environment variables and configuration values externally so that they are injected when a Pod is created, using ConfigMap and Secret.
  • Access Control: In cases where multiple users access a Kubernetes cluster, you can grant permissions for specific APIs or namespaces to restrict access. You can apply Kubernetes’ role-based access control (RBAC) feature to set permissions for clusters or namespaces. You can create and manage cluster roles, cluster role bindings, roles, and role bindings.
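The externalized-configuration idea described under Configuration Management can be sketched in a short example. This is an illustrative sketch only: the ConfigMap name, keys, and values below are hypothetical, and real manifests are written in YAML and applied to the cluster.

```python
import json

# Hypothetical ConfigMap: environment-specific values kept outside the
# container image, expressed here as a Python dict for illustration.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "app-config"},
    "data": {"DB_HOST": "dev-db.internal", "LOG_LEVEL": "debug"},
}

# A container spec can then reference the ConfigMap instead of baking the
# values into its image, so the same image serves Dev and Prod.
container_env = [
    {"name": key,
     "valueFrom": {"configMapKeyRef": {"name": "app-config", "key": key}}}
    for key in configmap["data"]
]

print(json.dumps(container_env[0], indent=2))
```

Swapping the ConfigMap for a Secret follows the same pattern (`secretKeyRef` instead of `configMapKeyRef`) for sensitive values.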

Component

Control Plane

The Control Plane acts as the master node of the Kubernetes Engine service. The master node is the management node of the cluster: it assigns tasks, monitors the status of the nodes, and handles data communication between nodes. The cluster is the basic creation unit of the Kubernetes Engine service and is used to manage the node pools, objects, controllers, and other components within it. Users configure the cluster name, control plane, network, File Storage, and other settings, then create a node pool within the cluster to use it.

The cluster name creation rule is as follows.

  • The name must start with an English letter and may use English letters, numbers, and the special character (-), within 3-30 characters.
  • The cluster name must not duplicate an existing cluster name.
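The naming rules above can be expressed as a simple check. The helper below is a hypothetical illustration; the console performs the actual validation, including the uniqueness check against existing cluster names, which cannot be verified locally.

```python
import re

# Illustrative pattern for the documented rules: starts with an English
# letter, 3-30 characters total, using letters, digits, and hyphen (-).
CLUSTER_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]{2,29}$")

def is_valid_cluster_name(name: str) -> bool:
    """Return True if `name` satisfies the documented naming rules."""
    return CLUSTER_NAME_RE.fullmatch(name) is not None

print(is_valid_cluster_name("my-cluster-01"))  # True
print(is_valid_cluster_name("1cluster"))       # False: starts with a digit
print(is_valid_cluster_name("ab"))             # False: shorter than 3 characters
```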

Worker Node

The Worker Node is a work node in the cluster, playing a role in performing the cluster’s tasks. The Worker Node receives tasks from the cluster’s master node, performs them, and reports the task results to the cluster’s master node. All nodes created within the node pool and namespace play the role of a worker node.

The creation rule of the node pool, which is a collection of worker nodes, is as follows.

  • A node pool must contain at least one node for application deployment to be possible.
  • Up to 100 nodes can be created in a node pool.
  • Because the cluster-wide limit is 100 nodes, nodes can be distributed freely across node pools within that limit: for example, 100 node pools with 1 node each, or 50 node pools with 2 nodes each.
  • Block Storage connected to the node pool can be configured.
  • The server type, size, and OS image can be set for the nodes belonging to a node pool, and all nodes in the pool must use the same values.
  • The Auto-Scaling service allows automatic node pool expansion/reduction according to the requirements of the deployed application.
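The node-count limits above can be sketched as a quick feasibility check. The function below is a hypothetical illustration of the documented limits, not part of the service API.

```python
# Documented limits: at most 100 nodes per cluster in total,
# distributed freely across node pools; each pool needs at least one node.
MAX_NODES_PER_CLUSTER = 100

def can_create_layout(pool_sizes: list) -> bool:
    """Check a proposed node-pool layout against the documented limits."""
    if not pool_sizes or any(size < 1 for size in pool_sizes):
        return False  # every node pool needs at least one node
    return sum(pool_sizes) <= MAX_NODES_PER_CLUSTER

print(can_create_layout([1] * 100))   # True: 100 pools x 1 node
print(can_create_layout([2] * 50))    # True: 50 pools x 2 nodes
print(can_create_layout([50, 51]))    # False: 101 nodes in total
```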

Preceding Service

The following services must be configured before creating this service. For details, refer to the guide provided for each service and prepare them in advance.

| Service Category | Service | Detailed Description |
|---|---|---|
| Networking | VPC | A service that provides an independent virtual network in a cloud environment |
| Networking | Security Group | A virtual firewall that controls the server’s traffic |
| Storage | File Storage | Storage that allows multiple clients to share files over the network; used as a Persistent Volume |

Table. Preceding services of Kubernetes Engine

1.1 - Monitoring Metrics

Kubernetes Engine Monitoring Metrics

The following table shows the monitoring metrics of Kubernetes Engine that can be checked through Cloud Monitoring. For detailed instructions on using Cloud Monitoring, refer to the Cloud Monitoring guide.

| Performance Item | Detailed Description | Unit |
|---|---|---|
| Cluster Namespaces [Active] | Number of active namespaces | cnt |
| Cluster Namespaces [Total] | Total number of namespaces in the cluster | cnt |
| Cluster Nodes [Ready] | Number of nodes in READY state | cnt |
| Cluster Nodes [Total] | Total number of nodes in the cluster | cnt |
| Cluster Pods [Failed] | Number of failed pods in the cluster | cnt |
| Cluster Pods [Pending] | Number of pending pods in the cluster | cnt |
| Cluster Pods [Running] | Number of running pods in the cluster | cnt |
| Cluster Pods [Succeeded] | Number of succeeded pods in the cluster | cnt |
| Cluster Pods [Unknown] | Number of unknown pods in the cluster | cnt |
| Instance Status | Cluster status | status |
| Namespace Pods [Failed] | Number of failed pods in the namespace | cnt |
| Namespace Pods [Pending] | Number of pending pods in the namespace | cnt |
| Namespace Pods [Running] | Number of running pods in the namespace | cnt |
| Namespace Pods [Succeeded] | Number of succeeded pods in the namespace | cnt |
| Namespace Pods [Unknown] | Number of unknown pods in the namespace | cnt |
| Namespace GPU Clock Frequency | SM clock frequency in the namespace | MHz |
| Namespace GPU Memory Usage | Memory utilization in the namespace | % |
| Namespace GPU Usage | GPU utilization in the namespace | % |
| Node CPU Size [Allocatable] | Allocatable CPU in the node | cnt |
| Node CPU Size [Capacity] | CPU capacity in the node | cnt |
| Node CPU Usage | CPU usage in the node | % |
| Node CPU Usage [Request] | CPU request ratio in the node | % |
| Node CPU Used | CPU utilization in the node | status |
| Node Filesystem Usage | Filesystem usage in the node | % |
| Node Memory Size [Allocatable] | Allocatable memory in the node | bytes |
| Node Memory Size [Capacity] | Memory capacity in the node | bytes |
| Node Memory Usage | Memory utilization in the node | % |
| Node Memory Usage [Request] | Memory request ratio in the node | % |
| Node Memory Workingset | Memory working set in the node | bytes |
| Node Network In Bytes | Node network received bytes | bytes |
| Node Network Out Bytes | Node network transmitted bytes | bytes |
| Node Network Total Bytes | Node network total bytes | bytes |
| Node Pods [Failed] | Number of failed pods in the node | cnt |
| Node Pods [Pending] | Number of pending pods in the node | cnt |
| Node Pods [Running] | Number of running pods in the node | cnt |
| Node Pods [Succeeded] | Number of succeeded pods in the node | cnt |
| Node Pods [Unknown] | Number of unknown pods in the node | cnt |
| Pod CPU Usage [Limit] | CPU usage limit ratio in the pod | % |
| Pod CPU Usage [Request] | CPU request ratio in the pod | % |
| Pod CPU Usage | CPU usage in the pod | % |
| Pod GPU Clock Frequency | SM clock frequency in the pod | MHz |
| Pod GPU Memory Usage | Memory utilization in the pod | % |
| Pod GPU Usage | GPU utilization in the pod | % |
| Pod Memory Usage [Limit] | Memory usage limit ratio in the pod | % |
| Pod Memory Usage [Request] | Memory request ratio in the pod | % |
| Pod Memory Usage | Memory usage in the pod | bytes |
| Pod Network In Bytes | Pod network received bytes | bytes |
| Pod Network Out Bytes | Pod network transmitted bytes | bytes |
| Pod Network Total Bytes | Pod network total bytes | bytes |
| Pod Restart Containers | Container restart count in the pod | cnt |
| Workload Pods [Running] | - | cnt |

Table. Kubernetes Engine Monitoring Metrics

1.2 - ServiceWatch Metrics

Kubernetes Engine sends metrics to ServiceWatch. The metrics provided as basic monitoring are data collected at 1-minute intervals.

Note
For information on how to check metrics in ServiceWatch, refer to the ServiceWatch guide.

Basic Metrics

The following are basic metrics for the Kubernetes Engine namespace.

Metrics with metric names shown in bold below are key metrics selected among the basic metrics provided by Kubernetes Engine. Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch.

For each metric, the user guide describes which statistical value is meaningful when querying that metric, and the statistical value shown in bold among the meaningful statistics is the key statistic. You can query key metrics through key statistics in the service dashboard.

| Metric Name | Detailed Description | Unit | Meaningful Statistics |
|---|---|---|---|
| cluster_up | Cluster up | Count | Sum, Average, Maximum, Minimum |
| cluster_node_count | Cluster node count | Count | Sum, Average, Maximum, Minimum |
| cluster_failed_node_count | Cluster failed node count | Count | Sum, Average, Maximum, Minimum |
| cluster_namespace_phase_count | Cluster namespace phase count | Count | Sum, Average, Maximum, Minimum |
| cluster_pod_phase_count | Cluster pod phase count | Count | Sum, Average, Maximum, Minimum |
| node_cpu_allocatable | Node CPU allocatable | - | Sum, Average, Maximum, Minimum |
| node_cpu_capacity | Node CPU capacity | - | Sum, Average, Maximum, Minimum |
| node_cpu_usage | Node CPU usage | - | Sum, Average, Maximum, Minimum |
| node_cpu_utilization | Node CPU utilization | - | Sum, Average, Maximum, Minimum |
| node_memory_allocatable | Node memory allocatable | Bytes | Sum, Average, Maximum, Minimum |
| node_memory_capacity | Node memory capacity | Bytes | Sum, Average, Maximum, Minimum |
| node_memory_usage | Node memory usage | Bytes | Sum, Average, Maximum, Minimum |
| node_memory_utilization | Node memory utilization | - | Sum, Average, Maximum, Minimum |
| node_network_rx_bytes | Node network receive bytes | Bytes/Second | Sum, Average, Maximum, Minimum |
| node_network_tx_bytes | Node network transmit bytes | Bytes/Second | Sum, Average, Maximum, Minimum |
| node_network_total_bytes | Node network total bytes | Bytes/Second | Sum, Average, Maximum, Minimum |
| node_number_of_running_pods | Node number of running pods | Count | Sum, Average, Maximum, Minimum |
| namespace_number_of_running_pods | Namespace number of running pods | Count | Sum, Average, Maximum, Minimum |
| namespace_deployment_pod_count | Namespace deployment pod count | Count | Sum, Average, Maximum, Minimum |
| namespace_statefulset_pod_count | Namespace statefulset pod count | Count | Sum, Average, Maximum, Minimum |
| namespace_daemonset_pod_count | Namespace daemonset pod count | Count | Sum, Average, Maximum, Minimum |
| namespace_job_active_count | Namespace job active count | Count | Sum, Average, Maximum, Minimum |
| namespace_cronjob_active_count | Namespace cronjob active count | Count | Sum, Average, Maximum, Minimum |
| pod_cpu_usage | Pod CPU usage | - | Sum, Average, Maximum, Minimum |
| pod_memory_usage | Pod memory usage | Bytes | Sum, Average, Maximum, Minimum |
| pod_network_rx_bytes | Pod network receive bytes | Bytes/Second | Sum, Average, Maximum, Minimum |
| pod_network_tx_bytes | Pod network transmit bytes | Bytes/Second | Sum, Average, Maximum, Minimum |
| pod_network_total_bytes | Pod network total bytes | Count | Sum, Average, Maximum, Minimum |
| container_cpu_usage | Container CPU usage | - | Sum, Average, Maximum, Minimum |
| container_cpu_limit | Container CPU limit | - | Sum, Average, Maximum, Minimum |
| container_cpu_utilization | Container CPU utilization | - | Sum, Average, Maximum, Minimum |
| container_memory_usage | Container memory usage | Bytes | Sum, Average, Maximum, Minimum |
| container_memory_limit | Container memory limit | Bytes | Sum, Average, Maximum, Minimum |
| container_memory_utilization | Container memory utilization | - | Sum, Average, Maximum, Minimum |
| node_gpu_count | Node GPU count | Count | Sum, Average, Maximum, Minimum |
| gpu_temp | GPU temperature | - | Sum, Average, Maximum, Minimum |
| gpu_power_usage | GPU power usage | - | Sum, Average, Maximum, Minimum |
| gpu_util | GPU utilization | Percent | Sum, Average, Maximum, Minimum |
| gpu_sm_clock | GPU SM clock | - | Sum, Average, Maximum, Minimum |
| gpu_fb_used | GPU FB usage | Megabytes | Sum, Average, Maximum, Minimum |
| gpu_tensor_active | GPU tensor active rate | - | Sum, Average, Maximum, Minimum |
| pod_gpu_util | Pod GPU utilization | Percent | Sum, Average, Maximum, Minimum |
| pod_gpu_tensor_active | Pod GPU tensor active rate | - | Sum, Average, Maximum, Minimum |
Table. Kubernetes Engine Basic Metrics

2 - How-to guides

Users can enter the required information for the Kubernetes Engine and select detailed options to create a service through the Samsung Cloud Platform Console.

Create Kubernetes Engine

You can create and use the Kubernetes Engine service from the Samsung Cloud Platform Console.

You can create and manage clusters to use the Kubernetes Engine service. After creating a cluster, you can add services needed for operation such as nodes, namespaces, and workloads.

Caution
  • You can select up to 4 Security Groups in the network settings of Kubernetes Engine.

    • If you directly add a Security Group to nodes created by Kubernetes Engine on the Virtual Server service page, they may be automatically detached because they are not managed by Kubernetes Engine.
    • For nodes, the Security Group must be added/managed in the network settings of the Kubernetes Engine service.
  • Managed Security Group is automatically managed in Kubernetes Engine.

    • Do not use the Managed Security Group for arbitrary user purposes; if you delete it or add/delete its rules, it will automatically be restored.

Creating a cluster

You can create and use a Kubernetes Engine cluster service from the Samsung Cloud Platform Console.

To create a Kubernetes Engine cluster, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. Click the Create Cluster button on the Service Home page. You will be taken to the Create Cluster page.
  3. On the Create Cluster page, enter the information required for service creation and select detailed options.
    • In the Service Information input area, enter or select the required information.
      • Cluster Name (Required): cluster name
        • Must start with an English letter and use English letters, numbers, and the special character (-), within 3-30 characters
      • Control Plane Settings > Kubernetes Version (Required): select the Kubernetes version
      • Control Plane Settings > Private Endpoint Access Control (Optional): select whether to use private endpoint access control
        • After selecting Use, click Add to select the resources that are allowed to access the private endpoint
        • Only resources in the same Account and same region can be registered
        • Regardless of the Use setting, the nodes of the cluster can access the private endpoint
      • Control Plane Settings > Public Endpoint Access/Access Control (Optional): select whether to use public endpoint access/access control
        • After selecting Use, enter the allowed access IP range (e.g., 192.168.99.0/24)
          • Set the access control IP range so that external users can access the Kubernetes API server endpoint
          • If external access is not needed, you can disable it to reduce security threats
      • ServiceWatch Log Collection (Optional): set whether to enable log collection so that cluster logs can be viewed in ServiceWatch
        • If Use is selected, 5 GB of log storage is provided free of charge for all services within the Account; usage beyond 5 GB is charged based on the stored amount
        • If you need to check cluster logs, enabling the ServiceWatch log collection feature is recommended
      • Cloud Monitoring Log Collection (Optional): set whether to enable log collection so that cluster logs can be viewed in Cloud Monitoring
        • Enable: if selected, 1 GB of log storage is provided free of charge for all services within the Account, and any amount exceeding 1 GB is deleted sequentially
      • Network Settings (Required): network connection settings for the node pool
        • VPC Name: select a pre-created VPC
        • Subnet Name: select the Subnet to use among the subnets of the selected VPC
        • Security Group: click the Select button, then select a Security Group in the Select Security Group popup window
          • Up to 4 Security Groups can be selected
      • File Storage Settings (Required): select the file storage volume to be used in the cluster
        • Default Volume (NFS): click the Search button, then select the file storage in the File Storage Selection popup. The default volume can only use the NFS format
      Table. Kubernetes Engine service information input items
    • In the Additional Information input area, enter or select the required information.
      • Tag (Optional): add tags
        • Up to 50 tags can be added per resource
        • After clicking the Add Tag button, enter or select the Key and Value values
      Table. Kubernetes Engine additional information input items
  4. Check the detailed information and estimated billing amount in the Summary panel, and click the Create button.
    • When creation is complete, check the created resources on the Cluster List page.
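The public endpoint access control described in the settings above restricts API server access by source IP. Below is a minimal sketch of how a CIDR range such as 192.168.99.0/24 constrains access, using Python's standard ipaddress module; the range and addresses are illustrative, and the actual enforcement is done by the service.

```python
from ipaddress import ip_address, ip_network

# Illustrative allowed access IP range for the public endpoint.
allowed_range = ip_network("192.168.99.0/24")

def is_access_allowed(source_ip: str) -> bool:
    """Return True if the source IP falls inside the allowed CIDR block."""
    return ip_address(source_ip) in allowed_range

print(is_access_allowed("192.168.99.17"))  # True: inside the /24 block
print(is_access_allowed("10.0.0.5"))       # False: outside the block
```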

Check cluster details

The Kubernetes Engine service allows you to view and edit the full resource list and detailed information. The Cluster Details page consists of the Details, Node Pool, Tags, and Work History tabs.

To view detailed cluster information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. Click the resource (cluster) you want to view detailed information for on the Cluster List page. You will be taken to the Cluster Details page.
    • The Cluster Details page displays the cluster’s status information and detailed information, and consists of the Details, Node Pool, Tags, and Work History tabs.
      • Cluster Status: Kubernetes Engine cluster status
        • Creating: creating in progress
        • Running: created / running
        • Updating: version upgrade in progress
        • Deleting: deleting in progress
        • Error: an error occurred
      • Service Termination: button to terminate a Kubernetes Engine cluster
        • To terminate the Kubernetes Engine service, you must first delete all node pools added to the cluster
        • Terminating the service may stop running services immediately, so terminate only after considering the impact of service interruption
      Table. Cluster status information and additional functions

Detailed Information

You can view detailed information of the selected resource on the Cluster List page, and modify the information if necessary.

  • Service: service name
  • Resource Type: resource type
  • SRN: unique resource ID in Samsung Cloud Platform
  • Resource Name: resource name
    • In the Kubernetes Engine service, this refers to the cluster name
  • Resource ID: unique resource ID within the service
  • Creator: user who created the service
  • Creation DateTime: date and time when the service was created
  • Modifier: user who modified the service information
  • Modification DateTime: date and time when the service information was modified
  • Cluster Name: cluster name
  • LLM Endpoint: LLM Endpoint information
  • Control Plane Settings: check the assigned Kubernetes control plane version and access permission scope
    • If the control plane’s Kubernetes version can be upgraded, click the Edit icon to perform a cluster version upgrade. See Cluster Version Upgrade for details.
    • Click the Admin Kubeconfig Download/User Kubeconfig Download button of the private endpoint address to download the kubeconfig settings for each role as a yaml document.
    • Click the Edit icon of private endpoint access control to modify its use and allowed resources.
    • Click the Admin Kubeconfig Download/User Kubeconfig Download button of the public endpoint address to download the kubeconfig settings for each role as a yaml document.
    • Click the Edit icon of public endpoint access/access control to modify its use and allowed IP range.
    • Click the Edit icon of ServiceWatch log collection to change its use. When log collection is enabled, view the cluster control plane’s Audit/Event logs in ServiceWatch > Log Group.
    • Click the Edit icon of Cloud Monitoring log collection to change its use. When log collection is enabled, view the cluster control plane’s Audit/Event logs in Cloud Monitoring > Log Analysis.
  • Network Settings: view the VPC, Subnet, and Security Group information set when creating the Kubernetes Engine cluster
    • Click each setting to view detailed information on its detail page
    • If a Security Group change is needed, click the Edit icon to configure it
    • Managed Security Group is provided by the system and is generated automatically
  • File Storage Settings: click the volume name to view detailed information on the storage detail page
Table. Cluster detailed information tab items
Reference
  • The version of Kubernetes Engine is denoted in the order [major].[minor].[patch], and you can upgrade only one minor version at a time.
    • Example: Version 1.11.x > 1.13.x (Not allowed) / Version 1.11.x > 1.12.x (Allowed)
  • If you are using a Kubernetes version that has reached end of support or a version that is scheduled to reach end of support, a red exclamation mark will appear to the right of the version. If this icon is displayed, we recommend upgrading the Kubernetes version.

Node Pool

You can view cluster node pool information and add, modify, or delete. For detailed information on using node pools, refer to Managing Nodes.

  • Add Node Pool: add a node pool to the current cluster
  • Node Pool List: check the list of node pools created in the current cluster
    • Click the node pool name to go to the detail page and view detailed information
  • More menu: provides node pool management features
    • Node Information: displays node name, version, and status information
    • Node Pool Upgrade: upgrade the node pool version
    • Node Pool Deletion: delete the node pool
Table. Node Pool Tab Items
Reference

If a red exclamation mark icon appears on the version of the node pool information, the server OS of that node pool is not supported in newer versions of Kubernetes. To ensure stable service, the node pool server OS must be upgraded.

  • To upgrade the node pool version, delete the existing node pool and then create a new node pool with a higher server OS version.

Tag

On the Cluster List page, you can view the tag information of the selected resource and add, modify, or delete tags.

  • Tag List: list of tags
    • You can check the Key and Value information of tags
    • Up to 50 tags can be added per resource
    • When entering tags, search and select from the previously created Key and Value list
Table. Cluster Tag Tab Items

Work History

You can view the operation history of the selected resource on the Cluster List page.

  • Work History List: resource change history
    • Work details, work date and time, resource type, resource name, work result, and worker information can be checked
    • Click a resource in the Work History List to open the Work History Details popup
Table. Cluster Work History Tab Items

Managing Cluster Resources

We provide cluster version upgrade, kubeconfig download, and control plane logging modification features for cluster resource management.

Caution
To use Kubernetes Engine, you need at least read permissions for VPC, VPC Subnet, Security Group, FileStorage, and Virtual Server.
Even without create/delete permissions, Security Group and Virtual Server are created/deleted by Kubernetes Engine for lifecycle management purposes, and the creator/modifier is indicated as System.

Cluster Version Upgrade

If there is a version that can be upgraded from the cluster’s Kubernetes version, you can perform the upgrade on the Cluster Details page.

Reference
  • Before the cluster upgrade, check the following items.
    • Check if the cluster status is Running
    • Check that the status of all node pools in the cluster is Running or Deleting
    • Check that all node pool versions in the cluster are the same version as the cluster
    • Check if automatic scaling/downsizing of all node pools in the cluster and node auto-recovery feature are disabled
  • After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
  • You can upgrade only one minor version at a time.
    • Example: version 1.12.x > 1.13.x (possible) / version 1.11.x > 1.13.x (not possible)
  • After an upgrade, you cannot perform a downgrade or rollback, so to use the previous version again you must create a new cluster.
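The one-minor-version rule above can be sketched as a version check. This is an illustrative helper assuming [major].[minor].[patch] version strings; the console enforces the actual rule.

```python
# Hypothetical check for the documented upgrade rule: only one minor
# version may be upgraded at a time, and downgrades are not possible.
def is_upgrade_allowed(current: str, target: str) -> bool:
    cur_major, cur_minor, _ = (int(p) for p in current.split("."))
    tgt_major, tgt_minor, _ = (int(p) for p in target.split("."))
    return tgt_major == cur_major and tgt_minor == cur_minor + 1

print(is_upgrade_allowed("1.12.3", "1.13.1"))  # True: one minor step
print(is_upgrade_allowed("1.11.0", "1.13.0"))  # False: skips 1.12.x
print(is_upgrade_allowed("1.13.0", "1.12.9"))  # False: downgrade
```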

Caution
  • Since user systems using an end-of-support Kubernetes version may become vulnerable, upgrade the control plane and node pool versions directly in the Samsung Cloud Platform Console.
    • No separate cost will be incurred due to the upgrade.
  • Please perform compatibility testing for the upgrade version in advance to ensure stable system operation for users.

Cluster version upgrade preparation

You do not need to delete and recreate API objects when upgrading the cluster version. For APIs that have transitioned to a newer version, all existing API objects can be read and updated using the new API version. However, APIs that have been deprecated and removed in newer Kubernetes versions can prevent you from reading or modifying existing objects or creating new ones. To ensure system stability, it is recommended to migrate clients and manifests before the upgrade.

Migrate clients and manifests to the supported API versions before the upgrade.

Reference
Since the deprecated API differs for each cluster version, the scope of application and system impact may also differ. For detailed explanation, refer to the Kubernetes official documentation > Deprecation Guide.
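As a hedged sketch of what such a migration check might look like, the snippet below maps a few removed apiVersions to their replacements. The mapping is a small illustrative subset; consult the Kubernetes Deprecation Guide for the authoritative list per version.

```python
# Example subset of removed apiVersions and their replacements.
REPLACEMENTS = {
    ("extensions/v1beta1", "Ingress"): "networking.k8s.io/v1",
    ("apps/v1beta1", "Deployment"): "apps/v1",
}

def migration_hint(manifest: dict):
    """Return the replacement apiVersion if the manifest uses a removed one."""
    key = (manifest.get("apiVersion"), manifest.get("kind"))
    return REPLACEMENTS.get(key)

old_ingress = {"apiVersion": "extensions/v1beta1", "kind": "Ingress"}
print(migration_hint(old_ingress))                         # networking.k8s.io/v1
print(migration_hint({"apiVersion": "v1", "kind": "Pod"}))  # None
```

In practice you would run such a check over every manifest in your deployment pipeline and update the flagged apiVersions before upgrading the cluster.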

Upgrade cluster and node pool version

To update the cluster and node pool, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click the Cluster menu on the Service Home page. Go to the Cluster List page.
  3. On the Cluster List page, click the resource (cluster) whose version you want to upgrade. You will be taken to the Cluster Details page.
  4. Click the edit icon of Kubernetes version on the Cluster Details page. The Cluster version upgrade popup opens.
  5. Select the Kubernetes version to upgrade, and click the Confirm button.
    • It may take a few minutes until the cluster upgrade is complete
    • During the upgrade, the cluster status is shown as Updating, and when the upgrade is complete, it is shown as Running.
  6. When the upgrade is complete, select the Node Pool tab. Go to the Node Pool page.
  7. Click the More button of the node pool item and click Node Pool Upgrade. The Node Pool Version Upgrade popup window opens.
  8. After checking the message in the Node Pool Version Upgrade popup window, click the Confirm button.
    • It may take a few minutes until the node pool upgrade is completed.
    • During the upgrade, the node pool status is shown as Updating, and when the upgrade is complete, it is shown as Running.

kubeconfig download

You can download the admin/user kubeconfig settings of the cluster’s public and private endpoints as a yaml document.

To download the kubeconfig settings of the cluster, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. Click the resource (cluster) to download the kubeconfig on the Cluster List page. You will be taken to the Cluster Details page.
  4. On the Cluster Details page, click the Admin Kubeconfig Download or User Kubeconfig Download button of the desired endpoint.
    • You can download the kubeconfig file in yaml format for each permission.
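The downloaded kubeconfig is a YAML document with clusters, contexts, and users sections. The sketch below shows the same structure as a Python dict and resolves the API server endpoint of the current context; all names and the server address are illustrative placeholders, not actual values.

```python
# Illustrative kubeconfig structure, expressed as a Python dict.
kubeconfig = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {"name": "my-cluster",
         "cluster": {"server": "https://203.0.113.10:6443"}},
    ],
    "contexts": [
        {"name": "my-context",
         "context": {"cluster": "my-cluster", "user": "admin-user"}},
    ],
    "current-context": "my-context",
}

def api_server_of_current_context(cfg: dict) -> str:
    """Resolve current-context -> cluster -> API server endpoint."""
    ctx_name = cfg["current-context"]
    ctx = next(c for c in cfg["contexts"] if c["name"] == ctx_name)
    cluster = next(c for c in cfg["clusters"]
                   if c["name"] == ctx["context"]["cluster"])
    return cluster["cluster"]["server"]

print(api_server_of_current_context(kubeconfig))  # https://203.0.113.10:6443
```

The admin and user kubeconfig files differ in the credentials under the users section, which determine the RBAC permissions applied to requests.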

Modify private endpoint access control

You can change the private endpoint access control settings of the cluster.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click the Cluster menu on the Service Home page. Navigate to the Cluster List page.
  3. On the Cluster List page, click the resource (cluster) for which you want to modify the private endpoint access control. You will be taken to the Cluster Details page.
  4. Click the Edit icon of Private Endpoint Access Control on the Cluster Details page. The Edit Private Endpoint Access Control popup opens.
  5. In the Private Endpoint Access Control Edit popup, set the Use status of Private Endpoint Access Control, add the allowed access resources, and then click the Confirm button.

Modify public endpoint access/access control

You can change the public endpoint access control settings of the cluster.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. On the Cluster List page, click the resource (cluster) for which you want to modify the public endpoint access control. Navigate to the Cluster Details page.
  4. Click the Edit icon of Public Endpoint Access/Access Control on the Cluster Details page. The Edit Public Endpoint Access/Access Control popup opens.
  5. In the Edit Public Endpoint Access/Access Control popup, set whether to use public endpoint access control, add the allowed IP range, and then click the Confirm button.

Modify control area log collection settings

You can change the log collection settings of the cluster’s control plane. Detailed logs of the cluster can be viewed in the ServiceWatch service or the Cloud Monitoring service.

Reference

You can also check cluster logs by setting up Cloud Monitoring log collection.

  • However, the Cloud Monitoring log collection feature is scheduled for termination, so we recommend using ServiceWatch log collection.

To change the control plane log collection settings of the cluster, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. Go to the Cluster List page.
  3. On the Cluster List page, click the resource (cluster) whose control plane logging you want to modify. You will be taken to the Cluster Details page.
  4. On the Cluster Details page, click the Edit icon of ServiceWatch Log Collection. The ServiceWatch Log Collection popup opens.
    • The Cloud Monitoring log collection feature can be set in the same way.
  5. In the ServiceWatch Log Collection popup, set whether ServiceWatch log collection is used, then click the Confirm button.
Reference

When log collection is used, you can view the Audit/Event logs of the cluster control plane in each service. Detailed logs can be viewed on the next page.

Security Group Edit

You can modify the cluster’s Security Group.

Caution
  • You can select up to 4 Security Groups in the network settings of Kubernetes Engine.

    • If you directly add a Security Group on the Virtual Server service page for nodes created by Kubernetes Engine, it may be automatically detached because it is not managed by Kubernetes Engine.
    • For nodes, Security Groups must be added and managed in the network settings of the Kubernetes Engine service.
  • The Managed Security Group is automatically managed by Kubernetes Engine.

    • Do not repurpose the Managed Security Group; if you delete it or add/delete its rules, it will be automatically restored.

Follow the steps below to modify the cluster’s Security Group.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. Click the resource (cluster) to modify the Security Group on the Cluster List page. You will be taken to the Cluster Details page.
  4. Click the Edit icon of Security Group on the Cluster Details page. The Edit Security Group popup window opens.
  5. After selecting or deselecting the Security Group to modify, click the Confirm button.

Cancel Cluster

Caution
If you terminate the cluster, all connected node pools will be deleted, and all data in all pods within the cluster will be permanently deleted.

To cancel the cluster, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. Move to the Cluster List page.
  3. On the Cluster List page, click the resource (cluster) you want to terminate. You will be taken to the Cluster Details page.
  4. Click Cancel Service on the Cluster Details page.
  5. After reviewing the content in the Service Termination popup, click the Confirm button.

2.1 - Node Management

A node is a collection of machines that run containerized applications. Every cluster must have at least one worker node to be able to deploy applications. Nodes can be used by defining node pools. Nodes belonging to a node pool must have the same server type, size, and OS image, and by creating multiple node pools, a flexible deployment strategy can be established.

After creating a Kubernetes Engine cluster, add a node pool and modify or delete it as needed.

Caution
  • It is recommended not to use the OS firewall on Kubernetes Engine nodes that use Calico.
    • The firewall settings of Samsung Cloud Platform are set to Inactive by default.
    • As recommended in the reference link below, keep the firewall disabled in environments using Calico.
  • If a node is designated as a Backup service target, it cannot be deleted, so the features below cannot be used.
    • Node pool reduction (including auto-scaling)
    • Node Pool Upgrade
    • Node pool auto recovery
    • Delete node pool

Add node pool

A node refers to a machine that runs containerized applications, and at least one node is required to deploy applications in a Kubernetes cluster. After the creation of a Kubernetes Engine cluster is complete, add a node pool on the details page.

  • You can define and use node pools, which are sets of nodes, in Kubernetes Engine. Nodes belonging to a node pool use the same server type, size, and OS image, so users can establish flexible deployment strategies by using multiple node pools.
Reference

In the Virtual Server menu, you can create a node pool using the user’s Custom Image. To create a node pool using a Custom Image, follow these steps.

  1. Create a Virtual Server that includes the Kubernetes Engine image of Samsung Cloud Platform.
  2. Use the Image creation of the corresponding Virtual Server to proceed with image creation.
  3. Select the registered Custom Image to create a node pool.

To add a node pool, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. On the Cluster List page, select the cluster you want to add a node pool to. Navigate to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the Add Node Pool button. The Add Cluster Node Pool page will be displayed.
  5. On the Add Cluster Node Pool page, enter the information required to create a node pool and select detailed options.
    • In the Service Information Input area, enter or select the required information.
      • Node Pool Name (Required): Node pool name
        • Start with a lowercase English letter and use lowercase English letters, numbers, and the special character (-) within 3 - 20 characters
        • The special character (-) cannot be used at the end of the name
      • Node Pool > Server Type (Required): Virtual Server server type of the worker node
        • Standard: Commonly used standard specifications
        • High Capacity: Large-capacity server specifications above Standard
        • GPU: GPU specifications available when securing resources for special requirements such as AI/ML
      • Node Pool > Server OS (Required): Virtual Server OS image of the worker node
        • Standard: RHEL 8.10, Ubuntu 22.04
        • Custom: Custom image for Kubernetes created from the Virtual Server product (RHEL, Ubuntu)
      • Node Pool > Block Storage (Required): Block Storage settings used by the worker node's Virtual Server
        • SSD: High-performance general volume
        • HDD: General volume
        • SSD/HDD_KMS: Volume additionally encrypted with a Samsung Cloud Platform KMS (Key Management System) encryption key
          • Encryption can only be applied at initial creation and cannot be changed after service creation
          • Performance degradation occurs when using the SSD_KMS disk type
        • Enter the capacity in Units, with a value between 13 and 125
          • Since 1 Unit is 8 GB, 104 - 1,000 GB will be created
      • Node Pool > Server Group (Optional): Apply a Server Group created in advance in the Virtual Server service to the worker nodes
        • Click Use to enable Server Group usage
        • When enabled, select the Server Group
          • Affinity and Anti-Affinity policies are supported
          • The Partition policy is not supported
        • Cannot be modified after node pool creation
        • The GPU server type cannot be selected
      • Node Pool Auto Scaling (Required): Automatically adjust the number of nodes in the node pool
      • Number of Nodes (Required): Number of worker nodes to create within a single node pool
        • Enter a value within the range 1 - 100
      • Node Auto Recovery (Required): When an abnormal node is found in the node pool, automatically delete it and create a new one
      • Keypair (Required): User authentication method used to connect to the worker node's Virtual Server
        • Create new: Create a new Keypair if one is needed
        • List of default login accounts by OS
          • Alma Linux: almalinux
          • RHEL: cloud-user
          • Rocky Linux: rocky
          • Ubuntu: ubuntu
          • Windows: sysadmin
      • Label (Optional): Selectively schedule workloads to nodes
        • Click the Add button to enter the label key and value
      • Taint (Optional): Prevent workloads from being scheduled onto nodes
        • Click the Add button to enter the taint effect, key, and value
        • For the configuration method, see Setting Node Pool Taint
      • Advanced Settings (Optional): Settings for detailed items such as pods and logs for worker nodes
        • Click Use to choose whether to apply advanced settings to the node pool to be created
        • For the configuration method, see Advanced Node Pool Settings
      Table. Kubernetes Engine node pool service information input items
  6. Check the detailed information and estimated billing amount in the Summary panel, and click the Create button.
    • When creation is complete, check the created resources on the Cluster Details > Node Pool tab > Node Pool List page.
  7. If the notification popup opens, click the Confirm button.

Edit Node Pool

If needed, modify the number of nodes in the node pool on the Kubernetes Engine details page.

Reference
If you modify the number of nodes, nodes will be automatically added or removed, and the containers running on removed nodes will be terminated. Because those containers are moved to other nodes, the running service may be interrupted.

To modify the number of nodes, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. Navigate to the Cluster List page.
  3. On the Cluster List page, select the cluster whose node count you want to modify. Navigate to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
  5. On the Node Pool Details page, click the Edit icon on the right of Node Pool Information. The Edit Node Pool popup window will open.
  6. In the Edit Node Pool popup window, modify the node pool information, then click the Confirm button.

Upgrade Node Pool

If the Kubernetes version of the control plane and the version of the node pool are different, you can upgrade the node pool to synchronize the versions.

Caution
  • After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
  • When performing a node pool upgrade, a rolling update is carried out on the nodes belonging to the node pool. At this time, a momentary service interruption may occur, but this is a normal phenomenon due to the rolling update and will automatically normalize after a certain period.
  • The server OS version may differ depending on the Kubernetes version of the node pool.

To upgrade the node pool, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Kubernetes Engine Service Home page.
  2. On the Service Home page, click the Cluster menu. You will be taken to the Cluster List page.
  3. On the Cluster List page, select the cluster on which you want to perform a node pool version upgrade. Navigate to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click More > Node Pool Upgrade at the far right of the node pool row. The Node Pool Version Upgrade popup will open.
    • You can only upgrade the node pool when the node's status is Running.
  5. After checking the information in the Node Pool Version Upgrade popup, click the Confirm button.

Node pool auto scaling/scale-down

Node pool auto scaling is a feature that automatically adjusts the number of nodes by adding new nodes to a specified node pool or removing existing nodes according to workload demands. This feature operates per node pool.

  • Auto scaling/scale-down is based on the resource requests of the pods running on the node pool's nodes rather than their actual resource usage; the status of pods and nodes is checked periodically, and scaling tasks are executed accordingly.

To set up the node pool auto scaling/scale-down feature, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. Go to the Cluster List page.
  3. On the Cluster List page, select the cluster on which you want to use the auto scaling/scale-down feature. Then go to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
  5. Click the Edit icon on the right of Node Pool Information on the Node Pool Details page. The Edit Node Pool popup window opens.
  6. In the Edit Node Pool popup window, set Node Pool Auto Scaling to Enable.
  7. After entering the minimum and maximum number of nodes, click the Confirm button.
    Reference

    Node pool auto-scaling settings can also be configured on the cluster node pool creation page.

    • Node pool expansion conditions
      • When pod fails to run on the cluster due to insufficient resources (Pending pod occurs)
    • Node pool reduction condition (when all satisfied)
      • If the sum of resource requests (CPU/Memory) of all pods running on a node is less than 50% of the node’s allocatable resources
      • If all pods running on the node can be run on another node (there must be no pods with PDB restrictions, etc.)
    • To prevent a specific node from being deleted during scale-down while node pool auto scaling is in use, add the following annotation to the node.
      • cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
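
As an illustration, the annotation above appears on the node object as follows; the node name is a placeholder. The same annotation can also be applied to a live node with kubectl annotate.

      apiVersion: v1
      kind: Node
      metadata:
        name: example-node                          # placeholder node name
        annotations:
          cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
      Code block. Excluding a node from auto scale-down (example)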
Caution
  • Node pool auto scaling works only when the NotReady nodes in the cluster are no more than 45% of all nodes and no more than 3 in number.
  • If there are directly joined nodes that are not in node pools created by the Kubernetes Engine service, using this feature may cause malfunctions.

Auto-recover node pool

Node auto-recovery is a feature that, when an abnormal node is detected in the cluster, automatically deletes it and creates a new node to restore the number of nodes in the node pool to its normal state. This feature operates per node pool.

Caution

According to the node auto-recovery conditions, node auto-recovery deletes the existing node and creates a new one when communication with the Kubernetes Control Plane fails due to node (Virtual Server) issues, a stopped state, network issues, etc., so use it with caution.

  • Nodes are restored according to the conditions set when the node pool was created; custom settings made after node creation are not restored.

If there are directly joined nodes that are not part of node pools created by the Kubernetes Engine service, the feature may malfunction.

To set up the node auto-recovery feature, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. On the Cluster List page, select the cluster on which you want to use the node auto-recovery feature. Move to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
  5. Click the Edit icon on the right of Node Pool Information on the Node Pool Details page. The Edit Node Pool popup window opens.
  6. In the Edit Node Pool popup, set Node Auto Recovery to Enable, then click the Confirm button.
Reference

Node auto-recovery settings can also be configured on the cluster node pool creation page.

  • Nodes subject to auto-recovery
    • A node that reports NotReady status in consecutive checks for a certain time threshold (about 10 minutes)
    • A node that does not report any status for a certain time threshold (about 10 minutes)
  • Nodes not subject to auto-recovery
    • A node that remains in the Creating state and does not become Running when initially created
    • Nodes in a node pool where five or more abnormal nodes occur simultaneously

Setting Node Pool Labels

Node pool labels are a feature for selectively scheduling workloads onto nodes.

Caution
  • A node pool label is not applied to existing nodes; it is applied only to newly created nodes.
    • To apply a label to an existing node, set it directly with kubectl.

To set the node pool label, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. On the Cluster List page, select the cluster for which you want to set the node pool label. You will be taken to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
  5. On the Node Pool Details page, click the Edit icon of the label. The Edit Label popup window opens.
  6. In the Edit Label popup window, click the Add button to add as many labels as needed.
  7. Enter the label information and click the Confirm button.
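
Labels set on a node pool are typically consumed by workload scheduling rules. As a hedged sketch, a pod can be pinned to labeled nodes with a nodeSelector; the label key pool and value gpu below are placeholder values, not labels defined by the service.

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod
      spec:
        nodeSelector:
          pool: gpu          # placeholder label key/value
        containers:
        - name: app
          image: nginx:1.14.2
      Code block. Scheduling a pod onto labeled nodes with nodeSelector (example)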

Setting Node Pool Taint

Node pool taint is a feature to prevent workloads from being scheduled onto nodes.

Caution
  • If you set a taint on all node pools, pods required for normal cluster operation may not run.
  • A node pool taint is not applied to existing nodes; it is applied only to newly created nodes.
    • To apply a taint to an existing node, set it directly with kubectl.

To set the node pool taint, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. Go to the Cluster List page.
  3. On the Cluster List page, select the cluster for which you want to set the node pool taint. Move to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
  5. On the Node Pool Details page, click the Edit icon of the taint. The Edit Taint popup opens.
  6. In the Edit Taint popup window, click the Add button to add as many taints as needed.
  7. Enter the taint information and click the Confirm button.
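
A taint keeps workloads off a node unless they declare a matching toleration. As a hedged sketch, the taint key dedicated, value gpu, and effect NoSchedule below are placeholder values for a hypothetical taint set on the node pool.

      apiVersion: v1
      kind: Pod
      metadata:
        name: example-pod
      spec:
        tolerations:
        - key: "dedicated"       # placeholder taint key
          operator: "Equal"
          value: "gpu"           # placeholder taint value
          effect: "NoSchedule"
        containers:
        - name: app
          image: nginx:1.14.2
      Code block. Tolerating a node taint (example)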

Advanced Node Pool Settings

Node pool advanced settings is a feature to apply detailed settings such as the number of pods, PID, logs, image GC, etc. within a worker node.

Caution
Advanced settings cannot be modified after the node pool is created. If an incorrect value is entered, the node may not operate normally.
Reference

Each setting corresponds to the kubelet configuration as follows.

  • Maximum pods per node: maxPods
  • Image GC upper limit percent: imageGCHighThresholdPercent
  • Image GC low threshold percent: imageGCLowThresholdPercent
  • Container log maximum size MB: containerLogMaxSize
  • Container log maximum file count: containerLogMaxFiles
  • Pod PID limit: podPidsLimit
  • Unsafe Sysctl allowed: allowedUnsafeSysctls
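
For reference, these fields appear in a standard kubelet configuration file as follows; the values shown are illustrative examples, not platform defaults.

      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      maxPods: 110                        # maximum pods per node
      imageGCHighThresholdPercent: 85     # disk usage that triggers image GC
      imageGCLowThresholdPercent: 80      # disk usage that image GC frees down to
      containerLogMaxSize: "10Mi"         # maximum size of a container log file
      containerLogMaxFiles: 5             # maximum number of container log files
      podPidsLimit: 4096                  # maximum PIDs per pod
      allowedUnsafeSysctls:
      - "net.core.somaxconn"              # example sysctl, for illustration only
      Code block. kubelet configuration fields corresponding to the advanced settings (example values)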

To perform advanced settings for the node pool, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. On the Cluster List page, select the cluster for which you want to configure node pool advanced settings. Navigate to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click Create Node Pool. You will be taken to the Create Node Pool page.
  5. On the Create Node Pool page, set Advanced Settings to Enable.
  6. After selecting Use, enter the required information for the items that appear.
  7. On the Summary tab, confirm that the required information has been entered correctly, then click the Create button.

Delete node pool

If necessary, delete the node pool from the Kubernetes Engine details page.

To delete the node pool, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
  3. On the Cluster List page, select the cluster whose node pool you want to delete. You will be taken to the Cluster Details page.
  4. On the Cluster Details page, select the Node Pool tab, then click the More button at the far right of the node pool row. In the More menu, click Delete Node Pool.
  5. In the Delete Node Pool popup window, select the checkbox, enter the name of the node pool to delete, then click the Confirm button.
  • The Confirm button is enabled only after you select the checkbox of the node deletion confirmation message.

Check node details

A node is a working machine used in a Kubernetes cluster, containing essential services required to run Pods. Each node is managed by the master components, and depending on the cluster configuration, virtual machines or physical machines can be used as nodes.

After creating the cluster, you can view information such as metadata and object information of the added nodes, and edit the resource file with a YAML editor.

To view detailed node information, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Node menu on the Service Home page. Navigate to the Node List page.
  3. On the Node List page, select the cluster you want to view from the gear button at the top left, then click the Confirm button.
  4. Click the node you want to view detailed information for. You will be taken to the Node Details page.
    Category / Detailed description
    • Status Display: Displays the current status of the node
    • Detailed Information: Check the node's Account information, metadata, and object information
    • YAML: Node resources can be edited in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred on the node
    • Pod: Check the node's pod information
      • A Pod is the smallest compute unit that can be created, managed, and deployed in Kubernetes Engine
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check metadata information such as node labels, annotations, and taints
    • Object Information: Displays the object information of the created node, such as internal IP, machine ID, capacity, and resources
      • If GPU resources are present, check the number of GPUs in the Capacity > nvidia.com/gpu column
    Table. Node detailed information items

2.2 - Manage Namespaces

A namespace is a logical separation unit within a Kubernetes cluster, and it is used to specify access permissions or resource usage limits per namespace.

Create Namespace

To create a namespace, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Namespace menu on the Service Home page. Navigate to the Namespace List page.
  3. On the Namespace List page, select the cluster where you want to create a namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on object creation, refer to the Kubernetes official documentation > Kubernetes Objects.
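
For reference, a minimal namespace object looks like the following; the names and quota values are placeholders. The optional ResourceQuota illustrates the per-namespace resource limits mentioned above.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: example-namespace          # placeholder name
      ---
      # Optional: limit resource usage within the namespace
      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: example-quota              # placeholder name
        namespace: example-namespace
      spec:
        hard:
          requests.cpu: "4"              # placeholder limits
          requests.memory: 8Gi
      Code block. Namespace with an optional ResourceQuota (example)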

Check namespace detailed information

You can check the namespace status and detailed information on the namespace detail page.

To view detailed namespace information, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Kubernetes Engine Service Home page.
  2. Click the Namespace menu on the Service Home page. Navigate to the Namespace List page.
  3. On the Namespace List page, select the cluster that the namespace requiring detailed information belongs to from the gear button at the top left, then click Confirm.
  4. Click on the item you want to view detailed information for on the Namespace List page. You will be taken to the Namespace Details page.
    Category / Detailed description
    • Status Display: Displays the current status of the namespace
    • Namespace Deletion: Delete the namespace
      • A namespace containing workloads cannot be deleted. To delete a namespace, all associated workloads must be deleted first.
    • Detailed Information: Check the Account information and metadata information of the namespace
    • YAML: The namespace can be edited in the YAML editor
      • Click the Edit button, modify the namespace, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the namespace
    • Pod: Check the pod information of the namespace
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the metadata information of the namespace
    Table. Namespace detailed information items

Delete namespace

To delete a namespace, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click the Namespace menu on the Service Home page. You will be taken to the Namespace List page.
  3. On the Namespace List page, select the cluster that the namespace you want to delete belongs to from the gear button at the top left, then click the Confirm button.
  4. On the Namespace List page, click the namespace you want to delete. You will be taken to the Namespace Details page.
  5. Click Delete Namespace on the Namespace Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Warning
  • You can also delete a namespace by selecting it on the Namespace List page and clicking Delete.
  • A namespace that contains workloads cannot be deleted. To delete the namespace, delete all associated workloads.

2.3 - Manage Workload

A workload is an application that runs on Kubernetes Engine. You can create a namespace and then add or delete workloads. Workloads are created and managed per deployment, pod, stateful set, daemon set, job, and cron job.

Reference

Deployments, Pods, StatefulSets, DaemonSets, Jobs, and CronJobs services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.

  • To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace to change, then click the Confirm button. You can then view the services created in the selected cluster/namespace.

Managing Deployments

A Deployment refers to a resource that provides updates for Pods and ReplicaSets. In workloads, you can create a Deployment and view detailed information or delete it.

Create Deployment

To create a deployment, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. Click Deployment under the Workload menu on the Service Home page. You will be taken to the Deployment List page.
  3. On the Deployment List page, select the cluster and namespace from the top-left gear button, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
    • The following is an example .yaml file showing the required fields and object Spec for creating a deployment. (application/deployment.yaml)
       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: nginx-deployment
       spec:
         selector:
           matchLabels:
             app: nginx
         replicas: 2 # tells deployment to run 2 pods matching the template
         template:
           metadata:
             labels:
               app: nginx
           spec:
             containers:
             - name: nginx
               image: nginx:1.14.2
               ports:
               - containerPort: 80
       Code block. Required fields and object Spec for deployment creation
Reference
For detailed information on the concept of deployments and object creation, refer to the Kubernetes official documentation > Deployment.

Check deployment detailed information

To view the deployment details, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Kubernetes Engine Service Home page.
  2. From the Service Home page, click Deployment under the Workload menu. Navigate to the Deployment List page.
  3. On the Deployment List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. Select the item you want to view detailed information for on the Deployment List page. You will be taken to the Deployment Details page.
    • If you select Show System Objects at the top of the list, system objects are also displayed in the list.
  5. Click each tab to view service information.
    Category / Detailed description
    • Delete Deployment: Delete the deployment
    • Detailed Information: Check detailed information of the deployment
    • YAML: Deployment resource files can be edited in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply the changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the deployment
    • Pod: Check the pod information of the deployment
      • A Pod is the smallest computing unit that can be created, managed, and deployed in Kubernetes Engine
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the metadata information of the deployment
    • Object Information: Check the object information of the deployment
    Table. Deployment detailed information items

Delete Deployment

To delete the deployment, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Kubernetes Engine Service Home page.
  2. On the Service Home page, click Deployment under the Workload menu. Navigate to the Deployment List page.
  3. On the Deployment List page, select the cluster and namespace from the top-left gear button, then click Confirm.
  4. Select the item you want to delete on the Deployment List page. Go to the Deployment Details page.
  5. Click Delete Deployment on the Deployment Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
On the deployment list page, after selecting the item you want to delete, you can delete the selected deployment by clicking Delete.

Managing Pods

A pod (Pod) is the smallest computing unit that can be created, managed, and deployed in Kubernetes, referring to a group of one or more containers. In a workload, you can create a pod and view detailed information or delete it.

Create a pod

To create a pod, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Pod under the Workload menu. You will be taken to the Pod List page.
  3. On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of pods and object creation, refer to the Kubernetes official documentation > Pods.
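As a sketch of the object information entered in the Object Creation popup, a minimal Pod manifest could look like the following (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod             # illustrative name
spec:
  containers:
    - name: nginx
      image: nginx:1.25       # illustrative image
      ports:
        - containerPort: 80
```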

Check pod detailed information

To check the detailed pod information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. Click Pod under the Workload menu on the Service Home page. You will be taken to the Pod List page.
  3. On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. Select the item you want to view detailed information for on the Pod List page. You will be taken to the Pod Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Status Display: Displays the current status of the pod
    Delete Pod: Deletes the pod
    Detailed Information: View the detailed information of the pod
    YAML: Edit the pod resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the pod
    Log: When you select a container, you can view the logs of that container
    Account Information: View basic information about the Account, such as Account name, location, and creation date and time
    Metadata Information: View the metadata information of the pod
    Object Information: View the object information of the pod
    Init Container Information: View the init container information of the pod
    Container Information: View the container information of the pod
    Table. Pod detailed information items

Delete Pod

To delete a pod, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Pod under the Workload menu. You will be taken to the Pod List page.
  3. On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Pod List page, select the item you want to delete. You will be taken to the Pod Details page.
  5. Click Delete Pod on the Pod Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Caution
After selecting the item you want to delete on the pod list page, you can delete the selected pod by clicking Delete.

Managing StatefulSet

StatefulSet refers to a workload API object used to manage the stateful aspects of an application. In a workload, you can create a StatefulSet and view detailed information or delete it.

Creating a StatefulSet

To create a StatefulSet, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click StatefulSet under the Workload menu. You will be taken to the StatefulSet List page.
  3. On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation popup, enter the object information and click the Confirm button.
Reference
For detailed information on the StatefulSet concept and object creation, refer to Kubernetes official documentation > StatefulSet.
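As an illustrative sketch, a minimal StatefulSet manifest could look like the following. A StatefulSet additionally requires a governing (typically headless) Service referenced by serviceName; the names and image below are examples:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                   # illustrative name
spec:
  serviceName: web            # headless Service that governs this set (assumed to exist)
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # illustrative image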

Check detailed information of StatefulSet

To view the detailed information of the StatefulSet, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click StatefulSet under the Workload menu. You will be taken to the StatefulSet List page.
  3. On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the StatefulSet List page, select the item you want to view detailed information for. You will be taken to the StatefulSet Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view the service information.
    Category: Detailed description
    Delete StatefulSet: Deletes the StatefulSet
    Detailed Information: View the detailed information of the StatefulSet
    YAML: Edit the StatefulSet resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the StatefulSet
    Pod: View the pod information of the StatefulSet
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the StatefulSet
    Object Information: View the object information of the StatefulSet
    Table. StatefulSet detailed information items

Delete StatefulSet

To delete a StatefulSet, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click StatefulSet under the Workload menu. You will be taken to the StatefulSet List page.
  3. On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. Select the item you want to delete on the StatefulSet List page. Go to the StatefulSet Details page.
  5. Click Delete StatefulSet on the StatefulSet Details page.
  6. If the notification confirmation window appears, click the Confirm button.
Caution
On the StatefulSet list page, after selecting the item you want to delete, you can delete the selected StatefulSet by clicking Delete.

Managing DaemonSets

DaemonSet refers to a resource that ensures that a copy of a pod runs on all nodes or some nodes. In workloads, you can create a DaemonSet and view detailed information or delete it.

Creating a DaemonSet

To create a DaemonSet, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click DaemonSet under the Workload menu. You will be taken to the DaemonSet List page.
  3. On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.

Reference
The concept of DaemonSet and detailed information about object creation can be found in the Kubernetes official documentation > DaemonSet.
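As an illustrative sketch, a minimal DaemonSet manifest could look like the following (the name, labels, and image are examples; a log-collection agent is a typical DaemonSet use case):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent             # illustrative name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent        # must match spec.selector.matchLabels
    spec:
      containers:
        - name: agent
          image: fluentd:v1.16   # illustrative image
```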

Check DaemonSet detailed information

To view the detailed information of the DaemonSet, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click DaemonSet under the Workload menu. You will be taken to the DaemonSet List page.
  3. On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the DaemonSet List page, select the item you want to view detailed information for. You will be taken to the DaemonSet Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view the service information.
    Category: Detailed description
    Delete DaemonSet: Deletes the DaemonSet
    Detailed Information: View the detailed information of the DaemonSet
    YAML: Edit the DaemonSet resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the DaemonSet
    Pod: View the pod information of the DaemonSet
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the DaemonSet
    Object Information: View the object information of the DaemonSet
    Table. DaemonSet detailed information items

Delete DaemonSet

To delete a DaemonSet, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click DaemonSet under the Workload menu. You will be taken to the DaemonSet List page.
  3. On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the DaemonSet List page, select the item you want to delete. You will be taken to the DaemonSet Details page.
  5. Click Delete DaemonSet on the DaemonSet Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Warning
On the DaemonSet list page, after selecting the item you want to delete, click Delete to delete the selected DaemonSet.

Job Management

A job refers to a resource that creates one or more pods and continues to run pods until the specified number of pods have successfully terminated. In a workload, you can create a job and view detailed information or delete it.

Create Job

To create a job, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Job under the Workload menu. You will be taken to the Job List page.
  3. On the Job List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of jobs and object creation, refer to the Kubernetes official documentation > Job.
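As an illustrative sketch, a minimal Job manifest could look like the following (the name, image, and command are examples of a run-to-completion task):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                    # illustrative name
spec:
  completions: 1              # run until one pod terminates successfully
  template:
    spec:
      containers:
        - name: pi
          image: perl:5.34    # illustrative image
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never    # Jobs require Never or OnFailure
```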

Check job details

To view detailed job information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Job under the Workload menu. You will be taken to the Job List page.
  3. On the Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Job List page, select the item you want to view detailed information for. You will be taken to the Job Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Delete Job: Deletes the job
    Detailed Information: View the detailed information of the job
    YAML: Edit the job resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the job
    Pod: View the pod information of the job
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the job
    Object Information: View the object information of the job
    Table. Job detailed information items

Delete Job

To delete a job, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Job under the Workload menu. You will be taken to the Job List page.
  3. On the Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Job List page, select the item you want to delete. You will be taken to the Job Details page.
  5. Click Delete Job on the Job Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
On the job list page, after selecting the item you want to delete, you can delete the selected job by clicking Delete.

Managing Cron Jobs

Cron jobs refer to resources that periodically execute a job according to a schedule written in cron format. They can be used to run repetitive tasks at regular intervals such as backups, report generation, etc. In the workload, you can create a cron job and view or delete its detailed information.

Create Cron Job

To create a cron job, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click CronJob under the Workload menu. You will be taken to the CronJob List page.
  3. On the CronJob List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of CronJobs and object creation, refer to the Kubernetes official documentation > CronJob.
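As an illustrative sketch, a minimal CronJob manifest could look like the following (the name, schedule, image, and command are examples; the schedule uses standard cron format):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup        # illustrative name
spec:
  schedule: "0 2 * * *"       # every day at 02:00, in cron format
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: busybox:1.36   # illustrative image
              command: ["sh", "-c", "echo backup started"]
          restartPolicy: OnFailure
```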

Check Cron Job Detailed Information

To check the detailed information of the cron job, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click CronJob under the Workload menu. You will be taken to the CronJob List page.
  3. On the CronJob List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the CronJob List page, select the item you want to view detailed information for. You will be taken to the CronJob Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Delete Cron Job: Deletes the cron job
    Detailed Information: View the detailed information of the cron job
    YAML: Edit the cron job resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the cron job
    Job: View the job information of the cron job; selecting a job item moves to the job detail page
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the cron job
    Object Information: View the object information of the cron job
    Table. Cron job detailed information items

Delete Cron Job

To delete a cron job, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click CronJob under the Workload menu. You will be taken to the CronJob List page.
  3. On the CronJob List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the CronJob List page, select the item you want to delete. You will be taken to the CronJob Details page.
  5. Click Delete Cron Job on the CronJob Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Warning
On the cron job list page, after selecting the item you want to delete, clicking Delete will delete the selected cron job.

2.4 - Service and Ingress Management

A service is an abstraction method that exposes applications running in a set of pods as a network service, and an ingress is used to expose HTTP and HTTPS paths from outside the cluster to inside the cluster. After creating a namespace, you can create or delete services, endpoints, ingresses, and ingress classes.

Reference

Service, endpoint, ingress, ingress class services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.

  • To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the services created in the selected cluster/namespace.

Service Management

You can create a service and view or delete its detailed information.

Create Service

To create a service, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
  3. On the Service List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of services and object creation, refer to the Kubernetes official documentation > Service.
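As an illustrative sketch, a minimal Service manifest could look like the following (the name, selector, and ports are examples; the selector must match the labels of the pods the service should expose):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service         # illustrative name
spec:
  type: ClusterIP             # default service type
  selector:
    app: nginx                # must match the target pods' labels
  ports:
    - protocol: TCP
      port: 80                # port exposed by the service
      targetPort: 80          # container port the traffic is sent to
```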

Check service detailed information

To view detailed service information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
  3. On the Service List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Service List page, select the item you want to view detailed information for. You will be taken to the Service Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Delete Service: Deletes the service
    Detailed Information: View the detailed information of the service
    YAML: Edit the service resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the service
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the service
    Object Information: View the object information of the service
    Table. Service detailed information items

Delete Service

To delete the service, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
  3. On the Service List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Service List page, select the item you want to delete. You will be taken to the Service Details page.
  5. Click Delete Service on the Service Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Caution
After selecting the item you want to delete on the service list page, click Delete to delete the selected service.

Manage Endpoints

You can create an endpoint and view or delete its detailed information.

Create Endpoint

To create an endpoint, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Endpoint under the Service and Ingress menu. You will be taken to the Endpoint List page.
  3. On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
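As an illustrative sketch, a minimal Endpoints manifest could look like the following. An Endpoints object is typically paired with a Service of the same name, for example to point a service at a backend outside the cluster; the name, IP address, and port below are examples:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db           # must match the name of the Service it backs (illustrative)
subsets:
  - addresses:
      - ip: 10.0.0.10         # illustrative backend address
    ports:
      - port: 5432            # illustrative backend port
```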

Check endpoint detailed information

To view detailed endpoint information, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Endpoint under the Service and Ingress menu. You will be taken to the Endpoint List page.
  3. On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Endpoint List page, select the item you want to view detailed information for. You will be taken to the Endpoint Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Delete Endpoint: Deletes the endpoint
    Detailed Information: View the detailed information of the endpoint
    YAML: Edit the endpoint resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the endpoint
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the endpoint
    Object Information: View the object information of the endpoint
    Table. Endpoint detailed information items

Delete Endpoint

To delete the endpoint, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Endpoint under the Service and Ingress menu. You will be taken to the Endpoint List page.
  3. On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Endpoint List page, select the item you want to delete. You will be taken to the Endpoint Details page.
  5. Click Delete Endpoint on the Endpoint Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Reference
On the endpoint list page, after selecting the item you want to delete, click Delete to delete the selected endpoint.

Manage Ingress

Ingress is an API object that manages external access (HTTP, HTTPS) to services within the Kubernetes Engine, used to expose workloads externally, and provides L7 load balancing functionality.

Create Ingress

To create an ingress, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Ingress under the Service and Ingress menu. You will be taken to the Ingress List page.
  3. On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of Ingress and object creation, refer to the Kubernetes official documentation > Ingress.
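As an illustrative sketch, a minimal Ingress manifest could look like the following (the name, host, class name, and backend service are examples; the backend service is assumed to already exist in the same namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress       # illustrative name
spec:
  ingressClassName: nginx     # illustrative ingress class available in the cluster
  rules:
    - host: example.com       # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service   # illustrative backend service
                port:
                  number: 80
```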

Check Ingress Detailed Information

To view the ingress detailed information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Ingress under the Service and Ingress menu. You will be taken to the Ingress List page.
  3. On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Ingress List page, select the item you want to view detailed information for. You will be taken to the Ingress Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Delete Ingress: Deletes the ingress
    Detailed Information: View the detailed information of the ingress
    YAML: Edit the ingress resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the ingress
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the ingress
    Object Information: View the object information of the ingress
    Table. Ingress detailed information items

Delete Ingress

To delete Ingress, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Ingress under the Service and Ingress menu. You will be taken to the Ingress List page.
  3. On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Ingress List page, select the item you want to delete. You will be taken to the Ingress Details page.
  5. Click Delete Ingress on the Ingress Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
On the Ingress list page, after selecting the item you want to delete, you can delete the selected Ingress by clicking Delete.

Manage Ingress Class

IngressClass is an API resource that allows multiple ingress controllers to be used in a single cluster. Each ingress should reference an IngressClass resource, which contains additional configuration, including the name of the controller that implements the class.

Create Ingress Class

To create an Ingress class, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. Click IngressClass under the Service and Ingress menu on the Service Home page. Go to the IngressClass List page.
  3. On the IngressClass List page, select the cluster and namespace from the top-left gear button, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of IngressClass and object creation, refer to the Kubernetes official documentation > Ingress.
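As an illustrative sketch, a minimal IngressClass manifest could look like the following (the class name and controller identifier are examples; the controller value must match the ingress controller deployed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                     # illustrative class name referenced by ingresses
spec:
  controller: k8s.io/ingress-nginx   # illustrative controller that implements this class
```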

Check Ingress Class Detailed Information

To view detailed information of the Ingress class, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click IngressClass under the Service and Ingress menu. You will be taken to the IngressClass List page.
  3. On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the IngressClass List page, select the item you want to view detailed information for. You will be taken to the IngressClass Details page.
    • If you select Show System Objects at the top of the list, system objects other than the user-created Kubernetes objects are also displayed.
  5. Click each tab to view service information.
    Category: Detailed description
    Delete Ingress Class: Deletes the ingress class
    Detailed Information: View the detailed information of the ingress class
    YAML: Edit the ingress class resource file in the YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply the changes
    • While editing, click the Diff button to view the changed content
    Event: View events that occurred within the ingress class
    Account Information: View basic information about the Account, such as Account name, location, and creation date
    Metadata Information: View the metadata information of the ingress class
    Object Information: View the object information of the ingress class
    Table. Ingress class detailed information items

Delete Ingress Class

To delete the Ingress class, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click IngressClass under the Service and Ingress menu. You will be taken to the IngressClass List page.
  3. On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the IngressClass List page, select the item you want to delete. You will be taken to the IngressClass Details page.
  5. Click Delete Ingress Class on the IngressClass Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Warning
On the Ingress Class list page, after selecting the item you want to delete, clicking Delete will delete the selected Ingress Class.

2.5 - Storage Management

You can create and manage storage to use when using Kubernetes Engine. Storage is created and then managed for each of PVC, PV, and StorageClass items.

Reference

The PVC, PV, and storage class services are set by default to the cluster (namespace) selected when the service was created. Even if you select other items in the list, the default cluster (namespace) setting is retained.

  • To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the services created in the selected cluster/namespace.
Notice

The items linked by storage type are as follows.

Type: Detailed Description
Block Storage: Supports a storage class that uses the product's volume in conjunction with the Block Storage product within Virtual Server
Object Storage: Can be linked with Samsung Cloud Platform products or external Object Storage
  • No separate configuration is required in Kubernetes Engine; it can be linked by configuring the workload (application) directly according to the Object Storage guide
File Storage: Supports storage classes for NFS and CIFS protocol volumes in conjunction with the File Storage product
  • For NFS protocol volumes, selection is required when creating a Kubernetes Engine (HDD and SSD disk types are supported)
  • For CIFS protocol volumes, selection can be made when creating a Kubernetes Engine or after creation
Table. Storage linkage items by type

Managing PVCs

A Persistent Volume Claim (PVC) is an object defined to allocate the required storage capacity. PVCs provide high usability through abstraction and prevent data from being lost when the container lifecycle ends (maintaining data persistence).

Create PVC

To create a PVC, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click PVC under the Storage menu. You will be taken to the PVC List page.
  3. On the PVC List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of PVCs and object creation, refer to Kubernetes official documentation > Persistent Volumes.

Check PVC detailed information

To check the detailed PVC information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. Click PVC under the Storage menu on the Service Home page. You will be taken to the PVC List page.
  3. On the PVC List page, select the cluster and namespace from the top left gear button, then click Confirm.
  4. Select the item you want to view detailed information for on the PVC List page. You will be taken to the PVC Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Status Display: Displays the current status of the PVC
      • Bound: Normal connection
    • Delete PVC: Deletes the PVC
    • Detailed Information: View the PVC's detailed information
    • YAML: Edit the PVC's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the PVC
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the PVC's metadata information
    • Object Information: Check the PVC's object information
    Table. PVC detailed information items

Delete PVC

To delete PVC, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click PVC under the Storage menu on the Service Home page. Navigate to the PVC List page.
  3. On the PVC List page, select the cluster and namespace from the top left gear button, then click Confirm.
  4. On the PVC List page, select the item you want to delete. Navigate to the PVC Details page.
  5. Click Delete PVC on the PVC Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Caution

After selecting the item you want to delete on the PVC list page, you can delete the selected PVC by clicking Delete.

  • Check the backup status of the PV and volume to be deleted before deleting the PVC.

PV Management

A Persistent Volume (PV) refers to a physical disk created by the system administrator in Kubernetes Engine.
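For reference, a statically provisioned PV manifest might look like the sketch below; the NFS server address and path are placeholders, not values from this guide.

```yaml
# Illustrative statically provisioned PV backed by an NFS export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv           # example name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.0.2.10       # placeholder NFS server address
    path: /exports/data      # placeholder export path
```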

Create PV

To create a PV, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click PV under the Storage menu on the Service Home page. Navigate to the PV List page.
  3. On the PV List page, select the cluster and namespace from the top left gear button, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of PV and object creation, refer to the Kubernetes official documentation > Persistent Volumes.

Check PV detailed information

To view the detailed PV information, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click PV under the Storage menu. Navigate to the PV List page.
  3. On the PV List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. Select the item you want to view detailed information for on the PV List page. You will be taken to the PV Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Status Display: Displays the current status of the PV
      • Bound: Normal connection
    • Delete PV: Deletes the PV
    • Detailed Information: View the PV's detailed information
    • YAML: Edit the PV's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the PV
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the PV's metadata information
    • Object Information: Check the PV's object information
    Table. PV detailed information items

Delete PV

To delete PV, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click PV under the Storage menu on the Service Home page. You will be taken to the PV List page.
  3. On the PV List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the PV List page, select the item you want to delete. Navigate to the PV Details page.
  5. Click Delete PV on the PV Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Caution
On the PV list page, after selecting the item you want to delete, you can delete the selected PV by clicking Delete.

Managing StorageClass

A storage class is a Kubernetes resource that defines storage attributes such as the storage type and performance level.

Reference

Kubernetes Engine provides the nfs-subdir-external-sc and bs-sc storage classes by default, which have the following characteristics.

  • The nfs-subdir-external-sc storage class shares the file storage connected to the cluster.
    • Access mode: RWX - ReadWriteMany
    • Reclaim policy: Delete (when the PVC is deleted, the PV and stored data are deleted together), Retain (when the PVC is deleted, the PV and stored data are kept)
    • Capacity expansion: individual PVC expansion is not supported; expansion of the entire file storage is supported
  • The bs-sc storage class supports using SSD-type volumes in conjunction with the Block Storage product.
    • Access mode: RWO - ReadWriteOnce
    • Reclaim policy: Delete (when the PVC is deleted, the PV and stored data are deleted together), Retain (when the PVC is deleted, the PV and stored data are kept)
    • Capacity expansion: individual PVC expansion is supported (automatic volume expansion in 8 Gi increments)

Predefined Storage Class

Storage Class | Reclaim Policy* | Volume Expansion Allowed** | Mount Options | Remarks
nfs-subdir-external-sc (default) | Delete | Not supported | nfsvers=3, noresvport | Linked with default Volume (NFS) settings
nfs-subdir-external-sc-retain | Retain | Not supported | nfsvers=3, noresvport | Linked with default Volume (NFS) settings
bs-sc | Delete | Supported | - | Virtual Server > Block Storage product integration
bs-sc-retain | Retain | Supported | - | Virtual Server > Block Storage product integration
  • (*) To use a storage class other than the default, specify the storage class name in the PVC's spec.storageClassName
  • (**) Users can change the default storage class directly (by adjusting the storageclass.kubernetes.io/is-default-class: "true" annotation)
Table. Predefined Storage Class List
Caution

The features of the reclaim policy are as follows.

  • Delete: If you delete the PVC, the associated PV and physical data will also be deleted.
  • Retain: Even if the PVC is deleted, the corresponding PV and physical data are not deleted and are retained. Since physical data not used by the workload may remain in storage, careful capacity management is required.
Caution

Consider the following when using volume expansion.

  1. nfs-subdir-external-sc storage class
    • PVC capacity cannot be adjusted (volume expansion is not supported).
    • All PVs share the total capacity of the File Storage volume, so per-PVC volume expansion is not required.
  2. bs-sc storage class
    • PVC capacity can be expanded (shrinking is not supported).
    • The PV capacity is not guaranteed to exactly match the amount requested by the PVC (expansion is supported in 8 Gi units).
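Putting the notes above together, a PVC that uses the bs-sc storage class might be sketched as follows; the name and size are illustrative, not values from this guide.

```yaml
# Illustrative PVC bound to the bs-sc storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-bs-pvc       # example name
spec:
  storageClassName: bs-sc    # non-default classes must be named explicitly
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi           # to expand later, raise this value (8 Gi increments)
```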

Create StorageClass

To create a storage class, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click StorageClass under the Storage menu. Navigate to the StorageClass List page.
  3. On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
    Reference
    For detailed information on the concept of storage classes and object creation, refer to the Kubernetes official documentation > Storage Class.

Check storage class detailed information

To view detailed storage class information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click StorageClass under the Storage menu. You will be taken to the StorageClass List page.
  3. On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the StorageClass List page, select the item you want to view detailed information for. Navigate to the StorageClass Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Delete StorageClass: Deletes the storage class
    • Detailed Information: View the storage class's detailed information
    • YAML: Edit the storage class's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the storage class
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the storage class's metadata information
    • Object Information: Check the storage class's object information
    Table. StorageClass detailed information items

Delete StorageClass

To delete the storage class, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click StorageClass under the Storage menu. You will be taken to the StorageClass List page.
  3. On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the StorageClass List page, select the item you want to delete. Navigate to the StorageClass Details page.
  5. Click Delete StorageClass on the StorageClass Details page.
  6. When the notification confirmation window appears, click the Confirm button.
    Caution
    On the storage class list page, after selecting the item you want to delete, click Delete to delete the selected storage class.

2.6 - Configuration Management

When values inside a container must change across environments such as development and production, building and maintaining a separate image for each set of environment variables is inconvenient and wasteful. Kubernetes lets you manage environment variables and configuration values externally as variables that are injected when a Pod is created; ConfigMap and Secret are used for this purpose.
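As an illustration of this injection model, a Pod spec might consume a ConfigMap and a Secret as environment variables like the sketch below; all object names and the image are examples, not values from this guide.

```yaml
# Illustrative Pod injecting every key of a ConfigMap and a Secret as env vars.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # example name
spec:
  containers:
    - name: app
      image: nginx:1.25        # example image
      envFrom:
        - configMapRef:
            name: example-config   # example ConfigMap name
        - secretRef:
            name: example-secret   # example Secret name
```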

Reference

The ConfigMap and Secret services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.

  • To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the ConfigMap and Secret services created in the selected cluster/namespace.

Manage ConfigMap

You can write and manage the configuration information used in a namespace as a ConfigMap.
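For reference, a minimal ConfigMap manifest might look like the sketch below; the name and keys are illustrative, not values from this guide.

```yaml
# Illustrative ConfigMap holding a simple key-value pair and a file-like entry.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config     # example name
data:
  LOG_LEVEL: "info"        # example key-value pair
  app.properties: |        # example file-style entry
    feature.enabled=true
```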

Create ConfigMap

To create a ConfigMap, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click ConfigMap under the Configuration menu. Go to the ConfigMap List page.
  3. On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information on the concept of ConfigMaps and object creation, refer to the Kubernetes official documentation > ConfigMap.

Check ConfigMap detailed information

To view detailed ConfigMap information, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click ConfigMap under the Configuration menu. Navigate to the ConfigMap List page.
  3. On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the ConfigMap List page, select the item you want to view detailed information for. You will be taken to the ConfigMap Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view the service information.
    • Delete ConfigMap: Deletes the ConfigMap
    • Detailed Information: View the ConfigMap's detailed information
    • YAML: Edit the ConfigMap's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the ConfigMap
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the ConfigMap's metadata information
    • Object Information: Check the ConfigMap's object information
      • In Data, rows are separated by ---, and values are displayed in textarea format
      • For Binary Data, the length of the value is displayed
    Table. ConfigMap detailed information items

Delete ConfigMap

To delete a ConfigMap, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click ConfigMap under the Configuration menu. You will be taken to the ConfigMap List page.
  3. On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the ConfigMap List page, select the item you want to delete. Navigate to the ConfigMap Details page.
  5. On the ConfigMap Details page, click Delete ConfigMap.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
On the ConfigMap list page, after selecting the item you want to delete, you can delete the selected ConfigMap by clicking Delete.

Manage Secrets

By using secrets, you can securely store and manage sensitive information such as passwords, OAuth tokens, and SSH keys.
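For reference, a minimal Secret manifest might look like the sketch below; the name, key, and value are illustrative. The stringData field accepts plain text, which Kubernetes stores base64-encoded.

```yaml
# Illustrative Secret for an application credential.
apiVersion: v1
kind: Secret
metadata:
  name: example-secret            # example name
type: Opaque
stringData:                       # plain-text input; stored base64-encoded
  DB_PASSWORD: "example-password" # example key and value
```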

Create Secret

To create a secret, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Secret under the Configuration menu. You will be taken to the Secret List page.
  3. On the Secret List page, select the cluster and namespace from the top left gear button, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information about the concept of secrets and object creation, please refer to Kubernetes official documentation > Secret.

Check Secret Detailed Information

To view the secret detailed information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. Click Secret under the Configuration menu on the Service Home page. You will be taken to the Secret List page.
  3. On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Secret List page, select the item you want to view detailed information for. You will be taken to the Secret Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Delete Secret: Deletes the secret
    • Detailed Information: View the secret's detailed information
    • YAML: Edit the secret's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the secret
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the secret's metadata information
    • Object Information: Check the secret's object information
    Table. Secret detailed information items

Delete Secret

To delete the secret, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
  2. Click Secret under the Configuration menu on the Service Home page. You will be taken to the Secret List page.
  3. On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Secret List page, select the item you want to delete. Navigate to the Secret Details page.
  5. Click Delete Secret on the Secret Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Caution
Select the item you want to delete on the secret list page, then click Delete to delete the selected secret.

2.7 - Manage Permissions

Kubernetes clusters can be accessed by multiple users, and you can define access scope by assigning permissions for specific APIs or namespaces. By applying Kubernetes' role-based access control (RBAC) feature, you can set permissions per cluster or namespace. You can create and manage cluster roles, cluster role bindings, roles, and role bindings.

Reference

ClusterRole, ClusterRoleBinding, Role, and RoleBinding services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.

  • To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace to change and click the Confirm button. You can view the services created in the selected cluster/namespace.

Managing Cluster Role

You can set and manage access permissions on a per-cluster basis, including permissions for APIs or resources that are not limited to a namespace.
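As an illustration, a ClusterRole granting read access to a cluster-scoped resource and a non-resource URL might be sketched as follows; the name and rules are examples, not values from this guide.

```yaml
# Illustrative ClusterRole with one resource rule and one non-resource rule.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-node-reader       # example name
rules:
  - apiGroups: [""]
    resources: ["nodes"]          # cluster-scoped resource
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/healthz"] # non-resource URL path (separate rule)
    verbs: ["get"]
```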

Create Cluster Role

To create a cluster role, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click Cluster Role under the Permissions menu on the Service Home page. Go to the Cluster Role List page.
  3. On the Cluster Role List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information about ClusterRole, refer to the Kubernetes official documentation > Using RBAC Authorization.

Check detailed information of cluster role

To view detailed information about the cluster role, follow these steps.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click Cluster Role under the Permissions menu on the Service Home page. Go to the Cluster Role List page.
  3. On the Cluster Role List page, select the cluster and namespace from the top left gear button, then click Confirm.
  4. On the Cluster Role List page, select the item you want to view detailed information for. You will be taken to the Cluster Role Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Delete Cluster Role: Deletes the cluster role
    • Detailed Information: View the cluster role's detailed information
    • YAML: Edit the cluster role's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the cluster role
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the cluster role's metadata information
    • Policy Rule Information: View the cluster role's policy rule information
      • Resources: List of resources to which the rule applies
      • Non-Resource URLs: The set of partial URLs that a user should be able to access
        • * is allowed, but only as the final segment of the path
        • Since non-resource URLs are not namespaced, this field applies only to ClusterRoles referenced by a ClusterRoleBinding
        • A rule can apply to API resources (e.g., "pods" or "secrets") or non-resource URL paths (e.g., "/api"), but not both
      • Resource Names: An optional whitelist of names to which the rule applies; an empty set means everything is allowed
    Table. Cluster role detailed information items

Delete ClusterRole

To delete the cluster role, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Cluster Role under the Permissions menu. You will be taken to the Cluster Role List page.
  3. On the Cluster Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Cluster Role List page, select the item you want to delete. Navigate to the Cluster Role Details page.
  5. Click Delete Cluster Role on the Cluster Role Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
On the cluster role list page, after selecting the item you want to delete, click Delete to delete the selected cluster role.

Managing ClusterRoleBinding

You can create and manage a cluster role binding by connecting a cluster role with a specific target.
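As an illustration, a ClusterRoleBinding connecting a cluster role to a user might be sketched as follows; the names and subject are examples, not values from this guide.

```yaml
# Illustrative ClusterRoleBinding granting a ClusterRole to a user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-node-reader-binding   # example name
subjects:
  - kind: User
    name: jane                        # example user subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-node-reader           # example ClusterRole to bind
  apiGroup: rbac.authorization.k8s.io
```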

Create Cluster Role Binding

To create a cluster role binding, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Cluster Role Binding under the Permissions menu. You will be taken to the Cluster Role Binding List page.
  3. On the Cluster Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information about ClusterRoleBinding, refer to the Kubernetes official documentation > Using RBAC Authorization.

Check detailed information of ClusterRoleBinding

To check the detailed information of cluster role binding, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Cluster Role Binding under the Permissions menu. You will be taken to the Cluster Role Binding List page.
  3. On the Cluster Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Cluster Role Binding List page, select the item you want to view detailed information for. Navigate to the Cluster Role Binding Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Delete Cluster Role Binding: Deletes the cluster role binding
    • Detailed Information: View the cluster role binding's detailed information
    • YAML: Edit the cluster role binding's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the cluster role binding
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the cluster role binding's metadata information
    • Role/Target Information: Check the cluster role binding's role and target information
    Table. Cluster Role Binding detailed information items

Delete Cluster Role Binding

To delete the cluster role binding, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click Cluster Role Binding under the Permissions menu on the Service Home page. Navigate to the Cluster Role Binding List page.
  3. On the Cluster Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Cluster Role Binding List page, select the item you want to delete. Navigate to the Cluster Role Binding Details page.
  5. Click Delete Cluster Role Binding on the Cluster Role Binding Details page.
  6. When the notification confirmation window appears, click the Confirm button.
Caution
On the ClusterRoleBinding list page, after selecting the item you want to delete, click Delete to delete the selected ClusterRoleBinding.

Manage Role

A role refers to a rule that specifies permissions for a specific API or resource. You can create and manage permissions that can only access the namespace to which the role belongs.
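As an illustration, a namespace-scoped Role granting read access to Pods might be sketched as follows; the name and namespace are examples, not values from this guide.

```yaml
# Illustrative Role allowing read-only access to Pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-pod-reader   # example name
  namespace: example-ns      # example namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```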

Create Role

To create a role, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Role under the Permissions menu. Navigate to the Role List page.
  3. On the Role List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information about roles, refer to the Kubernetes official documentation > Using RBAC Authorization.

Check Role detailed information

To check detailed role information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Role under the Permissions menu. You will be taken to the Role List page.
  3. On the Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. Select the item you want to view detailed information for on the Role List page. You will be taken to the Role Details page.
    • If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    • Delete Role: Deletes the role
    • Detailed Information: View the role's detailed information
    • YAML: Edit the role's resource file in the YAML editor
      • Click the Edit button, modify the resource, then click the Save button to apply changes
      • While editing, click the Diff button to view the changes
    • Event: Check events that occurred within the role
    • Account Information: Check basic Account information such as Account name, location, and creation date
    • Metadata Information: Check the role's metadata information
    • Policy Rule Information: View the role's policy rule information
      • Resources: List of resources to which the rule applies
      • Non-Resource URLs: The set of partial URLs that a user should be able to access
        • * is allowed, but only as the final segment of the path
        • Since non-resource URLs are not namespaced, this field applies only to ClusterRoles referenced by a ClusterRoleBinding
        • A rule can apply to API resources (e.g., "pods" or "secrets") or non-resource URL paths (e.g., "/api"), but not both
      • Resource Names: An optional whitelist of names to which the rule applies; an empty set means everything is allowed
    Table. Role detailed information items

Delete Role

To delete the role, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click Role under the Permissions menu on the Service Home page. You will be taken to the Role List page.
  3. On the Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. Select the item you want to delete on the Role List page. Navigate to the Role Details page.
  5. Click Delete Role on the Role Details page.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
After selecting the item you want to delete on the Role List page, you can delete the selected role by clicking Delete.

Manage Role Binding

You can connect a role with a specific target to create and manage role bindings.
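As an illustration, a RoleBinding connecting a role to a service account might be sketched as follows; the names and namespace are examples, not values from this guide.

```yaml
# Illustrative RoleBinding granting a Role to a ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: example-pod-reader-binding   # example name
  namespace: example-ns              # example namespace
subjects:
  - kind: ServiceAccount
    name: example-sa                 # example service account
    namespace: example-ns
roleRef:
  kind: Role
  name: example-pod-reader           # example Role to bind
  apiGroup: rbac.authorization.k8s.io
```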

Create Role Binding

To create a role binding, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Role Binding under the Permissions menu. Navigate to the Role Binding List page.
  3. On the Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
  4. In the Object Creation Popup, enter the object information and click the Confirm button.
Reference
For detailed information about role binding, refer to the Kubernetes official documentation > Using RBAC Authorization.

Check Role Binding Detailed Information

To check the detailed role binding information, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
  2. Click Role Binding under the Permissions menu on the Service Home page. Navigate to the Role Binding List page.
  3. On the Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Role Binding List page, select the item you want to view detailed information for. You will be taken to the Role Binding Details page.
    • If you select Show system objects at the top of the list, items other than the Kubernetes object entries will be displayed.
  5. Click each tab to view service information.
    Category | Detailed Description
    Delete Role Binding | Delete the role binding
    Detailed Information | Check detailed information of the role binding
    YAML | Edit the role binding's resource file in a YAML editor
    • Click the Edit button, modify the resource, then click the Save button to apply changes
    • While editing, click the Diff button to view the changes
    Event | Check events that occurred for the role binding
    Account Information | Check basic information about the account, such as account name, location, and creation date
    Metadata Information | Check the metadata information of the role binding
    Role/Target Information | Check the bound role and target information
    Table. Role Binding Detailed Information Items

Delete Role Binding

To delete a role binding, follow the steps below.

  1. Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
  2. On the Service Home page, click Role Binding under the Permissions menu. You will be taken to the Role Binding List page.
  3. On the Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
  4. On the Role Binding List page, select the item you want to delete. You will be taken to the Role Binding Details page.
  5. On the Role Binding Details page, click Delete Role Binding.
  6. When the alert confirmation window appears, click the Confirm button.
Caution
You can also delete a role binding by selecting it on the Role Binding List page and clicking Delete.

3 - Using Kubernetes Engine

Configure external network communication to expose HTTP and HTTPS services from the cluster to the outside. To configure external network communication, you can create a service of type LoadBalancer.

Using Kubernetes Engine Guide

The Using Kubernetes Engine guide describes the following features. For more information, refer to the corresponding guide.

Guide | Description
Creating a LoadBalancer Service | Instructions on how to create a LoadBalancer-type service through a service manifest file
Table. Description of Using Kubernetes Engine Guide
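As a minimal sketch, a LoadBalancer-type service manifest generally looks like the following. The service name, selector labels, and ports here are illustrative placeholders, not values required by Kubernetes Engine.

```yaml
# Illustrative Service manifest of type LoadBalancer.
# Replace the name, selector, and ports with values for your application.
apiVersion: v1
kind: Service
metadata:
  name: my-http-service        # illustrative service name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # must match your pods' labels
  ports:
  - name: http
    protocol: TCP
    port: 80                   # port exposed by the load balancer
    targetPort: 8080           # container port receiving traffic
```
Code block. Illustrative LoadBalancer-type service manifest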

3.1 - Authentication and Authorization

Kubernetes Engine has Kubernetes’ authentication and RBAC authorization features applied. This explains the authentication and authorization features of Kubernetes and how to link them with Kubernetes Engine and IAM.

Kubernetes Authentication and Authorization

This explains the authentication and RBAC authorization features of Kubernetes.

Authentication

The Kubernetes API server acquires the necessary information for user or account authentication from certificates or authentication tokens and proceeds with the authentication process.

Note
For a detailed explanation of Kubernetes authentication, refer to the following document: https://kubernetes.io/docs/reference/access-authn-authz/authentication/
Note
For a detailed explanation of using kubectl and kubeconfig, refer to Accessing the Cluster.
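For orientation, a kubeconfig file generally has a structure like the following sketch. Every value shown is a placeholder; the actual file should be obtained for your cluster as described in Accessing the Cluster.

```yaml
# Illustrative kubeconfig structure (all values are placeholders)
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<api-server-endpoint>
    certificate-authority-data: <base64-encoded-CA-certificate>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    token: <authentication-token>   # or client certificate data
```
Code block. Illustrative kubeconfig structure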

Authorization

The Kubernetes API server checks if the user has permission for the requested action using the user information obtained through the authentication process and the RBAC-related objects. There are four types of RBAC-related objects as follows:

Object | Scope | Description
ClusterRole | Cluster-wide | Definition of permissions across all namespaces in the cluster
ClusterRoleBinding | Cluster-wide | Binding definition between a ClusterRole and users
Role | Namespace | Definition of permissions for a specific namespace
RoleBinding | Namespace | Binding definition between a ClusterRole or Role and users
Table. RBAC-related objects
Note
For a detailed explanation of Kubernetes RBAC authorization, refer to the following document: https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Role

Kubernetes has several predefined ClusterRoles. Some of these ClusterRoles do not have the prefix system:, which means they are intended for user use. These include the cluster-admin role that can be applied to the entire cluster using ClusterRoleBinding, and the admin, edit, and view roles that can be applied to a specific namespace using RoleBinding.

Default ClusterRole | Default ClusterRoleBinding | Description
cluster-admin | system:masters group | Grants superuser access to perform all actions on all resources.
  • When used in a ClusterRoleBinding, it grants full control over all resources in the cluster and in all namespaces.
  • When used in a RoleBinding, it grants full control over the namespace bound to the RoleBinding and all resources in it.
admin | None | Grants administrator access to a namespace when used in a RoleBinding: read/write access to most resources in the namespace, including the ability to create roles and role bindings. This role does not grant write access to resource quotas or to the namespace itself.
edit | None | Grants read/write access to most objects in the namespace. This role does not grant the ability to view or modify roles and role bindings. However, it allows access to secrets, which can be used to run pods as any service account in the namespace, effectively granting API access at that account level.
view | None | Grants read-only access to most objects in the namespace. Roles and role bindings cannot be viewed. This role does not grant access to secrets, since reading secret contents would expose service account credentials and could grant API access at that account level (a form of privilege escalation).
Table. Default ClusterRole and ClusterRoleBinding descriptions
Note
For a detailed explanation of user-facing roles, refer to the following document: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles

In addition to the predefined ClusterRoles, you can define separate roles (or ClusterRoles) as needed. For example:

# Role that grants permission to view pods in the "default" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
Code block. Role that grants permission to view pods in a namespace
# ClusterRole that grants permission to view nodes
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
Code block. ClusterRole that grants permission to view nodes
Note
For more information about roles and cluster roles, see the following document: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole

Role Binding

To manage access to the Kubernetes Engine using Samsung Cloud Platform IAM, you need to understand the relationship between Kubernetes’ role binding and IAM. The target (subjects) of role binding (or cluster role binding) can include individual users (User) or groups (Group).

  • User matches the Samsung Cloud Platform username, and Group matches the IAM user group name.

For role binding/cluster role binding, subjects.kind can be one of the following:

  • User: Binds to a Samsung Cloud Platform individual user.
  • Group: Binds to a Samsung Cloud Platform IAM user group.
Note
In addition to the above, a service account can also be specified as a subject. However, a service account is intended for workloads running in the cluster rather than for people, and does not map to a Samsung Cloud Platform user.
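For comparison, a service account subject is written differently from User or Group: it requires a namespace field and omits apiGroup. A minimal sketch with an illustrative name:

```yaml
# Illustrative service account subject entry in a (cluster) role binding
subjects:
- kind: ServiceAccount
  name: my-service-account   # illustrative name
  namespace: default         # required for ServiceAccount subjects
```
Code block. Illustrative service account subject entry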

The subjects.name of role binding/cluster role binding can be specified as follows:

  • User case: Samsung Cloud Platform individual username (e.g. jane.doe)
  • Group case: Samsung Cloud Platform IAM user group name (e.g. ReadPodsGroup)
Note
subjects.name is case-sensitive.

In this way, a user or IAM user group is bound through a role binding (or cluster role binding) created in the Kubernetes Engine cluster, and is granted permission to perform the API operations included in the role (or cluster role) bound to it.

Example) Role Binding read-pods #1

An example of writing a User (Samsung Cloud Platform individual user) to a role binding is as follows:

# This role binding allows the user "jane.doe" to view pods in the "default" namespace.
# A "pod-reader" role must exist in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a role or cluster role.
  kind: Role       # Must be Role or ClusterRole.
  name: pod-reader # Must match the name of the role or cluster role to bind.
  apiGroup: rbac.authorization.k8s.io
subjects:
# One or more "targets" can be specified.
- kind: User
  name: jane.doe
  apiGroup: rbac.authorization.k8s.io
Code block. Example of writing a User (Samsung Cloud Platform individual user) to a role binding

If a role binding like the above is created in a cluster, a user with the username jane.doe is granted the permission to perform the API actions defined in the pod-reader role.

Example) Role Binding read-pods #2

An example of writing a group (IAM user group) to a role binding is as follows:

# This role binding allows users in the "ReadPodsGroup" group to view pods in the "default" namespace.
# A "pod-reader" role must exist in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
# One or more "targets" can be specified.
- kind: Group
  name: ReadPodsGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Example of Role binding that allows the ReadPodsGroup group to view pods

If a role binding like the above is created in the cluster, users in the IAM user group ReadPodsGroup are granted the permission to perform API operations written in the pod-reader role.

Example) Cluster Role Binding read-nodes

# This cluster role binding allows users in the "ReadNodesGroup" group to view nodes.
# A cluster role named "node-reader" must exist.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ReadNodesGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Example of a cluster role binding that allows the ReadNodesGroup group to view nodes

When a cluster role binding like the one above is created in the cluster, users in the IAM user group ReadNodesGroup are granted the permissions to perform the API actions written in the cluster role node-reader.

Note
For more detailed explanations on role binding creation, refer to the following document: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-binding-examples

Predefined Roles and Role Bindings for Samsung Cloud Platform

The Kubernetes Engine of Samsung Cloud Platform has predefined cluster role bindings scp-cluster-admin, scp-view, and scp-namespace-view, and a predefined cluster role scp-namespace-view. The following table shows the binding relationship between the predefined roles and role bindings and Samsung Cloud Platform users. The cluster roles cluster-admin and view are predefined within the Kubernetes cluster; for more details, refer to the Role section.

Cluster Role Binding | Cluster Role | Subjects (User)
scp-cluster-admin | cluster-admin | Group AdministratorGroup; Group OperatorGroup; User john.smith
scp-view | view | Group ViewerGroup
scp-namespace-view | scp-namespace-view | All authenticated users in the cluster
Table. Predefined Roles and Role Bindings for Samsung Cloud Platform, IAM User Groups, and User Binding Relationships
  • According to the cluster role binding scp-cluster-admin, users in the IAM user groups AdministratorGroup or OperatorGroup, as well as the Kubernetes Engine product applicant, are granted cluster administrator permissions.
  • According to the cluster role binding scp-view, users in the ViewerGroup are granted cluster viewer permissions. More precisely, since it is linked to the predefined cluster role view in Kubernetes, access permissions for cluster-scoped resources (e.g., namespaces, nodes, ingress classes, etc.) and secrets within namespaces are not included. For more detailed explanations, refer to the Roles section.
  • According to the cluster role binding scp-namespace-view, all authenticated users in the cluster are granted namespace viewer permissions.
Note
  • Predefined roles and role bindings for Samsung Cloud Platform are created only once when the cluster product is applied.
  • Users can modify or delete predefined cluster role bindings and cluster roles for Samsung Cloud Platform as needed.

The details of predefined roles and role bindings for Samsung Cloud Platform are as follows:

Cluster Role Binding scp-cluster-admin

The cluster role binding scp-cluster-admin is bound to the cluster role cluster-admin and bound to the IAM user groups AdministratorGroup, OperatorGroup, and the SCP user (Kubernetes Engine cluster creator) according to the subjects.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-cluster-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: AdministratorGroup
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: OperatorGroup
  apiGroup: rbac.authorization.k8s.io
- kind: User                 # Cluster creator
  name: jane.doe             # cluster creator name
  apiGroup: rbac.authorization.k8s.io
Code Block. Example of Cluster Role Binding scp-cluster-admin

Cluster Role Binding scp-view

The cluster role binding scp-view is bound to the cluster role view and bound to the IAM user group ViewerGroup according to the subjects.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-view
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ViewerGroup
  apiGroup: rbac.authorization.k8s.io
Code Block. Example of Cluster Role Binding scp-view

Cluster Role and Cluster Role Binding scp-namespace-view

The cluster role scp-namespace-view defines permission to view namespaces. The cluster role binding scp-namespace-view is associated with the cluster role scp-namespace-view and grants namespace view permission to all authenticated users in the cluster.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scp-namespace-view
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-namespace-view
roleRef:
  kind: ClusterRole
  name: scp-namespace-view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
Code Block. Cluster Role and Cluster Role Binding scp-namespace-view Example

IAM User Group RBAC Use Case

This chapter presents examples of granting permissions for major user scenarios. The names of the IAM user groups, ClusterRoleBindings/RoleBindings, and ClusterRoles shown here are examples for understanding; administrators should define and apply appropriate names and permissions according to their needs.

Scope | Use Case | IAM User Group | ClusterRoleBinding/RoleBinding | ClusterRole | Note
Cluster | Cluster Administrator | ClusterAdminGroup | ClusterRoleBinding cluster-admin-group | cluster-admin | Administrator for a specific cluster
Cluster | Cluster Editor | ClusterEditGroup | ClusterRoleBinding cluster-edit-group | edit | Editor for a specific cluster
Cluster | Cluster Viewer | ClusterViewGroup | ClusterRoleBinding cluster-view-group | view | Viewer for a specific cluster
Namespace | Namespace Administrator | NamespaceAdminGroup | RoleBinding namespace-admin-group | admin | Administrator for a specific namespace
Namespace | Namespace Editor | NamespaceEditGroup | RoleBinding namespace-edit-group | edit | Editor for a specific namespace
Namespace | Namespace Viewer | NamespaceViewGroup | RoleBinding namespace-view-group | view | Viewer for a specific namespace
Table. IAM User Group RBAC Use Cases
Note
The ClusterRoles (cluster-admin, admin, edit, view) in the table above are predefined in the Kubernetes cluster. For more information, see the Role section.

Cluster Administrator

To create a cluster administrator, follow these steps:

  1. Create an IAM user group named ClusterAdminGroup.
  2. Create a ClusterRoleBinding with the following content in the target cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-group
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ClusterAdminGroup
  apiGroup: rbac.authorization.k8s.io
Code Block. Create Cluster Administrator
  • It is associated with the default ClusterRole cluster-admin, granting administrator authority for the cluster.

Cluster Editor

To create a cluster editor, follow these steps:

  1. Create an IAM user group named ClusterEditGroup.
  2. Create a ClusterRoleBinding with the following content in the target cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-edit-group
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ClusterEditGroup
  apiGroup: rbac.authorization.k8s.io
Code Block. Create Cluster Editor
  • The default cluster role edit is associated with it, and editor permissions are granted for the cluster.

Cluster Viewer

To create a cluster viewer, follow these steps:

  1. Create an IAM user group named ClusterViewGroup.
  2. Create a cluster role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-view-group
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ClusterViewGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Create Cluster Viewer
  • The default cluster role view is associated with it, and viewer permissions are granted for the cluster.

Namespace Administrator

To create a namespace administrator, follow these steps:

  1. Create an IAM user group named NamespaceAdminGroup.
  2. Create a role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin-group
  namespace: <namespace_name>
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceAdminGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Create Namespace Administrator
  • The default cluster role admin is associated with it, and administrator permissions are granted for the namespace.

Namespace Editor

To create a namespace editor, follow these steps:

  1. Create an IAM user group named NamespaceEditGroup.
  2. Create a role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-edit-group
  namespace: <namespace_name>
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceEditGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Create Namespace Editor
  • The default cluster role edit is associated with it, and editor permissions are granted for the namespace.

Namespace Viewer

To create a namespace viewer, follow these steps:

  1. Create an IAM user group named NamespaceViewGroup.
  2. Create a role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-view-group
  namespace: <namespace_name>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceViewGroup
  apiGroup: rbac.authorization.k8s.io
Code block. Create Namespace Viewer
  • The default cluster role view is associated with it, and viewer permissions are granted for the namespace.

Practice Example

This chapter describes an example and procedure for granting administrator permissions for a specific namespace.

  • IAM user group: NamespaceAdminGroup
  • IAM policy: NamespaceAdminAccess
  • Role binding: namespace-admin-group

Create an IAM User Group

Note
For more information about IAM user groups, see IAM > User Group.

To create an IAM user group in Samsung Cloud Platform, follow these steps:

  1. Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.

  2. On the Service Home page, click User Group. The User Group List page appears.

  3. On the User Group List page, click Create User Group.

    • Enter the required information in the Basic Information, Add User, Attach Policy, and Additional Information sections.

      Category | Required | Description
      User Group Name | Required | Enter the user group name
      • Use Korean, English, numbers, and special characters (+=,.@-_) to enter a value between 3 and 24 characters
      • Enter NamespaceAdminGroup as the user group name
      Description | Optional | Description of the user group
      • Enter a detailed description of the user group, up to 1,000 characters
      User | Optional | Users to add to the user group
      • The list of users registered in the account is displayed; when a checkbox is selected, the selected user's name is displayed at the top of the screen
      • To cancel a selection, click the Delete button at the top of the screen or uncheck the checkbox in the user list
      • If there are no users to add, click Create User at the bottom of the user list to register a new user, then refresh the user list and select the user
      Policy | Optional | Policy to attach to the user group
      • The list of policies registered in the account is displayed; when a checkbox is selected, the selected policy name is displayed at the top of the screen
      • Select ViewerAccess in the policy list
      Tag | Optional | Tags to add to the user group
      • Up to 50 tags can be added per resource
      Table. User Group Creation Information Input Items
  4. Click the Complete button. The User Group List page appears.

Note

In this practice example, the ViewerAccess policy (permission to view all resources) is attached for demonstration purposes.

  • If you do not need permission to view all resources in the Samsung Cloud Platform Console, you do not need to attach the ViewerAccess policy. Define and apply a separate policy according to your actual situation.

Create an IAM Policy

Note
If you do not need to grant Samsung Cloud Platform Console usage permissions, you do not need to perform this step.
Note
For more information about IAM policies, see IAM > Policy.

To create an IAM policy in Samsung Cloud Platform, follow these steps:

  1. Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.

  2. On the Service Home page, click Policy. The Policy List page appears.

  3. On the Policy List page, click Create Policy. The Create Policy page appears.

  4. Enter the required information in the Basic Information and Additional Information sections.

    Category | Required | Description
    Policy Name | Required | Enter the policy name
    • Use Korean, English, numbers, and special characters (+=,.@-_) to enter a value between 3 and 128 characters
    • Enter NamespaceAdminAccess as the policy name
    Description | Optional | Description of the policy
    • Enter a detailed description of the policy, up to 1,000 characters
    Tag | Optional | Tags to add to the policy
    • Up to 50 tags can be added per resource
    Table. Policy Creation Information Input Items - Basic Information and Additional Information
  5. Click the Next button. The Permission Settings section appears.

  6. Enter the required information in the Permission Settings section.

    • Select Kubernetes Engine in the Service section.

    • You can create a policy by importing an existing policy using Policy Import. For more information about Policy Import, see Policy Import.

      Category | Required | Description
      Control Type | Required | Select the policy control type
      • Allow Policy: a policy that allows the defined permissions
      • Deny Policy: a policy that denies the defined permissions
        For the same target, the deny policy takes precedence
      Action | Required | Select actions provided by each service
      • Create: select CreateKubernetesObject
      • Delete: select DeleteKubernetesObject
      • List: select ListKubernetesEngine and ListKubernetesObject
      • Read: select DetailKubernetesObject
      • Update: select UpdateKubernetesObject
      • Add Action Directly: use the wildcard * to specify multiple actions at once
      Applied Resource | Required | Resource to which the action is applied
      • All Resources: apply to all resources for the selected action
      • Individual Resource: apply only to the specified resource for the selected action
        • Individual resources can be specified only for actions that allow individual resource selection (purple actions)
        • Click the Add Resource button to specify the target resource by resource type
        • For more information on Add Resource, see Registering individual resources as applied resources
      Authentication Type | Required | Authentication method for the target user
      • All Authentication: apply regardless of authentication method
      • API Key Authentication: apply to users who use API key authentication
      • IAM Key Authentication, Console Login: apply to users who use IAM key authentication or console login
      Applied IP | Required | IP addresses to which the policy is applied
      • User-specified IP: register and manage IP addresses directly
        • Applied IP: register the IP addresses or ranges to which the policy applies
        • Excluded IP: register the IP addresses or ranges to exclude from the Applied IP
      • All IP: do not restrict access by IP
        • Access is allowed from all IP addresses; if exceptions are needed, register Excluded IP entries to block access from those addresses
      Table. Policy creation information input items - Permission settings
Note

Permission settings provide Basic Mode and JSON Mode.

  • If you write in Basic Mode and enter JSON Mode or move to another screen, services with the same conditions will be integrated into one, and settings that are not completed will be deleted.
  • If the content written in JSON Mode does not match the JSON format, you cannot switch to Basic Mode.
  7. Click the Next button. Move to the Input Information Check page.
  8. Check the input information and click the Complete button. Move to the Policy List page.

Add a user to an IAM user group

Reference
For more information on managing IAM user groups, see IAM > Managing User Groups.

To add a user to an IAM user group in Samsung Cloud Platform, follow these steps.

  1. Click All Services > Management > IAM menu. Move to the Identity and Access Management (IAM) Service Home page.
  2. On the Service Home page, click the User menu. Move to the User List page.
  3. On the User List page, click the user to be added to the IAM user group. Move to the User Details page.
  4. On the User Details page, click the User Group tab.
  5. On the User Group tab, click the Add User Group button. Move to the Add User Group page.
  6. On the Add User Group page, select the user group to be added and click the Complete button. Move to the User Details page.
    • Select NamespaceAdminGroup from the user group.

Create a role binding

Create a role binding by referring to the example below.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin-group
  namespace: dev # target namespace
roleRef:
  kind: ClusterRole
  name: admin # pre-defined cluster role in Kubernetes
  apiGroup: rbac.authorization.k8s.io
subjects: 
- kind: Group
  name: NamespaceAdminGroup # IAM user group created earlier
  apiGroup: rbac.authorization.k8s.io
Code block. Create a role binding

Verify the user

Verify that the user’s namespace permissions are applied normally.

To verify namespace user permissions in Samsung Cloud Platform, follow these steps.

  1. Click All Services > Container > Kubernetes Engine menu. Move to the Kubernetes Engine Service Home page.
  2. On the Service Home page, click Workload menu under Pod. Move to the Pod List page.
  3. On the Pod List page, select the cluster and namespace from the gear button at the top left and click Confirm.
  4. On the Pod List page, verify that the pod list is retrieved.
    • If you select a namespace with permissions, the pod list will be displayed.
    • If you select a namespace without permissions, a confirmation window will be displayed indicating that you do not have permission to retrieve the list.

3.2 - Accessing the Cluster

kubectl Installation and Usage Guide

After creating a Kubernetes Engine service, you can use the Kubernetes command-line tool kubectl to execute commands on a Kubernetes cluster. Using kubectl, you can deploy applications, inspect and manage cluster resources, and view logs. Instructions for installing and using kubectl are available in the official Kubernetes documentation.

Reference

You must use a kubectl version that is within one minor version of the cluster version. For example, if the cluster version is 1.30, you can use kubectl versions 1.29, 1.30, or 1.31.

To access a Kubernetes cluster with kubectl, you need a kubeconfig file containing the Kubernetes server address and authentication information.

Reference
For detailed information on Kubernetes authentication and authorization, see Authentication and Authorization.

Kubernetes Engine supports authentication via admin certificate kubeconfig and user authentication key kubeconfig.

admin certificate kubeconfig

This kubeconfig uses the admin certificate as an authentication method when accessing the Kubernetes API.

Admin kubeconfig download

Click the Admin kubeconfig Download button under Kubernetes Engine > Cluster List > Cluster Details to download the kubeconfig file.

Caution
  • The admin kubeconfig can be downloaded only by Admin.
  • Separate versions are provided for the private endpoint and the public endpoint, and each can be downloaded only once.

Admin kubeconfig use

Reference
  • By default, kubectl looks for a file named config in the $HOME/.kube directory. Alternatively, you can set the KUBECONFIG environment variable or specify the --kubeconfig flag to use a different kubeconfig file.
  • Private endpoints are by default only accessible from nodes of the respective cluster. For resources in the same Account and same region, you can allow access by adding them to the private endpoint access control settings.
  • If you need to access the cluster from the external internet, setting public endpoint access to enabled allows you to access using the public endpoint kubeconfig.

User authentication key kubeconfig

This kubeconfig uses the user’s Open API authentication key as the authentication method when accessing the Kubernetes API.

User kubeconfig download

Click the User kubeconfig Download button under Kubernetes Engine > Cluster List > Cluster Details to download the kubeconfig file.

Caution
  • User kubeconfig download is only possible for users with cluster view permission.
  • Separate versions are provided for the private endpoint and the public endpoint.
  • Since the downloaded kubeconfig file does not contain the authentication key token, you must add the token information before using it. (See the next section.)

Add authentication key token to user kubeconfig file

Below is an example of a user’s kubeconfig file. To use the kubeconfig file, you need to add the authentication key token (AUTHKEY_TOKEN) information in the token field inside the file.

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...
    server: https://my-cluster-a1c3e.ske.xxx.samsungsdscloud.com:6443
  name: my-cluster-a1c3e
contexts:
- context:
    cluster: my-cluster-a1c3e
    user: jane.doe
  name: jane.doe@my-cluster-a1c3e
current-context: jane.doe@my-cluster-a1c3e
kind: Config
preferences: {}
users:
- name: jane.doe
  user:
    token: <AUTHKEY_TOKEN> #### writing needed
Code block. User kubeconfig file example
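
If you prefer to fill in the placeholder programmatically, a minimal sketch is shown below. This is an illustration only, not product tooling: the function name fill_authkey_token is invented here, and it assumes the downloaded file contains the literal `<AUTHKEY_TOKEN>` placeholder as in the example above.

```python
from pathlib import Path

def fill_authkey_token(kubeconfig_path: str, token: str) -> None:
    """Replace the <AUTHKEY_TOKEN> placeholder in a downloaded user kubeconfig.

    Illustrative helper (not part of the product); assumes the file still
    contains the literal placeholder shown in the example above.
    """
    path = Path(kubeconfig_path)
    text = path.read_text()
    if "<AUTHKEY_TOKEN>" not in text:
        raise ValueError("no <AUTHKEY_TOKEN> placeholder found in kubeconfig")
    path.write_text(text.replace("<AUTHKEY_TOKEN>", token))
```

For example, `fill_authkey_token("user-kubeconfig.yaml", authkey_token)` would write the generated token into the file in place.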

AUTHKEY_TOKEN can be generated by concatenating the authentication key’s ACCESS_KEY and SECRET_KEY with a colon (:) and then Base64 encoding it. The following is an example of creating AUTHKEY_TOKEN in a Linux environment.

$ ACCESS_KEY=5df418813aed051548a72f4a814cf09e
$ SECRET_KEY=6ba7b810-9dad-11d1-80b4-00c04fd430c8
$ AUTHKEY_TOKEN=$(echo -n "$ACCESS_KEY:$SECRET_KEY" | base64 -w0)
$ echo $AUTHKEY_TOKEN
NWRmNDE4ODEzYWVkMDUxNTQ4YTcyZjRhODE0Y2YwOWU6NmJhN2I4MTAtOWRhZC0xMWQxLTgwYjQtMDBjMDRmZDQzMGM4
Code block. AUTHKEY_TOKEN value generation example
Note
  • For detailed information on authentication key generation, please refer to API Reference > Common > Samsung Cloud Platform Open API call procedure.
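
In environments without the base64 CLI, the same token can be produced with a few lines of standard-library Python. This sketch uses the same placeholder key values as the shell example above (they are not real credentials).

```python
import base64

# Same placeholder keys as in the shell example above (not real credentials).
access_key = "5df418813aed051548a72f4a814cf09e"
secret_key = "6ba7b810-9dad-11d1-80b4-00c04fd430c8"

# AUTHKEY_TOKEN is the Base64 encoding of "ACCESS_KEY:SECRET_KEY".
authkey_token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
print(authkey_token)
```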

User kubeconfig execution example

You can see an example of executing the user kubeconfig.

When access is blocked by access control or a firewall

$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
Unable to connect to the server: dial tcp 123.123.123.123:6443: i/o timeout
Code block. Example execution when access is blocked by access control or firewall

When AUTHKEY_TOKEN does not match and authentication fails

$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
error: You must be logged in to the server (Unauthorized)
Code block. Example execution when authentication fails because AUTHKEY_TOKEN does not match

When AUTHKEY_TOKEN authentication succeeds

$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
...
kube-node-lease    Active 10d
kube-public        Active 10d
kube-system        Active 10d
Code block. Example execution when AUTHKEY_TOKEN authentication succeeds

When AUTHKEY_TOKEN authentication succeeds but permission is lacking

$ kubectl --kubeconfig=user-kubeconfig.yaml get nodes
Error from server (Forbidden): nodes is forbidden: User "jane.doe" cannot list resource "nodes" in API group "" at the cluster scope
Code block. Example execution when AUTHKEY_TOKEN authentication succeeds but lacks permission
Reference
If AUTHKEY_TOKEN authentication succeeds but there is no permission, the authentication step completed correctly, but the authority to perform the requested operation was not granted (authorization). For detailed information about authorization, see Authentication and Authorization.

3.3 - Using type LoadBalancer Service

Service Configuration Method

You can configure a LoadBalancer type Service by writing and applying a Service manifest file (example: my-lb-svc.yaml).

Caution
  • LoadBalancer is created in the cluster Subnet by default.
  • To create a LoadBalancer in a different Subnet, use the annotation service.beta.kubernetes.io/scp-load-balancer-subnet-id. For details, refer to Annotation Detailed Settings.

Follow these steps to write and apply a type LoadBalancer Service.

  1. Write a Service manifest file my-lb-svc.yaml.

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app.kubernetes.io/name: MyApp
      ports:
        - protocol: TCP
          port: 80
          targetPort: 9376
          appProtocol: tcp # Refer to LB service protocol type setting section
      type: LoadBalancer
    Code block. Service manifest file my-lb-svc.yaml writing example

  2. Deploy the Service manifest using the kubectl apply command.

    kubectl apply -f my-lb-svc.yaml
    Code block. Deploying Service manifest with kubectl apply command

Caution
  • When a type LoadBalancer Service is created, a corresponding Load Balancer service is automatically created. It may take a few minutes for the configuration to complete.
  • Do not arbitrarily modify the automatically created Load Balancer service and LB server group. Changes may be reverted or unexpected behavior may occur.
  • For detailed configurable features, refer to Annotation Detailed Settings.
  3. Check the Load Balancer configuration using the kubectl get service command.
    # kubectl get service my-lb-svc
    NAMESPACE     NAME         TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)         AGE
    default       my-lb-svc    LoadBalancer   172.20.49.206    123.123.123.123   80:32068/TCP    3m
    Code block. Checking Load Balancer configuration with kubectl get service command

Protocol Type

You can specify the protocol type in the Service manifest. The following is a simple example.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    ...
  ports:
    - port: 80
      targetPort: 9376
      protocol: TCP    # Required (choose one of TCP, UDP)
      appProtocol: tcp # Optional (leave blank or choose one of tcp, http, https)
  type: LoadBalancer   # Type load balancer
Code block. Service manifest writing example

The protocols (protocol and appProtocol) supported by Kubernetes Engine's type LoadBalancer Service, and the settings applied to the Load Balancer service for each, are as follows.

| Category | protocol (k8s) | appProtocol (k8s) | Service Category (LB) | LB Listener (LB) | LB Server Group (LB) | Health Check (LB) |
| --- | --- | --- | --- | --- | --- | --- |
| L4 TCP | TCP | (tcp) | L4 | TCP {port} | TCP {nodePort} | TCP {nodePort} |
| L4 UDP | UDP | - | L4 | UDP {port} | UDP {nodePort} | TCP {nodePort} |
| L7 HTTP | TCP | http | L7 | HTTP {port} | TCP {nodePort} | TCP/HTTP {nodePort} |
| L7 HTTPS | TCP | https | L7 | HTTPS {port} | TCP {nodePort} | TCP/HTTP {nodePort} |
Table. k8s Service manifest and Load Balancer service application settings
  • According to the k8s Service manifest spec, you can specify multiple ports for a single service.
Caution

Depending on the Load Balancer service category (L4, L7), you cannot mix and use protocol layers within a single Service.

  • That is, L4(TCP, UDP) and L7(HTTP, HTTPS) cannot be used together in a single Service.
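
The mapping table and the no-mixing rule above can be expressed as a small validation sketch. This is an illustration only; the function names lb_category and validate_service_ports are invented here, not part of any product API.

```python
from typing import Optional

def lb_category(protocol: str, app_protocol: Optional[str]) -> str:
    """Return the Load Balancer category (L4 or L7) for one Service port,
    following the protocol/appProtocol table above."""
    proto = protocol.upper()
    app = (app_protocol or "").lower()
    if proto == "UDP" and app == "":
        return "L4"
    if proto == "TCP":
        if app in ("", "tcp"):
            return "L4"
        if app in ("http", "https"):
            return "L7"
    raise ValueError(f"unsupported combination: protocol={protocol}, appProtocol={app_protocol}")

def validate_service_ports(ports: list) -> str:
    """All ports of a single Service must resolve to the same category;
    mixing L4 and L7 entries raises an error."""
    categories = {lb_category(proto, app) for proto, app in ports}
    if len(categories) > 1:
        raise ValueError("L4 and L7 protocols cannot be mixed in a single Service")
    return categories.pop()
```

For example, `validate_service_ports([("TCP", "http"), ("TCP", "https")])` resolves to a single L7 Service, while combining a plain TCP port with an HTTP port raises an error, mirroring the caution above.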

L4 Service Manifest Writing Example

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
Code block. L4 Service manifest writing example

L7 Service Manifest Writing Example

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/scp-load-balancer-layer-type: "L7" # Required
    service.beta.kubernetes.io/scp-load-balancer-client-cert-id: "24da35de187b450eb0cf09fb6fa146de" # Required
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - appProtocol: http # Required
      protocol: TCP
      port: 80
      targetPort: 9376
    - appProtocol: https # Required
      protocol: TCP
      port: 443
      targetPort: 9898
  type: LoadBalancer
Code block. L7 Service manifest writing example

Annotation Detailed Settings

You can set detailed features by adding annotations to the service manifest.

apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc
  annotations:
    service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled: "true"
    service.beta.kubernetes.io/scp-load-balancer-health-check-interval: "5"
    service.beta.kubernetes.io/scp-load-balancer-health-check-timeout: "5"
    service.beta.kubernetes.io/scp-load-balancer-health-check-count: "3"
    service.beta.kubernetes.io/scp-load-balancer-session-duration-time: "300"
spec:
  type: LoadBalancer
  ...
Code block. Example of adding annotations to service manifest
Note
  • If no separate annotation is added to the service, the annotation default value is applied.
  • Even if the annotation added to the service does not meet the allowed value, the annotation default value is applied.
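
The fallback behavior described in the note can be pictured with a short sketch. The helper name effective_annotation is invented for illustration; it simply mirrors the rule that a missing or disallowed value falls back to the default.

```python
def effective_annotation(annotations: dict, key: str, default: str, allowed: set) -> str:
    """Return the annotation value if present and allowed; otherwise the default.

    Mirrors the note above: if no annotation is added, or the added value is
    not one of the allowed values, the default value is applied.
    """
    value = annotations.get(key)
    return value if value in allowed else default

# Example: an out-of-range health check interval falls back to the default "5".
key = "service.beta.kubernetes.io/scp-load-balancer-health-check-interval"
allowed = {str(i) for i in range(1, 181)}  # allowed values 1 - 180
print(effective_annotation({key: "999"}, key, "5", allowed))
```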

Below is a description of all annotations available for type LoadBalancer service.

service.beta.kubernetes.io/scp-load-balancer-source-ranges-firewall-rules: Automatically add firewall rules (LB source ranges → LB service IP)
  • Protocol: All / Default: false / Allowed: true, false / Example: false
service.beta.kubernetes.io/scp-load-balancer-snat-healthcheck-firewall-rules: Automatically add firewall rules (LB Source NAT IP, HealthCheck IP → member IP:Port)
  • Protocol: All / Default: false / Allowed: true, false / Example: false
  • When using this annotation, firewall rules are added for each port of the type LoadBalancer Service, so a very large number of firewall rules may be added.
  • If having too many firewall rules is a burden, you can instead add firewall rules manually without using this annotation. For example, you can add firewall rules with the destination set to the member IP's NodePort range (30000-32767).
Table. Firewall-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-security-group-id: Automatically add rules to the Security Group corresponding to the specified ID
  • Protocol: All / Default: - / Allowed: UUID / Example: 92d84b44-ee71-493d-9782-3a90481ce5f3
  • When using this annotation, Security Group rules are added for each port of the type LoadBalancer Service, so a very large number of Security Group rules may be added.
  • If having too many Security Group rules is a burden, you can instead add Security Group rules manually without using this annotation. For example, you can add Security Group rules with the destination address set to the Load Balancer's Source NAT IP and health check IP, and the allowed port set to the NodePort range (30000-32767).
  • Security Group rules added by this annotation are not automatically deleted even if the annotation is deleted or changed.
  • Multiple IDs can be specified, separated by commas. (example: ddc25ad8-6d3f-4242-8c86-2a059212ddc6,26ab7fe1-b3ea-4aa9-9e9d-35a7c237904e)
  • This annotation can be used together with the service.beta.kubernetes.io/scp-load-balancer-security-group-name annotation, and rules are automatically added to all Security Groups that meet the conditions.
service.beta.kubernetes.io/scp-load-balancer-security-group-name: Automatically add rules to the Security Group corresponding to the specified Name
  • Protocol: All / Default: - / Allowed: String / Example: security-group-1
  • When using this annotation, Security Group rules are added for each port of the type LoadBalancer Service, so a very large number of Security Group rules may be added.
  • If having too many Security Group rules is a burden, you can instead add Security Group rules manually without using this annotation. For example, you can add Security Group rules with the destination address set to the Load Balancer's Source NAT IP and health check IP, and the allowed port set to the NodePort range (30000-32767).
  • Security Group rules added by this annotation are not automatically deleted even if the annotation is deleted or changed.
  • Multiple names can be specified, separated by commas. (example: security-group-1,security-group-2)
  • This annotation can be used together with the service.beta.kubernetes.io/scp-load-balancer-security-group-id annotation, and rules are automatically added to all Security Groups that meet the conditions.
Table. Security Group-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-layer-type: Specify the Load Balancer service category
  • Protocol: All / Default: L4 / Allowed: L4, L7 / Example: L4
  • Specify L4 to use TCP or UDP, and L7 to use HTTP or HTTPS.
  • Cannot be changed after initial creation. To change it, you must recreate the Service.
service.beta.kubernetes.io/scp-load-balancer-subnet-id: Specify the Load Balancer Service Subnet
  • Protocol: All / Default: - / Allowed: ID / Example: 7f05eda5e1cf4a45971227c57a6d60fa
  • If this annotation is not specified, the cluster's Subnet is used.
  • Cannot be changed after initial creation. To change it, you must recreate the Service.
service.beta.kubernetes.io/scp-load-balancer-service-ip: Specify the Load Balancer Service IP
  • Protocol: All / Default: - / Allowed: IP Address / Example: 192.168.10.7
  • Cannot be changed after initial creation. To change it, you must recreate the Service.
service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled: Specify whether to use a Load Balancer Public NAT IP
  • Protocol: All / Default: false / Allowed: true, false / Example: false
  • If this annotation is set to true and service.beta.kubernetes.io/scp-load-balancer-public-ip-id is not specified, an IP is automatically assigned.
  • If this annotation is set to true and service.beta.kubernetes.io/scp-load-balancer-public-ip-id is specified, the Public IP corresponding to the specified ID is applied.
service.beta.kubernetes.io/scp-load-balancer-public-ip-id: Specify the ID of the Public IP to use as the Load Balancer Public NAT IP
  • Protocol: All / Default: - / Allowed: ID / Example: 4119894bd9614cef83db6f8dda667a20
  • If service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled is not set to true, this annotation is ignored.
  • If service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled is set to true and this annotation is specified, the Public IP corresponding to the specified ID is applied.
Table. Load Balancer-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-idle-timeout: Specify the LB Listener's idle-timeout (seconds)
  • Protocol: HTTP, HTTPS / Default: - / Allowed: 60 - 3600 (in 60-second units) / Example: 600
  • If the annotation is not set or is not an allowed value (e.g., "", "0"), the default (not used) is applied.
  • Cannot be changed from used to not used. To change it, you must recreate the Service.
  • Cannot be set together with service.beta.kubernetes.io/scp-load-balancer-session-duration-time.
  • Cannot be set together with service.beta.kubernetes.io/scp-load-balancer-response-timeout.
service.beta.kubernetes.io/scp-load-balancer-session-duration-time: Specify the LB Listener's session-duration-time (seconds)
  • Protocol: All / Default: L4: 120, L7: - / Allowed: L4 TCP: 60 - 3600 (in 60-second units), L4 UDP: 60 - 180 (in 60-second units), L7: 0 - 120 / Example: 120
  • L4: If the annotation is not set or is not an allowed value, the default ("120") is applied. (For L4, it cannot be not used.)
  • L7: If the annotation is not set or is not an allowed value (e.g., "", "0"), the default (not used) is applied.
  • Cannot be changed from used to not used. To change it, you must recreate the Service.
  • Cannot be set together with service.beta.kubernetes.io/scp-load-balancer-idle-timeout.
service.beta.kubernetes.io/scp-load-balancer-response-timeout: Specify the LB Listener's response-timeout (seconds)
  • Protocol: HTTP, HTTPS / Default: - / Allowed: 0 - 120 / Example: 60
  • If the annotation is not set or is not an allowed value (e.g., "", "0"), the default (not used) is applied.
  • Cannot be changed from used to not used. To change it, you must recreate the Service.
  • Cannot be set together with service.beta.kubernetes.io/scp-load-balancer-idle-timeout.
service.beta.kubernetes.io/scp-load-balancer-insert-client-ip: Specify the LB Listener's Insert Client IP
  • Protocol: TCP / Default: false / Allowed: true, false / Example: false
service.beta.kubernetes.io/scp-load-balancer-x-forwarded-proto: Specify whether to use the LB Listener's X-Forwarded-Proto header
  • Protocol: HTTP, HTTPS / Default: false / Allowed: true, false / Example: false
service.beta.kubernetes.io/scp-load-balancer-x-forwarded-port: Specify whether to use the LB Listener's X-Forwarded-Port header
  • Protocol: HTTP, HTTPS / Default: false / Allowed: true, false / Example: false
service.beta.kubernetes.io/scp-load-balancer-x-forwarded-for: Specify whether to use the LB Listener's X-Forwarded-For header
  • Protocol: HTTP, HTTPS / Default: false / Allowed: true, false / Example: false
service.beta.kubernetes.io/scp-load-balancer-support-http2: Specify whether to support HTTP 2.0 for the LB Listener
  • Protocol: HTTP, HTTPS / Default: false / Allowed: true, false / Example: false
service.beta.kubernetes.io/scp-load-balancer-persistence: Specify the LB Listener's persistence (one of none, source IP, cookie)
  • Protocol: TCP, HTTP, HTTPS / Default: "" / Allowed: "", source-ip, cookie / Example: source-ip
  • For UDP, this annotation cannot be used.
  • For TCP, you can specify "" or source-ip.
  • For HTTP/HTTPS, you can specify one of "", source-ip, cookie.
service.beta.kubernetes.io/scp-load-balancer-client-cert-id: Specify the ID of the LB Listener's client SSL certificate
  • Protocol: HTTPS / Default: - / Allowed: UUID / Example: 78b9105e00324715b63700933125fa83
  • Required field when specifying HTTPS.
service.beta.kubernetes.io/scp-load-balancer-client-cert-level: Specify the security level of the LB Listener's client SSL certificate
  • Protocol: HTTPS / Default: HIGH / Allowed: HIGH, NORMAL, LOW / Example: HIGH
service.beta.kubernetes.io/scp-load-balancer-server-cert-level: Specify the security level of the LB Listener's server SSL certificate
  • Protocol: HTTPS / Default: - / Allowed: HIGH, NORMAL, LOW / Example: HIGH
Table. LB Listener-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-lb-method: Specify the LB server group load balancing policy
  • Protocol: All / Default: ROUND_ROBIN / Allowed: ROUND_ROBIN, LEAST_CONNECTION, IP_HASH / Example: ROUND_ROBIN
Table. LB server group-related settings in Kubernetes annotations
service.beta.kubernetes.io/scp-load-balancer-health-check-enabled: Specify whether to use LB health check
  • Protocol: All / Default: true / Allowed: true, false / Example: true
service.beta.kubernetes.io/scp-load-balancer-health-check-protocol: Specify the LB health check protocol
  • Protocol: All / Default: TCP / Allowed: TCP, HTTP / Example: TCP
service.beta.kubernetes.io/scp-load-balancer-health-check-port: Specify the LB health check port
  • Protocol: All / Default: {nodeport} / Allowed: 1 - 65534 / Example: 30000
  • Set to {nodeport} by default, so generally you do not need to specify it.
service.beta.kubernetes.io/scp-load-balancer-health-check-count: Specify the LB health check detection count
  • Protocol: All / Default: 3 / Allowed: 1 - 10 / Example: 3
service.beta.kubernetes.io/scp-load-balancer-health-check-interval: Specify the LB health check interval
  • Protocol: All / Default: 5 / Allowed: 1 - 180 / Example: 5
service.beta.kubernetes.io/scp-load-balancer-health-check-timeout: Specify the LB health check timeout
  • Protocol: All / Default: 5 / Allowed: 1 - 180 / Example: 5
service.beta.kubernetes.io/scp-load-balancer-health-check-http-method: Specify the LB health check HTTP method
  • Protocol: HTTP / Default: GET / Allowed: GET, POST / Example: GET
service.beta.kubernetes.io/scp-load-balancer-health-check-url: Specify the LB health check URL
  • Protocol: HTTP / Default: / / Allowed: String / Example: /healthz
service.beta.kubernetes.io/scp-load-balancer-health-check-response-code: Specify the LB health check response code
  • Protocol: HTTP / Default: 200 / Allowed: 200 - 500 / Example: 200
service.beta.kubernetes.io/scp-load-balancer-health-check-request-data: Specify the LB health check request string
  • Protocol: HTTP / Default: - / Allowed: String / Example: username=admin&password=1234
  • Required field when specifying the POST method.
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-enabled: Specify whether to use LB health check for the Service's {port} port number
  • Protocol: All / Default: true / Allowed: true, false / Example: true
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-protocol: Specify the LB health check protocol for the Service's {port} port number
  • Protocol: All / Default: TCP / Allowed: TCP, HTTP / Example: TCP
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-port: Specify the LB health check port for the Service's {port} port number
  • Protocol: All / Default: - / Allowed: 1 - 65534 / Example: 30000
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-count: Specify the LB health check detection count for the Service's {port} port number
  • Protocol: All / Default: 3 / Allowed: 1 - 10 / Example: 3
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-interval: Specify the LB health check interval for the Service's {port} port number
  • Protocol: All / Default: 5 / Allowed: 1 - 180 / Example: 5
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-timeout: Specify the LB health check timeout for the Service's {port} port number
  • Protocol: All / Default: 5 / Allowed: 1 - 180 / Example: 5
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-http-method: Specify the LB health check HTTP method for the Service's {port} port number
  • Protocol: HTTP / Default: GET / Allowed: GET, POST / Example: GET
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-url: Specify the LB health check URL for the Service's {port} port number
  • Protocol: HTTP / Default: / / Allowed: String / Example: /healthz
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-response-code: Specify the LB health check response code for the Service's {port} port number
  • Protocol: HTTP / Default: 200 / Allowed: 200 - 500 / Example: 200
service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-request-data: Specify the LB health check request string for the Service's {port} port number
  • Protocol: HTTP / Default: - / Allowed: String / Example: username=admin&password=1234
  • Required field when specifying the POST method.
Table. LB health check-related settings in Kubernetes annotations

Constraints

The following are constraints to consider when using Kubernetes annotations.

| Constraint | Related Annotation |
| --- | --- |
| Rules created in an existing Security Group are not automatically deleted when the Security Group is changed | service.beta.kubernetes.io/scp-load-balancer-security-group-id, service.beta.kubernetes.io/scp-load-balancer-security-group-name |
| The Load Balancer service category (L4/L7) cannot be changed | service.beta.kubernetes.io/scp-load-balancer-layer-type |
| L4 and L7 cannot be used together in the same k8s Service | service.beta.kubernetes.io/scp-load-balancer-layer-type |
| The Load Balancer Subnet cannot be changed | service.beta.kubernetes.io/scp-load-balancer-subnet-id |
| The Load Balancer Service IP cannot be changed | service.beta.kubernetes.io/scp-load-balancer-service-ip |
| The LB Listener idle-timeout cannot be changed from used to not used | service.beta.kubernetes.io/scp-load-balancer-idle-timeout |
| The LB Listener session-duration-time cannot be changed from used to not used | service.beta.kubernetes.io/scp-load-balancer-session-duration-time |
| The LB Listener response-timeout cannot be changed from used to not used | service.beta.kubernetes.io/scp-load-balancer-response-timeout |
| The LB Listener idle-timeout cannot be set together with session-duration-time or response-timeout | service.beta.kubernetes.io/scp-load-balancer-idle-timeout, service.beta.kubernetes.io/scp-load-balancer-session-duration-time, service.beta.kubernetes.io/scp-load-balancer-response-timeout |
| TCP and UDP cannot be used together with the same port number in the same k8s Service | - |
| L7 Listener routing rules support only the default URL path of the LB server group delivery method (to add other URL paths, add them directly in the Samsung Cloud Platform console; URL redirection is not supported) | - |
Table. Constraints when using Kubernetes annotations

3.4 - Considerations for Use

Managed Port Constraints

The following ports are used for SKE management and cannot be used by user services. In addition, if they are blocked by an OS firewall or similar, node functions or some features may not work normally.

| Port | Description |
| --- | --- |
| UDP 4789 | calico-vxlan |
| TCP 5473 | calico-typha |
| TCP 10250 | kubelet |
| TCP 19100 | node-exporter |
| TCP 19400 | dcgm-exporter |

Table. Managed Port List

kube-reserved resource constraints

kube-reserved is a feature that reserves resources on a node for system daemons that do not run as pods.

  • Examples of such system daemons are the kubelet and the container runtime.
Reference

For more information on kube-reserved, please refer to the following document.

Kubernetes Engine reserves CPU and memory based on the following criteria.

| CPU specification | Memory specification |
| --- | --- |
| • 6% of the first core<br>• 1% of the next core (up to 2 cores)<br>• 0.5% of the next 2 cores (up to 4 cores)<br>• 0.25% of any cores beyond 4 | • 25% of the first 4 GB of memory<br>• 20% of the next 4 GB (up to 8 GB)<br>• 10% of the next 8 GB (up to 16 GB)<br>• 6% of the next 112 GB (up to 128 GB)<br>• 2% of any memory beyond 128 GB |

Table. Resource reservation items based on CPU and memory
  • Example: For a Virtual Server with 16 vCPU cores and 32 GB of memory, kube-reserved is calculated as follows.

    • CPU: (1 core × 0.06) + (1 core × 0.01) + (2 cores × 0.005) + (12 cores × 0.0025) = 0.11 core
    • Memory: (4 GB × 0.25) + (4 GB × 0.2) + (8 GB × 0.1) + (16 GB × 0.06) = 3.56 GB
  • Example: The resources reserved according to CPU size are as follows.

| CPU specification | 2 cores | 4 cores | 8 cores | 16 cores |
| --- | --- | --- | --- | --- |
| kube-reserved CPU | 70 m | 80 m | 90 m | 110 m |

Table. Example of resources reserved according to CPU size
  • Example: The resources reserved according to memory size are as follows.

| Memory specification | 4 GB | 8 GB | 16 GB | 32 GB | 64 GB | 128 GB | 256 GB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| kube-reserved memory | 1 GB | 1.8 GB | 2.6 GB | 3.56 GB | 5.48 GB | 9.32 GB | 11.88 GB |

Table. Example of resources reserved according to memory size
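The bracketed percentages above can be expressed as a short calculation. The sketch below is illustrative (the function names are my own, not an SKE API); it reproduces the 16 vCPU / 32 GB worked example and the table values.

```python
def kube_reserved_cpu(cores: int) -> float:
    """Cores reserved by kube-reserved, per the CPU brackets above."""
    reserved = 0.06 * min(cores, 1)                # 6% of the first core
    reserved += 0.01 * max(min(cores, 2) - 1, 0)   # 1% of the next core (up to 2 cores)
    reserved += 0.005 * max(min(cores, 4) - 2, 0)  # 0.5% of the next 2 cores (up to 4 cores)
    reserved += 0.0025 * max(cores - 4, 0)         # 0.25% of cores beyond 4
    return reserved


def kube_reserved_memory(gb: float) -> float:
    """GB reserved by kube-reserved, per the memory brackets above."""
    reserved = 0.25 * min(gb, 4)                   # 25% of the first 4 GB
    reserved += 0.20 * max(min(gb, 8) - 4, 0)      # 20% of the next 4 GB (up to 8 GB)
    reserved += 0.10 * max(min(gb, 16) - 8, 0)     # 10% of the next 8 GB (up to 16 GB)
    reserved += 0.06 * max(min(gb, 128) - 16, 0)   # 6% of the next 112 GB (up to 128 GB)
    reserved += 0.02 * max(gb - 128, 0)            # 2% of memory beyond 128 GB
    return reserved


# The 16 vCPU / 32 GB Virtual Server example from the text
print(round(kube_reserved_cpu(16), 2))     # 0.11 (core)
print(round(kube_reserved_memory(32), 2))  # 3.56 (GB)
```

Each bracket applies its percentage only to the capacity that falls inside it, so the reservation grows cumulatively with node size, matching the table values (e.g., 8 cores → 90 m, 128 GB → 9.32 GB).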

3.5 - Version Information

Kubernetes Version and Support Period

Kubernetes Version Lifecycle

The Kubernetes open source software (OSS) community releases three minor versions annually, with a release cycle of approximately 15 weeks. Released minor versions go through a support period of approximately 14 months (standard patch 12 months, maintenance 2 months) and become EOL (End of Life).

Information

For information on Kubernetes release and EOL timing, and support period, refer to the following links:

Samsung Cloud Platform Kubernetes Engine (SKE) Version Provision Plan

SKE verifies and provides patch versions in Stable status from the released OSS minor versions. As a result, the SKE release date of a version differs from the OSS release date of that same version.

Additionally, technical support for previously released versions ends sequentially, starting from the oldest, in consideration of the open source EOL timing (End of Technical Support, EoTS).

The release and termination schedules for OSS and SKE are as follows.

| Version | OSS Release | OSS EOL | SKE Release | SKE EoTS |
| --- | --- | --- | --- | --- |
| v1.29 | 2023-12-13 | 2025-02-28 | 2024-10 | 2026-03-31 |
| v1.30 | 2024-04-17 | 2025-06-28 | 2025-02 | 2026-06-30 |
| v1.31 | 2024-08-13 | 2025-10-28 | 2025-07 | 2026-10-28 |
| v1.32 | 2024-12-11 | 2026-02-28 | 2025-10 | 2027-02-28 |
| v1.33 | 2025-04-23 | 2026-06-28 | 2025-12 | 2027-06-28 |
| v1.34 | 2025-08-27 | 2026-10-27 | 2026-03 | 2027-10-27 |

Table. OSS and SKE release and termination schedules

Feature Limitations at End of Technical Support (EoTS)

When the Kubernetes version provided by SKE reaches the End of Technical Support (EoTS) state, features supported in that version may be limited.

  • New cluster creation → Creation not possible
  • Existing cluster upgrade → Upgrade possible (possible even if the target version has also reached EoTS)
  • Creating node pools in existing cluster → Creation possible
Note
  • EOL versions may have vulnerabilities, so upgrading to a higher version is recommended.
  • You can upgrade the control plane and node pools in the Samsung Cloud Platform Console, and no separate cost is incurred for the upgrade.
    • For stable operation, perform compatibility testing for the upgrade version before proceeding with the upgrade.

OS and GPU Driver

The OS and GPU driver version information available for each K8s server type is as follows.

Caution
  • The OS versions provided may vary by K8s version.
  • When using GPU nodes, the related K8s components (nvidia-device-plugin, dcgm-exporter) are configured in the cluster by default.
    • Deploying gpu-operator may cause conflicts due to duplicate component configuration. It is recommended to deploy gpu-operator with the default-provided components excluded.
  • For an OS whose support has ended, node pool creation is still possible, but using the latest OS version is recommended.
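Following the caution above, one way to avoid the duplicate-component conflict is to install gpu-operator with its bundled device plugin and DCGM exporter disabled, since SKE already deploys nvidia-device-plugin and dcgm-exporter. This is a sketch assuming the NVIDIA gpu-operator Helm chart and its `devicePlugin.enabled` / `dcgmExporter.enabled` values; verify the value names against the chart version you deploy.

```shell
# Add the NVIDIA Helm repository (assumed standard gpu-operator install path)
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

# Install gpu-operator, disabling the components SKE already provides by default
helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace \
  --set devicePlugin.enabled=false \
  --set dcgmExporter.enabled=false
```

After installation, confirm that only one device plugin and one DCGM exporter DaemonSet are running on the GPU nodes.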
| k8s Version | Standard and High Capacity | GPU |
| --- | --- | --- |
| v1.29 | • Ubuntu 22.04<br>• RHEL 8.10<br>• RHEL 8.8 (OS with ended support) | • Ubuntu 22.04 (nvidia-535.183.06) |
| v1.30 | • Ubuntu 22.04<br>• RHEL 8.10<br>• RHEL 8.8 (OS with ended support) | • Ubuntu 22.04 (nvidia-535.183.06) |
| v1.31 | • Ubuntu 22.04<br>• RHEL 8.10<br>• RHEL 8.8 (OS with ended support) | • Ubuntu 22.04 (nvidia-535.183.06) |
| v1.32 | • Ubuntu 22.04<br>• RHEL 9.4 | • Ubuntu 22.04 (nvidia-535.183.06) |
| v1.33 | • Ubuntu 22.04<br>• RHEL 9.4 | • Ubuntu 22.04 (nvidia-535.183.06) |
| v1.34 | • Ubuntu 22.04<br>• RHEL 9.4 | • Ubuntu 22.04 (nvidia-535.183.06) |

Table. K8s version and server type-specific OS / GPU driver versions

4 - API Reference

5 - CLI Reference

6 - Release Note

Kubernetes Engine

2026.03.19
FEATURE Kubernetes version added, GPU VM custom image provision, k8s and OS version EoTS management logic provision, node pool OS image EOS response and upgrade default setting, Terraform kubeconfig not provided, type: LB setting related improvements
  • Kubernetes Engine feature changes
    • Supports Kubernetes v1.34 version.
    • Provides GPU VM custom image for node pools.
    • Provides EoTS management logic and display function for cluster and node pool k8s versions and node pool OS versions.
    • Provides OS selection dropdown feature when upgrading node pools.
    • Adds idle-timeout for type: LB L7 listeners, and changes and improves the session-duration-time default value.
    • The kubeconfig feature is no longer provided in Terraform.
2025.12.18
FEATURE Kubernetes version added, node pool GPU Driver version display, MNGC node support(SR), node pool default disk maximum capacity changed, node pool Validation added and supplemented
  • Kubernetes Engine feature changes
    • Supports Kubernetes v1.33 version.
    • Provides GPU Driver version information for node pool GPU nodes.
    • Provides MNGC nodes in SR request setting format.
    • Increases the maximum Block Storage capacity for the node pool OS from 1 TB to 12 TB, matching VM products.
    • Adds validation of label keys when creating or modifying node pools, and validation that server groups are not supported for GPU node pools.
2025.10.23
FEATURE Kubernetes version added, node pool advanced setting feature, node pool server group setting, ServiceWatch integration, UserKubeconfig download, OS version consideration node pool upgrade supplemented
  • Kubernetes Engine feature changes
    • Supports Kubernetes v1.32 version.
    • Provides node pool advanced setting feature.
    • Provides node pool server group (Affinity or Anti-affinity) setting feature.
    • Provides a user Kubeconfig download feature alongside the existing administrator Kubeconfig download button.
    • Provides additional upgrade logic considering OS version when upgrading node pools.
    • Provides log collection feature based on ServiceWatch integration.
2025.07.01
FEATURE Kubernetes version added, public endpoint provision, private endpoint access control target added, node pool Label/Taint, Block Storage CSI, kubectl login plugin added
  • Kubernetes Engine feature changes
    • Supports Kubernetes v1.31 version.
    • Provides public endpoint for the cluster.
    • Adds MNGC(Baremetal) product and DevOps Service product to private endpoint access control targets for the cluster.
    • Provides node pool Label and Taint setting feature.
    • Provides Block Storage CSI and kubectl login plugin features.
    • Fixes a kubeconfig vulnerability.
2025.04.28
FEATURE Private endpoint access control, type: LB feature added
  • Kubernetes Engine feature changes
    • Provides private endpoint and access control features.
    • Provides type: LoadBalancer feature.
2025.02.27
FEATURE Kubernetes version added and Kubernetes version upgrade, Custom Image, GPU node creation feature added
  • Kubernetes Engine feature changes
    • Supports Kubernetes v1.30 version.
    • Provides Kubernetes version upgrade feature for cluster and node pools.
    • Provides Multi-Security Group feature.
    • Provides Custom Image node and GPU node creation feature.
  • Samsung Cloud Platform common feature changes
    • Reflected common CX changes for Account, IAM, Service Home, and tags.
2024.10.01
NEW Kubernetes Engine service official version release
  • Released Kubernetes Engine product that provides lightweight virtual computing Containers and Kubernetes clusters for managing them.
  • Creates Container nodes and manages them through the cluster to enable deployment of various Container applications.
2024.07.02
NEW Beta version release
  • Released Kubernetes Engine product Beta version.