Kubernetes Engine
- 1: Overview
- 1.1: Monitoring Metrics
- 1.2: ServiceWatch Metrics
- 2: How-to guides
- 2.1: Node Management
- 2.2: Manage Namespaces
- 2.3: Manage Workload
- 2.4: Service and Ingress Management
- 2.5: Storage Management
- 2.6: Configuration Management
- 2.7: Manage Permissions
- 3: Using Kubernetes Engine
- 3.1: Authentication and Authorization
- 3.2: Accessing the Cluster
- 3.3: Using type LoadBalancer Service
- 3.4: Considerations for Use
- 3.5: Version Information
- 4: API Reference
- 5: CLI Reference
- 6: Release Note
1 - Overview
Service Overview
Kubernetes Engine is a service that provides lightweight virtual computing and containers, as well as a Kubernetes cluster to manage them. Because the service installs, operates, and maintains the Kubernetes Control Plane for you, you can use the Kubernetes environment without complex preparation.
Features
Standard Kubernetes Environment Configuration: The standard Kubernetes environment can be used without separate configuration through the default Kubernetes Control Plane provided. It is compatible with applications in other standard Kubernetes environments, so you can use standard Kubernetes applications without modifying the code.
Easy Kubernetes Deployment: Provides secure communication between worker nodes and managed control planes, and quickly provisions worker nodes, allowing users to focus on building applications on the provided container environment.
Convenient Kubernetes Management: Provides various management features to conveniently use the created Kubernetes cluster, such as cluster information inquiry and cluster management, namespace management, and workload management through the dashboard for enterprise environments.
Service Composition Diagram
Provided Features
Kubernetes Engine provides the following features.
- Cluster Management: You can create and manage clusters to use the Kubernetes Engine service. After creating a cluster, you can add services necessary for operation, such as nodes, namespaces, and workloads.
- Node Management: A node is a set of machines that run containerized applications. Every cluster must have at least one worker node to deploy applications. Nodes are used by defining node pools. Nodes belonging to the same node pool must share the same server type, size, and OS image, and multiple node pools can be created to establish a flexible deployment strategy.
- Namespace Management: Namespace is a logical separation unit within a Kubernetes cluster, and is used to specify access permissions or resource usage limits by namespace.
- Workload Management: Workload is an application running on Kubernetes Engine. You can create a namespace, then add or delete workloads. Workloads are created and managed item by item, such as deployments, pods, stateful sets, daemon sets, jobs, and cron jobs.
- Service and Ingress Management: Service is an abstraction method that exposes applications running in a set of pods as a network service, and Ingress is used to expose HTTP and HTTPS paths from outside the cluster to the inside. After creating a namespace, you can create or delete services, endpoints, ingresses, and ingress classes.
- Storage Management: When using Kubernetes Engine, you can create and manage the storage to be used. Storage is created and managed by items such as PVC, PV, and storage class.
- Configuration Management: When values inside a container must change across environments such as Dev/Prod, maintaining a separate image per environment is inconvenient and wasteful. Kubernetes lets you manage environment variables and configuration values externally so they are injected when a Pod is created; ConfigMap and Secret are used for this.
- Access Control: In cases where multiple users access a Kubernetes cluster, you can grant permissions for specific APIs or namespaces to restrict access. You can apply Kubernetes’ role-based access control (RBAC) feature to set permissions for clusters or namespaces. You can create and manage cluster roles, cluster role bindings, roles, and role bindings.
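As a hedged illustration of the Configuration Management feature described above, the following manifests (all names and values are hypothetical) show a ConfigMap whose entries are injected into a Pod as environment variables at creation time:

```yaml
# Hypothetical ConfigMap holding environment-specific values
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: debug
  DB_HOST: db.dev.internal    # value that would differ between Dev/Prod
---
# Pod that receives every ConfigMap entry as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx            # placeholder image
      envFrom:
        - configMapRef:
            name: app-config
```

A Secret can be injected the same way via `secretRef` under `envFrom`, which is the preferred mechanism for sensitive values.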
Component
Control Plane
The Control Plane is the master node role in the Kubernetes Engine service. The master node is the management node of the cluster, and it plays a role in managing other nodes in the cluster. The cluster is the basic creation unit of the Kubernetes Engine service, and it is used to manage node pools, objects, controllers, and other components within it. Users set up the cluster name, control plane, network, File Storage, and other settings, and then create a node pool within the cluster to use it. The master node assigns tasks to the cluster, monitors the status of the nodes, and plays a role in data communication between nodes.
The cluster name creation rule is as follows.
- It must start with an English letter and be 3-30 characters long, using English letters, numbers, and the special character (-).
- The cluster name must not duplicate an existing cluster name.
Worker Node
The Worker Node is a work node in the cluster, playing a role in performing the cluster’s tasks. The Worker Node receives tasks from the cluster’s master node, performs them, and reports the task results to the cluster’s master node. All nodes created within the node pool and namespace play the role of a worker node.
The creation rule of the node pool, which is a collection of worker nodes, is as follows.
- A node pool must contain at least one node for application deployment to be possible.
- Up to 100 nodes can be created in a node pool.
- Because the cluster-wide maximum is 100 nodes, nodes can be distributed freely across pools within that limit: for example, 100 node pools with 1 node each, or 50 node pools with 2 nodes each.
- It is possible to set up Block Storage connected to the node pool.
- The server type, size, and OS image of nodes belonging to the node pool can be set, and all nodes in the pool must share the same values.
- Auto-Scaling service allows you to set automatic node pool expansion/reduction according to the requirements of the deployed application.
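The node pool sizing rules above can be sketched as a simple validation (the helper below is hypothetical, not part of the service API):

```python
# Illustrative check of the documented node pool limits: each pool needs at
# least 1 node, a single pool may hold at most 100 nodes, and the cluster
# total is capped at 100 nodes.
MAX_NODES_PER_POOL = 100
MAX_NODES_PER_CLUSTER = 100

def validate_node_pools(pool_sizes):
    """Return True if the per-pool node counts satisfy the documented limits."""
    if any(size < 1 for size in pool_sizes):
        return False          # every pool must contain at least one node
    if any(size > MAX_NODES_PER_POOL for size in pool_sizes):
        return False          # no single pool may exceed 100 nodes
    return sum(pool_sizes) <= MAX_NODES_PER_CLUSTER  # cluster-wide cap

print(validate_node_pools([1] * 100))   # 100 pools x 1 node  -> True
print(validate_node_pools([2] * 50))    # 50 pools x 2 nodes  -> True
print(validate_node_pools([50, 51]))    # 101 nodes in total  -> False
```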
Prerequisite Services
The following services must be configured before creating this service. For details, refer to the guide provided for each service and prepare them in advance.
| Service Category | Service | Detailed Description |
|---|---|---|
| Networking | VPC | A service that provides an independent virtual network in a cloud environment |
| Networking | Security Group | A virtual firewall that controls the server’s traffic |
| Storage | File Storage | Storage that allows multiple clients to share files over the network |
1.1 - Monitoring Metrics
Kubernetes Engine Monitoring Metrics
The following table shows the monitoring metrics of Kubernetes Engine that can be checked through Cloud Monitoring. For detailed instructions on using Cloud Monitoring, refer to the Cloud Monitoring guide.
| Performance Item | Detailed Description | Unit |
|---|---|---|
| Cluster Namespaces [Active] | Number of active namespaces | cnt |
| Cluster Namespaces [Total] | Total number of namespaces in the cluster | cnt |
| Cluster Nodes [Ready] | Number of nodes in READY state | cnt |
| Cluster Nodes [Total] | Total number of nodes in the cluster | cnt |
| Cluster Pods [Failed] | Number of failed pods in the cluster | cnt |
| Cluster Pods [Pending] | Number of pending pods in the cluster | cnt |
| Cluster Pods [Running] | Number of running pods in the cluster | cnt |
| Cluster Pods [Succeeded] | Number of succeeded pods in the cluster | cnt |
| Cluster Pods [Unknown] | Number of unknown pods in the cluster | cnt |
| Instance Status | Cluster status | status |
| Namespace Pods [Failed] | Number of failed pods in the namespace | cnt |
| Namespace Pods [Pending] | Number of pending pods in the namespace | cnt |
| Namespace Pods [Running] | Number of running pods in the namespace | cnt |
| Namespace Pods [Succeeded] | Number of succeeded pods in the namespace | cnt |
| Namespace Pods [Unknown] | Number of unknown pods in the namespace | cnt |
| Namespace GPU Clock Frequency | SM clock frequency in the namespace | MHz |
| Namespace GPU Memory Usage | Memory utilization in the namespace | % |
| Namespace GPU Usage | GPU utilization in the namespace | % |
| Node CPU Size [Allocatable] | Allocatable CPU in the node | cnt |
| Node CPU Size [Capacity] | CPU capacity in the node | cnt |
| Node CPU Usage | CPU usage in the node | % |
| Node CPU Usage [Request] | CPU request ratio in the node | % |
| Node CPU Used | CPU utilization in the node | status |
| Node Filesystem Usage | Filesystem usage in the node | % |
| Node Memory Size [Allocatable] | Allocatable memory in the node | bytes |
| Node Memory Size [Capacity] | Memory capacity in the node | bytes |
| Node Memory Usage | Memory utilization in the node | % |
| Node Memory Usage [Request] | Memory request ratio in the node | % |
| Node Memory Workingset | Memory working set in the node | bytes |
| Node Network In Bytes | Node network received bytes | bytes |
| Node Network Out Bytes | Node network transmitted bytes | bytes |
| Node Network Total Bytes | Node network total bytes | bytes |
| Node Pods [Failed] | Number of failed pods in the node | cnt |
| Node Pods [Pending] | Number of pending pods in the node | cnt |
| Node Pods [Running] | Number of running pods in the node | cnt |
| Node Pods [Succeeded] | Number of succeeded pods in the node | cnt |
| Node Pods [Unknown] | Number of unknown pods in the node | cnt |
| Pod CPU Usage [Limit] | CPU usage limit ratio in the pod | % |
| Pod CPU Usage [Request] | CPU request ratio in the pod | % |
| Pod CPU Usage | CPU usage in the pod | % |
| Pod GPU Clock Frequency | SM clock frequency in the pod | MHz |
| Pod GPU Memory Usage | Memory utilization in the pod | % |
| Pod GPU Usage | GPU utilization in the pod | % |
| Pod Memory Usage [Limit] | Memory usage limit ratio in the pod | % |
| Pod Memory Usage [Request] | Memory request ratio in the pod | % |
| Pod Memory Usage | Memory usage in the pod | bytes |
| Pod Network In Bytes | Pod network received bytes | bytes |
| Pod Network Out Bytes | Pod network transmitted bytes | bytes |
| Pod Network Total Bytes | Pod network total bytes | bytes |
| Pod Restart Containers | Container restart count in the pod | cnt |
| Workload Pods [Running] | - | cnt |
1.2 - ServiceWatch Metrics
Kubernetes Engine sends metrics to ServiceWatch. The metrics provided as basic monitoring are data collected at 1-minute intervals.
Basic Metrics
The following are basic metrics for the Kubernetes Engine namespace.
Metrics with metric names shown in bold below are key metrics selected among the basic metrics provided by Kubernetes Engine. Key metrics are used to configure service dashboards that are automatically built for each service in ServiceWatch.
For each metric, the user guide describes which statistical value is meaningful when querying that metric, and the statistical value shown in bold among the meaningful statistics is the key statistic. You can query key metrics through key statistics in the service dashboard.
| Metric Name | Detailed Description | Unit | Meaningful Statistics |
|---|---|---|---|
| cluster_up | Cluster up | Count | |
| cluster_node_count | Cluster node count | Count | |
| cluster_failed_node_count | Cluster failed node count | Count | |
| cluster_namespace_phase_count | Cluster namespace phase count | Count | |
| cluster_pod_phase_count | Cluster pod phase count | Count | |
| node_cpu_allocatable | Node CPU allocatable | - | |
| node_cpu_capacity | Node CPU capacity | - | |
| node_cpu_usage | Node CPU usage | - | |
| node_cpu_utilization | Node CPU utilization | - | |
| node_memory_allocatable | Node memory allocatable | Bytes | |
| node_memory_capacity | Node memory capacity | Bytes | |
| node_memory_usage | Node memory usage | Bytes | |
| node_memory_utilization | Node memory utilization | - | |
| node_network_rx_bytes | Node network receive bytes | Bytes/Second | |
| node_network_tx_bytes | Node network transmit bytes | Bytes/Second | |
| node_network_total_bytes | Node network total bytes | Bytes/Second | |
| node_number_of_running_pods | Node number of running pods | Count | |
| namespace_number_of_running_pods | Namespace number of running pods | Count | |
| namespace_deployment_pod_count | Namespace deployment pod count | Count | |
| namespace_statefulset_pod_count | Namespace statefulset pod count | Count | |
| namespace_daemonset_pod_count | Namespace daemonset pod count | Count | |
| namespace_job_active_count | Namespace job active count | Count | |
| namespace_cronjob_active_count | Namespace cronjob active count | Count | |
| pod_cpu_usage | Pod CPU usage | - | |
| pod_memory_usage | Pod memory usage | Bytes | |
| pod_network_rx_bytes | Pod network receive bytes | Bytes/Second | |
| pod_network_tx_bytes | Pod network transmit bytes | Bytes/Second | |
| pod_network_total_bytes | Pod network total bytes | Count | |
| container_cpu_usage | Container CPU usage | - | |
| container_cpu_limit | Container CPU limit | - | |
| container_cpu_utilization | Container CPU utilization | - | |
| container_memory_usage | Container memory usage | Bytes | |
| container_memory_limit | Container memory limit | Bytes | |
| container_memory_utilization | Container memory utilization | - | |
| node_gpu_count | Node GPU count | Count | |
| gpu_temp | GPU temperature | - | |
| gpu_power_usage | GPU power usage | - | |
| gpu_util | GPU utilization | Percent | |
| gpu_sm_clock | GPU SM clock | - | |
| gpu_fb_used | GPU FB usage | Megabytes | |
| gpu_tensor_active | GPU tensor active rate | - | |
| pod_gpu_util | Pod GPU utilization | Percent | |
| pod_gpu_tensor_active | Pod GPU tensor active rate | - | |
2 - How-to guides
Users can enter the required information for the Kubernetes Engine and select detailed options to create a service through the Samsung Cloud Platform Console.
Create Kubernetes Engine
You can create and use the Kubernetes Engine service from the Samsung Cloud Platform Console.
You can create and manage clusters to use the Kubernetes Engine service. After creating a cluster, you can add services needed for operation such as nodes, namespaces, and workloads.
You can select up to 4 Security Groups in the network settings of Kubernetes Engine.
- If you directly add a Security Group to nodes created by Kubernetes Engine on the Virtual Server service page, they may be automatically detached because they are not managed by Kubernetes Engine.
- For nodes, the Security Group must be added/managed in the network settings of the Kubernetes Engine service.
The Managed Security Group is automatically managed by Kubernetes Engine.
- Do not repurpose the Managed Security Group; if you delete it or add/delete its rules, it will automatically be restored.
Creating a cluster
You can create and use a Kubernetes Engine cluster service from the Samsung Cloud Platform Console.
To create a Kubernetes Engine cluster, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Create Cluster button on the Service Home page. You will be taken to the Create Cluster page.
- On the Create Cluster page, enter the information required for service creation and select detailed options.
- In the Service Information area, enter or select the required information.
| Category | Required | Detailed description |
|---|---|---|
| Cluster Name | Required | Start with an English letter and use 3-30 characters consisting of English letters, numbers, and the special character (-) |
| Control Plane Settings > Kubernetes Version | Required | Select the Kubernetes version |
| Control Plane Settings > Private Endpoint Access Control | Optional | Select whether to use Private Endpoint Access Control. After selecting Use, click Add to select the resources allowed to access the private endpoint. Only resources in the same Account and same region can be registered. Regardless of this setting, the nodes of the cluster can access the private endpoint |
| Control Plane Settings > Public Endpoint Access/Access Control | Optional | Select whether to use Public Endpoint Access/Access Control. After selecting Use, enter an Allowed Access IP Range such as 192.168.99.0/24, so that external users within that range can access the Kubernetes API server endpoint. If external access is not needed, disable it to reduce security threats |
| ServiceWatch Log Collection | Optional | Set whether to enable log collection so that cluster logs can be viewed in ServiceWatch. Selecting Use provides 5 GB of log storage free of charge for all services within the Account; usage beyond 5 GB is charged based on storage amount. If you need to check cluster logs, enabling ServiceWatch log collection is recommended |
| Cloud Monitoring Log Collection | Optional | Set whether to enable log collection so that cluster logs can be viewed in Cloud Monitoring. If enabled, 1 GB of log storage is provided free of charge for all services within the Account, and any amount exceeding 1 GB is deleted sequentially |
| Network Settings | Required | Network connection settings for the node pool. VPC Name: select a pre-created VPC. Subnet Name: select a standard subnet among the subnets of the selected VPC. Security Group: click the Select button and choose Security Groups in the Select Security Group popup (up to 4 can be selected) |
| File Storage Settings | Required | Select the file storage volume to be used in the cluster. Default Volume (NFS): click the Search button and select the file storage in the File Storage Selection popup. The default volume can only use the NFS format |

Table. Kubernetes Engine service information input items
- In the Additional Information area, enter or select the required information.

| Category | Required | Detailed description |
|---|---|---|
| Tag | Optional | Add tags. Up to 50 can be added per resource. After clicking the Add Tag button, enter or select the Key and Value |

Table. Kubernetes Engine additional information input items
- Check the detailed information and estimated billing amount in the Summary panel, and click the Create button.
- When creation is complete, check the created resources on the Cluster List page.
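The Allowed Access IP Range used for endpoint access control is a CIDR block. As a minimal sketch (the helper below is illustrative, not a platform API), Python's standard `ipaddress` module shows how such a range admits or rejects a client address:

```python
# Illustrative access check against allowed CIDR ranges, e.g. 192.168.99.0/24.
import ipaddress

def is_access_allowed(client_ip, allowed_ranges):
    """Return True if client_ip falls inside any of the allowed CIDR ranges."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in allowed_ranges)

allowed = ["192.168.99.0/24"]
print(is_access_allowed("192.168.99.17", allowed))  # True: inside the /24
print(is_access_allowed("10.0.0.5", allowed))       # False: outside the range
```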
Check cluster details
The Kubernetes Engine service allows you to view and edit the full resource list and detailed information. The Cluster Details page consists of the Details, Node Pool, Tags, and Work History tabs.
To view detailed cluster information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- Click the resource (cluster) you want to view detailed information for on the Cluster List page. You will be taken to the Cluster Details page.
- The Cluster Details page displays the cluster's status and detailed information, and consists of the Details, Node Pool, Tags, and Work History tabs.
| Category | Detailed description |
|---|---|
| Cluster Status | Kubernetes Engine cluster status. Creating: creating. Running: created/running. Updating: version upgrade in progress. Deleting: deleting. Error: error occurred |
| Service Termination | Button to terminate a Kubernetes Engine cluster. To terminate the Kubernetes Engine service, you must first delete all node pools added to the cluster. Terminating the service may immediately stop running services, so terminate only after considering the impact of service interruption |

Table. Cluster status information and additional functions
Detailed Information
You can view detailed information of the selected resource on the Cluster List page, and modify the information if necessary.
| Category | Detailed description |
|---|---|
| Service | Service name |
| Resource Type | Resource Type |
| SRN | Unique resource ID in Samsung Cloud Platform |
| Resource Name | Resource name |
| Resource ID | Unique resource ID in the service |
| Creator | User who created the service |
| Creation DateTime | DateTime when the service was created |
| Modifier | User who modified the service information |
| Modification DateTime | DateTime when service information was modified |
| Cluster Name | Cluster Name |
| LLM Endpoint | LLM Endpoint information |
| Control Plane Settings | Check the assigned Kubernetes Control Plane version and access permission scope |
| Network Settings | View the VPC, Subnet, and Security Group information set when creating the Kubernetes Engine cluster |
| File Storage Settings | If you click the volume name, you can view detailed information on the storage detail page |
- The Kubernetes Engine version is denoted in the order [major].[minor].[patch], and you can upgrade only one minor version at a time.
  - Example: Version 1.11.x > 1.13.x (not allowed) / Version 1.11.x > 1.12.x (allowed)
- If you are using a Kubernetes version that has reached end of support or a version that is scheduled to reach end of support, a red exclamation mark will appear to the right of the version. If this icon is displayed, we recommend upgrading the Kubernetes version.
Node Pool
You can view cluster node pool information and add, modify, or delete node pools. For detailed information on using node pools, refer to Managing Nodes.
| Category | Detailed description |
|---|---|
| Add Node Pool | Add a node pool to the current cluster |
| Node Pool List | Check the list of node pools created in the current cluster |
| More menu | Provides node pool management features |
If a red exclamation mark icon appears on the version of the node pool information, the server OS of that node pool is not supported in newer versions of Kubernetes. To ensure stable service, the node pool server OS must be upgraded.
- To upgrade the node pool version, delete the existing node pool and then create a new node pool with a higher server OS version.
Tag
You can view the tag information of the selected resource on the Cluster List page, and add, modify, or delete tags.
| Category | Detailed description |
|---|---|
| Tag List | Tag list |
Work History
You can view the operation history of the selected resource on the Cluster List page.
| Category | Detailed description |
|---|---|
| Work History List | Resource change history |
Managing Cluster Resources
We provide cluster version upgrade, kubeconfig download, and control plane logging modification features for cluster resource management.
Even without create/delete permissions, Security Group and Virtual Server are created/deleted by Kubernetes Engine for lifecycle management purposes, and the creator/modifier is indicated as System.
Cluster Version Upgrade
If there is a version that can be upgraded from the cluster’s Kubernetes version, you can perform the upgrade on the Cluster Details page.
- Before the cluster upgrade, check the following items.
- Check if the cluster status is Running
- Check that the status of all node pools in the cluster is Running or Deleting
- Check that all node pool versions in the cluster are the same version as the cluster
- Check if automatic scaling/downsizing of all node pools in the cluster and node auto-recovery feature are disabled
- After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
- You can upgrade only one minor version at a time.
- Example: version 1.12.x > 1.13.x (possible) / version 1.11.x > 1.13.x (not possible)
- After an upgrade, you cannot perform a downgrade or rollback, so to use the previous version again you must create a new cluster.
- Since user systems using an end-of-support Kubernetes version may become vulnerable, upgrade the control plane and node pool versions directly in the Samsung Cloud Platform Console.
- No separate cost will be incurred due to the upgrade.
- Please perform compatibility testing for the upgrade version in advance to ensure stable system operation for users.
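The one-minor-version rule above can be expressed as a small check (an illustrative helper, not part of the service; patch-level updates within the same minor version are not modeled here):

```python
# Hedged sketch of the documented upgrade rule: only one minor version
# step at a time, and no downgrades. Versions follow [major].[minor].[patch].
def upgrade_allowed(current, target):
    cur_major, cur_minor = (int(p) for p in current.split(".")[:2])
    tgt_major, tgt_minor = (int(p) for p in target.split(".")[:2])
    if tgt_major != cur_major:
        return False                       # cross-major upgrades are not offered
    return tgt_minor == cur_minor + 1      # exactly one minor step forward

print(upgrade_allowed("1.12.7", "1.13.2"))  # True: one minor step
print(upgrade_allowed("1.11.0", "1.13.0"))  # False: skips a minor version
print(upgrade_allowed("1.13.0", "1.12.9"))  # False: downgrade
```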
Cluster version upgrade preparation
There is no need to delete and recreate API objects when upgrading the cluster version. For migrated APIs, all existing API objects can be read and updated using the new API version. However, APIs that are deprecated and removed in the new Kubernetes version may prevent you from reading or modifying existing objects or creating new ones. Therefore, to ensure system stability, it is recommended to migrate clients and manifests before the upgrade.
Migrate the client and manifest using the following method.
- Download the new version of the client (e.g., kubectl), install it on the cluster, and modify the YAML to refer to the new API.
- Or use a separate plugin (kubectl convert) to automatically convert it. For detailed explanation, refer to Kubernetes official documentation > Install and configure kubectl on Linux.
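For example, a Deployment manifest written against an API group that was removed in later Kubernetes versions must be updated before upgrading (the workload name below is hypothetical):

```yaml
# Before: apiVersion: extensions/v1beta1   (removed in Kubernetes 1.16)
# After: the supported apps/v1 API group
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical workload name
spec:
  replicas: 2
  selector:                 # spec.selector is mandatory under apps/v1
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx      # placeholder image
```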
Upgrade cluster and node pool version
To update the cluster and node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- Click the resource (cluster) to upgrade the version on the Cluster List page. You will be taken to the Cluster Details page.
- Click the edit icon of Kubernetes version on the Cluster Details page. The Cluster version upgrade popup opens.
- Select the Kubernetes version to upgrade, and click the Confirm button.
- It may take a few minutes until the cluster upgrade is complete.
- During the upgrade, the cluster status is shown as Updating, and when the upgrade is complete, it is shown as Running.
- When the upgrade is complete, select the Node Pool tab. Go to the Node Pool page.
- Click the More button of the node pool item and click Node Pool Upgrade. The Node Pool Version Upgrade popup window opens.
- After checking the message in the Node Pool Version Upgrade popup window, click the Confirm button.
- It may take a few minutes until the node pool upgrade is completed.
- During the upgrade, the node pool status is shown as Updating, and when the upgrade is complete, it is shown as Running.
kubeconfig download
You can download the admin/user kubeconfig settings of the cluster’s public and private endpoints as a yaml document.
To download the kubeconfig settings of the cluster, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- Click the resource (cluster) to download the kubeconfig on the Cluster List page. You will be taken to the Cluster Details page.
- On the Cluster Details page, click the Admin kubeconfig download or User kubeconfig download button for the desired endpoint.
- You can download the kubeconfig file in yaml format for each permission.
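The downloaded file follows the standard kubeconfig layout. A trimmed sketch is shown below; the cluster name, endpoint address, and credential data are placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster                      # placeholder cluster name
    cluster:
      server: https://<endpoint-address>  # public or private endpoint
      certificate-authority-data: <base64-encoded-CA-certificate>
contexts:
  - name: my-cluster-admin
    context:
      cluster: my-cluster
      user: admin-user
current-context: my-cluster-admin
users:
  - name: admin-user
    user:
      client-certificate-data: <base64-encoded-client-certificate>
      client-key-data: <base64-encoded-client-key>
```

kubectl reads this file via the KUBECONFIG environment variable or the --kubeconfig flag.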
Modify private endpoint access control
You can change the private endpoint access control settings of the cluster.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. Navigate to the Cluster List page.
- On the Cluster List page, click the resource (cluster) whose private endpoint access control you want to modify. You will be taken to the Cluster Details page.
- Click the Edit icon of Private Endpoint Access Control on the Cluster Details page. The Edit Private Endpoint Access Control popup opens.
- In the Private Endpoint Access Control Edit popup, set the Use status of Private Endpoint Access Control, add the allowed access resources, and then click the Confirm button.
Modify public endpoint access/access control
You can change the public endpoint access control settings of the cluster.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, click the resource (cluster) whose public endpoint access control you want to modify. You will be taken to the Cluster Details page.
- Click the Edit icon of Public Endpoint Access/Access Control on the Cluster Details page. The Public Endpoint Access/Access Control Edit popup opens.
- In the Public Endpoint Access/Access Control Edit popup, set whether to use public endpoint access control, add the allowed IP range, and then click the Confirm button.
Modify control plane log collection settings
You can change the log collection settings of the cluster’s control plane. Detailed logs of the cluster can be viewed in the ServiceWatch service or the Cloud Monitoring service.
Even if you set up Cloud Monitoring log collection, you can check the cluster logs.
- However, the Cloud Monitoring log collection feature is scheduled for termination, so we recommend using ServiceWatch log collection.
To change the control plane log collection settings of the cluster, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. Go to the Cluster List page.
- Click the resource (cluster) to modify control plane logging on the Cluster List page. You will be taken to the Cluster Details page.
- On the Cluster Details page, click the Edit icon of ServiceWatch Log Collection. The ServiceWatch Log Collection popup opens.
- The Cloud Monitoring log collection feature can be set in the same way.
- In the ServiceWatch Log Collection popup window, set whether to use ServiceWatch log collection, and click the Confirm button.
When log collection is used, you can view the Audit/Event logs of the cluster control plane in each service. Detailed logs can be viewed on the next page.
Security Group Edit
You can modify the cluster’s Security Group.
You can select up to 4 Security Groups in the network settings of Kubernetes Engine.
- If you directly add a Security Group on the Virtual Server service page for nodes created by Kubernetes Engine, it may be automatically released because it is not managed by Kubernetes Engine.
- For nodes, the Security Group must be added/managed in the network settings of the Kubernetes Engine service.
The Managed Security Group is automatically managed by Kubernetes Engine.
- Do not repurpose the Managed Security Group; if you delete it or add/delete its rules, it will automatically be restored.
Follow the steps below to modify the cluster’s Security Group.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- Click the resource (cluster) to modify the Security Group on the Cluster List page. You will be taken to the Cluster Details page.
- Click the Edit icon of Security Group on the Cluster Details page. The Edit Security Group popup window opens.
- After selecting or deselecting the Security Group to modify, click the Confirm button.
Terminate Cluster
To terminate the cluster, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, click the resource (cluster) you want to terminate. You will be taken to the Cluster Details page.
- Click the Service Termination button on the Cluster Details page.
- After reviewing the content in the Service Termination popup window, click the Confirm button.
2.1 - Node Management
A node is a collection of machines that run containerized applications. Every cluster must have at least one worker node to be able to deploy applications. Nodes can be used by defining node pools. Nodes belonging to a node pool must have the same server type, size, and OS image, and by creating multiple node pools, a flexible deployment strategy can be established.
After creating a Kubernetes Engine cluster, add a node pool and modify or delete it as needed.
- It is recommended not to use the OS firewall on Kubernetes Engine nodes that use Calico.
- The firewall settings of Samsung Cloud Platform are set to Inactive by default.
- As recommended in the reference link below, disable the OS firewall in environments that use Calico.
- If a node is designated as a Backup service target, the node cannot be deleted, so the functions below cannot be used.
- Node pool reduction (including auto-scaling)
- Node Pool Upgrade
- Node pool auto recovery
- Delete node pool
Add node pool
A node refers to a machine that runs containerized applications, and at least one node is required to deploy applications in a Kubernetes cluster. After the creation of a Kubernetes Engine cluster is complete, add a node pool on the details page.
- You can define and use node pools, which are sets of nodes, in Kubernetes Engine. Nodes belonging to a node pool use the same server type, size, and OS image, so users can establish flexible deployment strategies by using multiple node pools.
In the Virtual Server menu, you can create a node pool using the user’s Custom Image. To create a node pool using a Custom Image, follow these steps.
- Create a Virtual Server that includes the Kubernetes Engine image of Samsung Cloud Platform.
- Use the Image Creation feature of that Virtual Server to create an image.
- Select the registered Custom Image to create a node pool.
- For more details, please refer to Virtual Server > Image Creation.
To add a node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster you want to add a node pool to. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Add Node Pool button. The Add Cluster Node Pool page will be displayed.
- On the Add Cluster Node Pool page, enter the information required to create a node pool and select detailed options.
- In the Service Information Input area, enter or select the required information.
Category Required or not Detailed description Node Pool Name Required Node pool name - Start with a lowercase English letter and use lowercase English letters, numbers, and the hyphen (-) within 3 - 20 characters - The hyphen (-) cannot be used at the end of the name
Node Pool > Server Type Required Virtual Server server type of worker node - Standard: Standard specifications commonly used
- High Capacity: Large-capacity server specifications above Standard
- GPU: GPU specifications available when securing resources for special requirements such as AI/ML
- For detailed information on server types provided by Virtual Server, refer to Virtual Server Server Type
Node Pool > Server OS Required Worker node’s Virtual Server OS image - Standard: RHEL 8.10, Ubuntu 22.04
- Custom: Custom image for Kubernetes created from Virtual Server product (RHEL, Ubuntu)
Node Pool > Block Storage Required Block Storage settings used by the worker node’s Virtual Server - SSD: High-performance general volume
- HDD: General volume
- SSD/HDD_KMS: Additional encrypted volume using Samsung Cloud Platform KMS (Key Management System) encryption key
- Encryption can only be applied at initial creation and cannot be changed after service creation
- Performance degradation occurs when using the SSD_KMS disk type
- Enter capacity in Units, with a value between 13 and 125
- Since 1 Unit is 8 GB, 104 ~ 1,000 GB will be created
Node Pool > Server Group Select Apply the pre-created Server Group in Virtual Server service to worker nodes - Click Use to set Server Group usage
- When usage is set, select Server Group
- Supports Affinity or Anti-Affinity policies
- Partition policy not supported
- Cannot modify after node pool creation
- GPU server type cannot be selected
Node Pool Auto Scaling Required Automatically adjust the number of nodes in the node pool - Refer to the configuration method Node Pool Auto Scaling
Number of Nodes Required Number of worker nodes to create within a single node pool - Enter a value within the range 1 - 100
Node Auto Recovery Required When an abnormal node is found in the node pool, automatically delete and create a new one - Refer to Node Pool Auto Recovery for configuration method
Keypair Required User authentication method used to connect to the worker node’s Virtual Server - Create new: Create new if a new Keypair is needed
- Refer to Create Keypair
- List of default login accounts by OS
- Alma Linux: almalinux
- RHEL: cloud-user
- Rocky Linux: rocky
- Ubuntu: ubuntu
- Windows: sysadmin
Label Select Optionally schedule workloads to nodes - Click the Add button to enter label key and value
- Refer to Setting Node Pool Labels
Taint Select Prevent workloads from being scheduled onto nodes - Click the Add button to input taint effect, key, and value
- For the configuration method, see Setting Node Pool Taint
Advanced Settings Select - Click Use to select whether to apply advanced settings items for the node pool to be created
- Refer to Advanced Node Pool Settings for the configuration method
Table. Kubernetes Engine node pool service information input items
- In the Summary panel, check the detailed information and the estimated billing amount, then click the Create button.
- When creation is complete, check the created resources on the Cluster Details > Node Pool tab > Node Pool List page.
- If the notification popup opens, click the Confirm button.
Edit Node Pool
If needed, modify the number of nodes in the node pool on the Kubernetes Engine details page.
To modify the number of nodes, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster you want to modify the node count for. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- On the Node Pool Details page, click the Edit icon on the right of Node Pool Information. The Node Pool Edit popup window will open.
- In the Node Pool Edit popup window, modify the node pool information and click the Confirm button.
Upgrade Node Pool
If the Kubernetes version of the control plane and the version of the node pool are different, you can upgrade the node pool to synchronize the versions.
- After upgrading the cluster, proceed with the node pool upgrade. The control plane and node pool upgrades of the Kubernetes cluster are performed separately.
- When performing a node pool upgrade, a rolling update is carried out on the nodes belonging to the node pool. At this time, a momentary service interruption may occur, but this is a normal phenomenon due to the rolling update and will automatically normalize after a certain period.
- The server OS version may differ depending on the Kubernetes version of the node pool.
To upgrade the node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- On the Service Home page, click the Cluster menu. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster you want to perform a node pool version upgrade on. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click More > Node Pool Upgrade at the far right of the Node Pool row. The Node Pool Version Upgrade popup will open.
- You can only upgrade the node pool when the node’s status is Running.
- After checking the information in the Node Pool Version Upgrade popup window, click the Confirm button.
Node pool auto scaling/downsizing
Node pool auto scaling is a feature that automatically adjusts the number of node pools by adding new nodes to a specified node pool or removing existing nodes according to workload demands. This feature operates based on the node pool.
- Node pool auto scaling/scale-down is adjusted based on the resource requests of the pods running on the node pool's nodes rather than their actual resource usage; the status of pods and nodes is checked periodically and scaling tasks are executed accordingly.
To set up the node pool auto-scaling/scale-down feature, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to use the node auto-scaling/scale-down feature. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- Click the Edit icon on the right of Node Pool Information on the Node Pool Details page. The Edit Node Pool popup window opens.
- In the Node Pool Edit popup window, set Node Pool Auto Scaling to Enable.
- After entering the minimum and maximum number of nodes, click the Confirm button.
Reference
Node pool auto-scaling settings can also be configured on the cluster node pool creation page.
- Node pool expansion conditions
- When pod fails to run on the cluster due to insufficient resources (Pending pod occurs)
- Node pool reduction condition (when all satisfied)
- If the sum of resource requests (CPU/Memory) of all pods running on a node is less than 50% of the node’s allocatable resources
- If all pods running on the node can be run on another node (there must be no pods with PDB restrictions, etc.)
- While using node pool auto scaling, to prevent deletion due to node reduction, please add the following annotation to the node.
cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
- Node pool auto-scaling constraints
- Node pool auto-scaling works only when the NotReady nodes in the cluster are 45% or less of all nodes and number no more than 3.
- If nodes that were not created as node pools by the Kubernetes Engine service are joined directly to the cluster, using this feature may cause a malfunction.
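The protective annotation mentioned above can be applied with kubectl. A minimal sketch, assuming kubectl access to the cluster; `my-node` is a placeholder node name, not a real resource in your cluster:

```shell
# Prevent the cluster autoscaler from removing this node during scale-down.
# "my-node" is a placeholder; list actual node names with: kubectl get nodes
kubectl annotate node my-node cluster-autoscaler.kubernetes.io/scale-down-disabled=true

# Remove the annotation (trailing minus) to make the node eligible for scale-down again.
kubectl annotate node my-node cluster-autoscaler.kubernetes.io/scale-down-disabled-
```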
Auto-recover node pool
Node auto-recovery is a feature that, when an abnormal node is detected in the cluster, automatically deletes it and creates a new node to restore all node counts in the node pool to a normal state. This feature operates based on the node pool.
Node auto-recovery deletes the existing node and creates a new one when, according to the node auto-recovery conditions, communication with the Kubernetes control plane fails due to node (Virtual Server) failure, a stopped state, network issues, and so on, so use it with caution.
- Nodes are restored with the settings initially defined when the node pool was created; custom settings made after node creation are not restored.
If nodes that were not created as part of a node pool by the Kubernetes Engine service are joined directly to the cluster, using this feature may cause a malfunction.
To set up the node auto-recovery feature, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to use the node auto-recovery feature. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- Click the Edit icon on the right of Node Pool Information on the Node Pool Details page. The Edit Node Pool popup window opens.
- In the Node Pool Edit popup, set Node Auto Recovery to Enable, then click the Confirm button.
Node auto-recovery settings can also be configured on the cluster node pool creation page.
- Conditions for node auto-recovery
- A node reports NotReady status in consecutive checks for a certain time threshold (about 10 minutes)
- A node does not report any status for a certain time threshold (about 10 minutes)
- Conditions excluded from node auto-recovery
- A node that remains in Creating state and does not become Running when initially created
- Five or more abnormal nodes occurring simultaneously in the same node pool
Setting Node Pool Labels
Node pool labels are a feature for selectively scheduling workloads onto nodes.
- Node pool labels are not applied to existing nodes; a label is applied only to newly created nodes.
- If you need to apply a label to an existing node, the user must set it directly with kubectl.
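As noted, labels for existing nodes must be set directly with kubectl. A minimal sketch; `my-node` and the `disktype=ssd` key/value are placeholders for illustration:

```shell
# Add a label to an existing node (placeholder node name and key/value)
kubectl label nodes my-node disktype=ssd

# Verify the labels on all nodes
kubectl get nodes --show-labels

# Remove the label by appending a minus to the key
kubectl label nodes my-node disktype-
```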
To set the node pool label, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to set a node pool label. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- On the Node Pool Details page, click the Edit icon of Label. The Edit Label popup window opens.
- In the Edit Label popup window, click the Add button to add as many labels as needed.
- Enter the label information and click the Confirm button.
Setting Node Pool Taint
Node pool taint is a feature to prevent workloads from being scheduled onto nodes.
- If you set a taint on all node pools, pods required for normal cluster operation may not run.
- Node pool taints are not applied to existing nodes; a taint is applied only to newly created nodes.
- If you need to apply a taint to an existing node, the user must set it directly with kubectl.
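As noted, taints for existing nodes must be set directly with kubectl. A minimal sketch; `my-node` and the `dedicated=gpu` key/value are placeholders for illustration:

```shell
# Add a taint that blocks scheduling of pods without a matching toleration
kubectl taint nodes my-node dedicated=gpu:NoSchedule

# Remove the taint by appending a minus
kubectl taint nodes my-node dedicated=gpu:NoSchedule-
```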
To set the node pool taint, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster for which you want to set a node pool taint. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the Node Pool Name you want to edit. You will be taken to the Node Pool Details page.
- On the Node Pool Details page, click the Edit icon of Taint. The Edit Taint popup window opens.
- In the Edit Taint popup window, click the Add button to add as many taints as needed.
- Enter the taint information and click the Confirm button.
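For a pod to still be scheduled onto a tainted node, it needs a matching toleration in its spec. A hypothetical example matching a `dedicated=gpu:NoSchedule` taint (names, key/value, and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
  - key: "dedicated"       # must match the taint key
    operator: "Equal"
    value: "gpu"           # must match the taint value
    effect: "NoSchedule"   # must match the taint effect
  containers:
  - name: app
    image: nginx:1.14.2
```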
Advanced Node Pool Settings
Node pool advanced settings is a feature for applying detailed worker-node settings such as the maximum number of pods, PID limits, container logs, and image garbage collection (GC).
Each setting corresponds to the kubelet configuration as follows.
- Maximum pods per node: maxPods
- Image GC high threshold percent: imageGCHighThresholdPercent
- Image GC low threshold percent: imageGCLowThresholdPercent
- Container log maximum size MB: containerLogMaxSize
- Container log maximum file count: containerLogMaxFiles
- Pod PID limit: podPidsLimit
- Unsafe Sysctl allowed: allowedUnsafeSysctls
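For reference, the console items above map to fields of the upstream Kubernetes KubeletConfiguration object. The fragment below is only an illustration of that mapping; the values shown are example numbers, not service defaults, and Kubernetes Engine applies the settings for you rather than having you edit this file on the node:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110                      # Maximum pods per node
imageGCHighThresholdPercent: 85   # Disk usage (%) that triggers image garbage collection
imageGCLowThresholdPercent: 80    # Disk usage (%) image GC tries to free space down to
containerLogMaxSize: "10Mi"       # Maximum size of a container log file before rotation
containerLogMaxFiles: 5           # Maximum number of rotated log files per container
podPidsLimit: 4096                # Maximum number of PIDs per pod
allowedUnsafeSysctls:             # Unsafe sysctls that pods may request
- "net.core.somaxconn"
```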
To perform advanced settings for the node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster you want to configure node pool advanced settings for. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click Create Node Pool. You will be taken to the Create Node Pool page.
- On the Create Node Pool page, set Advanced Settings to Enable.
- After selecting Use, enter the required information for the items that appear.
- On the Summary tab, after confirming that the required information has been entered correctly, click the Create button.
Delete node pool
If necessary, delete the node pool from the Kubernetes Engine details page.
To delete the node pool, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Cluster menu on the Service Home page. You will be taken to the Cluster List page.
- On the Cluster List page, select the cluster from which you want to delete a node pool. You will be taken to the Cluster Details page.
- On the Cluster Details page, select the Node Pool tab, then click the More button at the far right of the node pool row. In the More menu, click Delete Node Pool.
- In the Delete Node Pool popup window, select the checkbox, enter the name of the node pool to delete, then click the Confirm button.
- The Confirm button is enabled only after you select the checkbox of the node deletion confirmation message.
Check node details
A node is a working machine used in a Kubernetes cluster, containing essential services required to run Pods. Each node is managed by the master components, and depending on the cluster configuration, virtual machines or physical machines can be used as nodes.
After creating the cluster, you can view information such as metadata and object information of the added nodes, and edit the resource file with a YAML editor.
To view detailed information of a node, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Node menu on the Service Home page. You will be taken to the Node List page.
- On the Node List page, select the cluster you want to view from the gear button at the top left, then click the Confirm button.
- Click the node you want to view detailed information for. You will be taken to the Node Details page.
Category Detailed descriptionStatus Display Displays the current status of the node Detailed Information Check the node’s Account information, metadata, and object information YAML Node resources can be edited in the YAML editor - Click the Edit button, modify the resource, then click the Save button to apply changes
- When editing content, click the Diff button to view the changes
Event Check events that occurred on the node Pod Check node’s pod information - Pod (Pod) is the smallest compute unit that can be created, managed, and deployed in Kubernetes Engine
Account Information Check basic information about the Account such as Account name, location, creation date, etc. Metadata Information Check metadata information such as node labels, annotations, taints Object Information Displays the object information of the created node, such as internal IP, machine ID, capacity, resources, etc. - If GPU resources are present, check the number of GPUs in the Capacity > nvidia.com/gpu column
Table. Node Detailed Information Items
2.2 - Manage Namespaces
A namespace is a logical separation unit within a Kubernetes cluster, and it is used to specify access permissions or resource usage limits per namespace.
Create Namespace
To create a namespace, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Namespace menu on the Service Home page. You will be taken to the Namespace List page.
- On the Namespace List page, select the cluster where you want to create a namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
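As a reference for the object information entered in the popup, a Namespace object is a minimal manifest like the following; the name `my-namespace` is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace   # placeholder name; must be unique within the cluster
```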
Check namespace detailed information
You can check the namespace status and detailed information on the namespace detail page.
To view detailed namespace information, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Namespace menu on the Service Home page. Navigate to the Namespace List page.
- On the Namespace List page, select the cluster that the namespace requiring detailed information belongs to from the gear button at the top left, then click Confirm.
- Click on the item you want to view detailed information for on the Namespace List page. You will be taken to the Namespace Details page.
Category Detailed description Status Display Displays the current status of the namespace Namespace Deletion Delete namespace - A namespace containing workloads cannot be deleted. To delete a namespace, all associated workloads must be deleted.
Detailed Information Check the Account information and metadata information of the namespace YAML Namespaces can be edited in the YAML editor - Click the Edit button, modify the namespace, then click the Save button to apply changes
- When editing content, click the Diff button to view the changes
Event Check events that occurred within the namespace Pod Check pod information of the namespace Account Information Check basic information about the Account such as Account name, location, creation date, etc. Metadata Information Check the metadata information of the namespace Table. Namespace detailed information items
Delete namespace
To delete a namespace, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click the Namespace menu on the Service Home page. You will be taken to the Namespace List page.
- On the Namespace List page, select the cluster that the namespace you want to delete belongs to from the gear button at the top left, then click the Confirm button.
- On the Namespace List page, click the namespace you want to delete. You will be taken to the Namespace Details page.
- Click Delete Namespace on the Namespace Details page.
- When the alert confirmation window appears, click the Confirm button.
- Alternatively, after selecting the item you want to delete on the Namespace List page, click Delete to delete the selected namespace.
- A namespace that contains workloads cannot be deleted. To delete the namespace, delete all associated workloads.
2.3 - Manage Workload
A workload is an application that runs on Kubernetes Engine. You can create a namespace and then add or delete workloads. Workloads are created and managed per deployment, pod, stateful set, daemon set, job, and cron job.
Deployments, Pods, StatefulSets, DaemonSets, Jobs, and CronJobs services are set by default to the cluster (namespace) selected when creating the service. Even if you select other items in the list, the default cluster (namespace) setting is retained.
- To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace to change to and click the Confirm button. You can then view the services created in the selected cluster/namespace.
Managing Deployments
A Deployment refers to a resource that provides updates for Pods and ReplicaSets. In workloads, you can create a Deployment and view detailed information or delete it.
Create Deployment
To create a deployment, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click Deployment under the Workload menu on the Service Home page. You will be taken to the Deployment List page.
- On the Deployment List page, select the cluster and namespace from the top-left gear button, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
- The following is an example .yaml file showing the required fields and object spec for creating a deployment. (application/deployment.yaml)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

Code block. Required fields and object Spec for deployment creation
Check deployment detailed information
To view the deployment details, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- From the Service Home page, click Deployment under the Workloads menu. You will be taken to the Deployment List page.
- On the Deployment List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to view detailed information for on the Deployment List page. You will be taken to the Deployment Details page.
- If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
- Click each tab to view service information.
Category Detailed descriptionDelete Deployment Delete deployment Detailed Information Can check detailed information of deployment YAML Deployment resource files can be edited in the YAML editor - Click the Edit button, modify the resource, then click the Save button to apply the changes
- When editing content, click the Diff button to view the changed content
Event Check events that occurred within the deployment Pod Check the pod information of the deployment - Pod (pod) is the smallest computing unit that can be created, managed, and deployed in Kubernetes Engine
Account Information Check basic information about the Account such as Account name, location, creation date, etc. Metadata Information Check the metadata information of the deployment Object Information Check the object information of the deployment Table. Deployment detailed information items
Delete Deployment
To delete the deployment, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Deployment under the Workload menu. You will be taken to the Deployment List page.
- On the Deployment List page, select the cluster and namespace from the top left gear button, then click Confirm.
- Select the item you want to delete on the Deployment List page. You will be taken to the Deployment Details page.
- Click Delete Deployment on the Deployment Details page.
- When the alert confirmation window appears, click the Confirm button.
Managing Pods
A pod (Pod) is the smallest computing unit that can be created, managed, and deployed in Kubernetes, referring to a group of one or more containers. In a workload, you can create a pod and view detailed information or delete it.
Create a pod
To create a pod, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Pod under the Workload menu. You will be taken to the Pod List page.
- On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
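As a reference for the object information entered in the popup, a minimal Pod manifest might look like the following; the names and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod      # placeholder pod name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```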
Check pod detailed information
To check the detailed pod information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click Pod under the Workload menu on the Service Home page. You will be taken to the Pod List page.
- On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to view detailed information for on the Pod List page. You will be taken to the Pod Details page.
- If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
- Click each tab to view service information.
Category Detailed descriptionStatus Display Displays the current status of the pod Delete Pod Delete pod Detailed Information Can view detailed information of the pod YAML Pod resource files can be edited in the YAML editor - Edit button click and modify the resource, then click the Save button to apply changes
- When editing content, click the Diff button to view the changed content
Event Check events that occurred within the pod Log When you select a container, you can view the container information that the pod has Account information Check basic information about the Account such as Account name, location, creation date and time Metadata Information Check the pod’s metadata information Object Information Check the pod’s object information Init Container Information Check the init container information of the pod Container Information Check the pod’s container information Table. Pod detailed information items
Delete Pod
To delete a pod, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click Pod under the Workload menu on the Service Home page. You will be taken to the Pod List page.
- On the Pod List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Pod List page, select the item you want to delete. You will be taken to the Pod Details page.
- Click Delete Pod on the Pod Details page.
- When the notification confirmation window appears, click the Confirm button.
Managing StatefulSet
StatefulSet refers to a workload API object used to manage the stateful aspects of an application. In a workload, you can create a StatefulSet and view detailed information or delete it.
Creating a StatefulSet
To create a StatefulSet, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- Click StatefulSet under the Workload menu on the Service Home page. You will be taken to the StatefulSet List page.
- On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
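As a reference for the object information entered in the popup, a minimal StatefulSet manifest might look like the following. Names and image are illustrative, and `serviceName` must name a governing (headless) Service, assumed here to exist as `nginx`:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web            # placeholder StatefulSet name
spec:
  serviceName: "nginx" # governing headless Service, assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```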
Check detailed information of StatefulSet
To view the detailed information of the StatefulSet, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. You will be taken to the Service Home page of Kubernetes Engine.
- On the Service Home page, click StatefulSet under the Workload menu. You will be taken to the StatefulSet List page.
- On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to view detailed information for on the StatefulSet List page. You will be taken to the StatefulSet Details page.
- If you select Show System Objects at the top of the list, items other than the Kubernetes object entries will be displayed.
- Click each tab to view the service information.
| Category | Detailed Description |
|---|---|
| Delete StatefulSet | Delete the StatefulSet |
| Detailed Information | Check detailed information of the StatefulSet |
| YAML | Edit the StatefulSet resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the StatefulSet |
| Pod | Check the pod information of the StatefulSet |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the metadata information of the StatefulSet |
| Object Information | Check the object information of the StatefulSet |

Table. StatefulSet detailed information items
Delete StatefulSet
To delete a StatefulSet, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click StatefulSet under the Workload menu. Navigate to the StatefulSet List page.
- On the StatefulSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to delete on the StatefulSet List page. Go to the StatefulSet Details page.
- Click Delete StatefulSet on the StatefulSet Details page.
- If the notification confirmation window appears, click the Confirm button.
Managing DaemonSets
DaemonSet refers to a resource that ensures that a copy of a pod runs on all nodes or some nodes. In workloads, you can create a DaemonSet and view detailed information or delete it.
Creating a DaemonSet
To create a DaemonSet, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click DaemonSet under the Workload menu. You will be taken to the DaemonSet List page.
- On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
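As a minimal sketch of the manifest you might enter for a DaemonSet (the name, labels, and image below are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent          # placeholder name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluent-bit:2.2   # placeholder image
```

Because a DaemonSet has no replica count, the scheduler runs one copy of this pod on every eligible node automatically.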
Check DaemonSet detailed information
To view the detailed information of the DaemonSet, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click DaemonSet under the Workload menu on the Service Home page. You will be taken to the DaemonSet List page.
- On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the DaemonSet List page, select the item you want to view detailed information for. You will be taken to the DaemonSet Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view the service information.
| Category | Detailed Description |
|---|---|
| Delete DaemonSet | Delete the DaemonSet |
| Detailed Information | Check detailed information of the DaemonSet |
| YAML | Edit the DaemonSet resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the DaemonSet |
| Pod | Check the pod information of the DaemonSet |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the metadata information of the DaemonSet |
| Object Information | Check the object information of the DaemonSet |

Table. DaemonSet detailed information items
Delete DaemonSet
To delete a DaemonSet, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click DaemonSet under the Workload menu on the Service Home page. Navigate to the DaemonSet List page.
- On the DaemonSet List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the DaemonSet List page, select the item you want to delete. You will be taken to the DaemonSet Details page.
- Click Delete DaemonSet on the DaemonSet Details page.
- When the notification confirmation window appears, click the Confirm button.
Job Management
A job refers to a resource that creates one or more pods and continues to run pods until the specified number of pods have successfully terminated. In a workload, you can create a job and view detailed information or delete it.
Create Job
To create a job, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
- Click Job under the Workload menu on the Service Home page. You will be taken to the Job List page.
- On the Job List page, select the cluster and namespace from the top left gear button, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
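As a minimal sketch of a Job manifest (the name and command are placeholders; this example simply computes pi and exits):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                 # placeholder name
spec:
  completions: 1           # number of pods that must terminate successfully
  backoffLimit: 4          # retries before the Job is marked failed
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pi
          image: perl:5.34
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```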
Check job details
To view detailed job information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click Job under the Workload menu on the Service Home page. Navigate to the Job List page.
- On the Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Job List page, select the item for which you want to view detailed information. You will be taken to the Job Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Delete Job | Delete the job |
| Detailed Information | Check detailed information of the job |
| YAML | Edit the job resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the job |
| Pod | Check the pod information of the job |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the job's metadata information |
| Object Information | Check the job's object information |

Table. Job detailed information items
Delete Job
To delete a job, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
- Click Job under the Workload menu on the Service Home page. You will be taken to the Job List page.
- On the Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Job List page, select the item you want to delete. Go to the Job Details page.
- Click Delete Job on the Job Details page.
- When the notification confirmation window appears, click the Confirm button.
Managing Cron Jobs
Cron jobs refer to resources that periodically execute a job according to a schedule written in cron format. They can be used to run repetitive tasks at regular intervals such as backups, report generation, etc. In the workload, you can create a cron job and view or delete its detailed information.
Create Cron Job
To create a cron job, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click CronJob under the Workload menu on the Service Home page. You will be taken to the CronJob List page.
- On the CronJob List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
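A cron job wraps a Job template in a cron-format schedule. As a minimal sketch (the name, schedule, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup     # placeholder name
spec:
  schedule: "0 2 * * *"    # cron format: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: busybox:1.36
              command: ["sh", "-c", "echo running backup"]  # placeholder task
```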
Check Cron Job Detailed Information
To check the detailed information of the cron job, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click Cron Job under the Workload menu on the Service Home page. You will be taken to the Cron Job List page.
- On the Cron Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Cron Job List page, select the item you want to view detailed information for. You will be taken to the Cron Job Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Delete Cron Job | Delete the cron job |
| Detailed Information | Check detailed information of the cron job |
| YAML | Edit the cron job resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the cron job |
| Job | Check the job information of the cron job. Selecting a job item moves to the job detail page |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the metadata information of the cron job |
| Object Information | Check the object information of the cron job |

Table. Cron job detailed information items
Delete Cron Job
To delete a cron job, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click Cron Job under the Workload menu on the Service Home page. You will be taken to the Cron Job List page.
- On the Cron Job List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Cron Job List page, select the item you want to delete. Navigate to the Cron Job Details page.
- Click Delete Cron Job on the Cron Job Details page.
- When the notification confirmation window appears, click the Confirm button.
2.4 - Service and Ingress Management
A service is an abstraction method that exposes applications running in a set of pods as a network service, and an ingress is used to expose HTTP and HTTPS paths from outside the cluster to inside the cluster. After creating a namespace, you can create or delete services, endpoints, ingresses, and ingress classes.
The service, endpoint, ingress, and ingress class lists are set by default to the cluster (namespace) selected when the service was created. Even if you select other items in the list, the default cluster (namespace) setting is retained.
- To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the services created in the selected cluster/namespace.
Service Management
You can create a service and view or delete its detailed information.
Create Service
To create a service, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click Service under the Service and Ingress menu on the Service Home page. You will be taken to the Service List page.
- On the Service List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
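As a minimal sketch of a Service manifest (the name, selector labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # placeholder name
spec:
  selector:
    app: web               # routes traffic to pods labeled app=web
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the container listens on
  type: ClusterIP          # internal-only; use LoadBalancer to expose externally
```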
Check service detailed information
To view detailed service information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
- On the Service List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Service List page, select the item for which you want to view detailed information. You will be taken to the Service Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Delete Service | Delete the service |
| Detailed Information | Check detailed service information |
| YAML | Edit the service resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the service |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the service's metadata information |
| Object Information | Check the service's object information |

Table. Service detailed information items
Delete Service
To delete a service, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Service under the Service and Ingress menu. You will be taken to the Service List page.
- On the Service List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Service List page, select the item you want to delete. You will be taken to the Service Details page.
- Click Delete Service on the Service Details page.
- When the notification confirmation window appears, click the Confirm button.
Manage Endpoints
You can create an endpoint and view or delete its detailed information.
Create Endpoint
To create an endpoint, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click Endpoint under the Service and Ingress menu on the Service Home page. Navigate to the Endpoint List page.
- On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
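An Endpoints object maps a Service to backend addresses, which is useful for pointing a Service at targets outside the cluster. As a minimal sketch (the name must match the Service it backs; the IP and port below are placeholders):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db        # must match the name of the Service it backs
subsets:
  - addresses:
      - ip: 10.0.0.10      # placeholder backend address
    ports:
      - port: 5432         # placeholder backend port
```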
Check endpoint detailed information
To view detailed endpoint information, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Endpoint under the Service and Ingress menu. Navigate to the Endpoint List page.
- On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Endpoint List page, select the item you want to view detailed information for. You will be taken to the Endpoint Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Delete Endpoint | Delete the endpoint |
| Detailed Information | Check detailed information of the endpoint |
| YAML | Edit the endpoint resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the endpoint |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the metadata information of the endpoint |
| Object Information | Check the endpoint's object information |

Table. Endpoint detailed information items
Delete Endpoint
To delete the endpoint, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Endpoint under the Service and Ingress menu. You will be taken to the Endpoint List page.
- On the Endpoint List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Endpoint List page, select the item you want to delete. Navigate to the Endpoint Details page.
- Click Delete Endpoint on the Endpoint Details page.
- When the notification confirmation window appears, click the Confirm button.
Manage Ingress
Ingress is an API object that manages external access (HTTP, HTTPS) to services within Kubernetes Engine. It is used to expose workloads externally and provides L7 load balancing.
Create Ingress
To create an ingress, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Ingress under the Service and Ingress menu. Go to the Ingress List page.
- On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
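As a minimal sketch of an Ingress manifest that routes HTTP traffic to a backend Service (the host, class, and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress        # placeholder name
spec:
  ingressClassName: nginx  # placeholder; must match an IngressClass in the cluster
  rules:
    - host: example.com    # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc   # placeholder Service name
                port:
                  number: 80
```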
Check Ingress Detailed Information
To view the ingress detailed information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click Ingress under the Service and Ingress menu on the Service Home page. Navigate to the Ingress List page.
- On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to view detailed information for on the Ingress List page. You will be taken to the Ingress Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Delete Ingress | Delete the ingress |
| Detailed Information | Check detailed information of the ingress |
| YAML | Edit the ingress resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the ingress |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the metadata information of the ingress |
| Object Information | Check the object information of the ingress |

Table. Ingress detailed information items
Delete Ingress
To delete an ingress, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Ingress under the Service and Ingress menu. Navigate to the Ingress List page.
- On the Ingress List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Ingress List page, select the item you want to delete. Go to the Ingress Details page.
- Click Delete Ingress on the Ingress Details page.
- When the notification confirmation window appears, click the Confirm button.
Manage Ingress Class
IngressClass is an API resource that allows multiple ingress controllers to be used in a single cluster. Each ingress references an IngressClass resource, which contains additional configuration, including the name of the controller that should implement the class.
Create Ingress Class
To create an Ingress class, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
- Click IngressClass under the Service and Ingress menu on the Service Home page. Go to the IngressClass List page.
- On the IngressClass List page, select the cluster and namespace from the top-left gear button, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
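As a minimal sketch of an IngressClass manifest (the class name is a placeholder, and the controller value depends on which ingress controller is installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx              # placeholder class name, referenced by ingressClassName
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"  # optional: make this the default class
spec:
  controller: k8s.io/ingress-nginx   # controller that implements this class
```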
Check Ingress Class Detailed Information
To view detailed information of the Ingress class, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click IngressClass under the Service and Ingress menu. Navigate to the IngressClass List page.
- On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the IngressClass List page, select the item for which you want to view detailed information. You will be taken to the IngressClass Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Delete Ingress Class | Delete the ingress class |
| Detailed Information | Check detailed information of the ingress class |
| YAML | Edit the ingress class resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the ingress class |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the metadata information of the ingress class |
| Object Information | Check the object information of the ingress class |

Table. Ingress class detailed information items
Delete Ingress Class
To delete an ingress class, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click IngressClass under the Service and Ingress menu. Navigate to the IngressClass List page.
- On the IngressClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to delete on the IngressClass List page. You will be taken to the IngressClass Details page.
- Click Delete Ingress Class on the IngressClass Details page.
- When the notification confirmation window appears, click the Confirm button.
2.5 - Storage Management
You can create and manage storage to use when using Kubernetes Engine. Storage is created and then managed for each of PVC, PV, and StorageClass items.
The PVC, PV, and storage class lists are set by default to the cluster (namespace) selected when the service was created. Even if you select other items in the list, the default cluster (namespace) setting is retained.
- To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can view the services created in the selected cluster/namespace.
The items linked by storage type are as follows.
| Type | Detailed Description |
|---|---|
| Block Storage | Supports a storage class that uses the product's volume in conjunction with the Block Storage product within Virtual Server |
| Object Storage | Can be linked with Samsung Cloud Platform products or external Object Storage |
| File Storage | Supports storage classes of NFS and CIFS protocol volumes in conjunction with the File Storage product |
Manage PVC
Persistent Volume Claim (PVC) is an object defined to allocate the required storage capacity. PVC provides high usability through abstraction and prevents data from being lost when the container lifecycle ends (maintaining data persistence).
Create PVC
To create a PVC, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click PVC under the Storage menu. Navigate to the PVC List page.
- On the PVC List page, after selecting the cluster and namespace from the top left gear button, click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
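As a minimal sketch of a PVC manifest that uses the default nfs-subdir-external-sc storage class described below (the name and size are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc           # placeholder name
spec:
  accessModes:
    - ReadWriteMany        # RWX, the mode supported by nfs-subdir-external-sc
  storageClassName: nfs-subdir-external-sc
  resources:
    requests:
      storage: 10Gi        # placeholder capacity
```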
Check PVC detailed information
To check the detailed PVC information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Go to the Service Home page of Kubernetes Engine.
- Click PVC under the Storage menu on the Service Home page. You will be taken to the PVC List page.
- On the PVC List page, select the cluster and namespace from the top left gear button, then click Confirm.
- Select the item you want to view detailed information for on the PVC List page. You will be taken to the PVC Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Status Display | Displays the current status of the PVC (Bound: normal connection) |
| Delete PVC | Delete the PVC |
| Detailed Information | Check detailed information of the PVC |
| YAML | Edit the PVC resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the PVC |
| Account Information | Check basic information about the Account such as Account name, location, creation date, etc. |
| Metadata Information | Check the PVC's metadata information |
| Object Information | Check the PVC's object information |

Table. PVC detailed information items
Delete PVC
To delete PVC, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click PVC under the Storage menu on the Service Home page. Navigate to the PVC List page.
- On the PVC List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the PVC List page, select the item you want to delete. Navigate to the PVC Details page.
- Click Delete PVC on the PVC Details page.
- When the notification confirmation window appears, click the Confirm button.
After selecting the item you want to delete on the PVC list page, you can delete the selected PVC by clicking Delete.
- Check the backup status of the PV and volume to be deleted before deleting the PVC.
PV Management
Persistent Volume (PV) refers to the physical disk created by the system administrator in Kubernetes Engine.
Create PV
To create a PV, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click PV under the Storage menu on the Service Home page. Navigate to the PV List page.
- On the PV List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
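As a minimal sketch of a PV manifest backed by an NFS export (the name, server address, and path are placeholders; in practice PVs are usually provisioned automatically by a storage class):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv             # placeholder name
spec:
  capacity:
    storage: 10Gi          # placeholder capacity
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain  # keep data when the claim is deleted
  nfs:
    server: 10.0.0.20      # placeholder NFS server address
    path: /exports/data    # placeholder export path
```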
Check PV detailed information
To view the detailed PV information, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click PV under the Storage menu. Navigate to the PV List page.
- On the PV List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- Select the item you want to view detailed information for on the PV List page. You will be taken to the PV Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.
| Category | Detailed Description |
|---|---|
| Status Display | Displays the current status of the PV (Bound: normal connection) |
| Delete PV | Delete the PV |
| Detailed Information | Check detailed information of the PV |
| YAML | Edit the PV resource file in the YAML editor. Click the Edit button, modify the resource, then click the Save button to apply the changes. While editing, click the Diff button to view the changes |
| Event | Check events that occurred within the PV |
| Account Information | Check basic information about the Account such as Account name, location, creation date and time |
| Metadata Information | Check the PV's metadata information |
| Object Information | Check the PV's object information |

Table. PV detailed information items
Delete PV
To delete a PV, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- Click PV under the Storage menu on the Service Home page. You will be taken to the PV List page.
- On the PV List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the PV List page, select the item you want to delete. Go to the PV Details page.
- Click Delete PV on the PV Details page.
- When the notification confirmation window appears, click the Confirm button.
Managing StorageClass
Storage Class is a Kubernetes resource that defines the level of storage type, performance, etc.
Kubernetes Engine provides the nfs-subdir-external-sc and bs-sc storage classes by default, which have the following features.
- The nfs-subdir-external-sc storage class shares the file storage connected to the cluster.
- Access mode: RWX - ReadWriteMany
- Reclaim policy: Delete (when PVC is deleted, delete PV and stored data together), Retain (when PVC is deleted, keep PV and stored data)
- Capacity expansion: individual PVC expansion not allowed/entire file storage expansion allowed
- The bs-sc storage class supports using SSD-type volumes in conjunction with block storage products.
- Access mode: RWO - ReadWriteOnce
- Reclaim policy: Delete(when PVC is deleted, delete PV and stored data together), Retain(when PVC is deleted, retain PV and stored data)
- Capacity expansion support: individual PVC expansion support (automatic volume expansion in 8 Gi increments)
Predefined Storage Class
| Storage Class | Reclaim Policy* | Volume Expansion Allowed** | Mount Options | Remarks |
|---|---|---|---|---|
| nfs-subdir-external-sc (default) | Delete | Not supported | nfsvers=3, noresvport | Linked with default Volume (NFS) settings |
| nfs-subdir-external-sc-retain | Retain | Not supported | nfsvers=3, noresvport | Linked with default Volume (NFS) settings |
| bs-sc | Delete | Support | - | VirtualServer > BlockStorage product integration |
| bs-sc-retain | Retain | Support | - | VirtualServer > BlockStorage product integration |
- (*) To use a storage class other than the default, you need to specify the storage class name in PVC’s spec.storageClassName
- (**) User can directly change the default storage class (by adjusting the storageclass.kubernetes.io/is-default-class: "true" annotation)
The features of the reclaim policy are as follows.
- Delete: If you delete the PVC, the associated PV and physical data will also be deleted.
- Retain: Even if the PVC is deleted, the corresponding PV and physical data are not deleted and are retained. Since physical data not used by the workload may remain in storage, careful capacity management is required.
Consider the following when using volume expansion.
- nfs-subdir-external-sc storage class
- Cannot adjust the capacity of PVC. (Volume expansion not supported)
- All PVs share the total capacity of the File Storage volume, so volume expansion for each PVC is not required.
- bs-sc storage class
- You can expand the PVC capacity. (Shrink function not supported)
- The capacity of the PV is not guaranteed to be as much as requested by the PVC. (Supports expansion in 8 Gi units)
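The expansion behavior above can be illustrated with a PVC that uses the bs-sc storage class. To expand it, raise the storage request in the YAML editor; the name and sizes below are examples only:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce        # RWO, the mode supported by bs-sc
  storageClassName: bs-sc
  resources:
    requests:
      storage: 20Gi        # raise this value to expand; the volume grows in 8 Gi units
```

Shrinking the request is not supported, so the value can only be increased.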
Create StorageClass
To create a storage class, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click StorageClass under the Storage menu. Navigate to the StorageClass List page.
- On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation Popup, enter the object information and click the Confirm button.
Reference: For detailed information on the concept of storage classes and object creation, refer to the Kubernetes official documentation > Storage Class.
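As a sketch of the kind of object you might define here (the class name is a placeholder and the provisioner is hypothetical; the actual provisioner to use depends on the storage backend integrated with your cluster):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-sc                        # placeholder class name
provisioner: example.com/provisioner   # hypothetical; must match an installed provisioner
reclaimPolicy: Retain                  # keep PV and data when the PVC is deleted
allowVolumeExpansion: true             # permit PVC capacity increases
mountOptions:
  - noresvport                         # example mount option
```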
Check storage class detailed information
To view detailed storage class information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click StorageClass under the Storage menu. Navigate to the StorageClass List page.
- On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the StorageClass List page, select the item you want to view detailed information for. Navigate to the StorageClass Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete StorageClass | Delete the storage class |
| Detailed Information | View detailed information of the storage class |
| YAML | Edit the resource file of the storage class in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the storage class |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the storage class |
| Object Information | Check the object information of the storage class |

Table. StorageClass detailed information items
Delete StorageClass
To delete the storage class, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click StorageClass under the Storage menu. Navigate to the StorageClass List page.
- On the StorageClass List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the StorageClass List page, select the item you want to delete. Navigate to the StorageClass Details page.
- Click Delete StorageClass on the StorageClass Details page.
- When the notification confirmation window appears, click the Confirm button.

Caution: On the StorageClass List page, you can also select the item you want to delete and click Delete to delete the selected storage class.
2.6 - Configuration Management
When values inside a container change across environments such as development and production, building and managing a separate image for each set of environment variables is inconvenient and wasteful. In Kubernetes, you can manage environment variables and configuration values as external variables that are injected when a Pod is created, using ConfigMap and Secret.
The ConfigMap and Secret services default to the cluster (namespace) selected when the service was created. Even if you select other items in the list, the default cluster (namespace) setting is retained.
- To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can then view the ConfigMaps and Secrets created in the selected cluster/namespace.
Manage ConfigMap
You can write and manage the Config information used in the namespace as a ConfigMap.
Create ConfigMap
To create a ConfigMap, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click ConfigMap under the Configuration menu. Navigate to the ConfigMap List page.
- On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
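For reference, a minimal ConfigMap manifest that could be entered in the Object Creation popup's YAML editor looks like the following; all names and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # illustrative name
  namespace: default
data:
  LOG_LEVEL: "info"         # simple key/value entry
  app.properties: |         # file-style entry that can be mounted into a Pod
    greeting=hello
```

A Pod can then consume these values, for example as environment variables via envFrom with a configMapRef pointing at app-config, or as files via a volume of type configMap.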
Check ConfigMap detailed information
To view detailed ConfigMap information, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click ConfigMap under the Configuration menu. Navigate to the ConfigMap List page.
- On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the ConfigMap List page, select the item you want to view detailed information for. Navigate to the ConfigMap Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete ConfigMap | Delete the ConfigMap |
| Detailed Information | View detailed information of the ConfigMap |
| YAML | Edit the resource file of the ConfigMap in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the ConfigMap |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the ConfigMap |
| Object Information | Check the object information of the ConfigMap<br>- In Data, rows are separated by - - -, and each value is displayed in a textarea<br>- For Binary Data, the length of the value is displayed |

Table. ConfigMap detailed information items
Delete ConfigMap
To delete a ConfigMap, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click ConfigMap under the Configuration menu. Navigate to the ConfigMap List page.
- On the ConfigMap List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the ConfigMap List page, select the item you want to delete. Navigate to the ConfigMap Details page.
- Click Delete ConfigMap on the ConfigMap Details page.
- When the notification confirmation window appears, click the Confirm button.
Manage Secrets
By using secrets, you can securely store and manage sensitive information such as passwords, OAuth tokens, and SSH keys.
Create Secret
To create a secret, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Secret under the Configuration menu. Navigate to the Secret List page.
- On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
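For reference, a minimal Secret manifest of type Opaque looks like the following; the name and values are illustrative. The stringData field accepts plain text, which the API server stores base64-encoded under data:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # illustrative name
  namespace: default
type: Opaque
stringData:                 # plain-text input; stored base64-encoded under data
  username: app-user
  password: change-me
```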
Check Secret Detailed Information
To view the secret detailed information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Secret under the Configuration menu. Navigate to the Secret List page.
- On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Secret List page, select the item you want to view detailed information for. Navigate to the Secret Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete Secret | Delete the secret |
| Detailed Information | View detailed information of the secret |
| YAML | Edit the resource file of the secret in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the secret |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the secret |
| Object Information | Check the object information of the secret |

Table. Secret detailed information items
Delete Secret
To delete the secret, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Secret under the Configuration menu. Navigate to the Secret List page.
- On the Secret List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Secret List page, select the item you want to delete. Navigate to the Secret Details page.
- Click Delete Secret on the Secret Details page.
- When the notification confirmation window appears, click the Confirm button.
2.7 - Manage Permissions
Kubernetes clusters can be accessed by multiple users, and you can define each user's access scope by assigning permissions per API or namespace. Using Kubernetes' role-based access control (RBAC) feature, you can set permissions per cluster or namespace, and create and manage cluster roles, cluster role bindings, roles, and role bindings.
The ClusterRole, ClusterRoleBinding, Role, and RoleBinding services default to the cluster (namespace) selected when the service was created. Even if you select other items in the list, the default cluster (namespace) setting is retained.
- To select a different cluster (namespace), click the gear button on the right side of the list. In the Cluster/Namespace Settings popup, select the cluster and namespace you want to change and click the Confirm button. You can then view the services created in the selected cluster/namespace.
- RBAC API declares the following four types of Kubernetes objects.
- Role
- ClusterRole
- RoleBinding
- ClusterRoleBinding
- For a detailed explanation of RBAC, refer to the Kubernetes authentication and authorization documentation. (https://kubernetes.io/docs/reference/access-authn-authz/authentication/)
Managing Cluster Role
You can set and manage access permissions on a per-cluster basis. You can also set permissions for APIs or resources that are not limited to a namespace.
Create Cluster Role
To create a cluster role, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Cluster Role under the Permissions menu. Navigate to the Cluster Role List page.
- On the Cluster Role List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
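As a sketch, a ClusterRole granting read access to a cluster-scoped resource such as namespaces could look like the following; the role name is illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader    # illustrative name; a ClusterRole has no namespace field
rules:
- apiGroups: [""]
  resources: ["namespaces"]        # a cluster-scoped resource
  verbs: ["get", "list", "watch"]
```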
Check detailed information of cluster role
To view detailed information about the cluster role, follow these steps.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Cluster Role under the Permissions menu. Navigate to the Cluster Role List page.
- On the Cluster Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Cluster Role List page, select the item you want to view detailed information for. Navigate to the Cluster Role Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete Cluster Role | Delete the cluster role |
| Detailed Information | View detailed information of the cluster role |
| YAML | Edit the resource file of the cluster role in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the cluster role |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the cluster role |
| Policy Rule Information | View the policy rule information of the cluster role<br>- Resources: List of resources to which the rule applies<br>- Non-Resource URLs: The set of partial URLs that a user needs to access; * is allowed, but only as the final segment of the path. Since non-resource URLs are not namespaced, this field applies only to ClusterRoles referenced by a ClusterRoleBinding. A rule can apply to API resources (e.g., "pods" or "secrets") or non-resource URL paths (e.g., "/api"), but not both<br>- Resource Names: An optional whitelist of names to which the rule applies; an empty set means everything is allowed<br>- Verbs: The API verbs used in resource requests, such as get, list, create, update, patch, watch, delete, and deletecollection. For more details, refer to the Kubernetes official documentation > API Verbs |

Table. Cluster role detailed information items
Delete ClusterRole
To delete the cluster role, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Cluster Role under the Permissions menu. Navigate to the Cluster Role List page.
- On the Cluster Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Cluster Role List page, select the item you want to delete. Navigate to the Cluster Role Details page.
- Click Delete Cluster Role on the Cluster Role Details page.
- When the notification confirmation window appears, click the Confirm button.
Managing ClusterRoleBinding
You can create and manage a cluster role binding by connecting a cluster role with a specific target.
Create Cluster Role Binding
To create a cluster role binding, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Cluster Role Binding under the Permissions menu. Navigate to the Cluster Role Binding List page.
- On the Cluster Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
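As a sketch, a cluster role binding that connects a cluster role to an IAM user group could look like the following; the binding name, role name, and group name are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-reader-binding   # illustrative name
roleRef:
  kind: ClusterRole
  name: namespace-reader           # illustrative ClusterRole to bind
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ExampleGroup               # illustrative IAM user group name
  apiGroup: rbac.authorization.k8s.io
```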
Check detailed information of ClusterRoleBinding
To check the detailed information of cluster role binding, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Cluster Role Binding under the Permissions menu. Navigate to the Cluster Role Binding List page.
- On the Cluster Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Cluster Role Binding List page, select the item you want to view detailed information for. Navigate to the Cluster Role Binding Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete Cluster Role Binding | Delete the cluster role binding |
| Detailed Information | View detailed information of the cluster role binding |
| YAML | Edit the resource file of the cluster role binding in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the cluster role binding |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the cluster role binding |
| Role/Target Information | Check the role and target information of the cluster role binding |

Table. Cluster role binding detailed information items
Delete Cluster Role Binding
To delete the cluster role binding, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Cluster Role Binding under the Permissions menu. Navigate to the Cluster Role Binding List page.
- On the Cluster Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Cluster Role Binding List page, select the item you want to delete. Navigate to the Cluster Role Binding Details page.
- Click Delete Cluster Role Binding on the Cluster Role Binding Details page.
- When the notification confirmation window appears, click the Confirm button.
Manage Role
A role is a rule that specifies permissions for a specific API or resource. You can create and manage permissions that are scoped to the namespace to which the role belongs.
Create Role
To create a role, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Role under the Permissions menu. Navigate to the Role List page.
- On the Role List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
Check role detailed information
To view detailed role information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Role under the Permissions menu. Navigate to the Role List page.
- On the Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Role List page, select the item you want to view detailed information for. Navigate to the Role Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete Role | Delete the role |
| Detailed Information | View detailed information of the role |
| YAML | Edit the resource file of the role in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the role |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the role |
| Policy Rule Information | View the policy rule information of the role<br>- Resources: List of resources to which the rule applies<br>- Non-Resource URLs: The set of partial URLs that a user needs to access; * is allowed, but only as the final segment of the path. Since non-resource URLs are not namespaced, this field applies only to ClusterRoles referenced by a ClusterRoleBinding. A rule can apply to API resources (e.g., "pods" or "secrets") or non-resource URL paths (e.g., "/api"), but not both<br>- Resource Names: An optional whitelist of names to which the rule applies; an empty set means everything is allowed<br>- Verbs: The API verbs used in resource requests, such as get, list, create, update, patch, watch, delete, and deletecollection. For more details, refer to the Kubernetes official documentation > API Verbs |

Table. Role detailed information items
Delete Role
To delete a role, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Role under the Permissions menu. Navigate to the Role List page.
- On the Role List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Role List page, select the item you want to delete. Navigate to the Role Details page.
- Click Delete Role on the Role Details page.
- When the notification confirmation window appears, click the Confirm button.
Manage Role Binding
You can connect a role with a specific target to create and manage role bindings.
Create Role Binding
To create a role binding, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Role Binding under the Permissions menu. Navigate to the Role Binding List page.
- On the Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Create Object.
- In the Object Creation popup, enter the object information and click the Confirm button.
Check Role Binding Detailed Information
To view detailed role binding information, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Role Binding under the Permissions menu. Navigate to the Role Binding List page.
- On the Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Role Binding List page, select the item you want to view detailed information for. Navigate to the Role Binding Details page.
- If you select Show System Objects at the top of the list, system objects are displayed in addition to the user-created Kubernetes objects.
- Click each tab to view service information.

| Category | Detailed description |
|---|---|
| Delete Role Binding | Delete the role binding |
| Detailed Information | View detailed information of the role binding |
| YAML | Edit the resource file of the role binding in the YAML editor<br>- Click the Edit button, modify the resource, then click the Save button to apply the changes<br>- When editing content, click the Diff button to view the changes |
| Event | Check events that occurred within the role binding |
| Account Information | Check basic information about the account, such as account name, location, and creation date |
| Metadata Information | Check the metadata information of the role binding |
| Role/Target Information | Check the role and target information of the role binding |

Table. Role binding detailed information items
Delete Role Binding
To delete a role binding, follow the steps below.
- Click the All Services > Container > Kubernetes Engine menu. Navigate to the Service Home page of Kubernetes Engine.
- On the Service Home page, click Role Binding under the Permissions menu. Navigate to the Role Binding List page.
- On the Role Binding List page, select the cluster and namespace from the gear button at the top left, then click Confirm.
- On the Role Binding List page, select the item you want to delete. Navigate to the Role Binding Details page.
- Click Delete Role Binding on the Role Binding Details page.
- When the notification confirmation window appears, click the Confirm button.
3 - Using Kubernetes Engine
Configure external network communication to expose HTTP and HTTPS services from the cluster to the outside. To configure external network communication, you can create a service of type LoadBalancer.
Using Kubernetes Engine Guide
The Using Kubernetes Engine guide describes the following features. For more information, refer to the corresponding guide.
| Guide | Description |
|---|---|
| Creating a LoadBalancer Service | Instructions on how to create a LoadBalancer-type service through a service manifest file |
3.1 - Authentication and Authorization
Kubernetes Engine has Kubernetes’ authentication and RBAC authorization features applied. This explains the authentication and authorization features of Kubernetes and how to link them with Kubernetes Engine and IAM.
Kubernetes Authentication and Authorization
This explains the authentication and RBAC authorization features of Kubernetes.
Authentication
The Kubernetes API server acquires the necessary information for user or account authentication from certificates or authentication tokens and proceeds with the authentication process.
Authorization
The Kubernetes API server checks if the user has permission for the requested action using the user information obtained through the authentication process and the RBAC-related objects. There are four types of RBAC-related objects as follows:
| Object | Scope | Description |
|---|---|---|
| ClusterRole | Cluster-wide | Definition of permissions across all namespaces in the cluster |
| ClusterRoleBinding | Cluster-wide | Binding definition between ClusterRole and user |
| Role | Namespace | Definition of permissions for a specific namespace |
| RoleBinding | Namespace | Binding definition between ClusterRole or Role and user |
Role
Kubernetes has several predefined ClusterRoles. Some of these ClusterRoles do not have the prefix system:, which means they are intended for user use. These include the cluster-admin role that can be applied to the entire cluster using ClusterRoleBinding, and the admin, edit, and view roles that can be applied to a specific namespace using RoleBinding.
| Default ClusterRole | Default ClusterRoleBinding | Description |
|---|---|---|
| cluster-admin | system:masters group | Grants superuser access to perform all actions on all resources. |
| admin | None | Grants administrator access to the namespace when used with RoleBinding. When used in RoleBinding, it grants read/write access to most resources in the namespace, including the ability to create roles and role bindings. However, this role does not grant write access to resource quotas or the namespace itself. |
| edit | None | Grants read/write access to most objects in the namespace. This role does not grant the ability to view or modify roles and role bindings. However, this role allows access to secrets, which can be used to run pods in the namespace as any account, effectively granting API access at the account level. |
| view | None | Grants read-only access to most objects in the namespace. Roles and role bindings cannot be viewed. This role does not grant access to secrets, as reading secret contents would allow access to account credentials and potentially grant API access at the account level (a form of privilege escalation). |
In addition to the predefined ClusterRoles, you can define separate roles (or ClusterRoles) as needed. For example:
# Role that grants permission to view pods in the "default" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

# ClusterRole that grants permission to view nodes
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-viewer
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]

Role Binding
To manage access to the Kubernetes Engine using Samsung Cloud Platform IAM, you need to understand the relationship between Kubernetes’ role binding and IAM. The target (subjects) of role binding (or cluster role binding) can include individual users (User) or groups (Group).
- User matches the Samsung Cloud Platform username, and Group matches the IAM user group name.
For role binding/cluster role binding, subjects.kind can be one of the following:
- User: Binds to a Samsung Cloud Platform individual user.
- Group: Binds to a Samsung Cloud Platform IAM user group.
The subjects.name of a role binding/cluster role binding can be specified as follows:
- For User: the Samsung Cloud Platform individual username (e.g., jane.doe)
- For Group: the Samsung Cloud Platform IAM user group name (e.g., ReadPodsGroup)
In this way, an IAM user group is bound to a role binding (or cluster role binding) written in the Kubernetes Engine cluster. Additionally, the permission to perform API operations included in the role (or cluster role) bound to the group is granted.
Example) Role Binding read-pods #1
An example of writing a User (Samsung Cloud Platform individual user) to a role binding is as follows:
# This role binding allows the user "jane.doe" to view pods in the "default" namespace.
# A "pod-reader" role must exist in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  # "roleRef" specifies the binding to a role or cluster role.
  kind: Role # Must be Role or ClusterRole.
  name: pod-reader # Must match the name of the role or cluster role to bind.
  apiGroup: rbac.authorization.k8s.io
subjects:
# One or more subjects can be specified.
- kind: User
  name: jane.doe
  apiGroup: rbac.authorization.k8s.io

If a role binding like the above is created in the cluster, a user with the username jane.doe is granted permission to perform the API actions defined in the pod-reader role.
Example) Role Binding read-pods #2
An example of writing a group (IAM user group) to a role binding is as follows:
# This role binding allows users in the "ReadPodsGroup" group to view pods in the "default" namespace.
# A "pod-reader" role must exist in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
# One or more subjects can be specified.
- kind: Group
  name: ReadPodsGroup
  apiGroup: rbac.authorization.k8s.io

If a role binding like the above is created in the cluster, users in the IAM user group ReadPodsGroup are granted permission to perform the API operations written in the pod-reader role.
Example) Cluster Role Binding read-nodes
# This cluster role binding allows users in the "ReadNodesGroup" group to view nodes.
# A cluster role named "node-reader" must exist.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ReadNodesGroup
  apiGroup: rbac.authorization.k8s.io

When a cluster role binding like the one above is created in the cluster, users in the IAM user group ReadNodesGroup are granted permission to perform the API actions written in the cluster role node-reader.
Predefined Roles and Role Bindings for Samsung Cloud Platform
The Kubernetes Engine of Samsung Cloud Platform has the predefined cluster role bindings scp-cluster-admin, scp-view, and scp-namespace-view, and the predefined cluster role scp-namespace-view. The following table shows the binding relationship between the predefined roles and role bindings and Samsung Cloud Platform users. The cluster roles cluster-admin and view are predefined within the Kubernetes cluster itself. For more detailed explanations, refer to the Roles section.
| Cluster Role Binding | Cluster Role | Subjects (User) |
|---|---|---|
| scp-cluster-admin | cluster-admin | Groups AdministratorGroup and OperatorGroup; User (cluster creator) |
| scp-view | view | Group ViewerGroup |
| scp-namespace-view | scp-namespace-view | All authenticated users in the cluster |
- According to the cluster role binding scp-cluster-admin, users in the IAM user groups AdministratorGroup or OperatorGroup, as well as the Kubernetes Engine product applicant, are granted cluster administrator permissions.
- According to the cluster role binding scp-view, users in the ViewerGroup are granted cluster viewer permissions. More precisely, since it is linked to the predefined cluster role view in Kubernetes, access permissions for cluster-scoped resources (e.g., namespaces, nodes, ingress classes, etc.) and secrets within namespaces are not included. For more detailed explanations, refer to the Roles section.
- According to the cluster role binding scp-namespace-view, all authenticated users in the cluster are granted namespace viewer permissions.
- Predefined roles and role bindings for Samsung Cloud Platform are created only once when the cluster product is applied.
- Users can modify or delete predefined cluster role bindings and cluster roles for Samsung Cloud Platform as needed.
The details of predefined roles and role bindings for Samsung Cloud Platform are as follows:
Cluster Role Binding scp-cluster-admin
The cluster role binding scp-cluster-admin is bound to the cluster role cluster-admin and bound to the IAM user groups AdministratorGroup, OperatorGroup, and the SCP user (Kubernetes Engine cluster creator) according to the subjects.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-cluster-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: AdministratorGroup
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: OperatorGroup
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: jane.doe # cluster creator name
  apiGroup: rbac.authorization.k8s.io

Cluster Role Binding scp-view
The cluster role binding scp-view is bound to the cluster role view and bound to the IAM user group ViewerGroup according to the subjects.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-view
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ViewerGroup
  apiGroup: rbac.authorization.k8s.io

Cluster Role and Cluster Role Binding scp-namespace-view
Cluster Role scp-namespace-view is a role that defines the authority to view namespaces. Cluster Role Binding scp-namespace-view is associated with Cluster Role scp-namespace-view and grants namespace view authority to all authenticated users in the cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: scp-namespace-view
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scp-namespace-view
roleRef:
  kind: ClusterRole
  name: scp-namespace-view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io

IAM User Group RBAC Use Case
This chapter explains examples of granting authority by major user scenarios. The names of IAM user groups, ClusterRoleBindings/RoleBindings, and ClusterRoles presented here are examples for understanding. Administrators should define and apply appropriate names and authorities according to their needs.
| Scope | Use Case | IAM User Group | ClusterRoleBinding/RoleBinding | ClusterRole | Note |
|---|---|---|---|---|---|
| Cluster | Cluster Administrator | ClusterAdminGroup | ClusterRoleBinding cluster-admin-group | cluster-admin | Administrator for a specific cluster |
| Cluster | Cluster Editor | ClusterEditGroup | ClusterRoleBinding cluster-edit-group | edit | Editor for a specific cluster |
| Cluster | Cluster Viewer | ClusterViewGroup | ClusterRoleBinding cluster-view-group | view | Viewer for a specific cluster |
| Namespace | Namespace Administrator | NamespaceAdminGroup | RoleBinding namespace-admin-group | admin | Administrator for a specific namespace |
| Namespace | Namespace Editor | NamespaceEditGroup | RoleBinding namespace-edit-group | edit | Editor for a specific namespace |
| Namespace | Namespace Viewer | NamespaceViewGroup | RoleBinding namespace-view-group | view | Viewer for a specific namespace |
Cluster Administrator
To create a cluster administrator, follow these steps:
- Create an IAM user group named ClusterAdminGroup.
- Create a ClusterRoleBinding with the following content in the target cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-group
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ClusterAdminGroup
  apiGroup: rbac.authorization.k8s.io
- The binding is associated with the default ClusterRole cluster-admin, granting administrator authority for the cluster.
Cluster Editor
To create a cluster editor, follow these steps:
- Create an IAM user group named ClusterEditGroup.
- Create a ClusterRoleBinding with the following content in the target cluster:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-edit-group
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ClusterEditGroup
  apiGroup: rbac.authorization.k8s.io
- The binding is associated with the default cluster role edit, granting editor permissions for the cluster.
Cluster Viewer
To create a cluster viewer, follow these steps:
- Create an IAM user group named ClusterViewGroup.
- Create a cluster role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-view-group
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: ClusterViewGroup
  apiGroup: rbac.authorization.k8s.io
- The binding is associated with the default cluster role view, granting viewer permissions for the cluster.
Namespace Administrator
To create a namespace administrator, follow these steps:
- Create an IAM user group named NamespaceAdminGroup.
- Create a role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin-group
  namespace: <namespace_name>
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceAdminGroup
  apiGroup: rbac.authorization.k8s.io
- The binding is associated with the default cluster role admin, granting administrator permissions for the namespace.
Namespace Editor
To create a namespace editor, follow these steps:
- Create an IAM user group named NamespaceEditGroup.
- Create a role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-edit-group
  namespace: <namespace_name>
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceEditGroup
  apiGroup: rbac.authorization.k8s.io
- The binding is associated with the default cluster role edit, granting editor permissions for the namespace.
Namespace Viewer
To create a namespace viewer, follow these steps:
- Create an IAM user group named NamespaceViewGroup.
- Create a role binding with the following content in the target cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-view-group
  namespace: <namespace_name>
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceViewGroup
  apiGroup: rbac.authorization.k8s.io
- The binding is associated with the default cluster role view, granting viewer permissions for the namespace.
Practice Example
This chapter describes an example and procedure for applying an administrator to a specific namespace.
- IAM user group: NamespaceAdminGroup
- IAM policy: NamespaceAdminAccess
- Role binding: namespace-admin-group
Create an IAM User Group
To create an IAM user group in Samsung Cloud Platform, follow these steps:
Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.
On the Service Home page, click User Group. The User Group List page appears.
On the User Group List page, click Create User Group.
Enter the required information in the Basic Information, Add User, Attach Policy, and Additional Information sections.
| Category | Required | Description |
|---|---|---|
| User Group Name | Required | Enter the user group name. Use Korean, English, numbers, and the special characters +=,.@-_ to enter a value between 3 and 24 characters. Enter NamespaceAdminGroup as the user group name. |
| Description | Optional | Description of the user group. Enter a detailed description of up to 1,000 characters. |
| User | Optional | Users to add to the user group. The list of users registered in the account is displayed, and a selected user's name is displayed at the top of the screen when its checkbox is selected. Click the Delete button at the top of the screen or uncheck the checkbox in the user list to cancel a selection. If there are no users to add, click Create User at the bottom of the user list to register a new user, then refresh the user list to select the user. |
| Policy | Optional | Policy to attach to the user group. The list of policies registered in the account is displayed, and a selected policy name is displayed at the top of the screen when its checkbox is selected. Select ViewerAccess in the policy list. |
| Tag | Optional | Tags to add to the user group. Up to 50 tags can be added per resource. |

Table. User Group Creation Information Input Items
Click the Complete button. The User Group List page appears.
In this practice example, the ViewerAccess policy (permission to view all resources) is attached for demonstration purposes.
- If you do not need permission to view all resources in the Samsung Cloud Platform Console, you do not need to attach the ViewerAccess policy. Define and apply a separate policy according to your actual situation.
Create an IAM Policy
To create an IAM policy in Samsung Cloud Platform, follow these steps:
Click All Services > Management > IAM. The Identity and Access Management (IAM) Service Home page appears.
On the Service Home page, click Policy. The Policy List page appears.
On the Policy List page, click Create Policy. The Create Policy page appears.
Enter the required information in the Basic Information and Additional Information sections.
| Category | Required | Description |
|---|---|---|
| Policy Name | Required | Enter the policy name. Use Korean, English, numbers, and the special characters +=,.@-_ to enter a value between 3 and 128 characters. Enter NamespaceAdminAccess as the policy name. |
| Description | Optional | Description of the policy. Enter a detailed description of up to 1,000 characters. |
| Tag | Optional | Tags to add to the policy. Up to 50 tags can be added per resource. |

Table. Policy Creation Information Input Items - Basic Information and Additional Information
Click the Next button. The Permission Settings section appears.
Enter the required information in the Permission Settings section.
Select Kubernetes Engine in the Service section.
You can create a policy by importing an existing policy using Policy Import. For more information about Policy Import, see Policy Import.
| Category | Required | Description |
|---|---|---|
| Control Type | Required | Select the policy control type. Allow Policy: a policy that allows the defined permissions. Deny Policy: a policy that denies the defined permissions. |
| Action | Required | Select actions provided by each service. Create: CreateKubernetesObject. Delete: DeleteKubernetesObject. List: ListKubernetesEngine, ListKubernetesObject. Read: DetailKubernetesObject. Update: UpdateKubernetesObject. Add Action Directly: use the wildcard * to specify multiple actions at once. |
| Applied Resource | Required | Resource to which the action is applied. All Resources: apply to all resources for the selected action. Individual Resource: apply only to the specified resource for the selected action; individual resources can be set only for actions that allow individual resource selection (purple actions). Click the Add Resource button to specify the target resource by resource type. For more information on Add Resource, see Registering individual resources as applied resources. |
| Authentication Type | Required | Authentication method for the target user. All Authentication: apply regardless of authentication method. API Key Authentication: apply to users who use API key authentication. IAM Key Authentication, Console Login: apply to users who use IAM key authentication or console login. |
| Applied IP | Required | IP addresses to which the policy applies. User-specified IP: register and manage IP addresses directly; Applied IP registers the addresses or ranges the policy applies to, and Excluded IP registers the addresses or ranges excluded from Applied IP. All IP: do not restrict IP access; access is allowed from all IP addresses, and if exceptions are needed, register Excluded IP to restrict access from the registered addresses. |

Table. Policy Creation Information Input Items - Permission Settings
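The "Add Action Directly" wildcard can be illustrated with the action names listed above. The sketch below assumes glob-style matching (an assumption for illustration, not a statement about the IAM engine's exact semantics), and the pattern shown is a hypothetical example.

```python
from fnmatch import fnmatch

# Kubernetes Engine action names from the table above.
actions = [
    "CreateKubernetesObject", "DeleteKubernetesObject",
    "ListKubernetesEngine", "ListKubernetesObject",
    "DetailKubernetesObject", "UpdateKubernetesObject",
]

# Hypothetical wildcard entry: select every object-level action at once.
pattern = "*KubernetesObject"
matched = [a for a in actions if fnmatch(a, pattern)]
print(matched)  # every action except ListKubernetesEngine
```

A single wildcard entry like this covers all five object-level actions while leaving engine-level actions such as ListKubernetesEngine unselected.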
Permission settings provide Basic Mode and JSON Mode.
- If you write in Basic Mode and enter JSON Mode or move to another screen, services with the same conditions will be integrated into one, and settings that are not completed will be deleted.
- If the content written in JSON Mode does not match the JSON format, you cannot switch to Basic Mode.
- Click the Next button. Move to the Input Information Check page.
- Check the input information and click the Complete button. Move to the Policy List page.
Add a user to an IAM user group
To add a user to an IAM user group in Samsung Cloud Platform, follow these steps.
- Click All Services > Management > IAM menu. Move to the Identity and Access Management (IAM) Service Home page.
- On the Service Home page, click the User menu. Move to the User List page.
- On the User List page, click the user to be added to the IAM user group. Move to the User Details page.
- On the User Details page, click the User Group tab.
- On the user group tab, select the Add User Group button. Move to the Add User Group page.
- On the Add User Group page, select the user group to be added and click the Complete button. Move to the User Details page.
- Select NamespaceAdminGroup from the user group.
Create a role binding
Create a role binding by referring to the example below.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: namespace-admin-group
  namespace: dev # target namespace
roleRef:
  kind: ClusterRole
  name: admin # predefined cluster role in Kubernetes
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: NamespaceAdminGroup # IAM user group created earlier
  apiGroup: rbac.authorization.k8s.io

Verify the user
Verify that the user’s namespace permissions are applied normally.
To verify namespace user permissions in Samsung Cloud Platform, follow these steps.
- Click All Services > Container > Kubernetes Engine menu. Move to the Kubernetes Engine Service Home page.
- On the Service Home page, click Workload menu under Pod. Move to the Pod List page.
- On the Pod List page, select the cluster and namespace from the gear button at the top left and click Confirm.
- On the Pod List page, verify that the pod list is retrieved.
- If you select a namespace with permissions, the pod list will be displayed.
- If you select a namespace without permissions, a confirmation window will be displayed indicating that you do not have permission to retrieve the list.
3.2 - Accessing the Cluster
kubectl Installation and Usage Guide
After creating a Kubernetes Engine service, you can use the Kubernetes command-line tool kubectl to execute commands on a Kubernetes cluster. Using kubectl, you can deploy applications, inspect and manage cluster resources, and view logs. You can find how to install and use kubectl in the official Kubernetes documentation as follows.
| Category | Reference URL |
|---|---|
| kubectl installation (Linux) | https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/ |
| kubectl install (Windows) | https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/ |
| kubectl introduction | https://kubernetes.io/docs/reference/kubectl/ |
| kubectl Quick Reference | https://kubernetes.io/docs/reference/kubectl/quick-reference/ |
| kubectl command reference | https://kubernetes.io/docs/reference/kubectl/kubectl/ |
You must use a kubectl version that is within the minor version difference of the cluster. For example, if the cluster version is 1.30, you can use kubectl versions 1.29, 1.30, or 1.31.
- Please refer to the following document about kubectl’s version skew policy. https://kubernetes.io/releases/version-skew-policy/#kubectl
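The version-skew rule above can be expressed as a small helper. This is an illustrative sketch of the "within one minor version" rule stated here, using the document's own 1.30 example.

```python
def allowed_kubectl_minors(cluster_version: str) -> list:
    """kubectl may be at most one minor version older or newer than the cluster."""
    major, minor = (int(x) for x in cluster_version.split(".")[:2])
    return [f"{major}.{m}" for m in (minor - 1, minor, minor + 1)]

# For a 1.30 cluster, kubectl 1.29, 1.30, and 1.31 are usable.
print(allowed_kubectl_minors("1.30"))  # ['1.29', '1.30', '1.31']
```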
To access a Kubernetes cluster with kubectl, you need a kubeconfig file containing the Kubernetes server address and authentication information.
Kubernetes Engine supports authentication via admin certificate kubeconfig and user authentication key kubeconfig.
admin certificate kubeconfig
This kubeconfig uses the admin certificate as an authentication method when accessing the Kubernetes API.
Admin kubeconfig download
In Kubernetes Engine > Cluster List > Cluster Details, click the Admin kubeconfig Download button to download the kubeconfig file.
- Administrator kubeconfig download is only possible for Admin.
- There are separate private endpoint and public endpoint versions, and you can download each only once.
Admin kubeconfig use
- By default, kubectl looks for a file named config in the $HOME/.kube directory. You can also set the KUBECONFIG environment variable or specify the --kubeconfig flag to use a different kubeconfig file.
- Private endpoints are by default only accessible from nodes of the respective cluster. For resources in the same Account and same region, you can allow access by adding them to the private endpoint access control settings.
- If you need to access the cluster from the external internet, setting public endpoint access to enabled allows you to access using the public endpoint kubeconfig.
User authentication key kubeconfig
This kubeconfig uses the user’s Open API authentication key as the authentication method when accessing the Kubernetes API.
User kubeconfig download
In Kubernetes Engine > Cluster List > Cluster Details, click the User kubeconfig Download button to download the kubeconfig file.
- User kubeconfig download is only possible for users with cluster view permission.
- There are separate ones for private endpoint and public endpoint.
- Since the downloaded kubeconfig file does not contain the authentication key token, you need to add the authentication key token information before using it. (See the next paragraph)
Add authentication key token to user kubeconfig file
Below is an example of a user’s kubeconfig file. To use the kubeconfig file, you need to add the authentication key token (AUTHKEY_TOKEN) information in the token field inside the file.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...
    server: https://my-cluster-a1c3e.ske.xxx.samsungsdscloud.com:6443
  name: my-cluster-a1c3e
contexts:
- context:
    cluster: my-cluster-a1c3e
    user: jane.doe
  name: jane.doe@my-cluster-a1c3e
current-context: jane.doe@my-cluster-a1c3e
kind: Config
preferences: {}
users:
- name: jane.doe
  user:
    token: <AUTHKEY_TOKEN> # fill in the authentication key token

AUTHKEY_TOKEN can be generated by concatenating the authentication key's ACCESS_KEY and SECRET_KEY with a colon (:) and then Base64 encoding the result. The following is an example of creating AUTHKEY_TOKEN in a Linux environment.
$ ACCESS_KEY=5df418813aed051548a72f4a814cf09e
$ SECRET_KEY=6ba7b810-9dad-11d1-80b4-00c04fd430c8
$ AUTHKEY_TOKEN=$(echo -n "$ACCESS_KEY:$SECRET_KEY" | base64 -w0)
$ echo $AUTHKEY_TOKEN
NWRmNDE4ODEzYWVkMDUxNTQ4YTcyZjRhODE0Y2YwOWU6NmJhN2I4MTAtOWRhZC0xMWQxLTgwYjQtMDBjMDRmZDQzMGM4
- For detailed information on authentication key generation, please refer to API Reference > Common > Samsung Cloud Platform Open API call procedure.
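The same token construction can be done in any language; here is an equivalent sketch in Python, using the example keys from the shell session above.

```python
import base64

def make_authkey_token(access_key: str, secret_key: str) -> str:
    """Concatenate "ACCESS_KEY:SECRET_KEY" and Base64-encode it, as described above."""
    raw = f"{access_key}:{secret_key}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# Example keys from the shell session above.
token = make_authkey_token(
    "5df418813aed051548a72f4a814cf09e",
    "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
)
print(token)
```

The printed value goes into the token field of the user kubeconfig file.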
User kubeconfig execution example
You can see an example of executing the user kubeconfig.
When access is blocked by access control or a firewall
$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
Unable to connect to the server: dial tcp 123.123.123.123:6443: i/o timeout

When AUTHKEY_TOKEN does not match and authentication fails

$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
error: You must be logged in to the server (Unauthorized)

When AUTHKEY_TOKEN authentication succeeds

$ kubectl --kubeconfig=user-kubeconfig.yaml get namespaces
...
kube-node-lease Active 10d
kube-public Active 10d
kube-system Active 10d

When AUTHKEY_TOKEN authentication succeeds but the user has no permission

$ kubectl --kubeconfig=user-kubeconfig.yaml get nodes
Error from server (Forbidden): nodes is forbidden: User "jane.doe" cannot list resource "nodes" in API group "" at the cluster scope

3.3 - Using type LoadBalancer Service
Service Configuration Method
You can configure a LoadBalancer type Service by writing and applying a Service manifest file (example: my-lb-svc.yaml).
- LoadBalancer is created in the cluster Subnet by default.
- To create a LoadBalancer in a different Subnet, use the annotation service.beta.kubernetes.io/scp-load-balancer-subnet-id. For details, refer to Annotation Detailed Settings
Follow these steps to write and apply a type LoadBalancer Service.
Write a Service manifest file my-lb-svc.yaml.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
    appProtocol: tcp # Refer to the LB service protocol type setting section
  type: LoadBalancer

Code block. Service manifest file my-lb-svc.yaml writing example

Deploy the Service manifest using the kubectl apply command.

kubectl apply -f my-lb-svc.yaml

Code block. Deploying Service manifest with kubectl apply command
- When a type LoadBalancer Service is created, a corresponding Load Balancer service is automatically created. It may take a few minutes for the configuration to complete.
- Do not arbitrarily modify the automatically created Load Balancer service and LB server group. Changes may be reverted or unexpected behavior may occur.
- For detailed configurable features, refer to Annotation Detailed Settings.
- Check the Load Balancer configuration using the kubectl get service command.

# kubectl get service my-lb-svc
NAMESPACE   NAME        TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
default     my-lb-svc   LoadBalancer   172.20.49.206   123.123.123.123   80:32068/TCP   3m

Code block. Checking Load Balancer configuration with kubectl get service command
Protocol Type
You can set the protocol type by writing it in the Service manifest. The following is a simple example.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    ...
  ports:
  - port: 80
    targetPort: 9376
    protocol: TCP # Required (choose one of TCP, UDP)
    appProtocol: tcp # Optional (leave blank or choose one of tcp, http, https)
  type: LoadBalancer

The list of protocols (protocol and appProtocol) supported by Kubernetes Engine's type LoadBalancer Service, and the settings applied to the Load Balancer service accordingly, are as follows.
| Category | (k8s) protocol | (k8s) appProtocol | (LB) Service Category | (LB) LB Listener | (LB) LB Server Group | (LB) Health Check |
|---|---|---|---|---|---|---|
| L4 TCP | TCP | (tcp) | L4 | TCP {port} | TCP {nodePort} | TCP {nodePort} |
| L4 UDP | UDP | - | L4 | UDP {port} | UDP {nodePort} | TCP {nodePort} |
| L7 HTTP | TCP | http | L7 | HTTP {port} | TCP {nodePort} | TCP/HTTP {nodePort} |
| L7 HTTPS | TCP | https | L7 | HTTPS {port} | TCP {nodePort} | TCP/HTTP {nodePort} |
- According to the k8s Service manifest spec, you can specify multiple ports for a single service.
Depending on the Load Balancer service category (L4 or L7), protocol layers cannot be mixed within a single Service.
- That is, L4(TCP, UDP) and L7(HTTP, HTTPS) cannot be used together in a single Service.
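This restriction can be checked mechanically. The helper below is an illustrative sketch (plain Python, not part of Kubernetes Engine) that classifies each Service port as L4 or L7 using the protocol/appProtocol mapping from the table above, and rejects a Service that mixes layers.

```python
def lb_layer(port: dict) -> str:
    """Classify a Service port as L4 or L7 per the protocol table above:
    appProtocol http/https means L7; anything else (tcp, udp, blank) is L4."""
    app = (port.get("appProtocol") or "").lower()
    return "L7" if app in ("http", "https") else "L4"

def ports_valid(ports: list) -> bool:
    """A single Service may not mix L4 and L7 ports."""
    return len({lb_layer(p) for p in ports}) <= 1

# Two HTTP/HTTPS ports: both L7, so the combination is allowed.
print(ports_valid([{"protocol": "TCP", "appProtocol": "http"},
                   {"protocol": "TCP", "appProtocol": "https"}]))  # True

# UDP (L4) mixed with HTTP (L7): rejected.
print(ports_valid([{"protocol": "UDP"},
                   {"protocol": "TCP", "appProtocol": "http"}]))   # False
```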
L4 Service Manifest Writing Example
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: LoadBalancer
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/scp-load-balancer-layer-type: "L7" # Required
    service.beta.kubernetes.io/scp-load-balancer-client-cert-id: "24da35de187b450eb0cf09fb6fa146de" # Required
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - appProtocol: http # Required
    protocol: TCP
    port: 80
    targetPort: 9376
  - appProtocol: https # Required
    protocol: TCP
    port: 443
    targetPort: 9898
  type: LoadBalancer

Annotation Detailed Settings
You can set detailed features by adding annotations to the service manifest.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc
  annotations:
    service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled: "true"
    service.beta.kubernetes.io/scp-load-balancer-health-check-interval: "5"
    service.beta.kubernetes.io/scp-load-balancer-health-check-timeout: "5"
    service.beta.kubernetes.io/scp-load-balancer-health-check-count: "3"
    service.beta.kubernetes.io/scp-load-balancer-session-duration-time: "300"
spec:
  type: LoadBalancer
  ...
- If no annotation is added to the Service, the annotation's default value is applied.
- If an annotation added to the Service has a value outside the allowed values, the default value is applied instead.
Below is a description of all annotations available for type LoadBalancer service.
| Annotation | Protocol | Default Value | Allowed Value | Example | Description |
|---|---|---|---|---|---|
| service.beta.kubernetes.io/scp-load-balancer-source-ranges-firewall-rules | All | false | true, false | false | Automatically add firewall rules (LB source ranges → LB service IP) |
| service.beta.kubernetes.io/scp-load-balancer-snat-healthcheck-firewall-rules | All | false | true, false | false | Automatically add firewall rules (LB Source NAT IP, Health Check IP → member IP:Port) |

| Annotation | Protocol | Default Value | Allowed Value | Example | Description |
|---|---|---|---|---|---|
| service.beta.kubernetes.io/scp-load-balancer-security-group-id | All | - | UUID | 92d84b44-ee71-493d-9782-3a90481ce5f3 | Automatically add rules to the Security Group with the specified ID |
| service.beta.kubernetes.io/scp-load-balancer-security-group-name | All | - | String | security-group-1 | Automatically add rules to the Security Group with the specified name |

| Annotation | Protocol | Default Value | Allowed Value | Example | Description |
|---|---|---|---|---|---|
| service.beta.kubernetes.io/scp-load-balancer-layer-type | All | L4 | L4, L7 | L4 | Specify the Load Balancer service category |
| service.beta.kubernetes.io/scp-load-balancer-subnet-id | All | - | ID | 7f05eda5e1cf4a45971227c57a6d60fa | Specify the Load Balancer service Subnet |
| service.beta.kubernetes.io/scp-load-balancer-service-ip | All | - | IP Address | 192.168.10.7 | Specify the Load Balancer service IP |
| service.beta.kubernetes.io/scp-load-balancer-public-ip-enabled | All | false | true, false | false | Specify whether to use a Load Balancer Public NAT IP |
| service.beta.kubernetes.io/scp-load-balancer-public-ip-id | All | - | ID | 4119894bd9614cef83db6f8dda667a20 | Specify the ID of the Public IP to use as the Load Balancer Public NAT IP |

| Annotation | Protocol | Default Value | Allowed Value | Example | Description |
|---|---|---|---|---|---|
| service.beta.kubernetes.io/scp-load-balancer-idle-timeout | HTTP, HTTPS | - | 60 - 3600 (in 60-second units) | 600 | Specify the LB Listener idle-timeout (seconds) |
| service.beta.kubernetes.io/scp-load-balancer-session-duration-time | All | L4: 120, L7: - | L4 TCP: 60 - 3600 (in 60-second units), L4 UDP: 60 - 180 (in 60-second units), L7: 0 - 120 | 120 | Specify the LB Listener session-duration-time (seconds) |
| service.beta.kubernetes.io/scp-load-balancer-response-timeout | HTTP, HTTPS | - | 0 - 120 | 60 | Specify the LB Listener response-timeout (seconds) |
| service.beta.kubernetes.io/scp-load-balancer-insert-client-ip | TCP | false | true, false | false | Specify whether the LB Listener inserts the client IP |
| service.beta.kubernetes.io/scp-load-balancer-x-forwarded-proto | HTTP, HTTPS | false | true, false | false | Specify whether the LB Listener uses the X-Forwarded-Proto header |
| service.beta.kubernetes.io/scp-load-balancer-x-forwarded-port | HTTP, HTTPS | false | true, false | false | Specify whether the LB Listener uses the X-Forwarded-Port header |
| service.beta.kubernetes.io/scp-load-balancer-x-forwarded-for | HTTP, HTTPS | false | true, false | false | Specify whether the LB Listener uses the X-Forwarded-For header |
| service.beta.kubernetes.io/scp-load-balancer-support-http2 | HTTP, HTTPS | false | true, false | false | Specify whether the LB Listener supports HTTP 2.0 |
| service.beta.kubernetes.io/scp-load-balancer-persistence | TCP, HTTP, HTTPS | "" | "", source-ip, cookie | source-ip | Specify the LB Listener persistence (one of none, source IP, cookie) |
| service.beta.kubernetes.io/scp-load-balancer-client-cert-id | HTTPS | - | UUID | 78b9105e00324715b63700933125fa83 | Specify the ID of the LB Listener client SSL certificate |
| service.beta.kubernetes.io/scp-load-balancer-client-cert-level | HTTPS | HIGH | HIGH, NORMAL, LOW | HIGH | Specify the security level of the LB Listener client SSL certificate |
| service.beta.kubernetes.io/scp-load-balancer-server-cert-level | HTTPS | - | HIGH, NORMAL, LOW | HIGH | Specify the security level of the LB Listener server SSL certificate |

| Annotation | Protocol | Default Value | Allowed Value | Example | Description |
|---|---|---|---|---|---|
| service.beta.kubernetes.io/scp-load-balancer-lb-method | All | ROUND_ROBIN | ROUND_ROBIN, LEAST_CONNECTION, IP_HASH | ROUND_ROBIN | Specify the LB server group load balancing policy |

| Annotation | Protocol | Default Value | Allowed Value | Example | Description |
|---|---|---|---|---|---|
| service.beta.kubernetes.io/scp-load-balancer-health-check-enabled | All | true | true, false | true | Specify whether to use the LB health check |
| service.beta.kubernetes.io/scp-load-balancer-health-check-protocol | All | TCP | TCP, HTTP | TCP | Specify the LB health check protocol |
| service.beta.kubernetes.io/scp-load-balancer-health-check-port | All | {nodeport} | 1 - 65534 | 30000 | Specify the LB health check port |
| service.beta.kubernetes.io/scp-load-balancer-health-check-count | All | 3 | 1 - 10 | 3 | Specify the LB health check detection count |
| service.beta.kubernetes.io/scp-load-balancer-health-check-interval | All | 5 | 1 - 180 | 5 | Specify the LB health check interval |
| service.beta.kubernetes.io/scp-load-balancer-health-check-timeout | All | 5 | 1 - 180 | 5 | Specify the LB health check timeout |
| service.beta.kubernetes.io/scp-load-balancer-health-check-http-method | HTTP | GET | GET, POST | GET | Specify the LB health check HTTP method |
| service.beta.kubernetes.io/scp-load-balancer-health-check-url | HTTP | / | String | /healthz | Specify the LB health check URL |
| service.beta.kubernetes.io/scp-load-balancer-health-check-response-code | HTTP | 200 | 200 - 500 | 200 | Specify the LB health check response code |
| service.beta.kubernetes.io/scp-load-balancer-health-check-request-data | HTTP | - | String | username=admin&password=1234 | Specify the LB health check request string |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-enabled | All | true | true, false | true | Specify whether to use the LB health check for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-protocol | All | TCP | TCP, HTTP | TCP | Specify the LB health check protocol for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-port | All | - | 1 - 65534 | 30000 | Specify the LB health check port for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-count | All | 3 | 1 - 10 | 3 | Specify the LB health check detection count for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-interval | All | 5 | 1 - 180 | 5 | Specify the LB health check interval for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-timeout | All | 5 | 1 - 180 | 5 | Specify the LB health check timeout for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-http-method | HTTP | GET | GET, POST | GET | Specify the LB health check HTTP method for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-url | HTTP | / | String | /healthz | Specify the LB health check URL for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-response-code | HTTP | 200 | 200 - 500 | 200 | Specify the LB health check response code for the Service's {port} port number |
| service.beta.kubernetes.io/scp-load-balancer-port-{port}-health-check-request-data | HTTP | - | String | username=admin&password=1234 | Specify the LB health check request string for the Service's {port} port number |
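As an illustration of the per-port health check annotations, the following sketch (service name, paths, and port numbers are placeholders) configures an HTTP health check on Service port 80 while leaving port 443 on the default TCP check:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-svc   # placeholder name
  annotations:
    # HTTP health check applies only to Service port 80;
    # port 443 keeps the default TCP health check.
    service.beta.kubernetes.io/scp-load-balancer-port-80-health-check-protocol: "HTTP"
    service.beta.kubernetes.io/scp-load-balancer-port-80-health-check-url: "/healthz"
    service.beta.kubernetes.io/scp-load-balancer-port-80-health-check-response-code: "200"
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
    - protocol: TCP
      port: 443
      targetPort: 8443
  type: LoadBalancer
```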
Constraints
The following are constraints to consider when using Kubernetes annotations.
| Constraint | Related Annotation |
|---|---|
| Rules created in existing Security Group are not automatically deleted when changing Security Group | service.beta.kubernetes.io/scp-load-balancer-security-group-id service.beta.kubernetes.io/scp-load-balancer-security-group-name |
| Cannot change Load Balancer service category (L4/L7) | service.beta.kubernetes.io/scp-load-balancer-layer-type |
| Cannot use L4 and L7 together in the same k8s Service | service.beta.kubernetes.io/scp-load-balancer-layer-type |
| Cannot change Load Balancer subnet | service.beta.kubernetes.io/scp-load-balancer-subnet-id |
| Cannot change Load Balancer Service IP | service.beta.kubernetes.io/scp-load-balancer-service-ip |
| LB Listener idle-timeout cannot be changed from used to not used | service.beta.kubernetes.io/scp-load-balancer-idle-timeout |
| LB Listener session-duration-time cannot be changed from used to not used | service.beta.kubernetes.io/scp-load-balancer-session-duration-time |
| LB Listener response-timeout cannot be changed from used to not used | service.beta.kubernetes.io/scp-load-balancer-response-timeout |
| LB Listener idle-timeout cannot be set simultaneously with session-duration-time or response-timeout | service.beta.kubernetes.io/scp-load-balancer-idle-timeout service.beta.kubernetes.io/scp-load-balancer-session-duration-time service.beta.kubernetes.io/scp-load-balancer-response-timeout |
| Cannot use TCP and UDP together with the same port number in the same k8s Service | - |
| L7 Listener routing rules support only the default URL path for the LB server group delivery method | - |
3.4 - Considerations for Use
Managed Port Constraints
The following ports are reserved for SKE management and must not be used by user services. If these ports are blocked by an OS firewall or similar, node functions or some features may not work properly.
| Port | Description |
|---|---|
| UDP 4789 | calico-vxlan |
| TCP 5473 | calico-typha |
| TCP 10250 | kubelet |
| TCP 19100 | node-exporter |
| TCP 19400 | dcgm-exporter |
kube-reserved resource constraints
kube-reserved is a feature that reserves node resources for system daemons that do not run as pods, such as the kubelet and the container runtime.
For more information on kube-reserved, please refer to the following document.
Kubernetes Engine reserves CPU and memory based on the following criteria.
| CPU specification | Memory specification |
|---|---|
| 6% of the first core<br>1% of the second core<br>0.5% of each of the next 2 cores (up to 4 cores)<br>0.25% of each core above 4 cores | 25% of the first 4 GB<br>20% of the next 4 GB (up to 8 GB)<br>10% of the next 8 GB (up to 16 GB)<br>6% of the next 112 GB (up to 128 GB)<br>2% of any memory above 128 GB |
Example: For a Virtual Server with 16 vCPU cores and 32 GB of memory, kube-reserved is calculated as follows.
- CPU: (1 core × 0.06) + (1 core × 0.01) + (2 cores × 0.005) + (12 cores × 0.0025) = 0.11 core
- Memory: (4 GB × 0.25) + (4 GB × 0.2) + (8 GB × 0.1) + (16 GB × 0.06) = 3.56 GB
Example: The resources reserved according to CPU size are as follows.
| CPU specification | 2 vCPU | 4 vCPU | 8 vCPU | 16 vCPU |
|---|---|---|---|---|
| kube-reserved CPU | 70 m | 80 m | 90 m | 110 m |
Example: The resources reserved according to memory size are as follows.
| Memory specification | 4 GB | 8 GB | 16 GB | 32 GB | 64 GB | 128 GB | 256 GB |
|---|---|---|---|---|---|---|---|
| kube-reserved memory | 1 GB | 1.8 GB | 2.6 GB | 3.56 GB | 5.48 GB | 9.32 GB | 11.88 GB |
3.5 - Version Information
Kubernetes Version and Support Period
Kubernetes Version Lifecycle
The Kubernetes open source software (OSS) community releases three minor versions annually, on a release cycle of approximately 15 weeks. Each released minor version goes through a support period of approximately 14 months (12 months of standard patches, 2 months of maintenance) and then reaches end of life (EOL).
For information on Kubernetes release and EOL timing, and support period, refer to the following links:
Samsung Cloud Platform Kubernetes Engine (SKE) Version Provision Plan
SKE validates and provides Stable patch versions of released OSS minor versions, so an SKE version is released later than the corresponding OSS version.
Additionally, for previously released versions, technical support ends sequentially starting from the oldest version, taking the open-source EOL timing into account (End of Technical Support, EoTS).
The release and termination schedules for OSS and SKE are as follows.
| Version | OSS Release | OSS EOL | SKE Release | SKE EoTS |
|---|---|---|---|---|
| v1.29 | 2023-12-13 | 2025-02-28 | 2024-10 | 2026-03-31 |
| v1.30 | 2024-04-17 | 2025-06-28 | 2025-02 | 2026-06-30 |
| v1.31 | 2024-08-13 | 2025-10-28 | 2025-07 | 2026-10-28 |
| v1.32 | 2024-12-11 | 2026-02-28 | 2025-10 | 2027-02-28 |
| v1.33 | 2025-04-23 | 2026-06-28 | 2025-12 | 2027-06-28 |
| v1.34 | 2025-08-27 | 2026-10-27 | 2026-03 | 2027-10-27 |
Feature Limitations at End of Technical Support (EoTS)
When the Kubernetes version provided by SKE reaches the End of Technical Support (EoTS) state, features supported in that version may be limited.
- New cluster creation → Not possible
- Existing cluster upgrade → Possible (even if the target version has also reached EoTS)
- Creating node pools in an existing cluster → Possible
- EOL versions may have vulnerabilities, so upgrading to a higher version is recommended.
- You can upgrade the control plane and node pools in the Samsung Cloud Platform Console, and no separate cost is incurred for the upgrade.
- For stable operation, perform compatibility testing for the upgrade version before proceeding with the upgrade.
OS and GPU Driver
The OS and GPU driver version information available for each K8s server type is as follows.
- OS versions provided may vary by K8s version.
- When using GPU nodes, related K8s components (nvidia-device-plugin, dcgm-exporter) are configured by default in the cluster.
- When deploying gpu-operator, conflicts may occur because components would be configured twice. It is recommended to exclude the components already provided by default when deploying gpu-operator.
- For an OS whose support has ended, node pools can still be created, but using the latest OS version is recommended.
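As one way to follow the gpu-operator recommendation — assuming the NVIDIA gpu-operator Helm chart and its `devicePlugin.enabled` / `dcgmExporter.enabled` values, which should be verified against the chart version in use — the pre-installed components can be excluded with a values override like this:

```yaml
# gpu-operator Helm values sketch (assumed chart keys; verify for your chart version)
devicePlugin:
  enabled: false   # cluster already provides nvidia-device-plugin
dcgmExporter:
  enabled: false   # cluster already provides dcgm-exporter
```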
| k8s Version | Standard and High Capacity | GPU |
|---|---|---|
| v1.29 | | |
| v1.30 | | |
| v1.31 | | |
| v1.32 | | |
| v1.33 | | |
| v1.34 | | |
4 - API Reference
5 - CLI Reference
6 - Release Note
Kubernetes Engine
- Kubernetes Engine feature changes
- Supports Kubernetes v1.34 version.
- Provides GPU VM custom image for node pools.
- Provides EoTS management logic and display function for cluster and node pool k8s versions and node pool OS versions.
- Provides OS selection dropdown feature when upgrading node pools.
- Added idle-timeout for type: LoadBalancer L7 listeners and changed the session-duration-time default value.
- Does not provide kubeconfig feature in Terraform.
- Kubernetes Engine feature changes
- Supports Kubernetes v1.33 version.
- Provides GPU Driver version information for node pool GPU nodes.
- Provides MNGC nodes in SR request setting format.
- Changed the maximum Block Storage capacity for node pool OS disks from 1 TB to 12 TB, matching VM products.
- Added validation for label keys when creating/modifying node pools, and validation that server groups are not supported for GPU node pools.
- Kubernetes Engine feature changes
- Supports Kubernetes v1.32 version.
- Provides node pool advanced setting feature.
- Provides node pool server group (Affinity or Anti-affinity) setting feature.
- Added a user Kubeconfig download feature alongside the administrator Kubeconfig download button.
- Provides additional upgrade logic considering OS version when upgrading node pools.
- Provides log collection feature based on ServiceWatch integration.
- Kubernetes Engine feature changes
- Supports Kubernetes v1.31 version.
- Provides public endpoint for the cluster.
- Adds MNGC(Baremetal) product and DevOps Service product to private endpoint access control targets for the cluster.
- Provides node pool Label and Taint setting feature.
- Provides Block Storage CSI and kubectl login plugin features.
- Fixed a kubeconfig vulnerability.
- Kubernetes Engine feature changes
- Provides private endpoint and access control features.
- Provides type: LoadBalancer feature.
- Kubernetes Engine feature changes
- Supports Kubernetes v1.30 version.
- Provides Kubernetes version upgrade feature for cluster and node pools.
- Provides Multi-Security Group feature.
- Provides Custom Image node and GPU node creation feature.
- Samsung Cloud Platform common feature changes
- Reflected common CX changes for Account, IAM, Service Home, and tags.
- Released Kubernetes Engine product that provides lightweight virtual computing Containers and Kubernetes clusters for managing them.
- Creates Container nodes and manages them through the cluster to enable deployment of various Container applications.
- Released Kubernetes Engine product Beta version.
